ISTQB® Certified Tester Foundation Level: A Self-Study Guide, Syllabus v4.0


Contents
Preface
Purpose of This Book
Book Structure
Part I: Certificate, Syllabus, and Foundation Level Exam
Part II: Discussion of the Content of the Syllabus
Part III: Answers to Questions and Exercises
Part IV: Official Sample Exam and Additional Questions
Contents
About the Authors
List of Abbreviations
Part I: Certification, Syllabus, and Foundation Level Exam
Foundation Level Certificate
History of the Foundation Level Certificate
Career Paths for Testers
Target Audience
Objectives of the International Qualification System
Foundation Level Syllabus
Business Outcomes
Learning Objectives and K-Levels
Requirements for Candidates
References to Norms and Standards
Continuous Update
Release Notes for Foundation Level Syllabus v4.0
Content of the Syllabus
Chapter 1. Fundamentals of Testing
Learning Objectives
Chapter 2. Testing Throughout the Software Development Life Cycle
Learning Objectives
Chapter 3. Static Testing
Learning Objectives
Chapter 4. Test Analysis and Design
Learning Objectives
Chapter 5. Managing the Test Activities
Learning Objectives
Chapter 6. Test Tools
Learning Objectives
Foundation Level Exam
Structure of the Exam
Exam Rules
Distribution of Questions
Tips: Before and During the Exam
Part II: The Syllabus Content
Chapter 1 Fundamentals of Testing
1.1 What Is Testing?
1.1.1 Test Objectives
1.1.2 Testing and Debugging
1.2 Why Is Testing Necessary?
1.2.1 Testing's Contribution to Success
1.2.2 Testing and Quality Assurance (QA)
1.2.3 Errors, Defects, Failures, and Root Causes
1.3 Testing Principles
1.4 Test Activities, Testware, and Test Roles
1.4.1 Test Activities and Tasks
1.4.2 Test Process in Context
1.4.3 Testware
1.4.4 Traceability Between the Test Basis and Testware
1.4.5 Roles in Testing
1.5 Essential Skills and Good Practices in Testing
1.5.1 Generic Skills Required for Testing
1.5.2 Whole Team Approach
1.5.3 Independence of Testing
Sample Questions
Chapter 2 Testing Throughout the Software Development Life Cycle
2.1 Testing in the Context of a Software Development Cycle
2.1.1 Impact of the Software Development Life Cycle on Testing
Sequential Models
Iterative and Incremental Models
Development Methodologies and Agile Practices
2.1.2 Software Development Life Cycle and Good Testing Practices
2.1.3 Testing as a Driver for Software Development
2.1.4 DevOps and Testing
2.1.5 Shift-Left Approach
2.1.6 Retrospectives and Process Improvement
2.2 Test Levels and Test Types
2.2.1 Test Levels
2.2.1.1 Component Testing
2.2.1.2 Component Integration Testing and System Integration Testing
2.2.1.3 System Testing
2.2.1.4 Acceptance Testing
2.2.2 Test Types
2.2.2.1 Functional Testing
2.2.2.2 Nonfunctional Testing
2.2.2.3 White-Box Testing
2.2.2.4 Black-Box Testing
2.2.2.5 Test Levels vs. Test Types
2.2.3 Confirmation Testing and Regression Testing
2.3 Maintenance Testing
Sample Questions
Chapter 3 Static Testing
3.1 Static Testing Basics
3.1.1 Work Products Examinable by Static Testing
3.1.2 Value of Static Testing
3.1.3 Differences Between Static Testing and Dynamic Testing
3.2 Feedback and Review Process
3.2.1 Benefits of Early and Frequent Stakeholder Feedback
3.2.2 Review Process Activities
3.2.3 Roles and Responsibilities in Reviews
3.2.4 Review Types
3.2.5 Success Factors for Reviews
3.2.6 (*) Review Techniques
Sample Questions
Chapter 4 Test Analysis and Design
4.1 Test Techniques Overview
4.2 Black-Box Test Techniques
4.2.1 Equivalence Partitioning (EP)
4.2.2 Boundary Value Analysis (BVA)
4.2.3 Decision Table Testing
4.2.4 State Transition Testing
4.2.5 (*) Use Case Testing
4.3 White-Box Test Techniques
4.3.1 Statement Testing and Statement Coverage
4.3.2 Branch Testing and Branch Coverage
4.3.3 The Value of White-Box Testing
4.4 Experience-Based Test Techniques
4.4.1 Error Guessing
4.4.2 Exploratory Testing
4.4.3 Checklist-Based Testing
4.5 Collaboration-Based Test Approaches
4.5.1 Collaborative User Story Writing
4.5.2 Acceptance Criteria
4.5.3 Acceptance Test-Driven Development (ATDD)
Sample Questions
Exercises
Chapter 5 Managing the Test Activities
5.1 Test Planning
5.1.1 Purpose and Content of a Test Plan
5.1.2 Tester's Contribution to Iteration and Release Planning
5.1.3 Entry Criteria and Exit Criteria
5.1.4 Estimation Techniques
5.1.5 Test Case Prioritization
5.1.6 Test Pyramid
5.1.7 Testing Quadrants
5.2 Risk Management
5.2.1 Risk Definition and Risk Attributes
5.2.2 Project Risks and Product Risks
5.2.3 Product Risk Analysis
5.2.4 Product Risk Control
5.3 Test Monitoring, Test Control, and Test Completion
5.3.1 Metrics Used in Testing
5.3.2 Purpose, Content, and Audience for Test Reports
5.3.3 Communicating the Status of Testing
5.4 Configuration Management
5.5 Defect Management
Sample Questions
Exercises for Chapter 5
Chapter 6 Test Tools
6.1 Tool Support for Testing
6.2 Benefits and Risks of Test Automation
Sample Questions
Part III: Answers to Questions and Exercises
Answers to Sample Questions
Answers to Questions from Chap. 1
Answers to Questions from Chap. 2
Answers to Questions from Chap. 3
Answers to Questions from Chap. 4
Answers to Questions from Chap. 5
Answers to Questions from Chap. 6
Solutions to Exercises
Solutions to Exercises from Chap. 4
Solutions to Exercises from Chap. 5
Part IV: Official Sample Exam
Exam Set A
Additional Sample Questions
Exam Set A: Answers
Additional Sample Questions: Answers
References
Index


Lucjan Stapp • Adam Roman • Michaël Pilaeten

ISTQB® Certified Tester Foundation Level: A Self-Study Guide, Syllabus v4.0

Lucjan Stapp, Warszawa, Poland

Adam Roman, Jagiellonian University, Kraków, Poland

Michaël Pilaeten, Londerzeel, Belgium

ISBN 978-3-031-42766-4
ISBN 978-3-031-42767-1 (eBook)
https://doi.org/10.1007/978-3-031-42767-1

Translation from the Polish language edition: "Certyfikowany Tester ISTQB. Przygotowanie do egzaminu według sylabusa w wersji 4.0" by Lucjan Stapp et al., © Helion.pl sp. z o.o., Gliwice, Poland (https://helion.pl/) - Polish rights only, all other rights with the authors 2023. Published by Helion.pl sp. z o.o., Gliwice, Poland (https://helion.pl/). All Rights Reserved.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cover illustration: © trahko / Stock.adobe.com

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Paper in this product is recyclable.

Preface

Purpose of This Book

This book is aimed at those preparing for the ISTQB® Certified Tester—Foundation Level exam based on the Foundation Level syllabus (version 4.0) published in 2023. Our goal was to provide candidates with reliable knowledge based on this document. We know from experience that one can find a lot of information about ISTQB® syllabi and exams on the Internet, but much of it is unfortunately of poor quality, and some materials found on the Web even contain serious errors. In addition, due to the significant changes in the syllabus compared to the previous version (3.1.1), published in 2018, the amount of material available to candidates that is based on the new syllabus is still small.

This book expands on and details many issues that the syllabus itself describes only briefly or in general terms. According to the ISTQB® guidelines for syllabus-based training, an exercise must be provided for each learning objective at the K3 level, and a practical example must be provided for each objective at the K2 or K3 level.¹ To satisfy these requirements, we have prepared exercises and examples for all learning objectives at these levels. In addition, for each learning objective, we present one or more sample exam questions similar to those that the candidate will see on the exam. This makes the book an excellent aid for studying, preparing for the exam, and verifying acquired knowledge.

¹ More about the learning objectives and K levels is given below.

Book Structure

The book consists of four main parts.


Part I: Certificate, Syllabus, and Foundation Level Exam

Part I provides official information on the content and structure of the syllabus and the ISTQB® Certified Tester—Foundation Level exam. It also discusses the ISTQB® certification structure and explains the basic technical concepts on which the syllabus and exam structure are based. We explain what the learning objectives and K levels are and what the rules are for building and administering the actual exam. It is worth familiarizing yourself with these issues, as understanding them will help you prepare much better for the exam.

Part II: Discussion of the Content of the Syllabus

Part II is the main part of the textbook. Here, we discuss in detail all the content and learning objectives of the Foundation Level syllabus. This part consists of six chapters, corresponding to the six chapters of the syllabus. Each learning objective at the K2 or K3 level is illustrated with a practical example, and each learning objective at the K3 level is additionally illustrated with a practical exercise. At the beginning of each chapter, definitions of the keywords applicable to that chapter are given. Each keyword is marked in bold and with a book icon at the place of its first relevant use in the text. At the end of each chapter, the reader will find sample exam questions covering all the learning objectives included in that chapter.

The book contains 70 original sample exam questions covering all the learning objectives, as well as 14 practical exercises corresponding to the K3 level learning objectives. These questions and exercises are not part of the official ISTQB® materials but were constructed by the authors using the principles and rules that apply to creating questions for the actual exams. They are thus additional material for readers, allowing them to verify their knowledge after reading each chapter and to better understand the material presented.

Optional Material

Text in a box denotes optional material. It relates to the content of the syllabus but goes beyond it and is not subject to examination. It is "for those who are curious."

Sections with titles marked with an asterisk (*) are also optional. They cover material that was mandatory for the exam according to the old version of the syllabus. We decided to keep these sections in the book because of their importance and practical application. Readers who use the textbook only to study for the exam can skip them while reading. These optional sections are:


Section 3.2.6—Review Techniques
Section 4.2.5—Use Case Testing

Part III: Answers to Questions and Exercises

In Part III, we provide solutions to all the sample exam questions and exercises appearing in Part II of the book. The solutions are not limited to giving the correct answers; they also include justifications. These will help the reader better understand how real exam questions are constructed and prepare for answering them during the actual exam.

Part IV: Official Sample Exam and Additional Questions

The last part of the textbook, Part IV, contains the official ISTQB® sample exam for the Foundation Level certification, additional questions covering learning objectives not covered in that exam, and the correct answers with their justifications.

The book is therefore structured so that all important and useful information is in one place:

• Exam structure and rules
• The syllabus content, with a comprehensive discussion and examples
• Definitions of the terms whose knowledge is mandatory for the exam
• Original sample test questions and exercises, with correct answers and their justifications
• The sample ISTQB® exam, with correct answers and their justifications

We hope that the material presented in this publication will help all those interested in obtaining the ISTQB® Certified Tester—Foundation Level certification.

Lucjan Stapp, Warszawa, Poland
Adam Roman, Kraków, Poland
Michaël Pilaeten, Londerzeel, Belgium


About the Authors

All three authors are experts in software testing. They are co-authors of the Foundation Level syllabus version 4.0, as well as of other ISTQB® syllabi, and they have practical experience in writing exam questions.

Lucjan Stapp, PhD, is a retired researcher and teacher at the Warsaw University of Technology, where for many years he gave lectures and seminars on software testing and quality assurance. He is the author of more than 40 publications, including 12 on various problems related to testing and quality assurance. As a tester, his career path took him from junior tester to test team leader in more than a dozen projects. He has co-organized and spoken at many testing conferences (including TestWarez, the biggest testing conference in Poland). Stapp is a founding member of the Information Systems Quality Association (www.sjsi.org) and currently its vice president. He is also a certified tester (including ISTQB® CTAL-TM, CTAL-TA, Agile Tester, and Acceptance Tester).

Adam Roman, PhD, DSc, is a professor of computer science and a research and teaching fellow at the Institute of Computer Science and Computational Mathematics at Jagiellonian University, where he has been giving lectures and seminars on software testing and quality assurance for many years. He heads the Software Engineering Department and is the co-founder of the "Software Testing" postgraduate program at Jagiellonian University. His research interests include software measurement, defect prediction models, and effective test design techniques. As part of the Polish Committee for Standardization, he collaborated on the international ISO/IEC/IEEE 29119 Software Testing Standard. Roman is the author of the monographs Testing and Software Quality: Models, Techniques, Tools; Thinking-Driven Testing; and A Study Guide to the ISTQB® Foundation Level 2018 Syllabus: Test Techniques and Sample Mock Exams, as well as many scientific and popular publications in the field of software testing. He has spoken at many Polish and international testing conferences (including EuroSTAR, TestWell, TestingCup, and TestWarez). He holds several certifications, including ASQ Certified Software Quality Engineer, ISTQB® Full Advanced Level, and ISTQB® Expert Level—Improving the Test Process. Roman is a member of the Information Systems Quality Association (www.sjsi.org).

Michaël Pilaeten. Breaking the system, helping to rebuild it, and providing advice and guidance on how to avoid problems: that's Michaël in a nutshell. With almost 20 years of experience in test consultancy in a variety of environments, he has seen the best (and worst) in software development. In his current role as Learning and Development Manager, he is responsible for guiding his consultants, partners, and customers on their personal and professional path toward quality and excellence. He is the chair of the ISTQB® Agile workgroup and Product Owner of the ISTQB® CTFL 4.0 syllabus. Furthermore, he is a member of the BNTQB (Belgium and Netherlands Testing Qualifications Board), an accredited trainer for most ISTQB® and IREB trainings, and an international keynote speaker and workshop facilitator.

List of Abbreviations

AC  Acceptance criteria
API  Application Programming Interface
ASQF  Der Arbeitskreis für Software-Qualität und Fortbildung
ATDD  Acceptance test-driven development
BDD  Behavior-driven development
BPMN  Business Process Model and Notation
BVA  Boundary value analysis
CC  Cyclomatic complexity
CD  Continuous delivery
CFG  Control flow graph
CI  Continuous integration
CMMI  Capability Maturity Model Integration
COTS  Commercial off-the-shelf
CPU  Central processing unit
CRM  Customer relationship management
DDD  Domain-driven design
DDoS  Distributed Denial-of-Service
DevOps  Development and operations
DoD  Definition of Done
DoR  Definition of Ready
DTAP  Development, testing, acceptance, and production
EP  Equivalence partitioning
FDD  Feature-driven development
FMEA  Failure mode and effect analysis
GUI  Graphical user interface
IaC  Infrastructure as Code
IEC  International Electrotechnical Commission
IEEE  Institute of Electrical and Electronics Engineers
INVEST  Independent, Negotiable, Valuable, Estimable, Small, and Testable
IoT  Internet of Things
IREB  International Requirements Engineering Board
ISEB  Information Systems Examination Board
ISO  International Organization for Standardization
ISTQB  International Software Testing Qualifications Board
KPI  Key performance indicator
LO  Learning objective
LOC  Lines of code
MBT  Model-based testing
MC/DC  Modified condition/decision coverage
MCR  Modern Code Review
MTTF  Mean time to failure
MTTR  Mean time to repair
N/A  Not applicable
OAT  Operational acceptance testing
PERT  Program Evaluation and Review Technique
QA  Quality assurance
QC  Quality control
QM  Quality management
Req  Requirement
SDLC  Software development lifecycle
SMART  Specific, Measurable, Attainable, Realistic, and Time-Bound
SQL  Structured query language
TC  Test case
TDD  Test-driven development
TMap  Test Management Approach
UAT  User acceptance testing
UI  User interface
UML  Unified Modeling Language
UP  Unified Process
US  User story
WBS  Work breakdown structure
WIP  Work in process (or work in progress)
XP  eXtreme Programming
XSS  Cross-site scripting

Part I: Certification, Syllabus, and Foundation Level Exam

Foundation Level Certificate

History of the Foundation Level Certificate

Independent certification of software testers began in 1998 in the United Kingdom. At that time, under the auspices of the British Computer Society's Information Systems Examination Board (ISEB), a special unit for software testing was established: the Software Testing Board (www.bcs.org.uk/iseb). In 2002, the German ASQF (www.asqf.de) also created its own qualification system for testers. The Foundation Level syllabus was created on the basis of the ISEB and ASQF syllabi, with the information contained in them reorganized, updated, and supplemented. The main focus was on topics that provide the greatest practical support for testers.

The Foundation Level syllabus was created in order to:

• Emphasize testing as one of the core professional specialties within software engineering
• Create a standard framework for the professional development of testers
• Establish a system that enables recognition of testers' professional qualifications by employers, customers, and other testers, and that raises the status of testers
• Promote consistent good testing practices across all software engineering disciplines
• Identify testing issues that are relevant and valuable to the IT industry as a whole
• Create opportunities for software vendors to hire certified testers and gain a commercial advantage over their competitors by advertising their adopted tester recruitment policy
• Provide an opportunity for testers and those interested in testing to gain an internationally recognized qualification in the field

Existing basic certifications in the field of software testing (e.g., certifications issued by ISEB, ASQF, or ISTQB® national councils) that were awarded prior to the inception of the international certificate are considered equivalent to this certificate.


The basic certificate does not expire and does not need to be renewed. The date of certification is placed on the certificate. Local conditions in each participating country are the responsibility of ISTQB®’s national councils (or “Member Boards”). The responsibilities of national councils are defined by ISTQB®, while the implementation of these responsibilities is left to individual member organizations. The responsibilities of national councils typically include accreditation of training providers and scheduling of exams.

Career Paths for Testers

The system created by ISTQB® allows defining career paths for testing professionals based on a three-level certification program that includes the Foundation Level, the Advanced Level, and the Expert Level. The entry point is the Foundation Level described in this book. Holding the Foundation Level certification is a prerequisite for earning subsequent certifications.

The holders of a Foundation Level certificate can expand their knowledge of testing by obtaining an Advanced Level qualification. At this level, ISTQB® offers a number of educational programs. In terms of the core path, there are three possible programs:

• Technical Test Analyst (technology oriented: non-functional testing, static analysis, white-box test techniques, working with source code)
• Test Analyst (customer oriented: business understanding, functional testing, black-box test techniques, and experience-based testing)
• Test Manager (oriented toward test process and test team management)

The Advanced Level is the starting point for acquiring further knowledge and skills at the Expert Level. A person who has already gained experience as a test manager, for example, can further develop their career as a tester by obtaining Expert Level certifications in the areas of test management and test process improvement.

In addition to the core track, ISTQB® also offers specialized education programs on topics such as acceptance testing, artificial intelligence testing, automotive testing, gambling industry testing, game testing, mobile application testing, model-based testing, performance testing, security testing, test automation, and usability testing. For agile methodologies, there are the Agile Technical Tester and Agile Test Leadership at Scale programs.

Figure 1 shows the certification scheme offered by ISTQB® as of June 25, 2023. The latest version of the ISTQB® career path overview is available at www.istqb.org.

[Figure: the ISTQB® certification scheme diagram, organized into Agile, Core, and Specialist tracks across the Foundation, Advanced, and Expert Levels. Core: Foundation Level Certified Tester; Advanced Level Test Manager, Test Analyst, and Technical Test Analyst; Expert Level Test Management (Managing the Test Team, Operational Test Management, Strategic Test Management) and Improving the Test Process (Assessing Test Processes, Implementing Test Process Improvement). Agile: Agile Tester, Agile Technical Tester, Agile Test Leadership at Scale. Specialist: Acceptance Testing, AI Testing, Automotive Software Tester, Gambling Industry Tester, Game Testing, Mobile Application Testing, Model-Based Tester, Performance Testing, Security Tester, Test Automation Engineer, Usability Testing.]

Fig. 1 Official ISTQB® certification scheme (source: www.istqb.org)

Target Audience

The Foundation Level qualification is designed for anyone interested in software testing. This may include testers, test analysts, test engineers, test consultants, test managers, users performing acceptance testing, agile team members, and developers. In addition, the Foundation Level qualification is suitable for those looking to gain basic knowledge in software testing, such as project managers, quality managers, software development managers, business analysts, CIOs, and management consultants.

Objectives of the International Qualification System

• Laying the groundwork for comparing testing knowledge in different countries
• Making it easier for testers to find work in other countries
• Ensuring a common understanding of testing issues in international projects
• Increasing the number of qualified testers worldwide
• Creating an international initiative that will provide greater benefits and a stronger impact than initiatives implemented at the national level
• Developing a common international body of information and knowledge about testing on the basis of syllabuses and a glossary of testing terms, and raising the level of knowledge about testing among IT workers
• Promoting the testing profession in more countries
• Enabling testers to obtain widely recognized qualifications in their native language
• Creating conditions for testers from different countries to share knowledge and resources
• Ensuring international recognition of the status of testers and this qualification

Foundation Level Syllabus

Business Outcomes

Associated with each ISTQB® syllabus is a set of so-called business outcomes (business goals). A business outcome is a concise, defined, and observable result or change in business performance, supported by a specific measure. Table 1 lists the 14 business outcomes to which a candidate receiving the Foundation Level certification should contribute.

Table 1 Business outcomes pursued by a certified Foundation Level tester

FL-BO1   Understand what testing is and why it is beneficial
FL-BO2   Understand fundamental concepts of software testing
FL-BO3   Identify the test approach and activities to be implemented depending on the context of testing
FL-BO4   Assess and improve the quality of documentation
FL-BO5   Increase the effectiveness and efficiency of testing
FL-BO6   Align the test process with the software development life cycle
FL-BO7   Understand test management principles
FL-BO8   Write and communicate clear and understandable defect reports
FL-BO9   Understand the factors that influence the priorities and efforts related to testing
FL-BO10  Work as part of a cross-functional team
FL-BO11  Know risks and benefits related to test automation
FL-BO12  Identify essential skills required for testing
FL-BO13  Understand the impact of risk on testing
FL-BO14  Effectively report on test progress and quality

Learning Objectives and K-Levels

The content of each syllabus is created to cover the set of learning objectives established for that syllabus. The learning objectives support the achievement of the business goals and are used to create the exams for the Foundation Level certificate. Understanding what the learning objectives are and knowing the relation between learning objectives and exam questions are key to effective preparation for the certification exam.

All learning objectives are defined in the syllabus in such a way that each of them constitutes an indivisible whole. Learning objectives are defined at the beginning of each section of the syllabus, and each section of the syllabus deals with exactly one learning objective. This makes it possible to unambiguously link each learning objective (and each exam question) to a well-defined portion of the material. The section below outlines the learning objectives applicable to the Foundation Level syllabus. Knowledge of each topic covered in the syllabus will be tested on the exam according to the assigned learning objective.

Each learning objective is assigned a so-called knowledge level, also known as the cognitive level (or K level): K1, K2, or K3, which determines the degree to which a particular piece of material should be assimilated. The knowledge levels are presented next to each learning objective listed at the beginning of each chapter of the syllabus.

Level 1: Remember (K1)—the candidate remembers, recognizes, or recalls a term or concept.

Action verbs: identify, recall, remember, recognize.

Examples:
• "Identify typical test objectives."
• "Recall the concepts of the test pyramid."
• "Recognize how a tester adds value to iteration and release planning."

Level 2: Understand (K2)—the candidate can select the reasons or explanations for statements related to the topic and can summarize, compare, classify, and give examples for the testing concept.

Action verbs: classify, compare, contrast, differentiate, distinguish, exemplify, explain, give examples, interpret, summarize.

Examples:
• "Classify the different options for writing acceptance criteria."
• "Compare the different roles in testing" (look for similarities, differences, or both).
• "Distinguish between project risks and product risks" (allows concepts to be differentiated).
• "Exemplify the purpose and content of a test plan."
• "Explain the impact of context on the test process."
• "Summarize the activities of the review process."


Level 3: Apply (K3)—the candidate can carry out a procedure when confronted with a familiar task or select the correct procedure and apply it to a given context.

Action verbs: apply, implement, prepare, use.

Examples:
• "Apply test case prioritization."
• "Prepare a defect report."
• "Use boundary value analysis to derive test cases."

Reference for the cognitive levels of learning objectives: Anderson, L. W. and Krathwohl, D. R. (eds.) (2001) A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, Allyn & Bacon.
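To make the K3 ("apply") level concrete, here is a minimal sketch of what the learning objective "Use boundary value analysis to derive test cases" can look like in practice. The age-validation function and its limits are invented for illustration; they are not part of the syllabus or the exam.

# Hypothetical requirement: an input field accepts ages from 18 to 65 inclusive.
# Two-value boundary value analysis tests each boundary of the valid partition
# and its closest neighbor in the adjacent invalid partition: 17, 18, 65, 66.

def is_valid_age(age: int) -> bool:
    """Component under test (illustrative only)."""
    return 18 <= age <= 65

# Test cases derived with boundary value analysis: (input, expected result).
bva_test_cases = [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (65, True),   # upper boundary
    (66, False),  # just above the upper boundary
]

for age, expected in bva_test_cases:
    actual = is_valid_age(age)
    assert actual == expected, f"age={age}: expected {expected}, got {actual}"
print("All boundary value test cases passed.")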

Requirements for Candidates

Candidates taking the ISTQB® Certified Tester Foundation Level exam are only required to have an interest in software testing. However, it is strongly recommended that candidates:

• Have at least basic experience in software development or testing, such as 6 months' experience as a tester performing system testing or acceptance testing, or as a developer
• Have completed an ISTQB® training course (accredited by one of the ISTQB® national councils in accordance with the ISTQB® standards)

These are not formal requirements, although fulfilling them makes it significantly easier to prepare for and pass the certification exam. Anyone can take the exam, regardless of interest or experience as a tester.

References to Norms and Standards

The syllabus contains references to norms and standards (IEEE, ISO, etc.). The purpose of these references is to provide a conceptual framework or to refer the reader to a source of additional information. Note, however, that only those provisions of the referenced norms or standards to which specific sections of the syllabus refer may be the subject of the exam. The content of the norms and standards themselves is not examined; references to these documents are for informational purposes only.


The current version of the syllabus (v4.0) refers to the following standards:

• ISO/IEC/IEEE 29119—Software Testing Standard. This standard consists of several parts, the most important of which from the point of view of the syllabus are Part 1 (general concepts) [1], Part 2 (testing processes) [2], Part 3 (test documentation) [3], and Part 4 (test techniques) [4]. Part 3 of this standard replaces the withdrawn IEEE 829 standard.
• ISO/IEC 25010—System and Software Quality Requirements and Evaluation (aka SQuaRE), System and software quality models [5]. This standard describes the software quality model and replaces the withdrawn ISO 9126 standard.
• ISO/IEC 20246—Work Product Reviews [6]. This standard describes issues related to work product reviews. It replaces the withdrawn IEEE 1028 standard.
• ISO 31000—Risk Management, Principles and Guidelines [7]. This standard describes the risk management process.

Continuous Update

The IT industry is undergoing dynamic changes. To take the changing situation into account and ensure that stakeholders have access to useful, up-to-date information, ISTQB®'s working groups have created a list of links to supporting documents and changes in standards, available at www.istqb.org. This information is not covered by the Foundation Level exam.

Release Notes for Foundation Level Syllabus v4.0

The Foundation Level v4.0 syllabus includes best practices and techniques that have stood the test of time, but a number of significant changes have been made relative to the previous version (v3.1) published in 2018, in order to present the material in a more modern way, make it more relevant to the Foundation Level, and take into account changes that have occurred in software engineering in recent years. In particular:

• Greater emphasis is placed on methods and practices used in agile software development models (whole-team approach, shift-left approach, iteration planning and release planning, test pyramid, testing quadrants, and "test first" practices like TDD, BDD, or ATDD).
• The section on testing skills, particularly soft skills, has been expanded and deepened.
• The risk management section has been reorganized and better structured.
• A section discussing the DevOps approach has been added.
• A section discussing some test estimation techniques in detail has been added.
• The decision testing technique was replaced by branch testing.
• A section describing detailed review techniques has been removed.
• The use case-based testing technique has been removed (it is discussed in the Advanced Level syllabus "Test Analyst").
• A section discussing sample test strategies has been removed.
• Some tool-related content has been removed, in particular the issue of introducing a tool to an organization and conducting a pilot project.

Content of the Syllabus

Chapter 1. Fundamentals of Testing

• The reader learns the basic principles related to testing, the reasons why testing is required, and what the test objectives are.
• The reader understands the test process, the major test activities, and testware.
• The reader understands the essential skills for testing.

Learning Objectives

1.1. What is testing?
FL-1.1.1 (K1) Identify typical test objectives.
FL-1.1.2 (K2) Differentiate testing from debugging.

1.2. Why is testing necessary?
FL-1.2.1 (K2) Exemplify why testing is necessary.
FL-1.2.2 (K1) Recall the relation between testing and quality assurance.
FL-1.2.3 (K2) Distinguish between root cause, error, defect, and failure.

1.3. Testing principles
FL-1.3.1 (K2) Explain the seven testing principles.

1.4. Test activities, testware, and test roles
FL-1.4.1 (K2) Summarize the different test activities and tasks.
FL-1.4.2 (K2) Explain the impact of context on the test process.
FL-1.4.3 (K2) Differentiate the testware that supports the test activities.
FL-1.4.4 (K2) Explain the value of maintaining traceability.
FL-1.4.5 (K2) Compare the different roles in testing.

1.5. Essential skills and good practices in testing
FL-1.5.1 (K2) Give examples of the generic skills required for testing.
FL-1.5.2 (K1) Recall the advantages of the whole team approach.
FL-1.5.3 (K2) Distinguish the benefits and drawbacks of independence of testing.


Chapter 2. Testing Throughout the Software Development Life Cycle

• The reader learns how testing is incorporated into different development approaches.
• The reader learns the concepts of test-first approaches, as well as DevOps.
• The reader learns about the different test levels, test types, and maintenance testing.

Learning Objectives

2.1. Testing in the context of a software development life cycle
FL-2.1.1 (K2) Explain the impact of the chosen software development life cycle on testing.
FL-2.1.2 (K1) Recall good testing practices that apply to all software development life cycles.
FL-2.1.3 (K1) Recall the examples of test-first approaches to development.
FL-2.1.4 (K2) Summarize how DevOps might have an impact on testing.
FL-2.1.5 (K2) Explain the shift-left approach.
FL-2.1.6 (K2) Explain how retrospectives can be used as a mechanism for process improvement.

2.2. Test levels and test types
FL-2.2.1 (K2) Distinguish the different test levels.
FL-2.2.2 (K2) Distinguish the different test types.
FL-2.2.3 (K2) Distinguish confirmation testing from regression testing.

2.3. Maintenance testing
FL-2.3.1 (K2) Summarize maintenance testing and its triggers.


Chapter 3. Static Testing

• The reader learns about the basics of static testing and about the feedback and review process.

Learning Objectives

3.1. Static testing basics
FL-3.1.1 (K1) Recognize types of products that can be examined by the different static test techniques.
FL-3.1.2 (K2) Explain the value of static testing.
FL-3.1.3 (K2) Compare and contrast static and dynamic testing.

3.2. Feedback and review process
FL-3.2.1 (K1) Identify the benefits of early and frequent stakeholder feedback.
FL-3.2.2 (K2) Summarize the activities of the review process.
FL-3.2.3 (K1) Recall which responsibilities are assigned to the principal roles when performing reviews.
FL-3.2.4 (K2) Compare and contrast the different review types.
FL-3.2.5 (K1) Recall the factors that contribute to a successful review.

Chapter 4. Test Analysis and Design

• The reader learns how to apply black-box, white-box, and experience-based test techniques to derive test cases from various software work products.
• The reader learns about the collaboration-based test approach.

Learning Objectives

4.1. Test techniques overview
FL-4.1.1 (K2) Distinguish black-box, white-box, and experience-based test techniques.

4.2. Black-box test techniques
FL-4.2.1 (K3) Use equivalence partitioning to derive test cases.
FL-4.2.2 (K3) Use boundary value analysis to derive test cases.
FL-4.2.3 (K3) Use decision table testing to derive test cases.
FL-4.2.4 (K3) Use state transition testing to derive test cases.


4.3. White-box test techniques
FL-4.3.1 (K2) Explain statement testing.
FL-4.3.2 (K2) Explain branch testing.
FL-4.3.3 (K2) Explain the value of white-box testing.

4.4. Experience-based test techniques
FL-4.4.1 (K2) Explain error guessing.
FL-4.4.2 (K2) Explain exploratory testing.
FL-4.4.3 (K2) Explain checklist-based testing.

4.5. Collaboration-based test approaches
FL-4.5.1 (K2) Explain how to write user stories in collaboration with developers and business representatives.
FL-4.5.2 (K2) Classify the different options for writing acceptance criteria.
FL-4.5.3 (K3) Use acceptance test-driven development (ATDD) to derive test cases.

Chapter 5. Managing the Test Activities

• The reader learns how to plan tests in general and how to estimate test effort.
• The reader learns how risks can influence the scope of testing.
• The reader learns how to monitor and control test activities.
• The reader learns how configuration management supports testing.
• The reader learns how to report defects in a clear and understandable way.

Learning Objectives

5.1. Test planning
FL-5.1.1 (K2) Exemplify the purpose and content of a test plan.
FL-5.1.2 (K1) Recognize how a tester adds value to iteration and release planning.
FL-5.1.3 (K2) Compare and contrast entry criteria and exit criteria.
FL-5.1.4 (K3) Use estimation techniques to calculate the required test effort.
FL-5.1.5 (K3) Apply test case prioritization.
FL-5.1.6 (K1) Recall the concepts of the test pyramid.
FL-5.1.7 (K2) Summarize the testing quadrants and their relationships with test levels and test types.


5.2. Risk management
FL-5.2.1 (K1) Identify risk level by using risk likelihood and risk impact.
FL-5.2.2 (K2) Distinguish between project risks and product risks.
FL-5.2.3 (K2) Explain how product risk analysis may influence thoroughness and scope of testing.
FL-5.2.4 (K2) Explain what measures can be taken in response to analyzed product risks.

5.3. Test monitoring, test control, and test completion
FL-5.3.1 (K1) Recall metrics used for testing.
FL-5.3.2 (K2) Summarize the purposes, content, and audiences for test reports.
FL-5.3.3 (K2) Exemplify how to communicate the status of testing.

5.4. Configuration management
FL-5.4.1 (K2) Summarize how configuration management supports testing.

5.5. Defect management
FL-5.5.1 (K3) Prepare a defect report.

Chapter 6. Test Tools • The reader learns to classify tools and to understand the risks and benefits of test automation.

Learning Objectives 6.1 Tool support for testing FL-6.1.1

(K2) Explain how different types of test tools support testing.

6.2. Benefits and risks of test automation FL-6.2.1

(K1) Recall the benefits and risks of test automation.

Foundation Level Exam

Structure of the Exam

The description of the Foundation Level certification exam is given in a document entitled "Exam Structure Rules," which is available at www.istqb.org. The exam takes the form of a multiple-choice test and consists of 40 questions. To pass the exam, it is necessary to answer at least 65% of the questions correctly (i.e., 26 questions). The exam can be taken as part of an accredited training course or independently (e.g., at an examination center or in a public examination). Completion of an accredited course is not a prerequisite for taking the exam, but attending such a course is recommended, as it helps you better understand the material and significantly increases your chances of passing. If you fail the exam, you can retake it as many times as you like.

Exam Rules

• The Foundation Level exams are based on the Foundation Level syllabus [8].
• Answering an exam question may require using material from more than one section of the syllabus.
• All learning objectives included in the syllabus (with cognitive levels from K1 to K3) are subject to examination.
• All definitions of keywords in the syllabus are subject to examination (at the K1 level). An online glossary is available at www.glossary.istqb.org.
• Each Foundation Level exam consists of a set of multiple-choice questions based on the learning objectives for the Foundation Level syllabus. The level of coverage and the distribution of questions are based on the learning objectives, their K levels, and their level of importance according to the ISTQB® assessment. For details on the structure of each exam component, see the "Distribution of Questions" subsection below.
• In general, it is expected that the time to read, analyze, and answer a question at the K1 or K2 level should not exceed 1 min, while a K3 question may take 3 min. However, this is only a guideline for the average time; some questions will require more time and others less.
• The exam consists of 40 multiple-choice questions. Each correct answer is worth one point (regardless of the K level of the learning objective to which the question applies). The maximum possible score for the exam is 40 points.
• The time allotted for the exam is exactly 60 min. If the candidate's native language is not the language of the exam, the candidate is allowed an additional 25% of the time (for the Foundation Level exam, this means the exam will last 75 min).
• A minimum of 65% (26 points) is required to pass.

A general breakdown of questions by K level is shown in Table 1.

Table 1 Distribution of exam questions by K level

K-level | Number of questions | Time per question [in min] | Average time for level K [in min]
K1      | 8                   | 1                          | 8
K2      | 24                  | 1                          | 24
K3      | 8                   | 3                          | 24
Total   | 40                  |                            | 56

Rules and recommendations for writing multiple-choice questions can be found in the ISTQB® Exam Information document [9]. If you add up the expected time to answer the questions according to the rules given above, taking into account the distribution of questions by K-level, you will find that it should take about 56 min to answer all the questions. This leaves a 4-min reserve. Each exam question should test at least one learning objective (LO) from the Foundation Level syllabus. Questions may include terms and concepts from the K1-level sections, as candidates are expected to be familiar with them. If a question relates to more than one LO, it should refer (and be assigned) to the learning objective with the highest K-level.
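Worked out explicitly, the timing estimate follows directly from Table 1:

8 × 1 min + 24 × 1 min + 8 × 3 min = 8 + 24 + 24 = 56 min, and 60 min − 56 min = 4 min of reserve.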

Distribution of Questions

The structure of the Foundation Level exam is shown in Table 2. Each exam contains mandatory questions covering specific learning objectives, as well as a certain number of questions based on selected learning objectives. If the number of learning objectives is greater than the number of questions for a specific group of learning objectives described in the table, each question must cover a different learning objective.

Table 2 Detailed distribution of exam questions by K-levels and chapters

Group of learning objectives                         K-level   Questions from group   Points per question
Chapter 1 (a total of 8 questions: K1 = 2, K2 = 6, K3 = 0; 8 points)
FL-1.1.1, FL-1.2.2                                   K1        1                      1
FL-1.5.2                                             K1        1                      1
FL-1.1.2, FL-1.2.1, FL-1.2.3                         K2        1                      1
FL-1.3.1                                             K2        1                      1
FL-1.4.1, FL-1.4.2, FL-1.4.3, FL-1.4.4, FL-1.4.5     K2        3                      1
FL-1.5.1, FL-1.5.3                                   K2        1                      1
Chapter 2 (a total of 6 questions: K1 = 2, K2 = 4, K3 = 0; 6 points)
FL-2.1.2                                             K1        1                      1
FL-2.1.3                                             K1        1                      1
FL-2.2.1, FL-2.2.2                                   K2        1                      1
FL-2.2.3, FL-2.3.1                                   K2        1                      1
FL-2.1.1, FL-2.1.6                                   K2        1                      1
FL-2.1.4, FL-2.1.5                                   K2        1                      1
Chapter 3 (a total of 4 questions: K1 = 2, K2 = 2, K3 = 0; 4 points)
FL-3.1.1, FL-3.2.1, FL-3.2.3, FL-3.2.5               K1        2                      1
FL-3.1.2, FL-3.1.3                                   K2        1                      1
FL-3.2.2, FL-3.2.4                                   K2        1                      1
Chapter 4 (a total of 11 questions: K1 = 0, K2 = 6, K3 = 5; 11 points)
FL-4.1.1                                             K2        1                      1
FL-4.3.1, FL-4.3.2, FL-4.3.3                         K2        2                      1
FL-4.4.1, FL-4.4.2, FL-4.4.3                         K2        2                      1
FL-4.5.1, FL-4.5.2                                   K2        1                      1
FL-4.2.1, FL-4.2.2, FL-4.2.3, FL-4.2.4, FL-4.5.3     K3        5                      1
Chapter 5 (a total of 9 questions: K1 = 1, K2 = 5, K3 = 3; 9 points)
FL-5.1.2, FL-5.1.6, FL-5.2.1, FL-5.3.1               K1        1                      1
FL-5.1.1, FL-5.1.3                                   K2        1                      1
FL-5.1.7                                             K2        1                      1
FL-5.2.2, FL-5.2.3, FL-5.2.4                         K2        1                      1
FL-5.3.2, FL-5.3.3                                   K2        1                      1
FL-5.4.1                                             K2        1                      1
FL-5.1.4, FL-5.1.5, FL-5.5.1                         K3        3                      1
Chapter 6 (a total of 2 questions: K1 = 1, K2 = 1, K3 = 0; 2 points)
FL-6.1.1                                             K2        1                      1
FL-6.2.1                                             K1        1                      1
SUMMARY: a total of 40 questions, 40 points, 60 min

The analysis of Table 2 shows that the following 17 learning objectives are certain to be covered and examined:
• Four K1-level questions (FL-1.5.2, FL-2.1.2, FL-2.1.3, FL-6.2.1)
• Five K2-level questions (FL-1.3.1, FL-4.1.1, FL-5.1.7, FL-5.4.1, FL-6.1.1)
• Eight questions covering all eight learning objectives at the K3 level
Each of the remaining 23 questions will be selected from a group of two or more learning objectives. Since it is not known which of these learning objectives will be covered, the candidate must master all the material in the syllabus (all learning objectives) anyway.

Tips: Before and During the Exam

In order to successfully pass the exam, the first thing you need to do is carefully read the syllabus and the glossary of terms whose knowledge is required at the Foundation Level, because the exam is based on these two documents. It is also advisable to solve sample test questions and sample exams. On the ISTQB® website (www.istqb.org), you can find official sample exam sets in English, and on the Member Boards' websites, in other languages. The list of all ISTQB® member boards and their websites is published at www.istqb.org/certifications/member-board-list. This publication, in addition to a number of sample questions for each chapter of the syllabus, also includes the official ISTQB® sample exam.


During the exam itself, you should:
• Read the questions carefully—sometimes one word changes the whole meaning of the question or is a clue to the correct answer!
• Pay attention to keywords (e.g., in what software development life cycle model the project is run).
• Try to match the question with the learning objective—then it will be easier to understand the idea of the question and to justify the correctness or incorrectness of individual answers.
• Be careful with questions containing negation (e.g., "which of the following is NOT...")—in such questions, three answers will be true statements, and one will be a false statement. You need to indicate the answer containing the false statement.
• Choose the option that directly answers the question. Some answers may be completely correct sentences but not answer the question asked—for example, the question is about the risks of automation, and one of the answers mentions some benefit of automation.
• Guess if you don't know which answer to choose—there are no negative points, so it doesn't pay to leave questions unanswered.
• Remember that answers with strong, categorical phrases (e.g., "always," "must be," "never," "in any case") are usually incorrect—although this rule may not apply in all cases.

Part II

The Syllabus Content

Chapter 1 Fundamentals of Testing

Keywords:

Coverage: the degree to which specified coverage items are exercised by a test suite, expressed as a percentage. Synonyms: test coverage.
Debugging: the process of finding, analyzing, and removing the causes of failures in a component or system.
Defect: an imperfection or deficiency in a work product where it does not meet its requirements or specifications. After ISO 24765. Synonyms: bug, fault.
Error: a human action that produces an incorrect result. After ISO 24765. Synonyms: mistake.
Failure: an event in which a component or system does not perform a required function within specified limits. After ISO 24765.
Quality: the degree to which a work product satisfies stated and implied needs of its stakeholders. After IREB.
Quality assurance: activities focused on providing confidence that quality requirements will be fulfilled. Abbreviation: QA. After ISO 24765.
Root cause: a source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed. References: CMMI.
Test analysis: the activity that identifies test conditions by analyzing the test basis.
Test basis: the body of knowledge used as the basis for test analysis and design. After TMap.


Test case: a set of preconditions, inputs, actions (where applicable), expected results, and postconditions, developed based on test conditions.
Test completion: the activity that makes testware available for later use, leaves test environments in a satisfactory condition, and communicates the results of testing to relevant stakeholders.
Test condition: a testable aspect of a component or system identified as a basis for testing. References: ISO 29119-1. Synonyms: test situation, test requirement.
Test control: the activity that develops and applies corrective actions to get a test project on track when it deviates from what was planned.
Test data: data needed for test execution. Synonyms: test dataset.
Test design: the activity that derives and specifies test cases from test conditions.
Test execution: the activity that runs a test on a component or system producing actual results.
Test implementation: the activity that prepares the testware needed for test execution based on test analysis and design.
Test monitoring: the activity that checks the status of testing activities, identifies any variances from planned or expected, and reports status to stakeholders.
Test object: the work product to be tested.
Test objective: the purpose for testing. Synonyms: test goal.
Test planning: the activity of establishing or updating a test plan.
Test procedure: a sequence of test cases in execution order and any associated actions that may be required to set up the initial preconditions and any wrap-up activities post execution. References: ISO 29119-1.
Test result: the consequence/outcome of the execution of a test. Synonyms: outcome, test outcome, result.
Testing: the process within the software development life cycle that evaluates the quality of a component or system and related work products.
Testware: work products produced during the test process for use in planning, designing, executing, evaluating, and reporting on testing. After ISO 29119-1.
Validation: confirmation by examination that a work product matches a stakeholder's needs. After IREB.
Verification: confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled. References: ISO 9000.


1.1 What Is Testing?
FL-1.1.1 (K1) Identify typical test objectives.
FL-1.1.2 (K2) Differentiate testing from debugging.

Nowadays, there is probably no area of life where software is not used to a greater or lesser extent. Information systems play an increasingly important role in our lives, from business solutions (the banking sector, insurance) to consumer devices (cars), entertainment (computer games), and communications. Using software that contains defects can:
• Cause a loss of money or time
• Cause a loss of customer confidence
• Make it difficult to gain new customers
• Lead to elimination from the market
• In extreme situations, cause a threat to health or life

Testing of software enables the assessment of software quality and contributes to reducing the risk of software failure in operation. Therefore, good testing is essential for project success. Software testing is a set of activities carried out to facilitate the detection of defects and to evaluate the properties of software artifacts. Each artifact under test is known as a test object. Many people, including those working in the IT industry, mistakenly think of testing as just executing tests, that is, running software to find defects. However, executing tests is only part of testing. There are other activities involved in testing, which occur both before (items 1-5 below) and after (item 7 below) test execution. These are:
1. Test planning
2. Test monitoring and test control
3. Test analysis
4. Test design
5. Test implementation
6. Test execution
7. Test completion

Testing activities are organized and carried out differently in different software development life cycle (SDLC) models (see Chap. 2). Moreover, testing is often seen as an activity focused solely on verification of requirements, user stories, or other forms of specification (i.e., checking that the system meets the specified requirements). But testing also includes validation, that is, checking that the system meets user requirements and other stakeholder needs in its operational environment. Testing may require running the component or system under test—then we have what is called dynamic testing. You can also perform testing without running the


object under test—such testing is called static testing. Testing thus also includes reviews of work products such as:
• Requirements
• User stories
• Source code
Static testing is described in more detail in Chap. 3. Dynamic testing uses different types of test techniques (e.g., black-box, white-box, and experience-based) to derive test cases and is described in detail in Chap. 4. Testing is not just a technical activity. The testing process must also be properly planned, managed, estimated, monitored, and controlled (see Chap. 5). Testers extensively use various types of tools in their daily work (see Chap. 6), but it is important to remember that testing is largely an intellectual, sapient activity, requiring testers to have specialized knowledge, analytical skills, critical thinking, and systems thinking [10, 11]. ISO/IEC/IEEE 29119-1 [1] provides additional information on the concept of software testing. It is worth remembering that testing is a technical study carried out to obtain information about the quality of the test object:
• Technical—because we use an engineering approach, using experiments, experience, formal techniques, mathematics, logic, tools (supporting programs), measurements, etc.
• Study—because it is a continuous, organized search for information

1.1.1 Test Objectives

Testing enables the detection of failures or defects in the work product under test. This fundamental property of testing makes it possible to achieve a number of objectives. The primary test objectives are:
• Evaluating work products such as requirements, user stories, designs, and code
• Triggering failures and finding defects
• Ensuring required coverage of a test object
• Reducing the level of risk of inadequate software quality
• Verifying whether specified requirements have been fulfilled
• Verifying that a test object complies with contractual, legal, and regulatory requirements
• Providing information to stakeholders to allow them to make informed decisions
• Building confidence in the quality of the test object
• Validating whether the test object is complete and works as expected by the stakeholders


Different goals require different testing strategies. For example, in the case of component testing—i.e., testing individual pieces of an application/system (see Sect. 2.2)—the goal may be to trigger as many failures as possible so that the defects causing them can be identified and fixed early. One may also aim to increase code coverage through component testing. In acceptance testing (see Sect. 2.2), on the other hand, the goals may be:
• To confirm that the system works as expected and meets its (user) requirements
• To provide stakeholders with information on the risks involved in releasing the system at a given time
In acceptance testing [especially user acceptance testing (UAT)], we do not expect to detect a large number of failures or defects, as this may lead to a loss of confidence by future users (see Sect. 2.2.1.4). These failures or defects should be detected at earlier stages of testing.

1.1.2 Testing and Debugging

Some people think that testing is about debugging. However, it is important to remember that testing and debugging are two different activities. Testing (especially dynamic testing) is supposed to reveal failures caused by defects. Debugging, on the other hand, is a programming activity performed to identify the cause of a defect (fault), correct the code, and verify that the defect has been correctly fixed. When dynamic tests detect a failure, a typical debugging process will consist of the following steps:
• Failure reproduction (in order to make sure that the failure actually occurs and so that it can be triggered in a controlled manner in the subsequent debugging process)
• Diagnosis (finding the cause of the failure, such as locating the defect responsible for the occurrence of that failure)
• Fixing the cause (eliminating the cause of the failure, such as fixing a defect in the code)
The subsequent confirmation testing (re-testing) performed by the tester is to ensure that the fix actually eliminated the failure. Most often, confirmation testing is performed by the same person who performed the original test that revealed the problem. Regression testing can also be performed after the fix to verify that the fix in the code did not cause the software to malfunction elsewhere. Confirmation testing and regression testing are discussed in detail in Sect. 2.2.3. When static testing discovers a defect, the debugging process is simply to eliminate the defect. It is not necessary, as in the case of failure discovery in dynamic testing, to perform failure reproduction and diagnosis, because in static testing, the work product under test is not run. The related source code might not even have been


Fig. 1.1 Software failure

created. This is because static testing does not find failures but directly identifies defects. Static testing is discussed in detail in Chap. 3.

Example
Consider a simplified version of the problem described by Myers [10]. We are testing a program that receives three natural numbers, a, b, c, as the input. The program answers "yes" or "no," depending on whether a triangle can be constructed from segments with sides of lengths a, b, c. The program is in the form of an executable file triangle.exe and takes input values from the keyboard. The tester prepared several test cases. In particular, the tester ran the program for the input data a = b = c = 16,500 (a test case involving the entry of very large, but valid input values) and received the result presented in Fig. 1.1. The program answered "no," i.e., it stated that it is impossible to build a triangle from three segments of length 16,500 each. This is a failure, because the expected result is "yes"—it is possible to build an equilateral triangle from such segments. The tester reported this defect to the developer. The developer repeated the test case and got the same result. The developer began to analyze the code, which looks as follows:

int main(int argc, _TCHAR* argv[])
{
    short a, b, c, d;
    scanf("%d", &a);
    scanf("%d", &b);
    scanf("%d", &c);
    d = a + b;
    if (abs(a - b) ...

4.2.3 Decision Table Testing

Application
Decision tables are used to test the implementation of requirements expressed as business rules, i.e., rules of the form "IF (combination of conditions) THEN (actions)," for example:
• IF (a+b>c>0 AND a+c>b>0 AND b+c>a>0) THEN (you can build a triangle with sides of length a, b, c);
• IF (monthlySalary > 10000) THEN (grant bank loan AND offer a gold card).
Decision tables allow us to systematically test the correctness of the implementation of combinations of conditions. This is one of the so-called combinatorial techniques.


Table 4.4 Example decision table

                              Business rules 1-8
Conditions                    1    2    3    4    5    6    7    8
Has a loyalty card?           YES  YES  YES  YES  NO   NO   NO   NO
Total amount > $1000?         YES  YES  NO   NO   YES  YES  NO   NO
Shopping in last 30 days?     YES  NO   YES  NO   YES  NO   YES  NO
Actions
Granted discount              10%  5%   5%   0%   0%   0%   0%   0%
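To make the rules of Table 4.4 concrete, here is a minimal sketch in C (our own illustration, not code from the book or the syllabus; the function discount() and all names are assumptions) that implements the business rules and derives one test case per column of the table, comparing the actual discount with the expected one:

#include <stdio.h>

/* Assumed implementation of the business rules of Table 4.4:
   10% for (card, > $1000, purchases in the last 30 days),
   5% if the customer has a card and exactly one of the other
   two conditions holds, 0% otherwise. */
int discount(int has_card, int over_1000, int recent)
{
    if (!has_card)
        return 0;               /* rules 5-8 */
    if (over_1000 && recent)
        return 10;              /* rule 1 */
    if (over_1000 || recent)
        return 5;               /* rules 2 and 3 */
    return 0;                   /* rule 4 */
}

int main(void)
{
    /* expected discounts for rules 1-8, in the column order of Table 4.4 */
    int expected[8] = { 10, 5, 5, 0, 0, 0, 0, 0 };
    int rule = 0;

    /* one test case per column: the condition values are the inputs,
       the action entry is the expected output */
    for (int card = 1; card >= 0; card--)
        for (int amount = 1; amount >= 0; amount--)
            for (int recent = 1; recent >= 0; recent--, rule++)
                printf("rule %d: %s\n", rule + 1,
                       discount(card, amount, recent) == expected[rule]
                           ? "pass" : "FAIL");
    return 0;
}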

For more information on these techniques, see the Advanced Level—Test Analyst syllabus [28].

Construction of the Decision Table
We will describe the construction of the decision table using an example (see Table 4.4). The decision table consists of two parts, describing conditions (upper part) and actions (lower part), respectively. The individual columns describe business rules. Table 4.4 describes the rules for assigning a discount on purchases depending on three factors describing the customer in question:
• Does the customer have a loyalty card? (YES or NO)
• Does the total amount of purchases exceed $1000? (YES or NO)
• Has the customer made purchases in the last 30 days? (YES or NO)
Based on the answers to these questions, a discount is assigned: 0%, 5%, or 10%.

Example
A customer has a loyalty card and has so far made purchases of $1250, and the last purchases took place 5 days ago. This situation corresponds to rule 1 (has a loyalty card, total amount >$1000, shopping in the last 30 days). So, the system should assign a 10% discount to this customer.

Deriving Test Cases from the Decision Table
The process of creating specific test cases using decision tables can be presented in the following five steps.
Step 1. Identify all possible single conditions (from the test basis, e.g., based on specifications, customer conversations, or common sense), and list them in consecutive rows in the upper part of the table. If needed, split compound conditions into single conditions. Conditions usually appear in specifications as sentence fragments preceded by words such as "if," "in the event that," etc.
Step 2. Identify all corresponding actions that can occur in the system and that depend on these conditions (also derived from the test basis), and list them in consecutive lines in the lower part of the table. Actions usually appear in specifications as sentence fragments preceded by words such as "then," "in this case," "the system should," etc.


Step 3. Generate all combinations of conditions, and eliminate infeasible combinations. For each feasible combination, a separate column is created in the table, with the values of each condition listed in this column.
Step 4. For each identified combination of conditions, identify which actions should occur and how. This results in completing the bottom part of the corresponding column of the decision table.
Step 5. For each column of the decision table, design a test case in which the test input represents the combination of conditions specified in this column. The test is passed if, after its execution, the system takes the actions described in the bottom part of the table in the corresponding column. These action entries serve as the expected output for the test case.

Notation and Possible Entries in the Decision Table
Typically, condition and action values take the form of the logical values TRUE and FALSE. They can be represented in various ways, such as the symbols T and F, Y and N (yes/no), 1 and 0, or the words "true" and "false." However, the values of conditions and actions can in general be any objects, such as numbers, ranges of numbers, category values, equivalence partitions, and so on. For example, in our table (Table 4.4), the values of the "granted discount" action are categories expressing different types of discounts: 0%, 5%, and 10%. In the same table, there can be conditions and actions of different types, e.g., logical, numerical, and categorical conditions can occur simultaneously. A decision table with only Boolean (true/false) values is called a limited-entry decision table. If any condition or action has other than Boolean entries, such a table is called an extended-entry decision table.

How to Determine All Combinations of Conditions
If we need to manually determine combinations of conditions, and we are afraid that we will miss some combinations, we can use a very simple tree method to systematically determine all combinations. Consider the following example:

Example
Suppose a decision table has three conditions:
• Earnings (two possible values—S, small; L, large)
• Age (three possible values—Y, young; MA, middle-aged; O, old)
• Place of living (two possible values—C, city; V, village)
To create all combinations of values of the triples (earnings, age, place of living), we build a tree, from the root of which we derive all possibilities of the first condition (earnings). This is the first level of the tree. Next, from each vertex of this level, we derive all the possibilities of the second condition (age). We get the second level of the tree. Finally, from each vertex of this level, we derive all the possible values of the third condition (place of living). If there were more conditions, we would proceed analogously. Our final tree looks like the one in Fig. 4.8. Each possible combination of conditions is a combination of vertex labels on the paths leading from the root (the vertex at the top of the tree) to any vertex at


Fig. 4.8 Support tree for identifying combinations of conditions

the lowest level (in our case, 12; this, of course, follows from the number of combinations: 2 × 3 × 2 = 12). In the figure, the bold lines indicate the path corresponding to the example combination (S, MA, C), which means small earnings, middle age, and city as the place of living. We can now enter each of these combinations into the individual columns of our decision table. At the same time, we are sure that no combination has been left out. The conditions of the decision table thus created are shown in Table 4.5.

Infeasible Combinations
Sometimes, after listing all possible combinations of conditions, you may find that some of them are infeasible for various reasons. For example, suppose we have the following two conditions in the decision table:
• Customer's age >18? (possible values: YES, NO)
• Customer's age ≤18? (possible values: YES, NO)
It is obvious that although we have four possible combinations of these conditions, (YES, YES), (YES, NO), (NO, YES), and (NO, NO), only two of them are feasible, (YES, NO) and (NO, YES), since one cannot be both more than 18 and at most 18 years old at the same time (nor can one's age violate both conditions at once). Of course, in this case, we could replace these two conditions with one, "age," with two possible values: greater than 18 and less than or equal to 18. Sometimes, the decision table will not contain some combinations not for purely logical reasons but for semantic reasons. For example, if we have two conditions:
• Was the goal defined? (YES, NO)
• Was the goal achieved? (YES, NO)
then the combination (NO, YES) is infeasible (nonsensical), because it is impossible to achieve a goal that has not previously been defined.


Table 4.5 Combinations of decision table conditions formed from the support tree

                     1   2   3   4   5   6   7   8   9   10  11  12
Earnings             S   S   S   S   S   S   L   L   L   L   L   L
Age                  Y   Y   MA  MA  O   O   Y   Y   MA  MA  O   O
Place of residence   C   V   C   V   C   V   C   V   C   V   C   V
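The tree method is essentially a Cartesian product of the condition values; a short sketch (our own illustration, with assumed names, not code from the book) that prints the 12 combinations of Table 4.5 might look as follows:

#include <stdio.h>

int main(void)
{
    const char *earnings[] = { "S", "L" };        /* small, large       */
    const char *age[]      = { "Y", "MA", "O" };  /* young, middle, old */
    const char *place[]    = { "C", "V" };        /* city, village      */
    int n = 0;

    /* one nested loop per level of the support tree in Fig. 4.8 */
    for (int e = 0; e < 2; e++)
        for (int a = 0; a < 3; a++)
            for (int p = 0; p < 2; p++)
                printf("%2d: %-2s %-2s %s\n",
                       ++n, earnings[e], age[a], place[p]);

    /* n is now 12, i.e., 2 x 3 x 2 combinations, as in Table 4.5 */
    return 0;
}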

Minimizing the Decision Table
Sometimes, some conditions may have no effect on the actions taken by the system. For example, if the system allows only adult customers to buy insurance, and depending on whether they smoke or not they get a discount on that insurance, then as long as the customer is a minor, the system will not allow them to buy insurance, regardless of whether the customer smokes or not. Such irrelevant values are most often marked in the decision table with a dash symbol or N/A (not applicable). Typically, a dash is used when the corresponding condition occurs, but its value is irrelevant for determining the action. The N/A symbol, on the other hand, is used when the condition cannot occur at all. For example, consider two conditions: "payment type" (card or cash) and "is the PIN correct?" (yes or no). If the payment type is cash, we cannot even check the value of the condition "is the PIN correct?" because the condition does not occur at all. So we have only three possible combinations of conditions: (card, yes), (card, no), and (cash, N/A). This minimization, or collapsing, makes the decision table more compact, with fewer columns and therefore fewer test cases to execute. On the other hand, for a column with an irrelevant value, the actual test case still has to use some specific, concrete value for this condition. So, there is a risk that if a defect occurs only for a specific combination of values marked as irrelevant in the decision table, we may easily miss it, because that combination is never tested. Minimizing decision tables therefore involves a risk trade-off: perhaps the current mock-up of the GUI does not allow a certain input, but the actual implementation, or a future API, might.
Consider the collapsed decision table shown in Table 4.6. If we wanted to design a concrete test case for the first column, we would have to decide whether the customer smokes or not (although from the point of view of the specification this is irrelevant), because this input must be given. We can decide on the combination (adult = NO, smokes = NO), and such a test will pass, but we can imagine that due to some defect in the code, the program does not work properly for the combination (adult = NO, smokes = YES). Such a combination has not been tested by us, and the failure will not be detected. On the actual exam, there may be questions involving minimized decision tables, but the candidate is not required to be able to perform the minimization, only to understand, interpret, and use decision tables that are already minimized. Minimization is required on the Advanced Level—Test Analyst certification exam. Therefore, in this book, we do not present an algorithm for minimizing decision tables.

Coverage
In decision table testing, the coverage items are the individual columns of the table, containing the possible combinations of conditions (i.e., the so-called feasible columns).

Table 4.6 Decision table with irrelevant values

CONDITIONS         1    2    3
Adult?             NO   YES  YES
Smokes?            –    YES  NO
ACTIONS
Grant insurance?   NO   YES  YES
Grant discount?    NO   NO   YES

For a given decision table, full 100% coverage requires that at least one test case corresponding to each feasible column be prepared and executed. The test is passed if the system actually executes the actions defined for that column. The important thing is that coverage counts against the number of (feasible) columns of the decision table, not against the number of all possible combinations of conditions. Usually, these two numbers are equal, but in the case of the occurrence of infeasible combinations, as discussed earlier, this might not be the case. For example, in order to achieve 100% coverage for the decision table in Table 4.6, we need three (not four, as the number of combinations would suggest) test cases. If we had the following tests:
• Adult = YES, smokes = YES
• Adult = NO, smokes = YES
• Adult = NO, smokes = NO
then we would achieve 2/3 coverage (about 66%), since the last two test cases cover the same, first column of the table.

Decision Tables as a Static Testing Technique
Decision table testing is excellent for detecting problems with requirements, such as their absence or contradiction. Once the decision table is created from the specification, or even while it is still being created, it is very easy to discover such specification problems as:
• Incompleteness—no defined actions for a specific combination of conditions
• Contradiction—defining, in two different places of the specification, two different behaviors of the system for the same combination of conditions
• Redundancy—defining the same system behavior in two different places in the specification (perhaps described differently)
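Such checks can also be mechanized. Here is a small sketch (our own illustration, with an assumed encoding: the three YES/NO conditions of Table 4.4 packed into a 3-bit mask) that reports the three specification problems named above for a list of table columns:

#include <stdio.h>

#define COMBOS 8   /* 2^3 combinations of three Boolean conditions */

typedef struct {
    int conditions;   /* bit 2: loyalty card, bit 1: > $1000, bit 0: last 30 days */
    int action;       /* granted discount in percent */
} Column;

/* Table 4.4 encoded column by column (rule 1 = binary 111 = 7, ...) */
static const Column table[] = {
    { 7, 10 }, { 6, 5 }, { 5, 5 }, { 4, 0 },
    { 3, 0 },  { 2, 0 }, { 1, 0 }, { 0, 0 }
};

int main(void)
{
    int n = (int)(sizeof table / sizeof table[0]);
    int seen[COMBOS] = { 0 };
    int action_of[COMBOS] = { 0 };

    for (int i = 0; i < n; i++) {
        int c = table[i].conditions;
        if (seen[c] && action_of[c] != table[i].action)
            printf("contradiction for combination %d\n", c);
        else if (seen[c])
            printf("redundancy for combination %d\n", c);
        seen[c] = 1;
        action_of[c] = table[i].action;
    }
    for (int c = 0; c < COMBOS; c++)
        if (!seen[c])
            printf("incompleteness: no actions for combination %d\n", c);

    return 0;   /* Table 4.4 is complete and consistent: no output */
}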

4.2.4 State Transition Testing

Application
State transition testing is a technique used to check the behavior of a component or system. Thus, it checks its behavioral aspect—how it behaves over time and how it changes its state under the influence of various types of events. The model describing this behavioral aspect is the so-called state transition diagram. In the literature, different variants of this model are called a finite automaton, finite state automaton, state machine, or labeled transition system. The


syllabus uses the name "state transition diagram" to denote the graphical form of the state transition model and "state transition table" to denote the equivalent, tabular form of the model.

Construction of the State Transition Diagram
A state transition diagram is a graphical model, as described in the UML standard. From a theoretical point of view, it is a labeled directed graph. The state transition diagram consists of the following elements:
• States—represent possible situations in which the system may be
• Transitions—represent possible (correct) changes of states
• Events—represent phenomena, usually external to the system, whose occurrence triggers the corresponding transitions
• Actions—activities that the system can take during the transition between states
• Guard conditions—logical conditions associated with transitions; a transition can be executed only if the associated guard condition is true
Figure 4.9 shows an example of a state transition diagram. It is not trivial, although in practice even more complicated models are often used, with a richer notation than that discussed in the syllabus. However, this diagram allows us to show all the essential elements of the state transition model described in the syllabus while keeping the example practical. The diagram represents a model of the system's behavior for making a phone call to a cell phone user with a specific number. The user dials a nine-digit phone number by pressing the keys corresponding to successive digits of the number one by one. When the ninth digit is entered, the system automatically attempts the call. The state transition diagram modeling this system consists of five states (rectangles). The possible transitions between them are indicated by arrows. The system starts in the initial state Welcome screen and waits for an event (labeled EnterDigit) involving the user selecting the first digit of a nine-digit phone number. When this event occurs, the system transitions to the Entering state, and in addition, during this transition, the system performs an action that sets the value of the variable x to 1. This variable represents the number of digits of the dialed phone number entered by the user so far. In the Entering state, only the EnterDigit event can occur, but depending on how many digits have been entered so far, transitions to two different states are possible. As long as the user has not entered the ninth digit, the system remains in the Entering state, each time increasing the variable x by one. This is because the guard condition "x < 8" is true in this situation. Just before the user enters the last, ninth digit, the variable x has the value 8. This means that the guard condition "x < 8" is false and the guard condition "x = 8" is true. Therefore, selecting the last, ninth digit of the number will switch from the Entering state, under the EnterDigit event, to the Connect state. When the call succeeds (the occurrence of the ConnectionOK event), the system enters the Call state, in which it remains until the user terminates the call, which is signaled by the occurrence of the EndConnection event. At this point, the system goes to the final state End, and its operation ends. If the system is in the Connect state and the ConnectionError event occurs, the system transitions to the final state, but unlike the analogous transition from the Call state, it will additionally perform the


Fig. 4.9 Example of state transition diagram

ErrorMessage action, signaling to the user the inability to connect to the selected number. The system at each moment of time is in exactly one of the states, and the change of states occurs as a result of the occurrence of corresponding events. The concept of state is abstract. A state can denote a very-high-level situation (e.g., the system being in a certain application screen), but it can also describe low-level situations (e.g., the system executing a certain program statement). The level of abstraction depends on the model adopted, i.e., what the model actually describes and at what level of generality. It is assumed that when a certain event occurs, the change of states is instantaneous (it can be assumed that it is a zero-duration event). The transition labels (i.e., the labels of the arrows on the diagram) are in general of the form:

event [guard condition] / action

If, in a given case, the guard condition or action does not exist or is not relevant from the tester's point of view, it can be omitted. Thus, transition labels can also take one of the following three forms:

event
event / action
event [guard condition]

A guard condition for a given transition allows that transition to be executed only if the condition holds. Guard conditions allow us to define two different transitions under the same event while avoiding nondeterminism. For example, in the diagram in Fig. 4.9, from the Entering state, we have two transitions under the EnterDigit event, but only one of them can be executed at any time, because the corresponding guard conditions are disjoint (either x is less than 8 or x is equal to 8). An example test case for the state transition diagram in Fig. 4.9, verifying the correctness of a successful connection, might look like the one in Table 4.7.


Table 4.7 Example of a sequence of transitions in the state transition diagram of Fig. 4.9

Step   State            Event           Action       Next state
1      Welcome screen   EnterDigit      x := 1       Entering
2      Entering         EnterDigit      x := x + 1   Entering
3      Entering         EnterDigit      x := x + 1   Entering
4      Entering         EnterDigit      x := x + 1   Entering
5      Entering         EnterDigit      x := x + 1   Entering
6      Entering         EnterDigit      x := x + 1   Entering
7      Entering         EnterDigit      x := x + 1   Entering
8      Entering         EnterDigit      x := x + 1   Entering
9      Entering         EnterDigit      x := 9       Connect
10     Connect          ConnectionOK                 Call
11     Call             EndConnection                End

The scenario is triggered by a sequence of 11 events: EnterDigit (9 times), ConnectionOK, and EndConnection. In each step, we trigger the corresponding event and check whether, after its occurrence, the system actually goes to the state described in the Next state column.

Equivalent Forms of the State Transition Diagram
There are at least two equivalent forms of representation of the state transition diagram. Consider a simple, four-state machine with states S1, S2, S3, and S4 and events A, B, and C. The state transition diagram is shown in Fig. 4.10a. Equivalently, however, it can be presented in the form of a state table, where the individual rows (respectively, columns) of the table represent successive states (respectively, events, together with guard conditions if they exist), and the cell at the intersection of the row and column corresponding to a given state S and event E contains the target state to which the system is to transition if, being in state S, event E occurs. For example, if the system is currently in state S2 and event B has occurred, the system is to move to state S1. If our diagram used actions, we would have to annotate them in the corresponding cells of the tables in Fig. 4.10b, c. One more way of representing the state machine is the full transition table shown in Fig. 4.10c. Here the table represents all possible combinations of states, events, and guard conditions (if they exist). Since we have four states, three different events, and no guard conditions, the table will contain 4 × 3 = 12 rows. The last column contains the target state to which the machine is to transition if, when it is in the state defined in the first column of the table, the event described in the second column of the table occurs. If the transition in question is undefined, the Next state column indicates this, e.g., with a dash or another fixed symbol. The absence of a transition (i.e., the absence of a state/event combination) on the state transition diagram is represented simply by the absence of the corresponding arrow. For example, being in state S1, the machine has no defined behavior for the occurrence of event B. Therefore, there is no outgoing arrow from S1 labeled with B. Both the state table and the full transition table allow us to directly show so-called invalid transitions, which can also be, and in some situations should be, tested.
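The state table of Fig. 4.10b maps naturally onto a two-dimensional lookup array. In the sketch below (our own illustration, under assumed names), the empty cells, i.e., the invalid transitions, are encoded as -1:

#include <stdio.h>

enum { S1, S2, S3, S4, NSTATES };
enum { A, B, C, NEVENTS };

/* next_state[state][event]; -1 represents an empty cell (invalid transition) */
static const int next_state[NSTATES][NEVENTS] = {
    /*         A    B    C  */
    /* S1 */ { S2,  -1,  S3 },
    /* S2 */ { S2,  S1,  S4 },
    /* S3 */ { -1,  S4,  -1 },
    /* S4 */ { -1,  -1,  -1 }
};

int main(void)
{
    int invalid = 0;
    for (int s = 0; s < NSTATES; s++)
        for (int e = 0; e < NEVENTS; e++)
            if (next_state[s][e] < 0)
                invalid++;
    printf("invalid transitions: %d\n", invalid);   /* prints 6 */
    return 0;
}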


Fig. 4.10 Different forms of state machine presentation

Invalid transitions (the "missing arrows" in the state transition diagram) are represented by empty cells in the tables. In other words, an invalid transition is any combination of state and event that does not appear on the state transition diagram. For example, the diagram in Fig. 4.10 is missing six transitions: (S1, B), (S3, A), (S3, C), (S4, A), (S4, B), and (S4, C). Thus, we have a total of six invalid transitions, corresponding to six empty cells in the table in Fig. 4.10b or six empty cells in the "Next state" column in the table in Fig. 4.10c. Sometimes, for the sake of simplicity, only one arrow between two states is drawn on the state transition diagram, even if there are more parallel transitions between these states. It is important to pay attention to this, because sometimes the question of the number of transitions on the diagram is very important. Figure 4.11 shows an example of a "simplified" form of drawing transitions and an equivalent "full" form.


Fig. 4.11 Equivalent forms of graphical representation of parallel transitions

Test Design: Coverage
For state transition testing, there are many different coverage criteria. Here are the three most popular criteria, described in the Foundation Level syllabus:
• All states coverage—the weakest coverage criterion. The coverage items are the states. All states coverage therefore requires that the system has been in each state at least once.
• Valid transitions coverage (also called 0-switch coverage or Chow's coverage)—the most popular coverage criterion. The coverage items are the transitions between states. Valid transitions coverage therefore requires that each transition defined in the state transition diagram is executed at least once.
• All transitions coverage—in which the coverage items are all transitions indicated in the state table. Thus, this criterion requires that all valid transitions be covered and, in addition, that the execution of every invalid transition be attempted. It is good practice to test one such invalid transition per test (to avoid defect masking).
Other coverage criteria can be defined, such as transition pair coverage (also called 1-switch coverage), which requires that every possible sequence of two consecutive valid transitions be tested. Criteria of this type can be generalized: we can require the coverage of all triples of transitions, all quadruples of transitions, etc. In general, we can define a whole family of coverage criteria regarding N+1 consecutive transitions, for any non-negative N (called N-switch coverage). It is also possible to consider coverage of some particular paths, coverage of loops, etc. (all these coverage criteria are outside the scope of the Foundation Level syllabus). Thus, in the case of state transition testing, we are dealing with a potentially infinite number of possible coverage criteria. From a practical point of view, the most commonly used criteria are valid transitions coverage and all transitions coverage, since the main thing we are usually concerned with when testing against a state transition diagram is verification of the correct implementation of transitions between states. Coverage is defined as the number of items covered by the tests relative to the number of all coverage items defined by the criterion. For example, for the all states coverage criterion applied to the transition diagram in Fig. 4.10, we have four items to cover: states S1, S2, S3, and S4; for the valid transitions coverage criterion, we have as many items as there are transitions between states (six). The usual requirement is to design the smallest possible number of test cases that are sufficient to achieve full coverage. Let us see how different types of coverage can be realized for the state transition diagram shown in Fig. 4.10a. For the all states coverage criterion, we have four states to cover: S1, S2, S3, and S4. Note that this can be achieved within a single test case, for example:

S1 (A) S2 (B) S1 (C) S3 (B) S4

The convention used above describes the sequence of transitions between states under the influence of events (indicated in parentheses). The notation "S (E) T" denotes the transition from state S, under the influence of event E, to state T.

Table 4.8 Test scenario for state transition testing

Step   Initial condition   Event   Expected result
1      S1                  A       Transition to S2
2      S2                  B       Transition to S1
3      S1                  C       Transition to S3
4      S3                  B       Transition to S4
5      S4

In the above example, we went through all four states within one test case, so this single test case achieved 100% state coverage. In practice, the expected test result is the actual passing through the states one by one, as described by the model. The test scenario corresponding to the test case S1 (A) S2 (B) S1 (C) S3 (B) S4 could look like the one described in Table 4.8. Note that a test case is not equated with a test condition. In our example, the test conditions were individual states, but a single test case could cover them all. A test case is a sequence of transitions between states, starting with the initial state and ending with the final state (possibly ending the walk earlier if necessary). So, it is possible to cover more than one test condition within a single test case. For the valid transitions coverage criterion, we have six transitions to cover. Let us denote them as T1-T6:

T1: S1 (A) S2
T2: S1 (C) S3
T3: S2 (A) S2
T4: S2 (B) S1
T5: S2 (C) S4
T6: S3 (B) S4

We want to design as few tests as possible to cover all of these six transitions. The strategy is to try to cover, within each successive test, as many of the previously uncovered items as possible. For example, if we started the first test case with the sequence S1 (A) S2, it would not be worthwhile at this point to trigger event C and move to the final state S4, when we can still cover several other transitions, such as S2 (A) S2 (B) S1. Note that it is impossible to cover, within a single test case, both the transition S2 (C) S4 and the transition S3 (B) S4, because as soon as S4 is reached, the test case must end. So, we will need at least two test cases to cover all transitions. Indeed, two test cases are enough to achieve full valid transitions coverage. An example set of two such test cases is shown in Table 4.9. In this table, the first column gives the sequence of transitions, and the second column gives the corresponding covered transitions. Transitions covered for the first time are marked with an asterisk (e.g., in the second test, we cover T1 again, which was already covered by the first test case).


Table 4.9 Test cases achieving full valid transitions coverage

Test case                                Transitions covered
S1 (A) S2 (A) S2 (B) S1 (C) S3 (B) S4    T1*, T3*, T4*, T2*, T6*
S1 (A) S2 (C) S4                         T1, T5*
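Coverage measurement of this kind is easy to mechanize. The following sketch (our own illustration, with assumed names) replays the two test cases of Table 4.9 against the list T1-T6 and computes the achieved valid transitions coverage:

#include <stdio.h>

enum { S1, S2, S3, S4 };
enum { A, B, C };

typedef struct { int from, event; } Step;

/* the six valid transitions T1-T6, identified by (from, event) */
static const Step T[6] = {
    { S1, A }, { S1, C }, { S2, A }, { S2, B }, { S2, C }, { S3, B }
};

/* the two test cases of Table 4.9 as sequences of executed transitions */
static const Step tc1[5] = { {S1,A}, {S2,A}, {S2,B}, {S1,C}, {S3,B} };
static const Step tc2[2] = { {S1,A}, {S2,C} };

static void mark(const Step *tc, int len, int covered[6])
{
    for (int i = 0; i < len; i++)
        for (int t = 0; t < 6; t++)
            if (tc[i].from == T[t].from && tc[i].event == T[t].event)
                covered[t] = 1;
}

int main(void)
{
    int covered[6] = { 0 };
    mark(tc1, 5, covered);
    mark(tc2, 2, covered);

    int hit = 0;
    for (int t = 0; t < 6; t++)
        hit += covered[t];
    printf("valid transitions coverage: %d/6\n", hit);   /* 6/6, i.e., 100 percent */
    return 0;
}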

To achieve all transitions coverage, we must provide test cases covering all valid transitions (see above) and, in addition, try to exercise each invalid transition. According to the good practice described above, we will test each such transition in a separate test case. Thus, the number of test cases will be equal to the number of test cases covering all valid transitions plus the number of invalid transitions. In our model, these will be the following six invalid transitions (you can quickly write them out by analyzing the full state table—see Fig. 4.10c):

S1 (B) ?
S3 (A) ?
S3 (C) ?
S4 (A) ?
S4 (B) ?
S4 (C) ?

Remember that each test case starts with the initial state. Thus, for each invalid transition, we must first trigger the sequence of events that reaches the given state and then, once we are in it, try to trigger the invalid transition. For the six invalid transitions described above, the corresponding six test cases might look like those in Table 4.10. If we manage to trigger an event that is not defined in the model, we can interpret the outcome in at least two ways:
• If the system changed its state, this should be considered a failure, because since the model does not allow such a transition, it should not be possible to trigger it.
• If the system did not change its state, then this can be considered correct behavior (ignoring the event). However, we must be sure that semantically, this is an acceptable situation.

Table 4.10 Test cases covering invalid transitions

Test case                 Invalid transition covered
S1 (B) ?                  S1 (B) ?
S1 (C) S3 (A) ?           S3 (A) ?
S1 (C) S3 (C) ?           S3 (C) ?
S1 (C) S3 (B) S4 (A) ?    S4 (A) ?
S1 (C) S3 (B) S4 (B) ?    S4 (B) ?
S1 (C) S3 (B) S4 (C) ?    S4 (C) ?


There are also at least two possible solutions to such problematic situations:
• Fix the system so that the event is not possible (the invalid transition cannot be triggered).
• Add to the model a transition under the influence of this event, so that the model captures the "ignoring" of the event by the system (e.g., a loop back to the same state).
Let us go back to the practical example of the state transition diagram in Fig. 4.9 modeling a phone call. An example of an invalid transition is the transition from the Connect state in response to the EnterDigit event. Such a situation is triggered very simply—at the time of establishing a call, we simply press one of the keys representing digits. If the system does not react to this in any way, we consider that the test is passed.
Finally, let us discuss one of the coverage criteria not described in the syllabus: the coverage of pairs of valid transitions (1-switch coverage). This is extra material, non-examinable on the Foundation Level certification exam. We consider this example to show that the stronger the criterion we adopt, the more difficult it is to satisfy, usually requiring more test cases than a weaker criterion (in this case, the weaker criterion is valid transitions coverage). To satisfy 1-switch coverage, we need to define all allowed pairs of valid transitions. We can do this in the following way: for each single valid transition, we consider all its possible continuations in the form of a following single valid transition. For example, for the transition S1 (A) S2, the possible continuations are all transitions coming out of S2, that is, S2 (A) S2, S2 (B) S1, and S2 (C) S4. All possible pairs of valid transitions thus look like this:

PP1: S1 (A) S2 (A) S2
PP2: S1 (A) S2 (B) S1
PP3: S1 (A) S2 (C) S4
PP4: S1 (C) S3 (B) S4
PP5: S2 (A) S2 (A) S2
PP6: S2 (A) S2 (B) S1
PP7: S2 (A) S2 (C) S4
PP8: S2 (B) S1 (A) S2
PP9: S2 (B) S1 (C) S3

Note (analogously to the case of valid transitions coverage) that the transition pairs PP3, PP4, and PP7 terminate in the final state, so no two of them can occur in a single test case, since the occurrence of any of these transition pairs results in the termination of the test case. Thus, we need at least three test cases to cover all pairs of valid transitions, and indeed three test cases are sufficient. An example test set is given in Table 4.11 (pairs covered for the first time are marked with an asterisk). So, we need one more test case compared to the case of valid transitions coverage.

Table 4.11 Test cases covering pairs of valid transitions

Test case                                                     Covered pairs of valid transitions
S1 (A) S2 (A) S2 (A) S2 (B) S1 (A) S2 (B) S1 (C) S3 (B) S4    PP1*, PP5*, PP6*, PP8*, PP2*, PP9*, PP4*
S1 (A) S2 (C) S4                                              PP3*
S1 (A) S2 (A) S2 (C) S4                                       PP1, PP7*


4.2.5 (*) Use Case Testing

Application
A use case is a requirements document that describes the interaction between so-called actors, which are most often the user and the system. A use case is a description of a sequence of steps that the user and the system perform, which ultimately leads to the user obtaining some benefit. Each use case should describe a single, well-defined scenario. For example, if we are documenting the requirements for an ATM (a cash machine), use cases could, among other things, describe the following scenarios:
• Rejecting an invalid ATM card
• Logging into the system by entering the correct PIN
• Correctly withdrawing money from an ATM
• A failed attempt to withdraw money due to insufficient funds in the account
• Locking the card after entering the wrong PIN three times

For each use case, the tester can construct a corresponding test case, as well as a set of test cases to check the occurrence of unexpected events during the scenario run.

Use Case Construction
A correctly built use case, in addition to "technical" information such as a unique number and name, should consist of:
• Pre-conditions (including input data)
• Exactly one sequential main scenario
• (Optional) Markings of the locations of so-called alternative flows and exceptions
• Post-conditions (describing the state of the system after successful execution of the scenario and the user benefit obtained)

From the tester's point of view, the primary purpose of the use case is to test the main scenario, i.e., to verify that actually performing all the steps as the scenario describes leads to the defined user benefit. However, it is also necessary to test the system's behavior for unexpected events, that is, alternative flows and exceptions. The difference between these two is as follows:
• An alternative flow occurs when an unexpected event happens but still allows the user to complete the use case, i.e., it allows the user to return to the main scenario and happily complete it.
• An exception interrupts the execution of the main scenario—if an exception occurs, it is not possible to complete the main scenario correctly. The user gets no benefit, and the use case either ends with an error message or is aborted. The occurrence of the exception itself can be handled through a scenario defined in a separate use case.
A use case should not contain business logic, i.e., the main scenario should be linear and contain no branching. The existence of such branching suggests that we


are actually dealing with more than one use case. Each of such possible paths should be described in a separate use case.

Example
Let us consider an example of the scenario "Correct withdrawal of money ($100) from an ATM with a commission of $5." A use case describing this scenario could look like the one below. This use case consists of one main scenario (steps 1-11), three exceptions (labeled 3A, 4A, and 9A), and one alternative flow (9B). The numbering of these events refers to the step in which they can occur. If more than one exception or alternative flow is possible in a single step, they are denoted by consecutive letters: A, B, C, etc.

Use case: UC-003-02
Name: successful withdrawal of $100 from an ATM with a commission of $5.
Prerequisites: the user is logged in; the payment card is recognized as a card with a $5 withdrawal fee.
Main scenario steps:
1. The user selects the option Withdraw money.
2. The system displays a menu with available payout amounts.
3. The user selects the $100 option.
4. The system checks that there is at least $105 in the user's account.
5. The system asks about printing a confirmation.
6. The user selects the option NO.
7. The system displays the message Remove card.
8. The system returns the card.
9. The user takes the card.
10. The system pays out $100 and transfers a $5 commission to the bank's account.
11. The system displays the message Take money. The use case ends, the user is logged out, and the system returns to the initial screen.
Final conditions: $100 was paid out to the user; the user's account balance is reduced by $105; the ATM cash balance is reduced by $100.
Alternative flows and exceptions:
3A - the user does not select any option for 30 seconds (the system returns the card and returns to the initial screen).
4A - the account balance is less than $105 (message Insufficient funds, return of the card, logging off the user, return to the initial screen, and end of the use case).
9A - the user does not take the card for 30 seconds (the system "pulls in" the card, sends an email notification to the customer, and returns to the initial screen).
9B - the user inserts the card back after receiving it (the system returns the card).


Deriving Test Cases from a Use Case
There is a minimum of what a tester should do to test a use case. This minimum (understood as full, 100% coverage of the use case) is to design:
• One test case exercising the main scenario, without any unexpected events
• Enough test cases to exercise each exception and alternative flow
So, in our example, we could design the following five test cases:
TC1: execution of steps 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
TC2: execution of steps 1, 2, 3A
TC3: execution of steps 1, 2, 3, 4A
TC4: execution of steps 1, 2, 3, 4, 5, 6, 7, 8, 9A
TC5: execution of steps 1, 2, 3, 4, 5, 6, 7, 8, 9B, 10, 11
Each exception and alternative flow should be tested in a separate test case to avoid defect masking, a phenomenon we discussed in detail in Sect. 4.2.1. However, in some situations, e.g., under time pressure, it may be acceptable to test more than one alternative flow within one test case. Note that TC5 (the alternative flow) still achieves the goal (we reach step 11), whereas TC2, TC3, and TC4 end earlier due to the occurrence of incorrect situations during scenario execution.
We can design the test cases from the use case in the form of scenarios describing the expected response of the system at each step. For example, the scenario for test case TC3 can look as in Table 4.12. Note that the consecutive steps (user actions and expected responses) correspond to the use case steps 1, 2, 3, and 4A. Notice also that the tester checked the boundary value for the withdrawal amount in this scenario: the whole operation was supposed to reduce the user's account balance by $105, while the declared account balance was $104.99. This is a typical example of combining test techniques to include verification of many different types of conditions in a small number of test cases. Here, in verifying the correct operation of the system in a situation of insufficient funds in the account, we used the boundary value analysis technique, so that the insufficient amount was only 1 cent less than the amount that would have already allowed the correct withdrawal of money.

Table 4.12 Test case built from a use case

Test case TC3: ATM withdrawal with insufficient funds in the account (concerning use case UC-003-02, occurrence of exception 4A).
Pre-conditions:
• User is logged in
• The system is on the main menu
• Withdrawal recognized as charged with a $5 commission (due to the type of card)
• Balance on account: $104.99

| Step | Event                            | Expected result                                                                                        |
| 1    | Select the option Withdraw money | The system goes to the menu for selecting amounts                                                      |
| 2    | Choose option $100               | The system finds that there are not enough funds in the account, displays the message No funds, and returns the card |

Post-conditions:
• The user's account balance and the bank's account balance have not changed
• The ATM did not dispense any money
• The ATM returned the card to the user
• The system returned to the welcome screen

Coverage
The ISO/IEC 29119 standard does not define a coverage measure for use case testing. However, the old syllabus does provide such a measure, assuming that the coverage items are the use case scenario flows. Coverage can therefore be defined as the ratio of the number of verified flow paths to all possible flow paths described in the use case. For example, in the use case described above, there are five different flow paths: the main path, three flows with exception handling, and one alternative flow. If the test set contained only test cases TC1 and TC5, the use case coverage with these tests would be 2/5 = 40%.
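Expressed as code, this coverage measure is just a ratio of sets. A minimal Python sketch (the identifiers are ours, for illustration only):

ALL_FLOWS = {"main", "3A", "4A", "9A", "9B"}        # the five flows of UC-003-02
covered_flows = {"main", "9B"}                      # e.g., only TC1 and TC5 were run
print(len(covered_flows & ALL_FLOWS) / len(ALL_FLOWS))   # 0.4, i.e., 40% coverage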

4.3 White-Box Test Techniques

FL-4.3.1 (K2) Explain statement testing.
FL-4.3.2 (K2) Explain branch testing.
FL-4.3.3 (K2) Explain the value of white-box testing.

White-box testing is based on the internal structure of the test object. Most often, white-box testing is associated with component testing, in which the model of the test object is the internal structure of the code, represented, for example, in the form of a so-called control flow graph (CFG). However, it is important to note that white-box test techniques can be applied at all test levels, e.g.:
• In component testing (example structure: CFG)
• In integration testing (example structures: call graph, API)
• In system testing (example structures: business process modeled in the BPMN language,3 program menu)
• In acceptance testing (example structure: website page structure)
The Foundation Level syllabus discusses two white-box test techniques: statement testing and branch testing. Both techniques are by their nature associated with code, so their main area of application is component testing. In addition to these, there are many other (and usually more powerful) white-box techniques, such as:

3 Business Process Model and Notation, BPMN—a graphical notation used to describe business processes

• MC/DC testing
• Multiple condition testing
• Loop testing
• Basis path testing

However, these techniques are not discussed in the Foundation Level syllabus. Some of them are discussed in the Advanced Level—Technical Test Analyst syllabus [29]. These stronger white-box test techniques are often used when testing safety-critical systems (e.g., medical instrument software or aerospace software).

4.3.1 Statement Testing and Statement Coverage

Statement testing is the simplest and also the weakest of all white-box test techniques. It involves covering the executable statements in the source code of a program.

Example
Consider a simple code snippet:

1. INPUT x, y   // two natural numbers
2. IF (x > y) THEN
3.   z := x – y
   ELSE
4.   z := y – x
5. IF (x > 1) THEN
6.   z := z * 2
7. RETURN z

Executable statements are marked with the numbers 1–7 in this code. The text starting with the "//" characters is a comment and is not executed when the program runs. In general, the code executes sequentially, line by line, unless some jump in the control flow occurs (as in the decision statements 2 and 5 in our code). The keyword ELSE is only part of the syntax of the IF-THEN-ELSE statement and is not by itself treated as an executable statement (there is nothing to execute here). After reading the two input variables, x and y, in statement 1, the program checks in statement 2 whether x is greater than y. If so, statement 3 is executed (the difference x – y is assigned to z), followed by a jump to statement 5. If not, statement 4 (the "else" part of the IF-THEN-ELSE) is executed, in which the difference y – x is assigned to z, followed by a jump to statement 5. In statement 5, it is checked whether the variable x has a value greater than 1. If so, the body of the IF-THEN statement (statement 6) is executed, doubling the value of z. If the decision in statement 5 is false, statement 6 is skipped, and the control flow jumps past the IF-THEN block, that is, to statement 7, where the result, the value of the variable z, is returned and the program terminates.


Fig. 4.12 CFG and examples of control flow for different input data

The source code can be represented as a CFG. Such a graph for the above code is shown in Fig. 4.12a. It is a directed graph in which the vertices represent statements and the arrows represent the possible control flow between statements. Decision statements are denoted here by rhombuses and other statements by squares. Consider two test cases for the code from the above example:
• TC1: input: x = 2, y = 11; expected output: 18
• TC2: input: x = 8, y = 1; expected output: 14
In Fig. 4.12b, the bolded arrows show the control flow when the input is given the values x = 2, y = 11. The control flow for TC1 will be as follows (in parentheses, we show the decision outcome at a given decision statement): 1 → 2 (x ≤ y) → 4 → 5 (x > 1) → 6 → 7. For TC2, with input x = 8, y = 1, the control flow is shown in Fig. 4.12c and will be as follows: 1 → 2 (x > y) → 3 → 5 (x > 1) → 6 → 7.
Let us emphasize one very important thing in the context of white-box test techniques: the expected results in test cases are not derived from the code. The code is precisely what we are testing, so it cannot be its own oracle. The expected results are derived from a specification external to the code. In our case, it could be the following: "The program takes two natural numbers and then subtracts the smaller one from the larger one. If the first number is greater than 1, the value is further doubled. The program returns the value calculated in this way."

Coverage
In statement testing, the coverage items are the executable statements. Statement coverage is thus the quotient of the number of executable statements covered by test cases and the number of all executable statements in the analyzed code, usually expressed as a percentage. Note that this metric does not take into account statements that are not executable (e.g., comments or function headers). Let us reconsider the above source code and test cases TC1 and TC2. If our test set contains only TC1, its execution will achieve a coverage of 6/7 (ca. 86%), since this test case exercises six of the seven executable statements in the code (statements 1, 2, 4, 5, 6, and 7). TC2 also exercises six of the seven statements (1, 2, 3, 5, 6, 7), so it achieves the same coverage of ca. 86%. However, if our test set contains both TC1 and TC2, the coverage will be 100%, because together these two test cases exercise all seven statements (1, 2, 3, 4, 5, 6, 7).
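To make the computation concrete, here is a minimal Python sketch; the hand-instrumented program and all identifiers are ours, for illustration only (in practice, a coverage tool does this bookkeeping automatically):

def program(x, y, executed):
    executed.add(1)                  # 1. INPUT x, y
    executed.add(2)                  # 2. IF (x > y)
    if x > y:
        executed.add(3); z = x - y   # 3. z := x - y
    else:
        executed.add(4); z = y - x   # 4. z := y - x
    executed.add(5)                  # 5. IF (x > 1)
    if x > 1:
        executed.add(6); z = z * 2   # 6. z := z * 2
    executed.add(7)                  # 7. RETURN z
    return z

def statement_coverage(test_inputs, total=7):
    executed = set()
    for x, y in test_inputs:
        program(x, y, executed)
    return len(executed) / total

print(statement_coverage([(2, 11)]))           # TC1 alone: 6/7, ca. 86%
print(statement_coverage([(2, 11), (8, 1)]))   # TC1 and TC2: 7/7 = 100%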

4.3.2 Branch Testing and Branch Coverage

A branch is a control flow between two vertices of a CFG. A branch can be unconditional or conditional. An unconditional branch between vertices A and B means that after the execution of statement A is completed, control must move to statement B. A conditional branch between A and B means that after the execution of statement A is completed, control may move to statement B, but not necessarily. Conditional branches come out of decision vertices, that is, places in the code where a decision is made on which the further flow of control depends. Examples of decision statements are the IF-THEN, IF-THEN-ELSE, and SWITCH-CASE conditional statements, as well as the statements that check the so-called loop condition in WHILE, DO-WHILE, or FOR loops. For example, in Fig. 4.12a, the unconditional branches are 1 → 2, 3 → 5, 4 → 5, and 6 → 7, because, for example, if statement 3 is executed, the next statement must be statement 5. The other branches, 2 → 3, 2 → 4, 5 → 6, and 5 → 7, are conditional. For example, after the execution of statement 2, control can go to statement 3 or to statement 4. Which case occurs depends on the logical value of the IF decision in statement 2: if x > y, control goes to 3; if x ≤ y, it goes to 4.

Coverage
In branch testing, the coverage items are branches, both conditional and unconditional. Branch coverage is therefore calculated as the number of branches exercised during test execution divided by the total number of branches in the code, often expressed as a percentage. The goal of branch testing is to design a sufficient number of test cases to achieve the required (accepted) level of branch coverage. When full, 100% branch coverage is achieved, every branch in the code, both conditional and unconditional, was executed during the tests at least once. This means that we have tested all possible direct transitions between statements in the code.

Example
Consider again the example code from Sect. 4.3.1:

1. INPUT x, y   // two natural numbers
2. IF (x > y) THEN
3.   z := x – y
   ELSE
4.   z := y – x
5. IF (x > 1) THEN
6.   z := z * 2
7. RETURN z

In this code, we have eight branches: 1 → 2, 2 → 3, 2 → 4, 3 → 5, 4 → 5, 5 → 6, 5 → 7, and 6 → 7. Test case TC1 (x = 2, y = 11, expected output: 18) covers the following branches:
• 1 → 2 (unconditional)
• 2 → 4 (conditional)
• 4 → 5 (unconditional)
• 5 → 6 (conditional)
• 6 → 7 (unconditional)
and achieves 5/8 = 62.5% coverage. Test case TC2 (x = 8, y = 1, expected output: 14) covers the following branches:
• 1 → 2 (unconditional)
• 2 → 3 (conditional)
• 3 → 5 (unconditional)
• 5 → 6 (conditional)
• 6 → 7 (unconditional)
and also achieves 5/8 = 62.5% coverage. The two test cases together achieve 7/8 = 87.5% branch coverage, because all branches are covered except one: 5 → 7.

Example
Consider the code and its CFG from Fig. 4.13. A single test case with input y = 3 achieves 100% statement coverage. This is because control will go through the following statements (the values assigned to the variable x and the decision checked in statement 3 are given in parentheses): 1 → 2 (x := 1) → 3 (1 < 3) → 4 (x := 2) → 3 (2 < 3) → 4 (x := 3) → 3 (3 < 3) → 5.


Fig. 4.13 The source code with a while loop and its CFG

The same test case also achieves 100% branch coverage. There are five branches in the code: 1 → 2, 2 → 3, 3 → 4, 3 → 5, and 4 → 3. According to the control flow, the branches will be covered in the following sequence:
1 → 2 (unconditional)
2 → 3 (unconditional)
3 → 4 (conditional)
4 → 3 (unconditional)
3 → 4 (conditional, covered earlier)
4 → 3 (unconditional, covered earlier)
3 → 5 (conditional)

Example
Consider the code in Fig. 4.14a. Its CFG, in which each vertex corresponds to a single statement, is shown in Fig. 4.14b. There are nine branches in this graph:
• 1 → 2 (unconditional)
• 2 → 3 (unconditional)
• 3 → 4 (conditional)
• 3 → 5 (conditional)
• 5 → 6 (conditional)
• 5 → 9 (conditional)
• 6 → 7 (unconditional)
• 7 → 8 (unconditional)
• 8 → 5 (unconditional)
To achieve 100% branch coverage, we need at least two test cases, for example:
TC1: 1 → 2 → 3 → 4
TC2: 1 → 2 → 3 → 5 → 6 → 7 → 8 → 5 → 9


Fig. 4.14 Two CFGs for the same code

TC1 covers three out of nine branches, so it achieves 3/9 ≈ 33.3% branch coverage. TC2 covers eight of the nine branches, so it achieves 8/9 ≈ 89% branch coverage. The two tests together cover all nine branches, so the test set {TC1, TC2} achieves full branch coverage, 9/9 = 100%. Sometimes, a CFG is drawn so as to include several statements in a single vertex, constituting a so-called basic block. A basic block is a sequence of statements such that when one of them executes, all the others must execute. A CFG modified in this way is smaller and more readable (there are no “long chains of vertices” in it), but its use affects the coverage measure, since there will be fewer edges than in a CFG with no basic blocks. For example, the graph in Fig. 4.14c is equivalent to the graph in Fig. 4.14b but has only five edges. The TC1 mentioned earlier achieves 1/5 = 20% branch coverage in this case, while the TC2 achieves 4/5 = 80% coverage. If coverage is measured directly on the code, it is equivalent to the coverage calculated for a CFG, in which each vertex represents a single instruction, since branches then represent the direct flow of control between individual instructions, not their groups (basic blocks).
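The same bookkeeping idea used earlier for statements extends to branches: instead of recording visited statements, we record traversed CFG edges. Below is a minimal Python sketch (our own illustrative instrumentation, not the book's) for the eight-branch example from the beginning of this section:

def program(x, y, edges):
    prev = 1                          # statement 1: INPUT x, y
    def goto(stmt):
        nonlocal prev
        edges.add((prev, stmt))       # record the branch prev -> stmt
        prev = stmt
    goto(2)                           # 1 -> 2 (unconditional)
    if x > y:                         # statement 2: decision
        goto(3); z = x - y            # 2 -> 3 (conditional)
        goto(5)                       # 3 -> 5 (unconditional)
    else:
        goto(4); z = y - x            # 2 -> 4 (conditional)
        goto(5)                       # 4 -> 5 (unconditional)
    if x > 1:                         # statement 5: decision
        goto(6); z = z * 2            # 5 -> 6 (conditional)
        goto(7)                       # 6 -> 7 (unconditional)
    else:
        goto(7)                       # 5 -> 7 (conditional)
    return z

ALL_BRANCHES = {(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 6), (5, 7), (6, 7)}

def branch_coverage(test_inputs):
    edges = set()
    for x, y in test_inputs:
        program(x, y, edges)
    return len(edges & ALL_BRANCHES) / len(ALL_BRANCHES)

print(branch_coverage([(2, 11)]))           # TC1 alone: 5/8 = 62.5%
print(branch_coverage([(2, 11), (8, 1)]))   # TC1 and TC2: 7/8 = 87.5%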

4.3.3 The Value of White-Box Testing

Statement testing is based on the following so-called error hypothesis: if there is a defect in the program, it must be located in one of the executable statements. Therefore, if we achieve 100% statement coverage, we are sure that every statement containing a defect was executed. This, of course, does not guarantee that a failure will be triggered, but at least it creates such a possibility. Consider a simple example of code that takes two numbers as input and returns their product (if the first number is positive) or the first number itself (if the first number is not positive). The code looks as follows:

1. INPUT x, y
2. IF x > 0 THEN
3.   x := x + y
4. RETURN x

In this program, there is a defect involving the incorrect use of the addition operator instead of multiplication in line 3. For the input data x = 2, y = 2, the program will go through all the statements, so full 100% statement coverage is achieved. In particular, the defective statement 3 is executed, but incidentally 2 * 2 equals 2 + 2, so the returned result is correct despite the defect. This simple example shows that statement coverage is not a strong technique, and it is worth using stronger ones, such as branch testing. In branch testing, on the other hand, the error hypothesis is as follows: if there is a defect in the program, it causes an erroneous control flow. If we achieve 100% branch coverage, we must have exercised at least once any transition that was wrong (e.g., a decision whose value should be true but was false), so that the program executed the wrong statements, and this may result in a failure. Of course, as with statement testing, full branch coverage does not guarantee that a failure will occur, even if the program takes a wrong path. In our example, when we add a second test case with x = 0, y = 5, we additionally cover the branch 2 → 4. The two test cases together achieve 100% branch coverage, but the defect in the code is still not identified: the actual results in both test cases are exactly the same as the expected results. However, there is an important relationship between statement testing and branch testing:

Achieving 100% branch coverage guarantees the achievement of 100% statement coverage.

In scientific jargon, we say that branch coverage subsumes statement coverage. This means that any test set achieving 100% branch coverage also, by definition, achieves 100% statement coverage. Thus, we do not need to check statement coverage separately in such a case: we know it must be 100%. The inverse subsumption relationship does not hold: achieving 100% statement coverage does not guarantee 100% branch coverage. To demonstrate this, let us consider the code snippet given above. This program has four statements and four branches: 1 → 2, 2 → 3, 2 → 4, and 3 → 4. For the input data x = 3, y = 1, the program will go through all four statements (1, 2, 3, and 4), so this test case achieves 100% statement coverage. However, we have not covered all branches: after the execution of statement 2, the program goes to statement 3 (because 3 > 0), so it does not cover the branch 2 → 4 (which would be executed if the decision in statement 2 were false). So the test for x = 3, y = 1 achieves 100% statement coverage but only 75% branch coverage.
Note also that each decision statement is a statement and, as such, belongs to the set of executable statements. For example, in the code above, line 2 is a statement (and is treated as such when we measure statement coverage). So we can say that 100% statement coverage guarantees the execution of every decision in the code, but it does not guarantee the achievement of every outcome of every decision (and therefore does not guarantee the coverage of all branches in the code).
The key advantage of all white-box test techniques is that the entire software implementation is taken into account during testing, making it easier to detect defects even when the software specification is unclear or incomplete. The test design is independent of the specification. A corresponding weakness is that if the software does not implement one or more requirements, white-box testing may fail to detect the resulting defects of missing functionality [50].
White-box test techniques can be used in static testing. They are well suited for reviews of code that is not yet ready for execution [51], as well as pseudocode and other high-level logic that can be modeled with a control flow graph. If only black-box testing is performed, there is no way to measure actual code coverage. White-box testing provides an objective measure of coverage and the information needed to generate additional tests that increase that coverage and, subsequently, confidence in the code.
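To see the subsumption argument concretely on the four-statement program above, here is a minimal Python sketch (our own instrumentation, for illustration) that records statements and branches at the same time; the single test x = 3, y = 1 reaches 100% statement coverage but only 75% branch coverage:

def program(x, y, stmts, branches):
    stmts.add(1)                           # 1. INPUT x, y
    branches.add((1, 2)); stmts.add(2)     # 2. IF x > 0
    prev = 2
    if x > 0:
        branches.add((2, 3)); stmts.add(3)
        x = x + y                          # 3. the defective statement (+ instead of *)
        prev = 3
    branches.add((prev, 4)); stmts.add(4)  # 4. RETURN x
    return x

stmts, branches = set(), set()
program(3, 1, stmts, branches)
print(len(stmts) / 4)      # 1.0  -> 100% statement coverage
print(len(branches) / 4)   # 0.75 -> only 75% branch coverage (2 -> 4 never taken)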

4.4 Experience-Based Test Techniques

FL-4.4.1 (K2) Explain error guessing.
FL-4.4.2 (K2) Explain exploratory testing.
FL-4.4.3 (K2) Explain checklist-based testing.

In addition to specification-based and white-box test techniques, there is a third family of techniques: experience-based test techniques. This category contains techniques considered less formal than those discussed earlier, where the basis of the tester's actions was always some formal model (a model of the domain, logic, behavior, structure, etc.). Experience-based test techniques primarily make use of testers' knowledge, skills, and intuition, and of their experience with similar products or with earlier versions of the same product. This approach makes it easier for the tester to identify failures or defects that would be difficult to detect using more structured techniques. Despite appearances, experience-based test techniques are commonly used by testers. However, it is important to keep in mind that the effectiveness of these techniques, by their very nature, depends heavily on the approach and experience of the individual tester. The Foundation Level syllabus lists the following three types of experience-based test techniques, which we discuss in the following sections:
• Error guessing
• Exploratory testing
• Checklist-based testing

4.4.1 Error Guessing

Description of the Technique
Error guessing is perhaps conceptually the simplest of all experience-based test techniques. It is not associated with any prior planning, nor is it necessary to manage the tester's activities when using this technique. The tester simply predicts the types of errors possibly made by developers, and the resulting defects or failures in a component or system, by referring to, among other things, the following aspects:
• How the component or system under test (or its previous version already working in production) has behaved so far
• The typical errors, known to the tester, made by developers, architects, analysts, and other members of the development team
• Failures that have occurred previously in similar applications tested by the tester or that the tester has heard of
Errors, defects, and failures can in general be related to:
• Input (e.g., valid input not accepted, invalid input accepted, wrong parameter value, missing input parameter)
• Output (e.g., wrong output format, incorrect output, correct output at the wrong time, incomplete output, missing output, grammatical or punctuation errors)
• Logic (e.g., missing cases to consider, duplicate cases to consider, invalid logical operator, missing condition, invalid loop iteration)
• Calculations (e.g., wrong algorithm, inefficient algorithm, missing calculation, invalid operand, invalid operator, bracketing error, insufficient precision of the result, invalid built-in function)
• Interface (e.g., incorrect processing of events from the interface, time-related failures in input/output processing, calling a wrong or non-existent function, missing parameter, incompatible parameter types)
• Data (e.g., incorrect initialization, definition, or declaration of a variable, incorrect access to a variable, wrong value of a variable, use of an invalid variable, incorrect reference to data, incorrect scaling or unit of data, wrong dimension of data, incorrect index, incorrect type of a variable, incorrect range of a variable, an off-by-one error (see Sect. 4.2.2))


Fig. 4.15 BMI calculator (source: calculatorsworld.com)

There is a more organized, methodical variation of this technique, called fault attack or software attack. This approach involves creating a list of potential mistakes, defects, and failures. The tester goes through the list point by point and tries to make each mistake, defect, or failure visible in the object under test. Many examples of such attacks are described in [52–54]. This technique differs from the others in that its starting point is a "negative" event (a specific defect or failure) rather than a "positive" thing to verify (such as the input domain or the business logic of the system). Testers can create lists of errors, defects, and failures based on their own experience (such lists are then the most effective), or they can use various publicly available lists, for example, on the Internet.

Example
The tester is testing a program that calculates BMI (Body Mass Index). It takes two values from the user as input, weight and height, and then calculates the BMI. The application's interface is shown in Fig. 4.15. The tester performs a fault attack using the following list of defects and failures:
1. The occurrence of an overflow for a form field
2. The occurrence of division by zero
3. Forcing the occurrence of an invalid value


Fig. 4.16 BMI calculator with a different interface (source: https://www.amazon.com/Meet-shingalaBmi-calculator/dp/B08FJ48KYQ)

In order to try to force a failure from item 1 of the list, the tester can try to enter a very long string of digits into the “weight” input field.4 In order to try to force a failure from item 2 of the list, the tester can give zero height, since BMI is calculated as the quotient of weight and the square of height. In order to try to force a failure from item 3, the tester can give, for example, a negative weight value to force a negative (incorrect) BMI value. Note that these attacks can be carried out for the above application because its interface allows weight and height values to be entered directly from the keyboard. It is a good idea to design interfaces in a way that prevents incorrect values from being entered. For example, by using fields with built-in value range controls, we allow the user to enter the input only through buttons that allow values to be increased or decreased in a controlled range. Another idea is to use mechanisms such as dropdown lists or sliders. Figure 4.16 shows such an interface for an analogous BMI calculator app.

4 The formula for BMI divides the weight by the square of the height. To get an overflow of the BMI value, the numerator of the fraction should be as large as possible. Therefore, it does not make sense to enter large values for height: the larger the denominator, the smaller the BMI.
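As an illustration of the three attack attempts described above, consider the following minimal Python sketch; the bmi function is our own naive stand-in for the application's logic, not the real calculator's code:

def bmi(weight: float, height: float) -> float:
    # No input validation, on purpose: this is the behavior under attack.
    return weight / (height ** 2)

attacks = [
    (10.0**308, 1.0),   # attack 1: a huge weight, trying to overflow the result
    (70.0, 0.0),        # attack 2: zero height, forcing a division by zero
    (-70.0, 1.75),      # attack 3: negative weight, forcing an invalid (negative) BMI
]
for weight, height in attacks:
    try:
        print(weight, height, "->", bmi(weight, height))
    except ZeroDivisionError as error:
        print(weight, height, "-> failure:", error)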


4.4.2 Exploratory Testing

Description of the Technique
Exploratory testing is the approach of designing, executing, and recording unscripted tests (i.e., tests that have not been designed in advance) and evaluating them dynamically during execution. The tester does not execute tests prepared in a separate design or analysis phase; instead, each next step in the current test scenario is chosen dynamically and depends on:
• The knowledge and intuition of the tester
• Experience with this or similar applications
• The way the system behaved in the previous step of the scenario
In fact, the strict division between exploratory and scripted testing does not quite capture reality, because every testing activity includes some element of planning and some element of dynamics and exploration. For example, even in pure exploratory testing, it is usual to plan an exploratory session in advance, allocating, for example, a specific time for the tester to execute it. An exploratory tester may also prepare various test data needed for testing before the session starts.

Session-Based Exploratory Testing
Session-based exploratory testing is an approach that gives testers and test managers greater structure in the tester's activity, as well as the possibility of better managing exploratory testing activities. It consists of three basic steps:
1. The tester meets with the manager to determine the scope of testing and to allocate time. The manager hands the test charter (see Table 4.13) to the tester or writes it collaboratively with the tester. The test charter should be created during test analysis (see Sect. 1.4.3).
2. The tester conducts an exploratory testing session, during which they make notes of any relevant observations (problems observed, recommendations for the future, information on what failed during the session, etc.).
3. The tester and manager meet again at the end of the session, discuss the results, and decide on possible next steps.
Session-based exploratory testing takes place in a well-defined time frame (usually 1 to 4 h). The tester focuses on the task specified in the test charter but can, if necessary, deviate to some extent from it when a serious defect is observed in another area of the application. The results of the session are documented by the tester in the test charter.
It is recommended to perform session-based exploratory testing in a collaborative fashion, for example, by pairing. A tester might pair with a business representative, end user, or Product Owner to explore the application while testing. A tester might just as well pair with a developer to avoid testing features that are unstable, to address a particular technical issue, or to demonstrate unwanted behavior immediately, without having to provide extensive evidence in the defect report.


Table 4.13 Test charter

Test charter—TE 02-001-01
Goal: Test the login functionality
Areas:
• Log in as an existing user with a correct login and password
• Log in as an existing user through a Google account
• Log in as an existing user through a Facebook account
• Incorrect login—no such user
• Incorrect login—wrong password
• Actions that result in the blocking of the account
• Using the password reminder function
• SQL injection attacka and other security attacks
Environment: Site accessible through various browsers (Chrome, FF, IE)
Time: 2019-06-12, 11:00–13:15
Tester: Bob Bugbuster
Tester's notes:
Files: (Screenshots, test data, etc.)
Defects found:
Division of time: 20% preparation for the session; 70% conducting the session; 10% problem analysis

a SQL injection is a type of attack involving the so-called injection of a malicious piece of code. It is an attack on security by inserting a malicious SQL statement into an input field with the intention of having it executed.

When to Use Exploratory Testing?
Exploratory testing will be a good, effective, and efficient solution if one or more of the following premises are met:
• The specification of the product under test is incomplete, of poor quality, or missing.
• There is time pressure: testers are short on time to conduct tests.
• Testers know the product well and are experienced in exploratory testing.
Exploratory testing is strongly related to the reactive test strategy (see Sect. 5.1.1). Exploratory testing can use other black-box, white-box, and experience-based techniques. No one can dictate to the tester how to conduct the session. If, for example, the tester finds it useful to create a state transition diagram that describes how the system works and then, in an exploratory session, designs test cases and executes them, they have the right to do so. This is an example of using scripted testing as part of exploratory testing.

Example
The organization plans to test the basic functionality of a website. A decision has been made to conduct exploratory testing, and the test charter shown in Table 4.13 has been prepared for the tester. The test manager, based on the results collected from multiple exploratory test sessions, derives the following sample metrics for further analysis:

• Number of sessions conducted and completed
• Number of defects reported
• Time spent preparing for the session
• Time of the actual test session
• Time spent analyzing problems
• Number of functionalities covered

Exploratory Testing Tours
James Whittaker [55] proposes a set of exploratory testing approaches inspired by sightseeing and tourism. This metaphor allows testers to increase their creativity when conducting exploratory sessions. The goals of these approaches are:
• To understand how the application works, what its interface looks like, and what functionality it offers to the user
• To force the software to demonstrate its capabilities
• To find defects
The tourist metaphor divides the software into areas analogous to the parts of a city visited by a tourist:
• Business district—corresponds to those parts of the software related to its "business" part, i.e., the functionalities and features that the software offers to users.
• Historic district (historic center)—corresponds to legacy code and the history of defective functionality.
• Tourist district—corresponds to those parts of the software that attract new users (tourists), i.e., functions that an advanced user (a city resident) is unlikely to use anymore.
• Entertainment district—corresponds to support functions and features related to usability and the user interface.
• Hotel district—the place where the tourist rests; corresponds to the moments when the user does not actively use the software but the software still does its work.
• Suspicious neighborhoods—places where it is better not to venture; in software, they correspond to places where various types of attacks can be launched against the application.
Whittaker describes a range of exploration (sightseeing) types for each district. For more information on exploratory testing, see [56, 57].


4.4.3 Checklist-Based Testing

Description of the Technique
Checklist-based testing—like the previous two techniques described in this section—uses the tester's knowledge and experience, but the basis for test execution is the set of items contained in a so-called checklist. The checklist contains the test conditions to be verified. It should not contain items that can be checked automatically, items that function better as entry/exit criteria, or items that are too general [58].
Checklist-based testing may seem similar to fault attacks. The difference is that a fault attack starts from defects and failures, and the tester's task is to reveal issues given a particular failure or defect, whereas in checklist-based testing the tester also acts in a systematic way but checks the "positive" features of the software. The test basis in this technique is the checklist itself. Checklist items are often formulated as questions. The checklist should allow each of its items to be checked separately and directly. Items in the checklist may refer to requirements, quality characteristics, or other forms of test conditions. Checklists can be created to support various test types, including functional and nonfunctional testing (e.g., the 10 heuristics for usability testing [59]).
Some checklist items may gradually become less effective over time as developers learn to avoid making the same errors. New items may also need to be added to reflect recently discovered high-severity defects. Therefore, checklists should be updated regularly based on defect analysis. However, care should be taken that the checklist does not become too long [60]. In the absence of detailed test cases, checklist-based testing can provide a degree of consistency: two testers working with the same checklist will probably perform their task slightly differently, but in general they will test the same things (the same test conditions), so their tests will necessarily be similar. If the checklists are high level, there is likely to be some variability in the actual testing, resulting in potentially higher coverage but less repeatability of tests.

Types of Checklists
There are many different types of checklists for different aspects of software. In addition, checklists can have varying degrees of generality and a narrower or broader field of application. For example, checklists for code testing (e.g., in component testing) tend to be very detailed and usually include many technical details about writing code in a particular programming language. In contrast, a checklist for usability testing may be very high level and general. This is not a rule, of course: testers should always adjust the level of detail of the checklist to their own needs.

Application
Checklists can be used for basically any type of testing. In particular, they can apply to functional and nonfunctional testing.


In checklist-based testing, the tester designs, implements, and runs tests to cover the test conditions found in the checklist. Testers can use existing checklists (e.g., available on the Internet) or can modify and adapt them to their needs. They can also create such lists themselves, based on their own and their organization's experience with defects and failures, their knowledge of the expectations of the users of the developed product, or their knowledge of the causes and symptoms of software failures. Referring to one's own experience can make this technique more effective, because usually the same people, in the same organization, working on similar products, will make similar errors. Typically, less experienced testers work with already existing checklists.

Coverage
For checklist-based testing, no specific coverage measures are defined. Of course, at a minimum, each checklist item should be covered. However, it is difficult to know to what extent an item has been covered (because the technique appeals to the knowledge, experience, and intuition of the individual tester), and this has both advantages and disadvantages. The disadvantage is the lack of detailed coverage information and less repeatability than with formal techniques. The advantage is that more coverage can be achieved if the same checklist is used by two or more testers, because each of them will most likely perform a slightly different set of steps to cover the same checklist item. There will thus be some variability in the tests, which translates into greater coverage, but at the expense of repeatability.

Example
Below we present two sample checklists. The first contains the so-called Nielsen usability heuristics. Next to each checklist item, a comment is provided in parentheses that can help the tester with testing.

Nielsen heuristics for system usability
1. Visibility of system status: Is the status of the system shown at every point of its operation? (The user should always know where they are; the system should use so-called breadcrumbs showing the path the user has taken, as well as clear titles for each screen.)
2. Match between system and the real world: Is there compatibility between the system and reality? (The application should not use technical language but the simple, everyday language of the system's users, related to the business domain in which the program operates.)
3. User control and freedom: Does the user have control over the system? (E.g., it should be possible to undo an action that the user performed by mistake, such as removing a product put in the shopping cart by mistake.)
4. Consistency and standards: Does the system maintain consistency and standards? (Visual solutions should be consistent throughout the application, e.g., the same formatting of links and use of the same font; familiar conventions should also be used, e.g., placing the company logo in the upper left corner of the screen.)


5. Error prevention: Does the system adequately prevent errors? (The user should not get the impression that something has gone wrong; for example, we should not give the user the option to select a version of a product that is not available in the store, only to show a "no stock" error later.)
6. Recognition rather than recall: Does the system allow the user to select instead of forcing them to remember? (E.g., descriptions of fields in a form should not disappear when the user starts filling them in, which happens when the description is initially placed inside the field.)
7. Flexibility and efficiency of use: Does the system provide flexibility and efficiency? (E.g., advanced search options should be hidden by default if most users do not use them.)
8. Aesthetic and minimalist design: Is the system aesthetically pleasing and not overloaded with content? (The application should appeal to users; it should have a good design, an appropriate color scheme, a sensible layout of elements on the page, etc.)
9. Help users recognize, diagnose, and recover from errors: Is effective error handling provided? (When an error occurs, the user should not receive a technical message about it or be told that it is their fault; there should also be information about what the user should do in this situation.)
10. Help and documentation: Is there help available in the system? Is there user documentation? (Some users will need a help function; such an option should be available in the application; it should also include contact information for technical support.)

The second sample checklist concerns code review. This list includes, as an example, only selected technical aspects of good coding practice and is far from complete.

Checklist for code inspections
1. Is the code written according to current standards?
2. Is the code formatted in a consistent manner?
3. Are there functions that are never called?
4. Are the implemented algorithms efficient, with appropriate computational complexity?
5. Is memory used effectively?
6. Are there variables that are used without being declared first?
7. Is the code properly documented?
8. Is the manner of commenting consistent?
9. Is every division operation protected against division by zero?
10. In IF-THEN statements, are the blocks of statements executed most often checked first?
11. Does each CASE statement have a default block?
12. Is all allocated memory released when it is no longer in use?
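As an illustration (our own code, not from the book, Python 3.10+ for the match statement), here is the kind of snippet a reviewer applying this checklist might flag, with violations of item 9 (unguarded division) and item 11 (missing default case):

def average(values: list[float]) -> float:
    # Item 9: the division is not protected; an empty list raises ZeroDivisionError.
    return sum(values) / len(values)

def describe(status: str) -> str:
    match status:
        case "open":
            return "ticket is open"
        case "closed":
            return "ticket is closed"
    # Item 11: no default (case _) branch; unknown statuses silently return None.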


4.5 Collaboration-Based Test Approaches

FL-4.5.1 (K2) Explain how to write user stories in collaboration with developers and business representatives.
FL-4.5.2 (K2) Classify the different options for writing acceptance criteria.
FL-4.5.3 (K2) Use acceptance test-driven development (ATDD) to derive test cases.

The popularity of agile methodologies has led to the development of testing methods specific to this approach, taking into account the artifacts used by these methodologies and emphasizing collaboration between customers (business), developers, and testers. This chapter discusses the following topics related to testing in the context of agile methodologies, using a collaboration-based test approach:
• User stories as the agile counterpart of user requirements (Sect. 4.5.1)
• Acceptance criteria as the agile counterpart of test conditions, providing the basis for test design (Sect. 4.5.2)
• Acceptance test-driven development (ATDD) as a form of high-level testing (system testing and acceptance testing) often used in agile methodologies, based on a "test-first" approach (Sect. 4.5.3)
Each of the techniques described in Sects. 4.2, 4.3, and 4.4 has a specific goal with respect to detecting defects of a specific type. Collaborative approaches, on the other hand, also focus on avoiding defects through cooperation and communication.

4.5.1 Collaborative User Story Writing

In agile software development, a user story represents a functional increment that will be of value to the user, the purchaser of the system or software, or any other stakeholder. User stories are written to capture requirements from the perspective of developers, testers, and business representatives. In sequential SDLC models, this shared vision of a specific software feature or function is achieved through formal reviews after the requirements have been written. In agile approaches, on the other hand, the shared vision is achieved either through frequent informal reviews while the requirements are being written or by writing the requirements together, collaboratively, by testers, analysts, users, developers, and any other stakeholders. User stories consist of three aspects, known as the "3Cs":
• Card
• Conversation
• Confirmation


Card
A card is a medium that describes a user story (e.g., it can be a physical card in the form of a piece of paper placed on a scrum board5 or an entry in an electronic board). The card identifies the requirement, its criticality or priority, constraints, expected development and testing time, and—very importantly—the acceptance criteria for the story. The description must be accurate, as it will be used in the product backlog. Agile teams can document user stories in a variety of ways. One popular format is: As a (intended user), I want (intended action), so that (purpose/result of the action, the benefit achieved), followed by acceptance criteria. These criteria can also be documented in different ways (see Sect. 4.5.2). Regardless of the approach taken to documenting user stories, the documentation should be both concise and sufficient for the team that will implement and test the story. Cards represent customer requirements rather than document them: while the card may contain the text of the story, the details are worked out in the conversation and recorded in the confirmation [61].

Conversation
The conversation explains how the software will be used. It can be documented or verbal. Testers, having a different point of view than developers and business representatives, make valuable contributions to the exchange of thoughts, opinions, and experiences. The conversation begins during the release planning phase and continues when the story is planned. It is conducted among stakeholders with three primary perspectives on the product: the customer/user, the developer, and the tester.

Confirmation
Confirmation comes in the form of acceptance criteria, which represent coverage items that convey and document the details of a user story and which can be used to determine when the story is complete. Acceptance criteria are usually the result of the conversation. They can be viewed as test conditions that the tester must check to verify the story's completeness. User stories should address both functional and nonfunctional features. Typically, the tester's unique perspective improves the user story by identifying missing details or nonfunctional requirements. The tester can provide input by asking business representatives open-ended questions about the user story, suggesting ways to test it, and confirming the acceptance criteria. Shared authorship of user stories can use techniques such as brainstorming or mind mapping.6 Good user stories have the so-called INVEST properties, that is, they are [62]:

5 In Scrum, a scrum board is an optionally used physical board that shows the current status of an iteration. It provides a visual representation of the schedule of work to be done in a given iteration.
6 For more on mind maps, see, for example, https://www.mindmapping.com/.


• Independent—they do not overlap and can be developed in any order.
• Negotiable—they are not explicit feature contracts; rather, the details are co-created by the client and the developers during development.
• Valuable—once implemented, they should bring added value to the customer.
• Estimable—the client should be able to prioritize them, and the team should be able to estimate their completion time, to more easily manage the project and optimize the team's efforts.
• Small—this makes their scope easy for the team to understand and the creation of the stories themselves more manageable.
• Testable—this is a general characteristic of a good requirement; the correctness of the story's implementation should be easily verifiable. If a customer doesn't know how to test something, it may indicate that the story isn't clear enough, that it doesn't reflect something of value to the customer, or that the customer simply needs help with testing [62].

Example
The following example shows a user story written from the perspective of a customer of an e-banking web application. Note that the acceptance criteria are not necessarily derived directly from the content of the story itself but can be the result of a conversation between the team and the customers. These criteria are used by the tester to design acceptance tests that verify that the story has been fully and correctly implemented.

User Story US-001-03
As a customer of the bank
I want to be able to log into the system
So that I can use bank products

Acceptance criteria
• Login must be a valid email address.
• System rejects a login attempt with an incorrect password.
• System rejects a login attempt for a non-existent user login.
• System allows the user to enter in the "login" and "password" fields only alphanumeric characters, plus the period and the symbol "@".
• Login and password fields can take up to 32 characters.
• After the user clicks the "remind password" link and enters their email address, a link to the password reminder system is sent to this address.
• System starts the login process when the "login" button is clicked or when the Enter key is pressed; in the latter case, the login process starts only when three conditions are met simultaneously: (1) the "login" field is non-empty, (2) the active window is the "password" window, and (3) the "password" field is non-empty.


4.5.2 Acceptance Criteria

Acceptance criteria are conditions that a product (to the extent described by the user story or Product Backlog Item of which the acceptance criteria are a part) must meet in order to be accepted by the customer. From this perspective, acceptance criteria can be viewed as test conditions or coverage items that should be checked by acceptance tests. Acceptance criteria are used:
• To define the boundaries of the user story
• To reach a consensus between the development team and the client
• To describe both positive and negative test scenarios
• As the basis for user story acceptance testing (see Sect. 4.5.3)
• As a tool for accurate planning and estimation

Acceptance criteria are a great help in determining and evaluating the Definition of Ready (DoR) and the Definition of Done (DoD). A team can decide not to start implementation as long as the acceptance criteria of a user story have not been exhaustively elicited. Similarly, a team can decide that a user story is not a candidate for release (or for a demo) as long as not all acceptance criteria are met (acceptance criteria coverage below 100%).
Acceptance criteria are discussed during the conversation (see Sect. 4.5.1) and defined in collaboration between business representatives, developers, and testers. Acceptance criteria—if met—are used to confirm that the user story has been implemented fully and in accordance with the shared vision of all stakeholders. They provide developers and testers with an expanded vision of the function that business representatives (or their proxies) will validate. Both positive and negative tests should be used to cover the criteria. During confirmation, different stakeholders play the role of the tester; these can range from developers to specialists focused on performance, security, interoperability, and other quality attributes. The agile team considers a task complete when the set of acceptance criteria is deemed to have been met.
There is no single established way to write acceptance criteria for a user story. The two most common formats are:
• Scenario-oriented acceptance criteria
• Rule-oriented acceptance criteria

Scenario-Oriented Acceptance Criteria
This format usually uses the Given/When/Then style known from the BDD technique. In some cases, however, it is difficult to fit acceptance criteria into such a format, for example, when the target audience does not need the exact details of the test cases. In that case, a rule-oriented format can be used.

Rule-Oriented Acceptance Criteria
In this format, acceptance criteria typically take the form of a bulleted verification list or a tabular mapping of inputs to outputs. Some frameworks and languages for writing Given/When/Then style criteria (e.g., Gherkin) provide mechanisms for creating a set of rules within a scenario, each of which is tested separately (this is a form of data-driven testing). Most acceptance criteria can be documented in one of the two formats mentioned above. However, the team can use any other nonstandard format, as long as the acceptance criteria are well-defined and unambiguous. Next to the scenario-oriented and rule-oriented acceptance criteria mentioned above, less mature teams typically elicit and document acceptance criteria in a free format, as additions or sub-sections of a user story. Whatever format a team decides to use, bidirectional traceability between the user story and its acceptance criteria is vital.

Example
The user story described in the previous section (logging into the e-banking system) has acceptance criteria described in the form of rules. Some of them can be detailed and represented as a mapping of inputs to outputs by defining specific examples of test data. For example, in Gherkin, the first three points of the acceptance criteria list for this story:
• Login must be a valid email address.
• System rejects a login attempt with an incorrect password.
• System rejects a login attempt for a non-existent user login.
can be specified as follows:

Scenario Outline: correct and incorrect logins and passwords
  Given User enters <login> as login
  And User enters <password> as password
  When User clicks the "login" button
  Then Login result is <result>

  Examples:
    | login                   | password   | result |
    | [email protected]       | abFab      | OK     |
    | patsy.stone@gmail       | martini    | NOT OK |
    | [email protected]       |            | NOT OK |
    | mods→[email protected]  | swinging   | NOT OK |
    |                         | somePasswd | NOT OK |

In the above example, the description of the acceptance criteria in the Given/When/Then format references variables (in angle brackets, "<>"), and the specific test data are given in the Examples section below it. The test will be executed five times, each time with a different set of test data. The name of the section holding the test data emphasizes that the tests are examples, which show on specific data how the system is supposed to behave. A login succeeds if the login and password meet the set conditions (the login is a valid email address, and the login and password are non-empty and contain only alphanumeric characters). On the other hand, if the email address is invalid (line 2), the password is blank (line 3), non-alphanumeric characters are used (line 4), or the login is blank (line 5), the login shall not succeed.
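Such an Examples table maps naturally onto automated, data-driven checks. The following minimal sketch is entirely ours: validate_login is a hypothetical stand-in for the system under test, and the data rows are our own illustrations (not the table's), shown only to demonstrate the one-rule-per-row idea as a parametrized pytest test.

import re
import pytest

def validate_login(login: str, password: str) -> str:
    # Rough rendering of the first three acceptance criteria:
    # valid email address, non-empty alphanumeric login and password.
    email_ok = re.fullmatch(r"[A-Za-z0-9.]+@[A-Za-z0-9.]+\.[A-Za-z]+", login)
    password_ok = password != "" and password.isalnum()
    return "OK" if email_ok and password_ok else "NOT OK"

@pytest.mark.parametrize("login, password, expected", [
    ("user@example.com",   "abc123", "OK"),      # valid login and password
    ("user@gmail",         "abc123", "NOT OK"),  # invalid email address
    ("user@example.com",   "",       "NOT OK"),  # blank password
    ("us→er@example.com",  "abc123", "NOT OK"),  # non-alphanumeric character
    ("",                   "abc123", "NOT OK"),  # blank login
])
def test_login_examples(login, password, expected):
    assert validate_login(login, password) == expected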


Example
Let us consider another example of a user story, this time for a CRM system of a certain bank. The story concerns the implementation of business rules related to offering credit cards to customers who meet certain requirements.

As a financial institution
I want to make sure that only customers with sufficient annual income get a credit card
So that credit cards are not offered to customers who will not be able to repay the debit on the card

Scenario: There are two types of cards: one with a debit limit of $2500, the other with a debit limit of $5000. The maximum credit card limit depends on the customer's earnings (rounded to the nearest $). The customer must have a salary of more than $10,000 per month to get the lower debit limit. If the salary exceeds $15,000 per month, the customer gets the higher debit limit.

Given a customer with monthly earnings <earnings>
When the customer applies for a credit card
Then the application should be <accepted> or <rejected>, and if it is accepted, the maximum credit card limit should be <limit>

Examples of test cases written based on this story are shown in Table 4.14. Note that in this example the tester, while creating test cases to check the rule described in the story, simultaneously tried to cover the boundary values of the "monthly salary" domain.

Table 4.14 Business outcomes

| TC | Earnings | Expected result | Maximum limit | Comments                   |
| 1  | $10,000  | Rejected        | $0            | Income ≤ $10,000           |
| 2  | $10,001  | Accepted        | $2500         | $10,000 < income ≤ $15,000 |
| 3  | $15,000  | Accepted        | $2500         | $10,000 < income ≤ $15,000 |
| 4  | $15,001  | Accepted        | $5000         | $15,000 < income           |
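The business rule in this story is small enough to sketch directly as code. The function below is our own illustrative rendering (not the book's), with the four boundary-value test cases of Table 4.14 attached as assertions:

def credit_card_limit(monthly_income: int):
    """Return the maximum debit limit in dollars, or None if the application is rejected."""
    if monthly_income > 15_000:
        return 5_000
    if monthly_income > 10_000:
        return 2_500
    return None  # rejected: income <= $10,000

assert credit_card_limit(10_000) is None     # TC1: rejected at the boundary
assert credit_card_limit(10_001) == 2_500    # TC2: lower limit
assert credit_card_limit(15_000) == 2_500    # TC3: still the lower limit
assert credit_card_limit(15_001) == 5_000    # TC4: higher limit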

Table 4.14 Business outcomes

| TC | Earnings | Expected result | Maximum limit | Comments                   |
|----|----------|-----------------|---------------|----------------------------|
| 1  | $10,000  | Rejected        | $0            | Income ≤ $10,000           |
| 2  | $10,001  | Accepted        | $2500         | $10,000 < income ≤ $15,000 |
| 3  | $15,000  | Accepted        | $2500         | $10,000 < income ≤ $15,000 |
| 4  | $15,001  | Accepted        | $5000         | $15,000 < income           |
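Rule-plus-examples specifications of this kind translate directly into data-driven unit tests. Here is a sketch in Python with pytest; the max_credit_limit function is a hypothetical implementation of the business rule, written only to make the test cases of Table 4.14 concrete:

# Sketch: the four test cases of Table 4.14 as one parameterized pytest test.
# max_credit_limit is a hypothetical implementation of the rule in the story.
import pytest

def max_credit_limit(monthly_earnings: int) -> int:
    """Return the maximum credit card limit in $; 0 means the application is rejected."""
    if monthly_earnings > 15_000:
        return 5_000
    if monthly_earnings > 10_000:
        return 2_500
    return 0

@pytest.mark.parametrize("earnings, expected_limit", [
    (10_000, 0),      # TC1: income <= $10,000, application rejected
    (10_001, 2_500),  # TC2: just above the lower boundary
    (15_000, 2_500),  # TC3: on the boundary of the higher limit
    (15_001, 5_000),  # TC4: just above that boundary
])
def test_max_credit_limit(earnings, expected_limit):
    assert max_credit_limit(earnings) == expected_limit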

4.5.3 Acceptance Test-Driven Development (ATDD)

Acceptance test-driven development (ATDD) is a test-first approach (see Sect. 2.1.3). Test cases are created before the user story is implemented. Test cases are created by team members with different perspectives on the product, such as customers, developers, and testers [63]. Test cases can be manual or automated.

The first step is the so-called specification workshop, during which team members analyze, discuss, and write the user story and its acceptance criteria. During this process, all kinds of problems in the story, such as incompleteness, ambiguities, contradictions, or other kinds of defects, are fixed.

The next step is to create tests. This can be done collectively by the team or individually by a tester. In either case, an independent person, such as a business representative, validates the tests. Tests are examples, based on acceptance criteria, that describe specific characteristics of the user story (see Sect. 4.5.2). These examples help the team implement the user story correctly. Because examples and tests are the same thing, the terms are often used interchangeably. Once the tests are designed, the test techniques described in Sects. 4.2, 4.3, and 4.4 can be applied.

Typically, the first tests are positive tests, confirming correct behavior with no exceptions or errors, involving the sequence of actions performed if everything goes as expected. Such scenarios are said to implement so-called happy paths, i.e., execution paths where everything goes as planned and no failures occur. After the positive test executions, the team should perform negative tests and tests of nonfunctional attributes (e.g., performance, usability).

Tests should be expressed in terms that stakeholders can understand. Typically, tests include natural language sentences containing the necessary preconditions (if any), inputs, and associated outputs. The examples (i.e., tests) must cover all features of the user story and should not go beyond it. However, acceptance criteria may detail some of the issues described in the user story. In addition, no two examples should describe the same features of the user story (i.e., the tests should not be redundant). When tests are written in a format supported by an acceptance test automation framework, developers can automate these tests by writing support code during the implementation of the feature described by the user story. The acceptance tests then become executable requirements. An example of such support code is shown in Sect. 2.1.3, where the BDD approach is discussed.

Example
Suppose a team intends to test the following user story.

User story US-002-02
As a logged-in bank customer
I want to be able to transfer money to another account
So that I am able to transfer funds between accounts

Acceptance criteria:
• (AC1) The amount of the transfer cannot exceed the account balance.
• (AC2) The target account must be correct.
• (AC3) For valid data (amount, account numbers), the source account balance decreases and the destination account balance increases by the amount of the transfer.
• (AC4) The transfer amount must be positive and represent a correct amount of money (i.e., be accurate to at most two decimal places).

Keeping in mind that tests should verify that the acceptance criteria are met, examples of positive functional tests here could be the following tests, relating to (AC3) and also verifying (AC4):


• TC1: Make a "typical" transfer, e.g., a transfer of $1500 from an account with a balance of $2735.45 to another correct account; expected result: source account balance = $1235.45, and the balance of the target account increases by $1500.
• TC2: Make a correct transfer of the entire account balance to another correct account (e.g., a transfer of $21.37 from an account with a balance of $21.37); expected result: source account balance = $0, and the target account balance increases by $21.37.

The second test case uses boundary value analysis for the difference between the account balance and the transfer amount. The next step is to create negative tests, keeping in mind that the tests are to verify that the acceptance criteria are met. For example, a tester might consider the following situations:

• TC3: Attempt to make a transfer to another, correct account when the transfer amount is greater than the balance of the source account (verification of AC1).
• TC4: Attempt to make a transfer with a correct transfer amount and balance, but to a non-existent account (verification of AC2).
• TC5: Attempt to make a transfer with a correct transfer amount and balance to the same account (verification of AC2).
• TC6: Attempt to make a transfer of an incorrect amount (verification of AC4).

Of course, for each of these tests, the tester can define a series of test data to verify specific program behavior. In creating these test cases, the tester can use black-box techniques (e.g., equivalence partitioning or boundary value analysis). For example, for TC1, it is worth considering transfers of the following amounts: the minimum possible (e.g., $0.01), a "typical" amount (e.g., $1500), amounts with different levels of accuracy (e.g., $900.2, $8321.06), or a very large amount (e.g., $8,438,483,784). For TC3, on the other hand, it is worth checking the following situations (here we use equivalence partitioning and boundary value analysis):

• When the transfer amount is much larger than the account balance
• When the transfer amount is greater by 1 cent (the minimum possible increment) than the account balance

For TC4, an invalid account can be represented as:

• An empty character string
• A number with the correct structure (26 digits), but not corresponding to any physical account
• A number with an incorrect structure: too short (e.g., 25 digits)
• A number with an incorrect structure: too long (e.g., 27 digits)
• A number with an invalid structure: containing forbidden characters, such as letters

Finally, for TC6, it is worth considering in particular:

• A negative number representing a correct amount (e.g., -$150.25)
• The number 0
• A character string containing letters (e.g., $15B.20)


• A number represented as the result of a mathematical operation (e.g., $15+$20)
• A number with more than two decimal places (e.g., $0.009)

Note that each of the test data sets given above checks a significantly different potential risk existing in the system than the other cases. Thus, the tests are nonredundant; they do not check "the same thing."
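All of the TC6 data sets probe a single question: does the string represent a positive amount with at most two decimal places? The following is a sketch of such a format check in Python; is_valid_amount is a hypothetical helper, shown only to illustrate how this negative test data would exercise the validation logic:

# Sketch: a hypothetical amount-format check and the TC6 test data against it.
import re

AMOUNT = re.compile(r"\d+(\.\d{1,2})?")  # digits, optional dot and 1-2 decimals

def is_valid_amount(text: str) -> bool:
    """True for a positive money amount with at most two decimal places."""
    return bool(AMOUNT.fullmatch(text)) and float(text) > 0

for bad in ["-150.25", "0", "15B.20", "15+20", "0.009"]:
    assert not is_valid_amount(bad)       # every TC6 value must be rejected
for good in ["0.01", "1500", "2735.45"]:
    assert is_valid_amount(good)          # TC1-style amounts must pass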

Sample Questions

Question 4.1 (FL-4.1.1, K2)
Test design, test implementation, and test execution based on software input domain analysis are an example of:
A. White-box test technique.
B. Black-box test technique.
C. Experience-based test technique.
D. Static testing technique.
Choose one answer.

Question 4.2 (FL-4.1.1, K2)
What is a common feature of techniques such as equivalence partitioning, state transition testing, or decision tables?
A. Using these techniques, test conditions are derived based on information about the internal structure of the software under test.
B. To provide test data for test cases designed based on these techniques, the tester should analyze the source code.
C. Coverage in these techniques is measured as the ratio of tested source code elements (e.g., statements) to all such elements identified in the code.
D. Test cases designed with these techniques can detect discrepancies between requirements and their actual implementation.
Choose one answer.

Question 4.3 (FL-4.2.1, K3)
The system assigns a discount for purchases depending on the purchase amount expressed in $, which is a positive number with two decimal places. Purchases up to $99.99 do not entitle the customer to a discount. Purchases from $100 to $299.99 entitle the customer to a 5% discount. Purchases above $299.99 entitle the customer to a 10% discount. Indicate the minimum set of values (given in $) that achieves 100% equivalence partitioning coverage.
A. 0.01; 100.99; 500.
B. 99.99; 100; 299.99; 300.
C. 1; 99; 299.
D. 0; 5; 10.
Choose one answer.

Question 4.4 (FL-4.2.1, K3)
The coffee vending machine accepts 25c, 50c, and $1 coins. The coffee costs 75c. After inserting the first coin, the machine waits until the amount of coins inserted by the user is equal to or greater than 75c. When this happens, the coin slot is blocked, coffee is dispensed, and (if necessary) change is given. Assume that the machine always has sufficient numbers and denominations of coins to give change. Consider the following test scenarios:

Scenario 1: Insert coins in order: 25c, 25c, 25c. Expected behavior: vending machine dispenses coffee and no change.
Scenario 2: Insert coins in order: 25c, 50c. Expected behavior: vending machine dispenses coffee and no change.
Scenario 3: Insert $1 coin. Expected behavior: vending machine dispenses coffee and 25c change.
Scenario 4: Insert coins in order: 25c, 25c, 50c. Expected behavior: vending machine dispenses coffee and 25c change.

You want to check whether the machine actually gives change when the user has inserted coins for more than 75c and whether it gives no change at all if the user has inserted exactly 75c. Which of these scenarios represent the minimum set of test cases that achieves this goal?
A. Scenario 1, Scenario 2.
B. Scenario 3, Scenario 4.
C. Scenario 1, Scenario 3.
D. Scenario 1, Scenario 2, Scenario 3.
Choose one answer.

Question 4.5 (FL-4.2.1, K3)
You are testing a card management system that entitles you to discounts on purchases. There are four types of cards, regular, silver, gold, and diamond, and three possible discounts: 5%, 10%, and 15%. You use the equivalence partitioning technique to test whether the system works correctly for all types of cards and all possible discounts. You already have the following test cases prepared:

TC1: regular card, discount: 10%
TC2: silver card, discount: 15%
TC3: gold card, discount: 15%
TC4: silver card, discount: 10%

What is the LEAST number of test cases you have to ADDITIONALLY prepare to achieve 100% "each choice" coverage of both card types and discount types?
A. 1
B. 3
C. 2
D. 8
Choose one answer.

Question 4.6 (FL-4.2.2, K3)
The user defines the password by entering it in the text field and clicking the "Confirm" button. By default, the text field is blank at the beginning. The system considers the password format correct if the password has at least 6 characters and no more than 11 characters. You identified three equivalence partitions: password too short, password length OK, and password too long. The existing test cases represent a minimum set of test cases achieving 100% 2-value BVA coverage. The test manager decided that due to the criticality of the component under test, your tests should achieve 100% 3-value BVA coverage. Passwords of what lengths should be ADDITIONALLY tested to achieve the required coverage?
A. 5, 12
B. 0, 5, 12
C. 4, 7, 10
D. 1, 4, 7, 10, 13
Choose one answer.

Question 4.7 (FL-4.2.2, K3)
Users of the car wash have electronic cards that record how many times they have used the car wash so far. The car wash offers a promotion: every tenth wash is free. You are testing the correctness of offering the promotion using the boundary value analysis technique. What is the MINIMAL set of test cases that achieves 100% 2-value BVA coverage? The values in the answers indicate the number of the given (current) wash.


A. 9, 10
B. 1, 9, 10
C. 1, 9, 10, 11
D. Required coverage cannot be achieved
Choose one answer.

Table 4.15 Decision table for free fare allocation rules

| Conditions            | R1  | R2  | R3  |
|-----------------------|-----|-----|-----|
| Member of parliament? | YES | –   | –   |
| Disabled?             | –   | YES | –   |
| Student?              | –   | –   | YES |
| Action                |     |     |     |
| Free ride?            | YES | YES | NO  |

Question 4.8 (FL-4.2.3, K3)
The business analyst prepared a minimized decision table (Table 4.15) to describe the business rules for granting free bus rides. However, the decision table is flawed. Which of the following test cases, representing combinations of conditions, shows that the business rules in this decision table are CONTRADICTORY?
A. Member of parliament = YES, disabled = NO, student = YES.
B. Member of parliament = YES, disabled = YES, student = NO.
C. Member of parliament = NO, disabled = NO, student = YES.
D. Member of parliament = NO, disabled = NO, student = NO.
Choose one answer.

Question 4.9 (FL-4.2.3, K3)
You create a full decision table that has the following conditions and actions:
• Condition "age": possible values: (1) up to 18; (2) 19–40; (3) 41 or more.
• Condition "place of residence": possible values: (1) city; (2) village.
• Condition "monthly salary": possible values: (1) up to $4000; (2) $4001 or more.
• Action "grant credit": possible values: (1) YES; (2) NO.
• Action "offer credit insurance": possible values: (1) YES; (2) NO.
How many columns will the full decision table have for this problem?
A. 6
B. 11
C. 12
D. 48
Choose one answer.

Table 4.16 Valid transitions of the system for the login process

| Transition | State   | Event      | Next state |
|------------|---------|------------|------------|
| 1          | Initial | Login      | Logging    |
| 2          | Logging | LoginOK    | Logged     |
| 3          | Logging | LoginError | Initial    |
| 4          | Logged  | Logout     | Initial    |

Question 4.10 (FL-4.2.4, K3)
Table 4.16 shows in its rows ALL valid transitions between states in the system for handling the login process. The system contains three states (Initial, Logging, Logged) and four possible events (Login, LoginOK, LoginError, Logout). How many INVALID transitions are in this system?
A. 8
B. 0
C. 4
D. 12
Choose one answer.

Question 4.11 (FL-4.2.4, K3)
Figure 4.17 shows a state transition diagram for a certain system. What is the MINIMAL number of test cases that will achieve 100% valid transitions coverage?
A. 2
B. 3
C. 6
D. 7
Choose one answer.

Fig. 4.17 Transition diagram of a state machine

Question 4.12 (FL-4.3.1, K2)
After test execution, your test cases achieved 100% statement coverage. Which of the following statements describes the correct consequence of this fact?
A. 100% branch coverage is achieved.
B. Every logical value of every decision in the code was exercised.
C. Every possible output that the program under test can return was enforced.
D. Each statement containing a defect was exercised.
Choose one answer.

Question 4.13 (FL-4.3.2, K2)
Consider a simple program of three statements (assume that statements 1 and 2 take nonnegative integers (0, 1, 2, etc.) and that these statements will always execute correctly):

1. Get the input value x
2. Get the input value y
3. Return x + y

How many test cases are needed to achieve 100% branch coverage in this code and why?
A. Two test cases are needed, because one should test the situation where the returned value is 0, and the other should test the situation where the returned value is a positive number.
B. There is no need to run any tests, because branch coverage for this code is satisfied by definition, since the program consists of a sequential passage of three statements and there are no conditional branches to test.
C. One test case is needed, with arbitrary input values x and y, because each test will result in the execution of the same path covering all branches.
D. It is impossible to achieve branch coverage with a finite number of test cases, because to do so, we would have to force all possible values of the sum x + y in statement 3, which is infeasible.
Choose one answer.

Question 4.14 (FL-4.3.3, K2)
Which of the following BEST describes the benefit of using white-box test techniques?
A. The ability to detect defects when specifications are incomplete.
B. The ability of developers to perform tests, as these techniques require programming skills.
C. Making sure tests achieve 100% coverage of any black-box technique, as this is implied by achieving 100% code coverage.
D. Better residual risk level control in code, as it is directly related to measures such as statement coverage and branch coverage.
Choose one answer.


Question 4.15 (FL-4.4.1, K2)
The tester uses the following document to design the test cases:
1. Occurrence of an arithmetic error caused by dividing by zero.
2. Occurrence of a rounding error.
3. Forcing a negative value result.
What technique does the tester use?
A. Boundary value analysis.
B. Checklist-based testing.
C. Use case-based testing.
D. Error guessing.
Choose one answer.

Question 4.16 (FL-4.4.2, K2)
Which of the following is the correct sentence regarding the use of formal (i.e., black-box and white-box) test techniques as part of session-based exploratory testing?
A. All formal test techniques are allowed, because exploratory testing does not impose any specific method of operation.
B. All formal test techniques are forbidden, because exploratory testing is based on the tester's knowledge, skills, intuition, and experience.
C. All formal test techniques are forbidden, because the steps performed in exploratory testing are not planned in advance.
D. All formal test techniques are allowed, because an exploratory tester needs a test basis from which to derive test cases.
Choose one answer.

Question 4.17 (FL-4.4.3, K2)
What benefit can be achieved by using checklist-based testing?
A. Appreciation of nonfunctional testing, which is often undervalued.
B. Enabling accurate code coverage measurement.
C. Leveraging the tester's expertise.
D. Improved test consistency.
Choose one answer.

Question 4.18 (FL-4.5.1, K2)
During an iteration planning meeting, the team shares thoughts with each other about a user story. The product owner wants the customer to have a single-screen form for entering information. The developer explains that this feature has some technical limitations related to the amount of information that can be captured on the screen. Which of the following BEST represents the continuation of writing this user story?
A. The tester decides that the form must fit on the screen and describes this as one of the acceptance criteria for the story, since the tester will be the one to conduct acceptance tests later.
B. The tester listens to the points of view of the product owner and the developer and creates two acceptance tests for each of the two proposed solutions.
C. The tester advises the developer that the performance acceptance criteria should be based on a standard: a maximum of 1 second per data record. The developer describes this as one of the acceptance criteria, since the developer is responsible for the application's performance.
D. The tester negotiates with the developer and the product owner the amount of necessary input information, and together they decide to reduce this information to the most important pieces so that it fits on the screen.
Choose one answer.

Question 4.19 (FL-4.5.2, K2)
Consider the following user story: As a potential customer of the e-shop, I want to be able to register by filling out the registration form, so that I am able to use the full functionality of the e-shop. Which TWO of the following are examples of testable acceptance criteria for this story?
A. The process of carrying out registration must take place quickly enough.
B. After registration, the user has access to the "home ordering" function.
C. The system refuses registration if the user enters as a login an e-mail already existing in the database.
D. The system operator can send the order placed by the user for processing.
E. Acceptance criteria for this user story should be in Given/When/Then format.
Select TWO answers.

Question 4.20 (FL-4.5.3, K3)
The business rule for borrowing books in the university library system states that a reader can borrow new books if the following two conditions are met together:
• The time of the longest held book does not exceed 30 days.
• After borrowing, the total number of borrowed books will not exceed five books for a student and ten books for a professor.


The team is to implement a user story for this requirement. The story is created using the ATDD framework, and the following acceptance criteria are written as examples in the Given/When/Then format:

Given <UserType> has already borrowed <Number> of books
And the time of the longest held book is <Days> days
When the user wants to borrow <Request> new books
Then the system <Decision> to loan the books

Examples:
| No | UserType  | Number | Days | Request | Decision       |
|----|-----------|--------|------|---------|----------------|
| 1  | Student   | 3      | 30   | 2       | does not allow |
| 2  | Student   | 4      | 1    | 1       | allows         |
| 3  | Professor | 6      | 32   | 3       | does not allow |
| 4  | Professor | 0      | 0    | 6       | allows         |

How many of the above four test cases are INCORRECTLY defined, i.e., they violate the business rules of book lending?
A. None: they all follow the business rule.
B. One.
C. Two.
D. Three.
Choose one answer.

Exercises

Exercise 4.1 (FL-4.2.1, K3)
The user fills out a web form to purchase concert tickets. Tickets are available for three concerts: Iron Maiden, Judas Priest, and Black Sabbath. For each of the concerts, one of two types of tickets can be purchased: a sector in front of the stage and a sector away from the stage. The user confirms the choice by checking the boxes on the corresponding drop-down lists (one of which contains the names of the bands, the other the type of ticket). Only one ticket for a single band can be purchased per session. You want to check the correctness of the system for each band and, independently, for each type of ticket.
A) Identify the domains and their equivalence partitions.
B) Are there any invalid partitions? Justify your answer.
C) Design the smallest possible set of test cases that will achieve 100% equivalence partitioning coverage.

Exercise 4.2 (FL-4.2.1, K3)

A natural number greater than 1 is called prime if it is divisible by only two numbers: by 1 and by itself. The system takes a natural number (entered by the user in the form field) as input and returns whether it is prime or not. The form field for the input data has a validation mechanism and will not allow any string to be entered that does not represent a valid input (i.e., a natural number greater than one).
A) Identify the domain and its equivalence partitions.
B) Are there any invalid partitions in the problem? Justify your answer.
C) Design the smallest possible set of test cases that will achieve 100% equivalence partitioning coverage.

Exercise 4.3 (FL-4.2.2, K3)
You are testing the payment functionality of an e-shop. The system receives a positive amount of purchases (in $, with an accuracy of 1 cent). This amount is then rounded up to the nearest integer, and based on this rounded value, a discount is calculated according to the rules described in Table 4.17.

Table 4.17 Discount rules

| Amount                | Discount granted |
|-----------------------|------------------|
| Up to $300            | No               |
| Over $300, up to $800 | 5%               |
| Over $800             | 10%              |

You want to apply 2-value BVA to check the correctness of the discount calculation. The input data in the test case is the amount before rounding. Conduct equivalence partitioning, determine the boundary values, and design the test cases.

Exercise 4.4 (FL-4.2.2, K3)
The system calculates the price of picture framing based on the given parameters: width and height of the picture (in centimeters). The correct width of the picture is between 30 and 100 cm inclusive. The correct height of the picture is between 30 and 60 cm inclusive. The system takes both input values through an interface that accepts only valid values from the ranges specified above. The system calculates the area of the picture as the product of width and height. If the surface area exceeds 1600 cm², the framing price is $500. Otherwise, the framing price is $450. Apply 2-value BVA to the above problem:
A) Identify the partitions and their boundary values.
B) Design the smallest possible number of test cases covering the boundary values for all relevant parameters. Is it possible to achieve full 2-value BVA coverage? Justify your answer.

Exercise 4.5 (FL-4.2.3, K3)


The operator of the driver's license test support system enters the following information into the system for a candidate who is taking the exams for the first time:
• The number of points from the theoretical exam (an integer from 0 to 100).
• The number of errors made by the candidate during the practical exam (an integer, 0 or greater).
The candidate must take both exams. A candidate is granted a driver's license if they meet the following two conditions: they scored at least 85 points on the theoretical exam and made no more than two errors on the practical exam. If a candidate fails one of the exams, they must repeat that exam. In addition, if the candidate fails both exams, they are required to take additional hours of driving lessons.

Use decision table testing to conduct the process of designing test cases for the above problem. Follow the five-step procedure described in Sect. 4.2.3, that is:
A) Identify all the possible conditions, and list them in the top part of the table.
B) Identify all the possible actions that can occur in the system, and list them in the bottom part of the table.
C) Generate all combinations of conditions. Eliminate infeasible combinations (if there are any), and list all remaining, feasible combinations in individual columns in the top part of the table.
D) For each combination of conditions identified in this way, determine, based on the specification, what actions should take place in the system, and enter them in the appropriate columns in the bottom part of the table.
E) For each column, design a test case that includes a name (describing what the test case tests), preconditions, input data, expected output, and postconditions.

Exercise 4.6 (FL-4.2.3, K3)
Figure 4.18 (after Graham et al., Foundations of Software Testing) shows the process of allocating seats to passengers based on whether they have a gold card and whether there are seats available in business and economy classes, respectively. Describe this process with a decision table. Did you notice any problem with the specification while creating the table? If so, suggest a solution to the problem.

Exercise 4.7 (FL-4.2.4, K3)
The ATM is initially in a waiting state (welcome screen). After the user inserts the card, card validation takes place. If the card is invalid, the system returns it and terminates with the message "Card error." Otherwise, the system asks the user to enter a PIN. If the user provides a valid PIN, the system switches to the "Logged" state and terminates the operation. If the user enters a wrong PIN, the system asks them to enter it again. If the user enters the wrong PIN three times, the card is blocked, the user receives the message "Card blocked," and the system goes to the final state representing the card being blocked.


Fig. 4.18 The process of allocating seats on an airplane

A) Identify possible states, events, and actions, and design a state transition diagram for the above scenario without using guard conditions.
B) Design a state transition diagram for the same scenario but using guard conditions. You should get a model with fewer states.
C) For the state transition diagram from (A), design the smallest possible number of test cases that achieve:
a) 100% all states coverage.
b) 100% valid transitions coverage.

Exercise 4.8 (FL-4.2.4, K3)
The operation of the mechanical robot dog is described by the state transition diagram shown in Fig. 4.19. The initial state is "S."

Fig. 4.19 Robot dog state transition diagram

A) How many invalid transitions are in this diagram?
B) Design test cases that achieve full all transitions coverage. Adopt the rule that when testing invalid transitions, one test case tests only one invalid transition.

Exercise 4.9 (FL-4.5.3, K3)
As a tester, you start working on the acceptance test design for the following user story:

US-01-002 New user registration
Effort estimation: 3
As: any user (potential customer)
I want to: be able to fill out the web registration form
So that: I am able to use the service provider's offerings

Acceptance criteria:


AC1: The user's login is a valid e-mail address.
AC2: The user cannot register using an existing login.
AC3: For security reasons, the user must enter the password in two independent fields, and these passwords must be identical; the system does not allow the user to paste a string into these fields; it must be entered manually.
AC4: The password must contain at least 6 and at most 12 characters, at least 1 number, and at least 1 capital letter.
AC5: Successful registration results in sending an email containing a link, the clicking of which confirms registration and activates the user's account.

Propose some sample acceptance tests with specific test data and expected system behavior.

Chapter 5 Managing the Test Activities

Keywords

Defect management: The process of recognizing, recording, classifying, investigating, resolving, and disposing of defects.
Defect report: Documentation of the occurrence, nature, and status of a defect. Synonym: bug report.
Entry criteria: The set of conditions for officially starting a defined task. References: Gilb and Graham.
Exit criteria: The set of conditions for officially completing a defined task. After Gilb and Graham. Synonyms: test completion criteria, completion criteria.
Product risk: A risk impacting the quality of a product.
Project risk: A risk that impacts project success.
Risk: A factor that could result in future negative consequences.
Risk analysis: The overall process of risk identification and risk assessment.
Risk assessment: The process to examine identified risks and determine the risk level.
Risk control: The overall process of risk mitigation and risk monitoring.
Risk identification: The process of finding, recognizing, and describing risks. References: ISO 31000.
Risk level: The measure of a risk defined by impact and likelihood. Synonym: risk exposure.
Risk management: The process for handling risks. After ISO 24765.
Risk mitigation: The process through which decisions are reached and protective measures are implemented for reducing or maintaining risks to specified levels.


Risk monitoring: The activity that checks and reports the status of known risks to stakeholders.
Risk-based testing: Testing in which the management, selection, prioritization, and use of testing activities and resources are based on corresponding risk types and risk levels. After ISO 29119-1.
Test approach: The manner of implementing testing tasks.
Test completion report: A type of test report produced at completion milestones that provides an evaluation of the corresponding test items against exit criteria. Synonyms: test summary report.
Test control: The activity that develops and applies corrective actions to get a test project on track when it deviates from what was planned.
Test monitoring: The activity that checks the status of testing activities, identifies any variances from what was planned or expected, and reports status to stakeholders.
Test plan: Documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organized to coordinate testing activities. References: ISO 29119-1.
Test planning: The activity of establishing or updating a test plan.
Test progress report: A type of periodic test report that includes the progress of test activities against a baseline, risks, and alternatives requiring a decision. Synonyms: test status report.
Test pyramid: A graphical model representing the relationship of the amount of testing per level, with more at the bottom than at the top.
Testing quadrants: A classification model of test types/test levels in four quadrants, relating them to two dimensions of test objectives: supporting the product team versus critiquing the product, and technology facing versus business facing.

5.1 Test Planning

FL-5.1.1 (K2) Exemplify the purpose and content of a test plan
FL-5.1.2 (K1) Recognize how a tester adds value to iteration and release planning
FL-5.1.3 (K2) Compare and contrast entry criteria and exit criteria
FL-5.1.4 (K3) Use estimation techniques to calculate the required test effort
FL-5.1.5 (K3) Apply test case prioritization
FL-5.1.6 (K1) Recall the concepts of the test pyramid
FL-5.1.7 (K2) Summarize the testing quadrants and their relationships with test levels and test types

5.1.1 Purpose and Content of a Test Plan

A test plan is a document that provides a detailed description of the test project's objectives, the means necessary to achieve those objectives, and a schedule of test activities. In typical projects, a single test plan, sometimes called a master test plan or project test plan, is usually created. However, in larger projects, you may encounter several plans, such as a master test plan and plans corresponding to the test levels defined in the project (level test plans or phase test plans). In such a situation, there may be, for example, a component integration test plan, a system test plan, an acceptance test plan, etc. Detailed test plans may also relate to the types of tests that are planned to be carried out in the project (e.g., a performance test plan).

The test plan describes the test approach and helps the team make sure that test activities can be started and, when completed, that they have been conducted properly. This is accomplished by defining specific entry and exit criteria (see Sect. 5.1.3) for each test activity in the plan. The test plan also confirms that, if followed, testing will be conducted in accordance with the project's test strategy and the organization's test policy.

Very rarely will a project turn out 100% as we planned it. In most situations, minor or major modifications will be necessary. The natural question then arises: why waste time planning in such a case? Dwight Eisenhower, US Army general and later President of the United States, once famously said, "In preparing for battle, I have always realized that plans are useless, but planning is indispensable."¹ Indeed, the most valuable part of creating a test plan is the test planning process itself. Planning in a sense "forces" testers to focus their thinking on future challenges and risks related to schedule, resources, people, tools, cost, effort, etc. During planning, testers can identify risks and think through the test approach that will be most effective. Without planning, testers would be unprepared for the many undesirable events that will happen during the project. This, in turn, would create a very high risk of project failure, i.e., not completing the project or completing it late, over budget, or with a reduced scope of work.

¹ https://pl.wikiquote.org/wiki/Dwight_Eisenhower


The planning process is influenced by many factors that testers should consider in order to best plan all the activities of the testing process. Among these factors, the following should be considered in particular:
• Test policy and test strategy
• SDLC
• Scope of testing
• Objectives
• Risks
• Limitations
• Criticality
• Testability
• Resource availability

A typical test plan includes the following information:
• Context of testing: scope, objectives, limitations, and test basis information
• Assumptions and limitations of the test project
• Stakeholders: roles, responsibilities, influence on the testing process (e.g., power vs. interest), and hiring and training needs of people
• Communication: types and frequency of communication and templates of documents used
• List of risks: product risks and project risks (see Sect. 5.2)
• Approaches to testing: test levels (see Sect. 2.2.1), test types (see Sect. 2.2.2), test techniques (see Chap. 4), test work products, entry and exit criteria (see Sect. 5.1.3), level of test independence (see Sect. 1.5.3), definition of the test metrics used (see Sect. 5.3.1), test data requirements, test environment requirements, and deviations from organizational best practices (with justification)
• Schedule (see Sect. 5.1.5)

As the project is realized and the test plan is implemented, additional information appears, detailing the plan. Thus, test planning is an ongoing activity performed throughout the product life cycle; sometimes, it includes a maintenance phase. It is important to realize that the initial test plan will be updated, as feedback from test activities will be used to identify changing risks and adjust plans accordingly. The results of the planning process can be documented in a master test plan and in separate plans for specific test levels and test types.

The following steps are performed during test planning (see Fig. 5.1):
• Defining the scope and objectives of testing and the associated risks
• Determining the overall approach to testing
• Integrating and coordinating the test activities within the SDLC activities
• Deciding what to test, what personnel and other resources will be needed to perform the various test activities, and how the activities should be performed


Fig. 5.1 Test planning process (according to ISO/IEC/IEEE 29119-2 standard)

• Planning the activities of test analysis, design, implementation, execution, and evaluation, either by setting specific deadlines (sequential approach) or by placing these activities in the context of individual iterations (iterative approach)
• Making a selection of measures for test monitoring and test control
• Determining the budget for the collection and evaluation of metrics
• Determining the level of detail and structure of documentation (templates or sample documents)

One part of the test plan, the test approach, will in practice be the implementation of a specific test strategy in effect in the organization or project. The test strategy is a general description of the test process being implemented, usually at the product or organization level.

Testing strategies
Typical testing strategies are:
• Analytical strategy. This strategy is based on the analysis of a specific factor (e.g., requirements or risk). An example of an analytical approach is risk-based testing, in which the starting point for test design and prioritization is the risk level.
• Model-based strategy. With this strategy, tests are designed on the basis of a model of a specific required aspect of the product, for example, a function, business process, internal structure, or nonfunctional characteristic (such as reliability). Models can be created based on business process models, state models, reliability growth models, etc.
• Methodical strategy. The basis of this strategy is the systematic application of a predetermined set of tests or test conditions, for example, a standard set of tests or a checklist containing typical or likely failure types. According to this approach, checklist-based testing, fault attacks, and testing based on quality characteristics, among other things, are performed.
• Process-compliant (or standard-compliant) strategy. This strategy involves creating test cases based on external rules and standards (derived, e.g., from industry standards), process documentation, or rigorous identification and use of the test basis, as well as any process or standard imposed by the organization.
• Directed (or consultative) strategy. This strategy is primarily based on advice and guidance from stakeholders, subject matter experts, or technical experts, including those outside the test team or organization.
• Regression-averse strategy. This strategy, motivated by the desire to avoid regression of already existing functionality, envisions the reuse of legacy testware (especially test cases), the extensive automation of regression testing, and the use of standard test suites.
• Reactive strategy. With this strategy, testing is geared more toward reacting to events than following a predetermined plan (as with the strategies described above), and tests are designed and can be immediately executed based on knowledge gained from the results of previous tests. An example of this approach is exploratory testing, in which tests are executed and evaluated in response to the behavior of the software under test.

In practice, different strategies can and even should be combined. The basis for test strategy selection includes:
• Risk of project failure:
  – Risks to the product
  – Danger to people, the environment, or the company caused by product failure
  – Lack of skills and experience of the people in the project
• Regulations (external and internal) on the development process
• Purpose of the testing venture
• Mission of the test team
• Type and specifics of the product


5.1.2 Tester's Contribution to Iteration and Release Planning

In iterative approaches to software development, there are two types of planning related to products: release planning and iteration planning. Other types of planning exist on a higher, organizational or strategic level (such as "big room" planning or "product increment" planning); these are not discussed further here.

Release planning
Release planning looks ahead to the release of a product. It defines and redefines the product backlog and may include refining larger user stories into a collection of smaller stories. Release planning provides the basis for a test approach and a test plan covering all iterations of the project. During release planning, business representatives (in collaboration with the team) determine and prioritize the user stories for a release. Based on these user stories, project and product risks are identified (see Sect. 5.2), and high-level effort estimation is performed (see Sect. 5.1.4).

Testers are involved in release planning and add value especially in the following activities:
• Defining testable user stories, including acceptance criteria
• Participating in project and product (quality) risk analysis
• Estimating the testing effort related to user stories
• Defining the necessary test levels
• Planning the testing for the release

Iteration planning
Once release planning is complete, iteration planning begins for the first iteration. Iteration planning looks ahead to the end of a single iteration and addresses the iteration backlog. During iteration planning, the team selects user stories from the prioritized product backlog, refines (clarifies) them and slices them (when needed), develops them, performs risk analysis for the user stories, and estimates the work needed to implement each selected story (see Sect. 5.1.4). If a user story is unclear and attempts to clarify it have failed, the team can reject it and take the next user story based on priority. Business representatives must answer the team's questions about each story so that the team understands what it should implement and how to test each story.

The number of selected stories is based on the so-called team velocity² and the estimated size of the selected user stories, as well as technical constraints.

² Team velocity is the empirically determined amount of work a team is able to perform during a single iteration. It is usually expressed in so-called user story points. The size of each story is also estimated in these units, so the team knows how many user stories it can take into the iteration backlog: the sum of their sizes cannot exceed the team's velocity. This reduces the risk that the team will not have time to complete all the work planned for an iteration, or, conversely, that the team will finish the work before the end of the iteration, causing so-called empty runs.


Once the content of the iteration is finalized, the user stories are divided into tasks to be executed by the corresponding team members.

Testers are involved in iteration planning and add value especially in the following activities:
• Participating in detailed risk analysis of user stories
• Determining the testability of user stories
• Co-creating acceptance tests for user stories
• Splitting user stories into tasks (especially test tasks)
• Estimating the testing effort for all testing tasks
• Identifying functional and nonfunctional testing aspects of the system under test
• Supporting and participating in test automation at multiple test levels

5.1.3 Entry Criteria and Exit Criteria

Entry criteria
Entry criteria (more or less similar to the Definition of Ready in an agile approach) define the preconditions that must be met before a test activity can begin. If they are not met, testing is likely to be more difficult, time-consuming, costly, and risky. Entry criteria include the availability of resources or testware:
• Testable requirements, user stories, and/or models
• Test items that have met the exit criteria applicable to earlier test levels (mainly in the waterfall approach)
• Test environment
• Necessary test tools
• Test data and other necessary resources
as well as the initial quality level of the test object (e.g., all smoke tests pass). Entry criteria protect us from starting tasks for which we are not yet fully prepared.

Exit criteria
Exit criteria (more or less similar to the Definition of Done in an agile approach) define the conditions that must be met in order for the execution of a test level or a set of tests to be considered completed. These criteria should be defined for each test level and test type, and they may vary depending on the test objectives. Typical exit criteria are:
• Completion of the execution of scheduled tests
• Achieving the required level of coverage (e.g., of requirements, user stories, acceptance criteria, or code)
• Not exceeding the agreed limit of unrepaired defects

• Obtaining a sufficiently low estimated defect density
• Achieving sufficiently high reliability rates

Note that sometimes test activities may be cut short due to:
• The use of the entire budget
• The passage of the scheduled time
• The pressure to bring a product to the market

In such situations, project stakeholders and business owners should learn about and accept the risks of running the system without further testing. Since these situations often occur in practice, testers (especially the person in the role of test team leader) should provide information describing the current state of the system, highlighting the risks.

Exit criteria usually take the form of measures of thoroughness or completeness. They express the desired degree of completion of a given job. Exit criteria allow us to determine whether we have performed certain tasks as planned.

Example
The team adopted the following exit criteria for the system testing phase:
• (EX1) 100% coverage of requirements by test cases
• (EX2) at least 75% statement coverage for each component tested
• (EX3) no open (unrepaired) defects with the highest level of severity
• (EX4) at most two open defects of medium or low severity

After analyzing the report at the end of the system testing phase, it became clear that:
• For each requirement, at least one test case was designed
• For two of the four components, full statement coverage was achieved; for the third, 80%; and for the fourth, 70%
• One of the tests detected a medium-severity defect that is still not closed

This analysis shows that criteria (EX1), (EX3), and (EX4) are met, while criterion (EX2) is not met for one of the components. The team decided to analyze which parts of this component's code were not covered and added a test case covering an additional control flow path, which increased the coverage for this component from 70% to 78%. At this point, criterion (EX2) is also met. Since all exit criteria for the system testing phase are now met, the team formally considers this phase complete.

5.1.4 Estimation Techniques

Test effort is the amount of test-related work needed to achieve the test project objectives. Test effort is often a major or significant cost factor during software development, sometimes consuming more than 50% of the total development effort [64]. One of the most common questions team members hear from their managers is "how long will it take to do this task?" All of this makes test estimation a very important activity from a test management perspective. Every tester should have the ability to estimate test effort, be able to use appropriate estimation techniques, understand the advantages and disadvantages of different approaches, and be aware of issues such as estimation accuracy (estimation error).

Test effort can be measured as labor intensity, a product measure (time × resources). For example, an effort of 12 person-days means that a given job will be done in 12 days by one person, or in 6 days by two persons, or in 4 days by three persons, and so on. In general, it will be work done in x days by y persons, where x × y = 12. Be careful, however, because in practice product measures "do not scale." For example, if a team of three people does some work in 4 days, the labor intensity is 12 person-days. However, if a manager adds a new person to the team, hoping that four people will do the same work in 3 days, they may be disappointed: the enlarged team may still need 4 days or even longer. This is due to a number of factors, such as an increase in communication effort (more people on the team) or the need for longer-serving team members to spend time bringing the new member up to speed, which can cause delays. Project managers have a saying illustrating this phenomenon: "one woman will have a baby in nine months, but nine women will not have a baby in a month."

When making an effort estimate, often not all knowledge about the test object is available, so the accuracy of the estimate may be limited. It is important to explain to stakeholders that the estimate is based on many assumptions. The estimate can be refined and possibly adjusted later (e.g., when more data is available). A possible solution is to use an estimation range to provide an initial estimate (e.g., the effort will be 10 person-months with a standard deviation of 3 person-months, meaning that with high probability, the actual effort will be between 10 − 3 = 7 and 10 + 3 = 13 person-months).

Estimating small tasks is usually more accurate than estimating large ones. Therefore, when estimating a complex task, you can use a decomposition technique called work breakdown structure (WBS). In this technique, the main (complex) task or activity to be estimated is hierarchically decomposed into smaller (simpler) subtasks. The goal is to break down the main task into easily estimable but executable components. The task should be broken down as precisely as possible based on current information, but only as much as necessary to understand and accurately estimate each individual subtask. After estimating all subtasks at the lowest level, the higher-level tasks are estimated (by summing up the estimates of the relevant subtasks) in a bottom-up manner. In the last step, the estimate of the main task is obtained.

An example use of the WBS method is shown in Fig. 5.2. We want to estimate the effort of an entire test project. However, it is so large that it does not make sense to estimate the entire effort at once, as the result would be subject to a very large error. So we divide the test project into the main tasks: planning, defining the test environment, integration testing, and system testing. Let us assume that we are able to fairly accurately estimate the effort for the first two tasks (1 and 4 person-days, respectively). Integration testing, however, will require a lot of effort, so we hierarchically divide this task into smaller ones.


Fig. 5.2 Using the WBS method to estimate test project effort

We identify two subtasks within integration testing: integration of components A and B (6 person-days) and integration of components A and C (3 person-days). Under system testing, we identify three functions, F1, F2, and F3, as test objects. For F2 and F3, we estimate the testing effort at 8 and 4 person-days, respectively. The F1 function is too large, so we split it into two subtasks and estimate the related effort at 3 and 5 person-days, respectively.

Now we can collect the results from the lowest levels and estimate the tasks at higher levels by summing the corresponding subtasks. For example, the effort for integration testing (9 person-days) is the sum of the effort of its two subtasks, 6 and 3 person-days. Similarly, the effort for F1 is 3 + 5 = 8 person-days, and the effort for system testing is 8 + 8 + 4 = 20 person-days. At the very end, we calculate the effort for the entire test project as the sum of its subtasks: 1 + 4 + 9 + 20 = 34 person-days.
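The bottom-up roll-up of Fig. 5.2 is nothing more than a recursive sum over the task tree. Here is a sketch; the numbers come from the example, while the nested-dictionary representation of the WBS is an assumption made for illustration:

# Sketch: bottom-up WBS roll-up for the effort figures of Fig. 5.2.
# A leaf holds an estimate in person-days; an inner node is a dict of subtasks.
wbs = {
    "Planning": 1,
    "Test environment": 4,
    "Integration testing": {"Integrate A and B": 6, "Integrate A and C": 3},
    "System testing": {
        "F1": {"F1 subtask 1": 3, "F1 subtask 2": 5},
        "F2": 8,
        "F3": 4,
    },
}

def effort(node) -> int:
    """Sum the leaf estimates below a node."""
    if isinstance(node, dict):
        return sum(effort(child) for child in node.values())
    return node

print(effort(wbs["Integration testing"]))  # 9
print(effort(wbs["System testing"]))       # 20
print(effort(wbs))                         # 1 + 4 + 9 + 20 = 34 person-days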

Estimating techniques can be divided into two main groups:
• Metrics-based techniques, where the test effort is based on metrics of previous similar projects, on historical data from the current project, or on typical values (so-called industry baselines)
• Expert-based techniques, where the test effort is based on the experience of the owners of the test tasks or of subject matter experts

The syllabus describes the following four estimation techniques, all frequently used in practice.

Estimation based on ratios
In this metrics-based technique, as much historical data as possible is collected from previous projects, allowing the derivation of "standard" ratios between various indicators for similar projects. An organization's own ratios are usually the best source to use in the estimation process. These standard ratios can then be used to estimate the testing effort for a new project. For example, if in a previous similar project the ratio of implementation effort to test effort was 3:2, and in the current project the development effort is expected to be 600 person-days, the test effort can be estimated as 400 person-days, since the same or a similar implementation-to-test ratio will most likely hold as in the similar previous project.

Extrapolation
In this metrics-based technique, measurements are taken as early as possible to collect real, historical data from the current project. With enough such observations (data points), the effort required for the remaining work can be approximated by extrapolating this data. This method is very useful in iterative software development. For example, a team can extrapolate the testing effort in the fourth iteration as the average of the effort in the last three iterations. If the effort in the last three iterations was 30, 32, and 25 person-days, respectively, the extrapolated effort for the fourth iteration would be (30 + 32 + 25) / 3 = 29 person-days. Proceeding in an analogous manner, we could estimate the effort for subsequent iterations using the extrapolated values computed for previous iterations. In our example, the extrapolated effort for the fifth iteration will be the average of iterations 2, 3, and 4, i.e., (32 + 25 + 29) / 3 ≈ 28.67 person-days. In a similar way, we can extrapolate the effort for the sixth iteration: (25 + 29 + 28.67) / 3, and so on. Note, however, that the "farther" the data point we extrapolate, the greater the risk of an increasing estimation error, since the first estimate already carries some error. Feeding such a result into the next estimate can cause the error to grow (reducing the accuracy of the estimated value), because the errors can accumulate.
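The iterative character of this extrapolation is easy to see in code. Here is a sketch using the numbers from the example; the three-iteration moving average is the rule described above, everything else is an assumption:

# Sketch: extrapolating test effort as the mean of the last three iterations.
def extrapolate(history, horizon, window=3):
    """Append `horizon` forecasts, each the mean of the preceding `window` values."""
    data = list(history)
    forecasts = []
    for _ in range(horizon):
        forecast = round(sum(data[-window:]) / window, 2)
        forecasts.append(forecast)
        data.append(forecast)  # each forecast feeds the next, so errors can grow
    return forecasts

print(extrapolate([30, 32, 25], horizon=3))  # [29.0, 28.67, 27.56]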

Wideband Delphi
In the expert-based Wideband Delphi method, experts make estimates based on their own experience. Each expert, in isolation, estimates the workload. The results are collected, and the experts discuss their current estimates. Each expert is then asked to make a new prediction based on this information. The discussion allows all experts to rethink their estimation process; it may turn out, for example, that some experts did not take certain factors into account when estimating. The process is repeated until consensus is reached or the range of the obtained estimates is sufficiently small. In that situation, for example, the mean or median of the expert estimates can be taken as the final result.

Figure 5.3 shows the idea behind the Wideband Delphi method. From iteration to iteration, the range of values estimated by the experts narrows. Once it is sufficiently small, one can take, for example, the mean or median of all the estimates and consider it the result of the estimation process. Very often, this result will not deviate much from the true value. This phenomenon is known colloquially as the "wisdom of the crowd" and results from the simple fact that errors cancel each other out: one expert may overestimate a little, another may underestimate a little, another may underestimate a lot, and yet another may overestimate a lot. Thus, the error usually has a roughly normal distribution with a mean of zero. The spread of the results can be very large, but their average will often be a sufficient approximation of the true value.



Fig. 5.3 The Wideband Delphi method

A variation of the Delphi method is so-called planning poker, an approach commonly used in agile methodologies. In planning poker, estimates are made using numbered cards. The values on the cards represent the amount of effort expressed in specific units that are well defined and understood by all experts. Planning poker cards most often contain values derived from the so-called Fibonacci sequence, although for larger values there may be some deviations. In the Fibonacci sequence, each successive term is the sum of the previous two. The following values are most often used:

1, 2, 3, 5, 8, 13, 20, 40, 100

As you can see, the last two values are no longer the sum of the previous two. This is because they are large enough that rounded values are used here; they simply represent something so large that it does not even make sense to estimate such a value accurately. If a team estimates, for example, the effort required to implement and test some user story, and most experts throw a 40 or 100 card on the table, this means that the story is so large that it is probably an epic and should be broken down into several smaller stories, each of which can be estimated reasonably. An example deck of planning poker cards is shown in Fig. 5.4. If a card with "?" is thrown, it means that the expert does not have enough information to do the estimation. A value of "0," on the other hand, means that the expert considers the story trivial to implement and the time spent on it negligible. A card with a coffee cup picture means "I am tired, let's take a break and grab some coffee!"

Other frequently used sets of values in planning poker are the successive powers of two:

1, 2, 4, 8, 16, 32, 64, 128

or so-called T-shirt sizes: XS, S, M, L, XL.



Fig. 5.4 Planning poker cards

In the latter case, the measurement (estimation) is not made on a numerical, ratio scale and therefore does not directly represent the physical size of the estimated effort. T-shirt sizes are defined only on an ordinal scale, which allows only the comparison of elements with each other. Thus, we can say that a story estimated as "L" is more labor-intensive than one estimated as "S," but we cannot say how many times more effort it will take to implement a story of size "L" relative to one of size "S." In this variant of the method, we are only able to order stories by effort, grouping them into five groups of increasing effort size. However, this is not a recommended practice, because from the point of view of measurement theory, it is important that the differences between successive degrees of the scale are constant. When the scale expresses story points, there is no problem: the difference between 4 and 5 points is the same as between 11 and 12 points, namely, 1 point. This is no longer so obvious for a scale like T-shirt sizes. If such a scale is used anyway, it should be assumed that, for example, the difference between S and XS is the same as between L and M, and so on.

Other scales are also acceptable. Which one to use is a team or management decision. It is only important to understand how a unit of the scale translates into a unit of effort. Often, teams take the so-called user story point as a unit. One unit can then mean the amount of effort required to implement/test one average user story. If the team knows from experience, for example, that implementing a 1 story point user story usually takes them 4 person-days, then a component estimated at 5 user story points should take about 5 × 4 = 20 person-days of effort.
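A minimal sketch of this conversion, assuming the team's historical calibration of 4 person-days per story point (an illustrative value, not a general constant):

PERSON_DAYS_PER_STORY_POINT = 4  # team-specific historical calibration (assumed)

def story_points_to_person_days(points):
    """Convert a story-point estimate into person-days of effort,
    using the team's observed cost of one average story point."""
    return points * PERSON_DAYS_PER_STORY_POINT

print(story_points_to_person_days(5))  # 20 person-days, as in the text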



Three-point estimation
In this expert-based technique, experts make three kinds of estimates: most optimistic (a), most likely (m), and most pessimistic (b). The final estimate (E) is their weighted arithmetic mean, calculated as

E = (a + 4m + b) / 6.

The advantage of this technique is that it also allows experts to directly calculate the measurement error:

SD = (b − a) / 6.

For example, if the estimates (in person-hours) are a = 6, m = 9, and b = 18, the final estimate is 10 ± 2 person-hours (i.e., between 8 and 12 person-hours), because

E = (6 + 4 × 9 + 18) / 6 = 10 and SD = (18 − 6) / 6 = 2.

Note that the estimation result will not always be equal to the most likely value m, because the result depends on how far this value lies from the optimistic and pessimistic values. The closer the value of m is to one of the extreme values (a or b), the greater the distance between m and the estimation result E. The formula shown above is the one most commonly used in practice. It is derived from the program evaluation and review technique (PERT). However, teams sometimes use other variants, weighting the m factor differently, for example, with a weight of 3 or 5. The formula then takes the form E = (a + 3m + b) / 5 or E = (a + 5m + b) / 7. For more information on these and other estimation techniques, see [45, 65, 66].

Example
A team uses planning poker to estimate the implementation and test effort of a certain user story. The planning poker is played in sessions according to the following procedure:

1. Each player has a full set of cards.
2. The moderator (generally the product owner) presents the user story to be estimated.
3. A brief discussion of the scope of work follows, clarifying ambiguities.
4. Each expert chooses one card.
5. At the sign of the facilitator, everyone simultaneously throws their card into the center of the table.
6. If everyone has chosen the same card, a consensus is reached, and the meeting ends—the result is the value chosen by all team members. If, after a fixed number of iterations, the team has not reached a consensus, the final result is determined according to the established, accepted variant of the procedure (see below).
7. If the estimates differ, the person who chose the lowest estimate and the person who chose the highest present their points of view.
8. If necessary, a brief discussion follows.
9. Return to point 4.

Planning poker—like, indeed, any estimation technique based on expert judgment—will only be as effective as the experts who participate in the estimation session (see Fig. 5.5). Therefore, the people who estimate should be experts, preferably with years of experience.

The team has the following possible options for determining the final estimate when consensus has not been reached after a certain number of poker iterations. Assume that the team performing the effort estimation for a certain user story consists of five people, and each participant has a full deck of cards with the values 0, 1, 2, 3, 5, 8, 13, 20, 40, and 100. Assume that in the last round, the five people simultaneously present a card of their choice. The results are: 3, 8, 13, 13, 20.

Variant 1
The result is averaged, and the final estimate is the average (11.4) rounded to the nearest value from the deck, which is 13.

Variant 2
The most common value is chosen, which is 13.

Variant 3
The result is the median (middle value) of the values 3, 8, 13, 13, and 20, that is, 13.

Variant 4
In addition to providing an estimate, each person also rates the degree of their confidence in that estimate, on a scale of 1 (high uncertainty) to 5 (high confidence). The degree of certainty can reflect, for example, the level of experience and expertise in the area in which the estimate is made. Let us assume that the estimators who chose 3, 8, 13, 13, and 20 rated the certainty of their choices at 4, 5, 2, 3, and 2, respectively. The score is then calculated as a weighted average:

(3 × 4 + 8 × 5 + 13 × 2 + 13 × 3 + 20 × 2) / (4 + 5 + 2 + 3 + 2) = (12 + 40 + 26 + 39 + 40) / 16 = 157 / 16 ≈ 9.8.

The result can then be rounded to the nearest value on the scale, which is 8. The advantage of this approach is that the results of those more convinced of the correctness of their estimates weigh more in the averaged assessment. Of course, the accuracy of the method will depend in particular on the accuracy of the individual evaluators' confidence scores.

The above variants do not exhaust all possible ways of estimation. Each team can adopt its own rules for determining the final value of the parameter that is subjected to the estimation procedure.
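The calculations above are easy to script. The following Python sketch (function names ours) reproduces the PERT-style three-point estimate and the four consensus variants for the card values 3, 8, 13, 13, 20:

from statistics import mean, median, mode

def three_point(a, m, b, weight=4):
    """PERT-style estimate E = (a + weight*m + b) / (weight + 2)
    and standard deviation SD = (b - a) / 6."""
    e = (a + weight * m + b) / (weight + 2)
    sd = (b - a) / 6
    return e, sd

def round_to_deck(value, deck=(0, 1, 2, 3, 5, 8, 13, 20, 40, 100)):
    """Round an averaged estimate to the nearest planning poker card."""
    return min(deck, key=lambda card: abs(card - value))

cards = [3, 8, 13, 13, 20]
confidence = [4, 5, 2, 3, 2]

print(three_point(6, 9, 18))           # (10.0, 2.0)
print(round_to_deck(mean(cards)))      # Variant 1 -> 13
print(mode(cards))                     # Variant 2 -> 13
print(median(cards))                   # Variant 3 -> 13
weighted = sum(c * w for c, w in zip(cards, confidence)) / sum(confidence)
print(round_to_deck(weighted))         # Variant 4 -> 8 (from about 9.81)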



Fig. 5.5 Humorously about planning poker (www.commitstrip.com/en/2015/01/22/pokerplanning)

It should also be borne in mind that all such variants for selecting the final answer in the absence of consensus are always subject to some error. For example, the median (13) of the values 8, 13, 13, 13, 13, and 13 will be subject to less error than the median of the values 1, 8, 13, 40, and 100, due to the much greater variance of the results in the latter case.

Factors affecting the test effort
Factors affecting the testing effort include:



• Product characteristics (e.g., product risks, quality of the specifications (i.e., the test basis), product size, complexity of the product domain, requirements for quality characteristics (e.g., safety and reliability), required level of detail in test documentation, regulatory compliance requirements)
• Characteristics of the software development process (e.g., stability and maturity of the organization, SDLC model used, test approach, tools used, test process, time pressure)
• Human factors (e.g., skills and experience of testers, team cohesion, manager's skills)
• Test results (e.g., number, type, and significance of defects detected, number of corrections required, number of failed tests)

Information on the first three groups of factors is generally known in advance, often before the project starts. Test results, on the other hand, are a factor that works "from the inside": they cannot be factored into the estimate before test execution begins, and so they can significantly affect the estimate later in the project.

5.1.5 Test Case Prioritization

Once test cases and test procedures are created and organized into test suites, these suites can be arranged into a test execution schedule, which defines the order in which they are to be executed. Various factors should be taken into account when prioritizing test cases (and thus deciding the order in which they are executed). A schedule is a way of planning tasks over time. The schedule should take into account:

• Priorities
• Dependencies
• The need for confirmation and regression tests
• The most effective order in which to perform the tests

When creating a test execution schedule, it is important to consider:

• The dates of the core activities within the project timeframe
• That the schedule fits within the timeframe of the overall project schedule
• The milestones: the start and end of each stage

One of the most typical graphical ways of presenting a schedule is the so-called Gantt chart (see Fig. 5.6). In this chart, tasks are represented as rectangles expressing their duration, and arrows define different types of relationships between tasks. For example, if an arrow goes from the end of rectangle X to the beginning of rectangle Y, it means that task Y can only start when task X has finished. Well-planned tests should show what is known as a linear trend—if you connect the "start of test project" and "end of tests" milestones in Fig. 5.6 with a line, the tasks performed should line up along this line.

Fig. 5.6 Schedule in the form of a Gantt chart




This means that the test-related tasks are performed in such a way that the testing team works at a constant pace, without unnecessary rushes. Practice shows that in the vast majority of cases (>95%), the final testing period (the final phase of the project) requires the maximum number of resources to be committed—see Fig. 5.7.

The most common test case prioritization strategies are as follows:

• Risk-based prioritization, where the order of test execution is based on the results of risk analysis (see Sect. 5.2.3). Test cases covering the most important risks are executed first.
• Coverage-based prioritization, where the order of test execution is based on coverage (e.g., code coverage, requirements coverage). Test cases achieving the highest coverage are executed first. In another approach, called additional coverage-based prioritization, the test case achieving the highest coverage is executed first, and each subsequent test case is the one that achieves the highest additional coverage.
• Requirements-based prioritization, where the order in which tests are executed is based on the priorities of the requirements linked to the corresponding test cases. Requirements priorities are determined by stakeholders. Test cases related to the most important requirements are executed first.

Ideally, test cases would be executed strictly in order of their priority, using, for example, one of the prioritization strategies mentioned above. However, this may not work if the test cases or the functions being tested have dependencies. The following rules apply here:

• If a higher-priority test case depends on a lower-priority test case, the lower-priority test case should be executed first.
• If there are interdependencies between several test cases, the order of their execution should be determined by these dependencies, regardless of the relative priorities.
• Confirmation and regression test executions may also need to be prioritized.
• Test execution efficiency and adherence to the established priorities should be properly balanced.

The order in which tests are executed must also take into account the availability of resources. For example, the required tools, environments, or people may only be available within a certain time window.

Example
We are testing a system whose control flow graph (CFG) is shown in Fig. 5.8. We adopt a coverage-based test prioritization method; the criterion we use is statement coverage. Assume that we have the four test cases TC1–TC4 defined in Table 5.1. The table also gives the coverage that each test case achieves. The highest coverage (70%) is achieved by TC4, followed by TC1 and TC2 (60% each), and the lowest by TC3 (40%). Accordingly, the test execution order will be: TC4, TC2, TC1, TC3, where TC1 and TC2 can be executed in either order, as they achieve the same coverage.

Fig. 5.7 Final test period




Fig. 5.8 CFG of the system under test

Table 5.1 Existing test cases and the coverage they achieve

Test case | Exercised path      | Coverage | Priority
TC1       | 1→3→5→7→8→10        | 60%      | 2
TC2       | 1→3→5→7→9→10        | 60%      | 2
TC3       | 1→2→3→4             | 40%      | 3
TC4       | 1→3→5→6→5→7→8→10    | 70%      | 1

However, note that executing TC1 after TC4 and TC2 will not cover any new statements. So let us apply the variant of prioritization based on additional coverage. The highest priority is given, as in the previous variant, to the test case achieving the highest coverage, namely, TC4. Now let us see how many statements not covered by TC4 are covered by the other test cases. The calculation is shown in the penultimate column of Table 5.2. TC1 does not cover any additional statements, because the statements it covers (i.e., 1, 3, 5, 7, 8, 10) are already covered by TC4. TC2, relative to TC4, additionally covers statement 9, so it achieves 10% additional coverage (one new statement out of ten total). TC3, relative to TC4, additionally covers two statements, 2 and 4, resulting in 20% additional coverage (two new statements out of ten total). Thus, the next test case after TC4 covering the most statements not yet covered is TC3. We now repeat the analysis, counting the additional coverage relative to the coverage achieved by TC4 and TC3 together. The calculation is shown in the last column of Table 5.2. TC1 covers nothing new. TC2, on the other hand, covers one new statement that TC4 and TC3 did not cover, namely, statement 9.



Table 5.2 Calculation of additional coverage achieved by testing

Test case | Exercised path      | Coverage | Additional coverage after TC4 | Additional coverage after TC4 and TC3
TC1       | 1→3→5→7→8→10        | 60%      | 0%                            | 0%
TC2       | 1→3→5→7→9→10        | 60%      | 10% (9)                       | 10% (9)
TC3       | 1→2→3→4             | 40%      | 20% (2, 4)                    | –
TC4       | 1→3→5→6→5→7→8→10    | 70%      | –                             | –

So it achieves an additional coverage of 10%. Thus, we obtain the following prioritization according to additional coverage: TC4 → TC3 → TC2 → TC1. Note that this order differs from the order produced by the previous variant of the method. Its advantage is that we reach coverage of all statements very quickly (after just the first three tests); a short sketch of this greedy procedure is shown after the next example. In the earlier variant (with the order TC4 → TC2 → TC1 → TC3), statements 2 and 4 are covered only by the last, fourth test.

The next example shows that sometimes technical or logical dependencies between tests can disrupt the desired order of test execution and force us, for example, to execute a lower-priority test first in order to "unlock" the execution of a higher-priority test.

Example
Assume that we are testing the functionality of a system for conducting driving tests electronically. Assume also that at the beginning of each test run, the system database is empty. The following test cases have been identified, along with their priorities:

TC1: Adding the examinee to the database. Priority: low
TC2: Conducting the exam for the examinee. Priority: high
TC3: Conducting a retake exam. Priority: medium
TC4: Making a decision and sending the exam results to the examinee. Priority: medium

The test case with the highest priority here is TC2 (conducting the exam), but we cannot run it before we add the examinee to the database. So even though TC2 has a higher priority than TC1, the logical dependency requires us to execute TC1 first, which "unlocks" test execution for the other test cases. The next test case to run is TC2, as the highest-priority one. At the very end, we can execute TC3 and TC4. Note that in order to take a retake exam, the examinee must know that they failed the original exam. Therefore, from the perspective of a particular user, it makes sense to execute TC3 only after completing TC4. Thus, the final test execution order is: TC1 → TC2 → TC4 → TC3.
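Returning to the additional-coverage variant: here is a minimal Python sketch of the greedy procedure, with the statement sets taken from Table 5.1 (the function name is ours):

def additional_coverage_order(tests):
    """Greedy prioritization: repeatedly pick the test case covering
    the most statements not yet covered by the tests chosen so far."""
    remaining = dict(tests)
    covered, order = set(), []
    while remaining:
        # Pick the test adding the most new statements (ties: first found).
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        covered |= remaining.pop(best)
        order.append(best)
    return order

tests = {
    "TC1": {1, 3, 5, 7, 8, 10},
    "TC2": {1, 3, 5, 7, 9, 10},
    "TC3": {1, 2, 3, 4},
    "TC4": {1, 3, 5, 6, 7, 8, 10},
}
print(additional_coverage_order(tests))  # ['TC4', 'TC3', 'TC2', 'TC1']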

Table 5.3 Tasks with their priorities and dependencies

Task | Priority (1 = high, 5 = low) | Dependent on
1    | 2                            | 5
2    | 4                            | –
3    | 1                            | 2, 4
4    | 1                            | 2, 1
5    | 3                            | 6
6    | 3                            | 7
7    | 5                            | –

In many situations, the order in which to perform actions may depend on several factors, such as priority and logical or technical dependencies. There is a simple method for dealing with such tasks, even if the dependencies seem complicated. It consists of aiming for the earliest possible execution of higher-priority tasks; but if they are blocked by other tasks, we must execute those blocking tasks first. Within the blocking tasks, we again set the order according to priorities and dependencies. Let us walk through this method with a concrete example.

Example
A list of tasks with their priorities and dependencies is given in Table 5.3. We start by establishing an order that depends solely on priorities, without looking at dependencies, with tasks of equal priority grouped together: 3, 4 → 1 → 5, 6 → 2 → 7. Within the first group, task 3 is dependent on task 4. We therefore clarify the order of tasks 3 and 4: 4 → 3 → 1 → 5, 6 → 2 → 7. We now check whether we can perform the tasks in this order. It turns out that we cannot: task 4 depends on tasks 2 and 1, with task 1 having a higher priority than task 2. So we move both of these tasks before task 4 with their priority order preserved, keeping the original order of the other tasks after task 4: 1 → 2 → 4 → 3 → 5, 6 → 7. Can we perform the tasks in this order? Still no: task 1 depends on task 5, which depends on task 6, which in turn depends on task 7. So we need to move tasks 5, 6, and 7 in front of task 1, respecting their dependencies (note that here the priorities of tasks 5, 6, and 7 do not matter—the order is forced by the dependencies): 7 → 6 → 5 → 1 → 2 → 4 → 3. At this point, we can perform the full sequence of tasks 7, 6, 5, 1, 2, 4, and 3, since each task in the sequence depends only on tasks preceding it (or on none at all).
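This procedure is essentially a dependency-first scheduling of tasks taken in priority order. A minimal Python sketch, assuming the data from Table 5.3 and acyclic dependencies (the function name and tie-breaking are our own formulation):

def order_tasks(priorities, deps):
    """Schedule tasks so that the highest-priority tasks run as early
    as possible, pulling their prerequisite chains in front of them."""
    order = []

    def schedule(task):
        if task in order:
            return
        # Schedule unmet prerequisites first, highest priority first.
        for dep in sorted(deps.get(task, ()), key=lambda t: priorities[t]):
            schedule(dep)
        order.append(task)

    for task in sorted(priorities, key=lambda t: priorities[t]):
        schedule(task)
    return order

priorities = {1: 2, 2: 4, 3: 1, 4: 1, 5: 3, 6: 3, 7: 5}  # 1 = high
deps = {1: {5}, 3: {2, 4}, 4: {2, 1}, 5: {6}, 6: {7}}
print(order_tasks(priorities, deps))  # [7, 6, 5, 1, 2, 4, 3]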



5.1.6 Test Pyramid

The test pyramid is a model showing that different tests can have different "granularity." The test pyramid model supports the team in test automation and in test effort allocation. The layers of the pyramid represent groups of tests. The higher the layer, the lower the test granularity, test isolation, and test execution speed, and the higher the cost of the tests. The tests in the bottom layer are small, isolated, and fast and check a small piece of functionality, so you usually need a lot of them to achieve reasonable coverage. The top layer represents large, high-level end-to-end tests. These high-level tests are slower than the tests in the lower layers and usually check a large chunk of functionality, so you usually need only a few of them to achieve reasonable coverage. The number and naming of the layers in a test pyramid can vary. For example, the original test pyramid model [67] defines three layers: "unit tests" (called component tests in the ISTQB® syllabus), "service tests," and "UI tests" (see Fig. 5.9). Another popular model defines unit (component) tests, integration tests, and end-to-end tests. The cost of executing a test from a lower layer is usually much lower than the cost of executing a test from a higher layer. However, as mentioned earlier, a single test from a lower layer usually achieves much less coverage than a single test from a higher layer. Therefore, in typical development projects, the test effort is well reflected by the test pyramid model: the team typically writes many low-level tests and few high-level tests. Keep in mind, however, that the number of tests at the different levels will always be driven by the broader context of the project and product. It may happen, for example, that most of the tests the team performs are integration and acceptance tests (e.g., when integrating two products purchased from a software house; in this case, there is no point in performing component testing at all).

Fig. 5.9 Test pyramid



Even though the test pyramid is widely known, there are still quite a few organizations that flip the pyramid upside down by focusing on the more complex (UI) tests. A variant is the tilted pyramid, in which specific tests and series of tests slice through the pyramid, following a functional approach.

5.1.7 Testing Quadrants

The testing quadrants model, defined by Brian Marick [21, 68], aligns test levels with the relevant test types, activities, techniques, and work products in agile development approaches. The model supports test management in ensuring that all important test types and test levels are included in the software development process and in understanding that some test types are more related to certain test levels than others. The model also provides a way to distinguish and describe the test types to all stakeholders, including developers, testers, and business representatives. The testing quadrants model is shown in Fig. 5.10.

In the testing quadrants, tests can be directed at the needs of the business (user, customer) or of the technology (developer, development team). Some tests support the work done by the agile team; others verify (critique) the product.

Fig. 5.10 Testing quadrants



Tests can be fully manual, fully automated, a combination of manual and automated, or manual but supported by tools. The four quadrants are as follows:

Quadrant Q1
Quadrant Q1 describes the component test level, oriented toward technology and supporting the team. This quadrant includes component testing. These tests should be automated as much as possible and integrated into the continuous integration process.

Quadrant Q2
Quadrant Q2 describes the system test level, which is business-oriented and team-supporting. This quadrant includes functional tests, examples (see Sect. 4.5), user story testing, prototypes, and simulations. These tests check acceptance criteria and can be manual or automated. They are often created during the development of user stories, thus improving their quality. They are useful when creating automated regression test suites.

Quadrant Q3
Quadrant Q3 describes a business-oriented acceptance test level that includes product critique tests using realistic scenarios and data. This quadrant includes exploratory testing, scenario-based testing (e.g., use case-based testing), process flow testing, usability testing, and user acceptance testing, including alpha and beta testing. These tests are often manual and user-oriented.

Quadrant Q4
Quadrant Q4 describes a technology-oriented acceptance test level that includes "product critique" tests. This quadrant contains most nonfunctional tests (except usability tests). These tests are often automated.

During each iteration, tests from any or all of the quadrants may be required. The testing quadrants refer to dynamic testing rather than static testing.

5.2 Risk Management

FL-5.2.1 (K1) Identify risk level by using risk likelihood and risk impact
FL-5.2.2 (K2) Distinguish between project risks and product risks
FL-5.2.3 (K2) Explain how product risk analysis may influence thoroughness and scope of testing
FL-5.2.4 (K2) Explain what measures can be taken in response to analyzed product risks

Organizations face many internal and external factors that make it uncertain if and when they will achieve their goals [7]. Risk management allows organizations to increase the likelihood of achieving their goals, improve product quality, and increase stakeholder confidence and trust.



The risk level is usually defined by risk likelihood and risk impact (see Sect. 5.2.1). From the tester's point of view, risks can be divided into two main groups: project risks and product risks (see Sect. 5.2.2). The main risk management activities are:

• Risk analysis (consisting of risk identification and risk assessment; see Sect. 5.2.3)
• Risk control (consisting of risk mitigation and risk monitoring; see Sect. 5.2.4)

An approach to testing in which test activities are managed, selected, and prioritized based on risk analysis and risk control is called risk-based testing.

One of the many challenges in testing is the proper selection and prioritization of test conditions. Risk is used to focus and allocate the effort required during testing appropriately: it is used to decide where and when to start testing and to identify areas that need more attention. Testing is used to reduce risk, that is, to reduce the likelihood of an adverse event or to reduce its impact. Various forms of testing are among the typical risk mitigation activities in software development projects (see Sect. 5.2.4). They provide feedback on identified and residual (unresolved) risks. Early product risk analysis (see Sect. 5.2.3) contributes to project success. Risk-based testing reduces product risk levels and provides comprehensive information that helps decide whether a product is ready for release (this is one of the main goals of testing; see Sect. 1.1.1). Typically, the relationship between product risks and testing is maintained through a bidirectional traceability mechanism. The main benefits of risk-based testing are:

• Increasing the probability of discovering defects in order of their importance, by executing tests in an order that follows the risk prioritization
• Minimizing the residual product risk after release, by allocating the test effort according to risk
• Reporting the residual risk, by measuring test results in terms of the levels of the related risks
• Counteracting the effects of time pressure on testing: the testing period can be shortened with the least possible increase in product risk

To ensure that the likelihood of product failure is minimized, risk-based testing activities provide a disciplined approach to:

• Analyze (and regularly reassess) what can go wrong (risks)
• Determine which risks are important enough to address
• Implement measures to mitigate these risks
• Design contingency plans to deal with the risks when they occur

In addition, testing can identify new risks, help determine which risks should be mitigated, and reduce risk-related uncertainty.



5.2.1 Risk Definition and Risk Attributes

According to the ISTQB® Glossary of Testing Terms [42], risk is a factor that may result in negative consequences in the future; it is usually described by its impact and likelihood (probability). The risk level is therefore determined by the likelihood of an adverse event and the impact, or consequences, of that event (the harm resulting from it). This is often expressed by the equation:

Risk level = Risk likelihood × Risk impact.

This equation can be understood symbolically (as described above) or quite literally. The latter is the case in the so-called quantitative approach to risk, where likelihood and impact are expressed numerically—likelihood as a number in the interval (0, 1) and impact in money. The impact represents the amount of loss we will suffer if the risk materializes.

Example
An organization defines the risk level as the product of likelihood and impact. A risk has been identified that a key report is generated too slowly when there is a large number of simultaneously logged-in users. Since the requirement for high performance is critical in this case, the impact of this risk was estimated to be very high. Taking into account the number of users who could experience delays in report generation and the consequences of these delays, the risk impact was estimated at $800,000. The likelihood of poor program performance, on the other hand, was estimated to be very low, thanks to a fairly well-defined testing process, scheduled architecture reviews, and the team's extensive experience with performance testing. The risk likelihood was estimated at 5%. Finally, the risk level according to the above equation was set at:

5% × $800,000 = (5/100) × $800,000 = $40,000.

5.2.2 Project Risks and Product Risks

There are two types of risks in software testing: project risks and product risks.

Project risks
Project risks are related to project management and control. They are risks that affect the success of the project (the ability of the project to achieve its objectives). Project risks include:



• Organizational issues (e.g., delays in delivering work products, inaccurate estimates, cost cutting, poorly implemented, managed, and maintained processes, including the requirements engineering or quality assurance process)
• Human issues (e.g., insufficient skills, conflicts, communication problems, staff shortages, lack of available subject matter experts)
• Technical issues (e.g., scope creep; poor tool support; inaccurate estimation; last-minute changes; insufficient requirements clarification; inability to meet requirements due to time constraints; failure to make the test environment available in time; late scheduling of data conversion, migration, or provision of the tools needed for this; defects in the development process; poor quality of project work products, including requirements specifications or test cases; accumulation of defects or other technical debt)
• Supplier-related issues (e.g., failure of third-party delivery, bankruptcy of the supporting company, delays in delivery, contractual problems)

Project risks, when they materialize, can affect a project's schedule, budget, or scope, which in turn affects the project's ability to achieve its goals. The most common direct consequence of project risks is that the project is delayed, which brings further problems, such as increased costs due to the need to allocate more time to certain activities or the need to pay contractual penalties for late delivery of a product.

Product risks
Product risks are related to the quality characteristics of a product, such as functionality, reliability, performance, and usability, as described, for example, by the ISO/IEC 25010 quality model [5]. Product risks arise wherever a work product (e.g., a specification, component, system, or test) may fail to meet the needs of users and/or stakeholders. Product risks are defined by areas of possible failure in the product under test, as they threaten the quality of the product in various ways. Examples of product risks include:

• Missing or inadequate functionality
• Incorrect calculations
• Failures during the operation of the software (e.g., the application hangs or shuts down)
• Poor architecture
• Inefficient algorithms
• Inadequate response time
• Bad user experience
• Security vulnerabilities

A product risk, when it materializes, can result in various negative consequences, including:

• End user dissatisfaction
• Loss of revenue
• Damage caused to third parties
• High maintenance costs




• Overloading of the help desk
• Loss of image
• Loss of confidence in the product
• Criminal penalties

In extreme cases, the occurrence of product risks can cause physical harm, injury, and even death.

5.2.3 Product Risk Analysis

The purpose of risk analysis is to provide risk awareness in order to focus the testing effort in such a way as to minimize the product's residual risk level. Ideally, product risk analysis begins early in the development cycle. Risk analysis consists of two major phases: risk identification and risk assessment. The product risk information obtained in the risk analysis phase can be used in:

• Test planning
• Specification, preparation, and execution of test cases
• Test monitoring and control

Early product risk analysis contributes to the success of the entire project, because it makes it possible to:

• Identify the specific test levels and test types to be performed
• Define the scope of the tests to be performed
• Prioritize testing (to detect critical defects as early as possible)
• Select appropriate test techniques that will most effectively achieve the set coverage and detect defects associated with the identified risks
• Estimate the testing effort for each task
• Determine whether measures other than testing can be used to reduce (mitigate) risks
• Establish other activities unrelated to testing

Risk identification
Risk identification involves generating a comprehensive list of risks. Stakeholders can identify risks using various techniques and tools, such as:

• Brainstorming (to increase creativity in finding risks)
• Risk workshops (to let various stakeholders conduct risk identification jointly)
• The Delphi method or expert assessment (to obtain expert agreement or disagreement on risks)
• Interviews (to ask stakeholders about risks and gather their opinions)
• Checklists (to address typical, known, commonly occurring risks)
• Databases of past projects, retrospectives, and lessons learned (to draw on past experience)



• Cause-and-effect diagrams (to discover risks by performing root cause analysis)
• Risk templates (to determine who can be affected by risks and how)

Risk assessment
Risk assessment involves categorizing the identified risks; determining their likelihood, impact, and risk level; prioritizing them; and proposing ways to deal with them. Categorization helps assign mitigation actions, as risks in the same category can typically be mitigated using a similar approach. Risk assessment can use a quantitative or a qualitative approach, or a mix of the two. In a quantitative approach, the risk level is calculated as the product of likelihood and impact. In a qualitative approach, the risk level can be determined using a risk matrix.

An example of a risk matrix is shown in Fig. 5.11. The risk matrix is a kind of "multiplication table," in which the risk level is defined as the product of certain categories of likelihood and impact. In the risk matrix in Fig. 5.11, we have defined four categories of likelihood (low, medium, high, and very high) and three categories of impact (low, medium, and high). At the intersection of the defined categories of likelihood and impact, a risk level is defined. In this risk matrix, six risk level categories are used: very low, low, medium, high, very high, and extreme. For example, a risk with medium likelihood and high impact has a high risk level, because

medium likelihood × high impact = high risk level.

There are other risk methods in which—as in the failure mode and effects analysis (FMEA) method, for example—likelihood and impact are defined on a numerical scale (e.g., from 1 to 5, where 1 is the lowest and 5 is the highest category) and the risk level is calculated by multiplying the two numbers. For example, for a likelihood of 2 and an impact of 4, the risk level is 2 × 4 = 8. In such cases, however, care must be taken. Multiplication is commutative, so the risk level for a likelihood of 1 and an impact of 5 equals 5 and is the same as for a risk with a likelihood of 5 and an impact of 1, because 1 × 5 = 5 × 1. In practice, however, impact is a much more important factor for us. A risk with a likelihood of 5 and an impact of 1 will perhaps occur very often but will cause practically no trouble, due to its negligible impact. In contrast, a risk with a likelihood of 1 and an impact of 5 admittedly has a very small chance of occurring, but when it does occur, its consequences can be catastrophic.

Fig. 5.11 Risk matrix
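A qualitative risk matrix like the one in Fig. 5.11 can be represented as a simple lookup table. A minimal Python sketch follows; the concrete cell assignments are an assumed example for illustration, not the figure's values verbatim:

# Illustrative qualitative risk matrix: (likelihood, impact) -> risk level.
RISK_MATRIX = {
    ("low",       "low"):    "very low",
    ("low",       "medium"): "low",
    ("low",       "high"):   "medium",
    ("medium",    "low"):    "low",
    ("medium",    "medium"): "medium",
    ("medium",    "high"):   "high",
    ("high",      "low"):    "medium",
    ("high",      "medium"): "high",
    ("high",      "high"):   "very high",
    ("very high", "low"):    "high",
    ("very high", "medium"): "very high",
    ("very high", "high"):   "extreme",
}

def risk_level(likelihood, impact):
    """Look up the qualitative risk level for a likelihood/impact pair."""
    return RISK_MATRIX[(likelihood, impact)]

print(risk_level("medium", "high"))  # 'high', as in the example above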



In addition to the quantitative and qualitative approaches to risk assessment, there is also a mixed approach. In this approach, likelihood and impact are not defined by specific numerical values but are represented by ranges of values that express some uncertainty about the estimation of these parameters. For example, the likelihood can be defined as the interval [0.2, 0.4] and the impact as the interval [$1000, $3000]. This means that we expect the likelihood of the risk to be somewhere between 20% and 40%, and its impact somewhere between $1000 and $3000. The risk level in this approach can also be specified as a range, according to the following multiplication formula:

[a, b] × [c, d] = [a × c, b × d],

i.e., in our case, the risk level will be

[0.2, 0.4] × [$1000, $3000] = [$200, $1200].

The risk level is therefore between $200 and $1200. The mixed approach is a good solution if we find it difficult to estimate the exact values of the likelihoods and impacts of individual risks (quantitative approach), but at the same time, we want more accurate risk assessment results than those expressed on an ordinal scale (qualitative approach).

Example
An organization defines the risk level as the product of likelihood and impact. The following risks have been identified, along with estimates of the likelihood and impact of each risk:

Risk 1: likelihood of 20%, impact of $40,000
Risk 2: likelihood of 10%, impact of $100,000
Risk 3: likelihood of 5%, impact of $20,000

To calculate the total risk level, we add up the respective products:

Total risk level = 0.2 × $40,000 + 0.1 × $100,000 + 0.05 × $20,000 = $8000 + $10,000 + $1000 = $19,000.

This means that if we take no measures to minimize (mitigate) these risks, the average (expected) loss we will incur as a result of their potential occurrence will be about $19,000. Note that quantitative risk analysis for a single risk does not make much sense, because risks—from the point of view of the tester or an outside observer—are random phenomena. We do not know whether or when they will occur and, if so, what kind of damage they will really cause. An analysis of a single random phenomenon with likelihood P and impact X makes no sense, because it will either occur or not, so the loss will be either 0 or X (assuming our estimation of the impact was correct) and will never be P × X.



However, an analysis performed for multiple risks, as in the example above, makes much more sense. This is because the sum of the risk levels over all the identified risks can be treated as the expected value of the loss we will incur due to the occurrence of some of these risks.
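The two calculations above (the interval-based risk level and the total expected loss) can be sketched in a few lines of Python; the function names are ours:

def interval_risk_level(likelihood, impact):
    """Mixed approach: multiply likelihood and impact intervals,
    [a, b] * [c, d] = [a*c, b*d]."""
    (a, b), (c, d) = likelihood, impact
    return (a * c, b * d)

def total_expected_loss(risks):
    """Quantitative approach: sum of likelihood * impact over all risks."""
    return sum(p * x for p, x in risks)

print(interval_risk_level((0.2, 0.4), (1000, 3000)))  # (200.0, 1200.0)

risks = [(0.20, 40_000), (0.10, 100_000), (0.05, 20_000)]
print(total_expected_loss(risks))                     # 19000.0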

5.2.4 Product Risk Control

Product risk control plays a key role in risk management, as it includes all measures that are taken in response to identified and assessed product risks. Risk control is defined as the measures taken to mitigate and monitor risks throughout the development cycle. It comprises two main tasks: risk mitigation and risk monitoring.

Risk mitigation
Once the risks have been analyzed, there are several options for risk mitigation, or possible responses to risk [71, 72]:

• Risk acceptance
• Risk transfer
• Contingency plans
• Risk mitigation by testing

Accepting (ignoring) risks can be a good idea when dealing with low-level risks. Such risks are usually given the lowest priority and dealt with at the very end. Often, we may simply run out of time to deal with them, but this does not particularly bother us, because even when such risks occur, the damage they cause is minor or even negligible. So there is no point in wasting time on them when we have other, much more important and serious issues to consider (e.g., risks of a much higher level).

An example of risk transfer is taking out an insurance policy. In exchange for paying the insurance premium, one transfers the risk of incurring the cost of the risk to a third party, in this case the insurer.

Contingency plans are developed for certain types of risks. These plans are developed in order to avoid chaos and delays in responding to risks when an undesirable situation occurs. Thanks to the existence of such plans, when a risk occurs, everyone knows exactly what to do. The response to the risk is therefore quick, well thought out in advance, and, therefore, effective.

Of the above risk response methods, the Foundation Level syllabus presents only risk mitigation by testing in detail, as the syllabus is about testing. Tests can be designed to check whether a risk is actually present. If a test fails (triggers a failure, revealing a defect), the team is aware of the problem; this mitigates the risk, which is now identified rather than unknown. If the test passes, the team gains more confidence in the quality of the system. Risk mitigation can reduce the likelihood or the impact of a risk. In general, testing contributes to lowering the overall risk level in the system.



Each passed test can be interpreted as showing that the specific risk with which the test is associated has not materialized; this reduces the total residual risk by the risk level of that particular risk. Actions that testers can take to mitigate product risks include:

• Selecting testers with the right level of experience, appropriate for the type of risk
• Applying an appropriate level of independence of testing
• Conducting reviews
• Performing static analysis
• Selecting appropriate test design techniques and coverage levels
• Prioritizing tests based on risk level
• Determining the appropriate scope of regression testing

Some of the mitigation activities provide early, preventive testing, since they are applied before dynamic testing begins (e.g., reviews). In risk-based testing, risk mitigation activities should take place throughout the SDLC. The agile software development paradigm requires team self-organization (see Sect. 1.4.2 on testing roles and Sect. 1.5.2 on the "whole team" approach) while providing risk mitigation practices that can be viewed as a holistic risk mitigation system. One of its strengths is its ability to identify risks and provide good mitigation practices. Examples of risks reduced by these practices are [69]:

• The risk of customer dissatisfaction. In an agile approach, the customer or the customer's representative sees the product on an ongoing basis, which, with good project execution, frequent feedback, and intensive communication, mitigates this risk.
• The risk of not completing all functionality. Frequent releases and product planning and prioritization reduce this risk by ensuring that the increments related to the highest business priorities are delivered first.
• The risk of inadequate estimation and planning. Work product updates are tracked daily to ensure management control and frequent opportunities for correction.
• The risk of not resolving problems quickly. A self-organized team, proper management, and daily work reports provide the opportunity to report and resolve problems on a daily basis.
• The risk of not completing the development cycle. In a given cycle (the shorter, the better), the agile approach delivers working software (or, more precisely, a specific piece of it), so that there are no major development problems.
• The risk of taking on too much work and of changing expectations. Managing the product backlog, iteration planning, and working releases prevent the risk of unnoticed changes that negatively impact product quality, by forcing teams to confront and resolve issues early.

Some of the mitigation activities address early preventive testing by applying them before dynamic testing begins (e.g., reviews). In risk-based testing, risk mitigation activities should occur throughout the SDLC. The agile software development paradigm requires team self-organization (see Sect. 1.4.2 on testing roles and Sect. 1.5.2 on the “whole team” approach) while providing risk mitigation practices that can be viewed as a holistic risk mitigation system. One of its strengths is its ability to identify risks and provide good mitigation practices. Examples of risks reduced by these practices are [69]: • The risk of customer dissatisfaction. In an agile approach, the customer or customer’s representative sees the product on an ongoing basis, which, with good project execution, frequent feedback, and intensive communication, mitigates this risk. • The risk of not completing all functionality. Frequent release and product planning and prioritization reduce this risk by ensuring that increments related to high business priorities are delivered first. • The risk of inadequate estimation and planning. Work product updates are tracked daily to ensure management control and frequent opportunities for correction. • The risk of not resolving problems quickly. A self-organized team, proper management, and daily work reports provide the opportunity to report and resolve problems on a daily basis. • The risk of not completing the development cycle. In a given cycle (the shorter, the better), the agile approach delivers working software (or, more precisely, a specific piece of it) so that there are no major development problems. • The risk of taking on too much work and changing expectations. Managing the product backlog, iteration planning, and working releases prevent the risk of unnoticed changes that negatively impact product quality by forcing teams to confront and resolve issues early.

286

Chapter 5 Managing the Test Activities

Risk monitoring Risk monitoring is a risk management task that deals with activities related to checking the status of product risks. Risk monitoring allows testers to measure implemented risk mitigation activities (mitigation actions) to ensure they are achieving their intended effects and to identify events or circumstances that create new or increased risks. Risk monitoring is usually done through reports that compare the actual state with what was expected. Ideally, risk monitoring takes place throughout the SDLC. Typical risk monitoring activities involve reviewing and reporting on product risks. There are several benefits of risk monitoring, for example: • • • • • •

Knowing the exact and current status of each product risk The ability to report progress in reducing residual product risk Focusing on the positive results of risk mitigation Discovering risks that have not been previously identified Capturing new risks as they emerge Tracking factors that affect risk management costs

Some risk management approaches have built-in risk monitoring methods. For example, in the aforementioned FMEA method, estimating the likelihood and impact of a risk is done twice—first in the risk assessment phase and then, again, after implementing actions to mitigate that risk.

5.3 Test Monitoring, Test Control, and Test Completion FL-5.3.1 (K1) FL-5.3.2 (K2) FL-5.3.3 (K2)

Recall metrics used for testing Summarize the purposes, content, and audiences for test reports Exemplify how to communicate the status of testing

The primary objective of test monitoring is the collection and sharing of information in order to gain insight into the test activities and visualize the testing process. The monitored information can be collected manually or automatically, using test management tools. We suggest primarily an automated approach, using the test log and information from the defect reporting tool, supported by a manual approach (direct oversight of the testers' work, which allows a more objective assessment of the current status of testing). Do not forget, however, to collect information manually as well, e.g., during daily Scrum meetings (see Sect. 2.1.1). The results obtained are used for:

• Measuring the fulfillment of exit criteria, e.g., the achievement of the assumed coverage of product risks, requirements, and acceptance criteria
• Assessing the progress of the work against the schedule and budget

The activities performed as part of test control are mainly the following:



• Making decisions based on information obtained from test monitoring
• Re-prioritizing tests when identified risks materialize (e.g., failure to deliver software on time)
• Making changes to the test execution schedule
• Assessing the availability or unavailability of the test environment or other resources

5.3.1 Metrics Used in Testing

During and after each test level, metrics can (and should) be collected that allow us to assess:

• The progress of the schedule and budget
• The current quality level of the test object
• The appropriateness of the chosen test approach
• The effectiveness of the test activities with respect to achieving the objectives

Test monitoring collects various metrics to support the test control activities. Typical test metrics include:

• Project metrics (e.g., task completion, resource utilization, testing effort, percentage of completion of the planned test environment preparation work, milestone dates)
• Test case metrics (e.g., test case implementation progress, test environment preparation progress, number of test cases run/not run and passed/failed, test execution time)
• Product quality metrics (e.g., availability, response time, mean time to failure)
• Defect metrics (e.g., number of defects found/fixed, defect priorities, defect density (number of defects per unit of volume, e.g., per 1000 lines of code), defect frequency (number of defects per unit of time), percentage of defects detected, percentage of successful confirmation tests)
• Risk metrics (e.g., residual risk level, risk priority)
• Coverage metrics (e.g., coverage of requirements, user stories, acceptance criteria, test conditions, code, or risks)
• Cost metrics (e.g., cost of testing, organizational cost of quality, average cost of a test execution, average cost of a defect repair)

Note that different metrics serve different purposes. From a purely management point of view, a manager will be interested, for example, in how many of the planned test executions were performed or whether the budget was exceeded. From a software quality point of view, more important are issues such as the number of defects found by degree of criticality, the mean time between failures, etc. Choosing the right set of metrics to measure is an art, as it is usually impossible to measure everything.



In addition, the more data the reports contain, the less readable they become. You should choose a small set of metrics that nevertheless allows you to answer all the questions about the aspects of the project you are interested in. The Goal-Question-Metric (GQM) technique, for example, can help develop such a measurement plan, but we do not present it here, as it is beyond the scope of the syllabus.

5.3.2 Purpose, Content, and Audience for Test Reports

Test reports are used to summarize and provide information on test activities (e.g., the tests at a given level) both during and after their execution. A test report produced during the execution of a test activity is called a test progress report, and the report produced at the completion of such an activity is called a test completion report. Examples of these reports are shown in Figs. 5.12, 5.13, and 5.14. According to the ISO/IEC/IEEE 29119-3 standard [3], a typical test report should include:

• A summary of the tests performed
• A description of what happened during the testing period
• Information on deviations from the plan
• Information on the status of testing and product quality, including information on meeting the exit criteria (or Definition of Done)
• Information on factors blocking the tests

Fig. 5.12 Test progress report in an agile organization (after ISO/IEC/IEEE 29119-3)



• Measures related to defects, test cases, test coverage, work progress, and resource utilization
• Information on residual risks
• Information on work products intended for reuse

A typical test progress report additionally includes information on:

• The status of the test activities and the progress against the test plan
• The tests scheduled for the next reporting period

Test reports must be tailored to both the project context and the needs of the target audience. They should include:

• Detailed information on defect types and related trends—for a technical audience
• A summary of the defect status by priority, budget, and schedule, as well as passed, failed, and blocked test cases—for business stakeholders

The main recipients of the test progress report are those who are able to make changes to the way the tests are conducted.

Fig. 5.13 Test summary report in an agile organization (after ISO/IEC/IEEE 29119-3)



Fig. 5.14 Test summary report in a traditional organization (per ISO/IEC/IEEE 29119)

These changes may include adding more testing resources or even making changes to the test plan. Typical members of this group are therefore project managers and product owners. It may also be useful to include in this group the people who are responsible for factors inhibiting test progress, so that it is clear that they are aware of the problem.



Finally, the test team should also be included in this group, as this allows them to see that their work is appreciated and helps them understand how their work contributes to the overall progress.

The audience of the test completion report can be divided into two main groups: those responsible for making decisions (what to do next) and those who will conduct tests in the future using the information from the report. The decision-makers will vary depending on the testing being reported (e.g., a test level, a test type, or a project) and may decide which tests should be included in the next iteration or whether to deploy the test object given the reported residual risk. The second group consists of those who are responsible for future testing of the test object (e.g., as part of maintenance testing) or who can otherwise decide on the reuse of work products from the reported tests (e.g., cleaned customer test data in a separate project). In addition, everyone on the original distribution list for the test plan should also receive a copy of the test completion report.

5.3.3 Communicating the Status of Testing

The means of communicating the status of testing vary, depending on the concerns, expectations, and vision of test management, the organizational test strategy, regulatory standards, or, in the case of self-organizing teams (see Sect. 1.5.2), the team itself. The options include, in particular:

• Verbal communication with team members and other stakeholders
• Dashboards such as CI/CD dashboards, task boards, and burndown charts
• Electronic communication channels (e.g., emails, chats)
• Online documentation
• Formal test reports (see Sect. 5.3.2)

One or more of these options can be used. More formal communication may be more appropriate for distributed teams, where direct face-to-face communication is not always possible due to geographic or time-zone differences. Figure 5.15 shows an example of a typical dashboard for communicating information about the continuous integration and continuous delivery process. From such a dashboard, you can very quickly read the most important basic information about the state of the process, for example:

• The number of releases by release status (successful, not built, aborted, etc.)
• The dates and status of the last ten releases
• Information about the last ten commits
• The frequency of releases
• The average time to create a release as a function of time



Fig. 5.15 Example of a CI/CD dashboard (source: https://devops.com/electric-cloud-extends-continuous-delivery-platform/)

5.4 Configuration Management

FL-5.4.1 (K2) Summarize how configuration management supports testing

The primary goal of configuration management is to establish and maintain the integrity of the component or system and the testware, and the interrelationships between them, throughout the project and product life cycle. Configuration management should ensure that:

• All configuration items, including test objects, test items (individual parts of test objects), and other testware, are identified, version controlled, tracked for changes, and linked to each other in a way that maintains traceability at all stages of the test process
• All identified documentation and software items are referenced explicitly in the test documentation

Today, dedicated tools are used for configuration management, but it is important to note that the configuration management procedures, along with the necessary infrastructure (tools), should be identified and implemented at the planning stage, since the process covers all work products produced within the development process.

Figure 5.16 shows a graphical representation of a repository stored in the Git version control system. The vertices C0, C1, ..., C6 denote so-called snapshots, i.e., successive versions of the source code. The arrows indicate which version each new version was created from.

5.4 Configuration Management

293

Fig. 5.16 Graphical representation of a code repository in the Git system (source: https://git-scm. com/docs/gittutorial)

committed it to the repository). The repository distinguishes between these two versions of the code. Commits C3 and C4 may have been created independently by two other developers, based on the same C2 commit. The “master” rectangle represents the master branch. Commit C5, on the other hand, represents the “iss53” branch, where the code was created as a result of fixing an error detected in the C3 version of the code. Commit C6 was created as a result of a merge of changes made independently in commits C5 (defect repair) and C4 (e.g., adding a new feature based on the C2 version of the code). By using a version control system like git, developers have full control over their code. They can track, save, and—if necessary—recreate any historical version of the code. If an error is made or a failure occurs in some compilation, it is always possible to undo the changes and return to the previous, working version. Nowadays, the use of code versioning tools is the de facto standard. Without such tools, the project would be in chaos very quickly. Example Consider a very simplified example of the application of configuration management in practice. Assume that the system we are producing consists of three components: A, B, and C. The system is used by two of our customers: C1 and C2. The following sequence of events shows the history of changes in the code repository and the history of software releases to customers. We assume that the creation of a given version of the application always takes into account the latest versions of the components included in it. 1. 2. 3. 4. 5. 6.
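To make the graph structure concrete, the commit history of Fig. 5.16 can be modeled as a toy data structure (ours, not part of Git): each commit records the commit(s) it was created from, and walking these links is what lets the repository recreate the full history of any version.

```python
# Toy model of the commit graph in Fig. 5.16: each commit maps to the
# list of commits it was created from.
parents = {
    "C0": [],
    "C1": ["C0"],
    "C2": ["C1"],
    "C3": ["C2"],
    "C4": ["C2"],        # created independently of C3, from the same C2
    "C5": ["C3"],        # the "iss53" branch with the defect fix
    "C6": ["C5", "C4"],  # merge commit combining both lines of work
}

def history(commit):
    """Return every version reachable from the given commit, i.e., the
    complete history the repository can recreate for it."""
    seen, stack = [], [commit]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.append(c)
            stack.extend(parents[c])
    return seen

print(history("C6"))  # ['C6', 'C4', 'C2', 'C1', 'C0', 'C5', 'C3']
```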

Example
Consider a very simplified example of the application of configuration management in practice. Assume that the system we are producing consists of three components: A, B, and C. The system is used by two of our customers: C1 and C2. The following sequence of events shows the history of changes in the code repository and the history of software releases to customers. We assume that the creation of a given version of the application always takes into account the latest versions of the components included in it.

1. Uploading version 1.01 of component A to the repository
2. Uploading version 1.01 of component B to the repository
3. Uploading version 1.02 of component A (after defect removal) to the repository
4. Uploading version 1.01 of component C to the repository
5. Creating version 1.0 of the application for release and sending it to C1
6. Uploading version 1.02 of component C to the repository (adding new functionality)
7. Creating version 1.01 of the application for release and sending it to C1
8. Uploading version 1.03 of component A to the repository (adding new functionality)
9. Creating version 1.02 of the application for release and sending it to C2


Fig. 5.17 History of the creation of successive versions of components and software releases

This process is graphically depicted in Fig. 5.17. Now, assume that at this point (after step 9), customer C1 has reported a failure. If we did not have a configuration management process established, we would not know in which versions of components A, B, and C to look for potential defects. Let us assume that the customer sent us information that the problem was observed in version 1.01 of the software. The configuration management tool, based on this information and the above data, is able to reconstruct the individual component versions that went into software version 1.01, delivered to customer C1 in step 7. Moreover, the tool is also able to reproduce the proper versions of the test cases used for version 1.01, as well as the proper versions of the environment components, databases, etc. that were used to build version 1.01 of the software. In particular, analyzing the above sequence of events, we see that software version 1.01 consists of:

• Component A version 1.02
• Component B version 1.01
• Component C version 1.02

This means that the potential defect is in one (or more) of these three components. Note that if the defect is found to be in component B, after fixing it and creating version 1.02 of that component, a new version of the software (1.03) should be sent not only to customer C1 but also to customer C2, since the current version of the software used by C2 uses the defective component B in version 1.01.
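A minimal sketch of the bookkeeping behind this reconstruction might look as follows; the event log simply mirrors steps 1–9 above, and the names and representation are illustrative only.

```python
# Event log mirroring steps 1-9: component uploads and application releases.
events = [
    ("upload", "A", "1.01"),
    ("upload", "B", "1.01"),
    ("upload", "A", "1.02"),
    ("upload", "C", "1.01"),
    ("release", "1.0", "C1"),
    ("upload", "C", "1.02"),
    ("release", "1.01", "C1"),
    ("upload", "A", "1.03"),
    ("release", "1.02", "C2"),
]

def build_baselines(events):
    """Replay the event log; record each release's component versions
    (its baseline) and which customer received it."""
    latest, baselines, shipped = {}, {}, {}
    for event in events:
        if event[0] == "upload":
            _, component, version = event
            latest[component] = version        # newest version of the component
        else:
            _, release, customer = event
            baselines[release] = dict(latest)  # snapshot taken at release time
            shipped.setdefault(customer, []).append(release)
    return baselines, shipped

baselines, shipped = build_baselines(events)
print(baselines["1.01"])  # {'A': '1.02', 'B': '1.01', 'C': '1.02'}
print(shipped)            # {'C1': ['1.0', '1.01'], 'C2': ['1.02']}
```

The second mapping is what tells us that customer C2 must also receive a patched release if the defect turns out to be in component B.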

5.5 Defect Management FL-5.5.1 (K3)

Prepare a defect report

One of the goals of testing is to find defects, and therefore, all defects found should be logged. How a defect is reported depends on the testing context of the component or system, the test level, and the chosen SDLC model. Each identified defect should be investigated and tracked from the moment it is detected and classified until the problem is resolved. The organization should implement a defect management process, with its formalization varying from an informal approach to a very formal one. Some reports may describe false positives rather than actual failures caused by defects. Testers should try to minimize the number of false positives reported as defects; nevertheless, in real projects, a certain number of reported defects are false positives.

The primary objectives of a defect report are:

• Communication:
  – Provide developers with information about the incident to enable them to identify, isolate, and repair the defect if necessary.
  – Provide test management with current information about the quality of the system under test and the progress of testing.
• Enabling the collection of information about the status of the product under test:
  – As a basis for decision-making
  – To identify risks early
  – As a basis for post-project analysis and improvement of development procedures

In the ISO/IEC/IEEE 29119-3 standard [3], defect reports are called incident reports. This is a slightly more precise nomenclature, because not every observed anomaly has to mean a problem to be solved: a tester can make a mistake and consider something that is perfectly correct to be a defect that needs fixing. Such situations are usually analyzed during the defect lifecycle, and such reports are typically rejected with a status of "not a defect" or similar. An example of a defect lifecycle is shown in Fig. 5.18. From the tester's point of view, only the statuses Rejected, Changed, Postponed, and Closed mean that the defect analysis has been completed; the status Fixed means that the developer has fixed the defect, but the report can be closed only after retesting.

Fig. 5.18 An example of a defect lifecycle and defect statuses

The defect report for dynamic testing should include:


• Unique identifier
• Title and a brief summary of the reported anomaly
• Date of the report (the date the anomaly was discovered)
• Information about the defect report's author
• Identification of the item under test
• Phase of the software development life cycle in which the anomaly was observed
• Description of the anomaly to enable its reproduction and removal, including any kind of logs, database dumps, screenshots, or recordings
• Actual and expected results
• Defect removal priority (if known)
• Status (e.g., open, deferred, duplicate, re-opened)
• Description of the nonconformity to help determine the problem's cause
• Urgency of the solution (if known)
• Identification of the software or system configuration item
• Conclusions and recommendations
• History of changes
• References to other items, including the test case through which the problem was revealed

An example of a defect report is shown in Fig. 5.19. Stakeholders interested in defect reports include testers, developers, test managers, and project managers. In many organizations, the customer also gains access to the defect management system, especially in the period immediately after the release. However, it should be taken into account that the typical customer is not familiar with the rules of defect reporting. It is also typical that, from the client's point of view, almost every defect is critical, because it impedes the customer's work.

Fig. 5.19 Example of a defect report according to ISO/IEC/IEEE 29119-3
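To make the structure of such a report tangible, it can be modeled as a simple record; the sketch below is our own simplification covering only a handful of the fields listed above, not the ISO/IEC/IEEE 29119-3 schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class DefectReport:
    """Illustrative subset of the defect report fields listed above."""
    identifier: str
    title: str
    report_date: date
    author: str
    item_under_test: str
    actual_result: str
    expected_result: str
    status: str = "open"
    priority: Optional[str] = None                        # if known
    steps_to_reproduce: List[str] = field(default_factory=list)

report = DefectReport(
    identifier="DEF-042",
    title="Login accepts too-short passwords",
    report_date=date(2024, 3, 1),
    author="J. Tester",
    item_under_test="login component, build 2024-03-01",
    actual_result="A 6-character password was accepted",
    expected_result="Passwords shorter than 10 characters are rejected",
    steps_to_reproduce=["Open the registration form",
                        "Enter the password 'Abc123'",
                        "Submit the form"],
)
```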

Sample Questions

Question 5.1 (FL-5.1.1, K2)
Which of the following is NOT part of the test plan?
A. Test strategy.
B. Budgetary constraints.
C. Scope of testing.
D. Risk register.
Choose one answer.

Question 5.2 (FL-5.1.2, K2)
Which of the following is done by the tester during release planning?
A. Conducting detailed risk analysis for user stories.
B. Identifying non-functional aspects of the system to be tested.


C. Estimating test effort for new features planned in a given iteration.
D. Defining user stories and their acceptance criteria.
Choose one answer.

Question 5.3 (FL-5.1.3, K2)
Consider the following entry and exit criteria:
i. Availability of testers.
ii. No open critical defects.
iii. 70% statement coverage achieved in component testing.
iv. All smoke tests performed before system testing have passed.
Which of the above are entry criteria and which are exit criteria for testing?

A. (i), (iii), and (iv) are the entry criteria; (ii) is the exit criterion.
B. (ii) and (iii) are the entry criteria; (i) and (iv) are the exit criteria.
C. (i) and (ii) are the entry criteria; (iii) and (iv) are the exit criteria.
D. (i) and (iv) are the entry criteria; (ii) and (iii) are the exit criteria.
Choose one answer.

Question 5.4 (FL-5.1.4, K3)
The team uses the following extrapolation model for the test effort estimation:

E(n) = (E(n-1) + E(n-2) + E(n-3)) / 3,

where E(n) is the effort in the n-th iteration. The effort in the n-th iteration is thus the average of the effort of the last three iterations. In the first three iterations, the actual effort was 12, 15, and 18 person-days, respectively. The team has just completed the third iteration and wants to use the above model to estimate the test effort needed in the FIFTH iteration. What should be the estimate made by the team?
A. 24 person-days.
B. 16 person-days.
C. 15 person-days.
D. 21 person-days.
Choose one answer.

Question 5.5 (FL-5.1.5, K3)
You want to prioritize test cases for optimal test execution order. You use a prioritization method based on additional coverage. You use feature coverage as a coverage metric. There are seven features in the system, labeled A, B, C, D, E, F, and G. Table 5.4 shows which features are covered by each test case:

Table 5.4 Feature coverage by test cases

Test case   Features covered
TC1         A, B, C, F
TC2         D
TC3         A, F, G
TC4         E
TC5         D, G

Which test case will be executed THIRD in order?
A. TC5.
B. TC2.
C. TC4.
D. TC3.
Choose one answer.


Question 5.6 (FL-5.1.6, K1)
What does the test pyramid model describe?
A. Team effort spent on testing, increasing from iteration to iteration.
B. A list of project tasks sorted by descending number of required test activities for each task.
C. The granularity of tests at each test level.
D. Test effort at each test level.
Choose one answer.

Question 5.7 (FL-5.1.7, K2)
In which test quadrant is component testing located?
A. In the technology-oriented, team-supporting quadrant that includes automated tests that are part of the continuous integration process.
B. In the business-oriented, team-supporting quadrant that includes acceptance criteria testing.
C. In the business-oriented, product-criticizing quadrant that includes tests focusing on users' needs.
D. In the technology-oriented, product-criticizing quadrant that includes automated non-functional tests.
Choose one answer.

Question 5.8 (FL-5.2.1, K1)
The likelihood of a system performance risk was estimated as "very high." What can be said about the impact of this risk?
A. We know nothing about the impact; impact and probability are independent.
B. The impact is also very high; high-likelihood risks also have a high impact.
C. The impact is low, because impact is inversely proportional to likelihood.
D. Until this risk occurs, we cannot assess its impact.
Choose one answer.

Question 5.9 (FL-5.2.2, K2)
Which of the following is an example of a consequence of a project risk?
A. User death due to software failure.
B. Failure to complete all tasks intended for completion in a given iteration.
C. Very high software maintenance costs.
D. Customer dissatisfaction due to an inconvenient user interface.
Choose one answer.


Question 5.10 (FL-5.2.3, K2)
The tester is working on a project preparing a new version of a mobile home-banking application. During risk analysis, the team identified the following two risks:
• Overly complicated interface for defining transfers—especially for seniors.
• Malfunctioning of the transfer mechanism—transfers are executed late when the payment date falls on a Saturday or Sunday.
What are the most reasonable risk mitigation actions a tester should propose for these two risks?
A. Technical review for the transfer interface and branch coverage for the transfer mechanism.
B. Component testing for the transfer interface and acceptance testing for the transfer mechanism.
C. Beta testing for the transfer interface and usability testing for the transfer mechanism.
D. White-box testing for the transfer interface and non-functional testing for the transfer mechanism.
Choose one answer.

Question 5.11 (FL-5.2.4, K2)
After performing system testing and releasing the software to the customer, the software producer took out insurance with an insurance company. The producer did this in case a malfunction in the software caused its users to lose their health. What kind of mitigation action are we dealing with here?
A. Contingency plans.
B. Risk mitigation through testing.
C. Risk transfer.
D. Risk acceptance.
Choose one answer.

Question 5.12 (FL-5.3.1, K1)
Which of the following is NOT a metric used for testing?
A. Residual risk level.
B. Coverage of requirements by source code.
C. Number of critical defects found.
D. Test environment implementation progress.
Choose one answer.


Question 5.13 (FL-5.3.2, K2)
Which of the following will NOT normally be included in a test completion report?
A. The remaining (unmitigated) risks are risks R-001-12 and R-002-03.
B. Deviations from the test plan: integration testing delayed by 5 days.
C. Number of open critical defects: 0.
D. Tests scheduled for the next reporting period: component tests of the M3 component.
Choose one answer.

Question 5.14 (FL-5.3.3, K2)
Which of the following is the BEST form of communicating the status of testing?
A. Test progress reports and test completion reports, because they are the most formal form of communication.
B. Which form of communication is best will depend on various factors.
C. Emails, because they allow for quick exchange of information.
D. Verbal, face-to-face communication, because it is the most effective form of communication between people.
Choose one answer.

Question 5.15 (FL-5.4.1, K2)
The team received information from the client about a software failure. Based on the software version number, the team was able to reconstruct all the component and testware versions that were used to generate the software release for this customer. This made it possible to locate and fix the defect more quickly, as well as to analyze which other versions of the software release should be patched in connection with the defect. Which process enabled the team to execute the above scenario?
A. Configuration management.
B. Impact analysis.
C. Continuous delivery.
D. Retrospective.
Choose one answer.

Question 5.16 (FL-5.5.1, K3)
While testing a new application, a tester finds a malfunction in the login component. According to the documentation, the password is supposed to have at least ten characters, including a minimum of one uppercase letter, one lowercase letter, and one digit.


The tester prepares a defect report containing the following information:
• Title: Incorrect login.
• Brief summary: Sometimes, the system allows passwords of 6, 7, and 9 characters.
• Product version: compilation 11.02.2020.
• Risk level: High.
• Priority: Normal.

What VALUABLE information is missing from the above defect report?
A. Steps to reproduce the defect.
B. Data identifying the product being tested.
C. Defect status.
D. Ideas for improving the test case.
Choose one answer.

Exercises for Chapter 5

Exercise 5.1 (FL-5.1.4, K3)
A group of three experts estimates the test effort for the task of "conducting system testing" using a method that is a combination of planning poker and three-point estimation. The procedure is as follows:

• Experts determine the pessimistic, most likely, and optimistic values for three-point estimation. Each of them is determined by conducting a planning poker session. The poker session is carried out until at least two experts give the same value, in which case the poker ends and the result is the value given by the majority of experts.
• Experts use three-point estimation with the pessimistic, most likely, and optimistic parameters determined in the previous step; this estimation is the final result. The standard deviation is also calculated to describe the estimation error.

The results of the poker session are shown in Table 5.5. All values are expressed in person-days.

Table 5.5 Results of the planning poker session

Estimated value   Iterations of planning poker
Optimistic (a)    Iteration 1: 3, 5, 8; iteration 2: 3, 3, 5
Most likely (m)   Iteration 1: 5, 5, 3
Pessimistic (b)   Iteration 1: 13, 8, 21; iteration 2: 8, 13, 40; iteration 3: 13, 13, 13



What are the final effort estimate and the estimation error for the task under analysis?

Exercise 5.2 (FL-5.1.4, K3)
Note—this task is quite difficult and requires some analytical skills and a good understanding of project metrics such as effort. However, we give this exercise on purpose, to show what kind of problems managers may deal with in practice. Often, these are non-trivial problems, like the one below.

Table 5.6 shows historical data from a completed project in which the design, implementation, and testing phases were performed sequentially, one after the other:

Table 5.6 Historical data on a project

Phase            Number of people involved   Duration (days)
Design           4                           5
Implementation   10                          18
Testing          4                           10

You want to use the ratio-based effort estimation technique to estimate the effort needed to run a new, similar project. It is known that the new project will have four designers, six developers, and two testers. The contract calls for the project to be completed in 66 days. How many days should we plan for design, how many for implementation, and how many for testing in the new project?

Hint: Calculate the effort for each of the three phases in the previous project (in person-days). Then calculate the total effort required for the new project. Finally, use the ratio-based technique to distribute the effort in the new project among the phases.

Exercise 5.3 (FL-5.1.5, K3)
You are testing an application that supports help-line operation in an insurance company and allows you to find a customer's policy according to their ID number. The test cases are described in Table 5.7. Priority 1 means the highest priority and 4 the lowest priority. Define the correct order in which the test cases should be executed.

Table 5.7 Test cases for the help-line system testing with priorities and dependencies

Test case   Test condition covered      Priority   Logical dependence
TC001       Search by ID number         1          002, 003
TC002       Entering personal data      3          —
TC003       ID number modification      2          002
TC004       Deletion of personal data   4          002


Fig. 5.20 Priorities and relationships between requirements

Exercise 5.4 (FL-5.1.5, K3)
The team wants to prioritize tests according to the prioritization of requirements presented to the team by the customer. The priority of each of the six requirements Req1–Req6 is specified by the customer as low, medium, or high. In addition to the priorities, there is a certain logical order between the requirements: some of them can be implemented (and tested) only after others have been implemented. The priorities and dependencies between requirements are shown in Fig. 5.20. An arrow leading from requirement A to requirement B means that requirement B can be implemented and tested only after the implementation and testing of requirement A are complete. Determine the final order in which the requirements should be tested.

Exercise 5.5 (FL-5.5.1, K3)
You are testing an application for an e-commerce store. The requirements for determining a customer's discount are given below:

• If a customer has made a purchase of less than $50 and does not have a loyalty card, they do not receive a discount.
• If a customer has made a purchase of less than $50 and has a loyalty card, they will receive a 5% discount.
• If a customer has made a purchase of at least $50 and less than $500 and does not have a loyalty card, they will receive a 5% discount.
• If a customer has made a purchase of at least $50 and less than $500 and has a loyalty card, they will receive a 10% discount.
• If a customer has made a purchase of at least $500 and does not have a loyalty card, they will receive a 10% discount.
• If a customer has made a purchase of at least $500 and has a loyalty card, they will receive a 15% discount.


Table 5.8 Table of test case results (Exercise 5.5)

Test case   Purchase amount [$]   Has a card?   Discount value [$]   To be paid [$]   Test result
TC 001      25                    Yes           1                    24               Fail
TC 002      50                    Yes           0                    50               Fail
TC 003      50                    No            2.50                 48               Fail
TC 004      500                   Yes           50                   450              Pass
TC 005      600                   Yes           50                   550              Fail

You have executed several test cases. Their results are shown in Table 5.8. Prepare a defect report for TC 003.

Chapter 6 Test Tools

Keyword
Test automation: the use of software to perform or support test activities

6.1 Tool Support for Testing FL-6.1.1 (K2)

Explain how different types of test tools support testing

A well-known saying goes that automation replaces what works with something that almost works, but is faster and cheaper [70]. This saying quite accurately captures the reality of test automation. We already know that the basic tasks of testing are:

• Product analysis and evaluation
• Determining what measures will be used in testing
• Test analysis
• Test design
• Test implementation
• Test execution
• Analyzing the test results
• Test environment management

Not all of these activities can be fully automated. In fact, automated testing means computer-aided testing, and in practice, when talking about test automation, one usually means automating test execution, i.e., implementing and executing automated test scripts. However, it is important to note that automation can also cover other areas of testing. For example, in a model-based testing (MBT) approach, automation can include test design and the determination of the expected result based on the analysis of the model provided by the tester. Today's test tools typically support several testing activities, such as:


• Test design, implementation, and execution
• Test data preparation
• Test management, defect management, and requirements management
• Test execution monitoring and reporting

The use of test tools can have several purposes:

• Automate repetitive tasks or tasks that require a lot of resources or significant effort to perform manually (e.g., regression tests)—this allows us to increase the efficiency of the tests.
• Support manual activities and increase the efficiency of test activities—which increases the reliability of testing.
• Increase the consistency of testing and the reproducibility of defects, thus increasing the quality of testing.
• Automate activities that cannot be done or are very difficult to do manually (such as performance testing).

Probe effect
Some types of test tools can be invasive—their use can affect the actual test result. This phenomenon is called the probe effect. For example, when we use a performance testing tool, the performance of the software under test may be slightly worse, due to the additional code instructions introduced into the software by the performance testing tool.
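The probe effect is easy to demonstrate with a contrived example (ours, not from the syllabus): attaching a do-nothing tracing hook of the kind that coverage and profiling tools rely on makes the very same function measurably slower.

```python
import sys
import time

def work():
    total = 0
    for i in range(100_000):
        total += i
    return total

def probe(frame, event, arg):
    return probe  # a do-nothing tracing hook that stays active per line

start = time.perf_counter()
work()
plain = time.perf_counter() - start

sys.settrace(probe)  # attach the probe, much as a coverage tool would
start = time.perf_counter()
work()
traced = time.perf_counter() - start
sys.settrace(None)

print(f"without probe: {plain:.4f}s, with probe: {traced:.4f}s")
```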

Test tools support and facilitate many testing activities. Examples include, but are not limited to, the following groups of tools:

• Management tools—increase the efficiency of the test process by facilitating application lifecycle management, test basis to requirements traceability, test monitoring, defect management, and configuration management; they offer teamwork tools (e.g., a scrum board) and provide automated reporting.
• Static testing tools—support the tester in performing reviews (mainly in review planning, supporting traceability, facilitating communication, collaborating on reviews, and maintaining a repository for collecting and reporting metrics) and static analysis.
• Dynamic testing tools—support the tester in performing dynamic analysis, code profiling, monitoring memory usage during program execution, etc.
• Test design and test implementation tools—facilitate the generation of test cases, test data, and test procedures, support the model-based testing approach, and provide frameworks for behavior-driven development (BDD) or acceptance test-driven development (ATDD).
• Test execution and coverage measurement tools—facilitate the automated execution of tests (test scripts) and the automated measurement of the coverage achieved by these tests, as well as enable the comparison of expected and actual test results; these tools include tools for automated GUI/API testing and frameworks for component testing and for executing tests in approaches such as BDD or ATDD.
• Non-functional testing tools—allow the tester to perform non-functional testing that is difficult or impossible to perform manually (e.g., generating load for performance testing, scanning for security vulnerabilities, usability testing of interfaces and web pages, etc.).
• DevOps tools—support the deployment pipeline, workflow tracking, the build automation process, automated software deployment, continuous integration, and continuous delivery.
• Collaboration tools—facilitate communication.
• Tools to support the scalability and standardization of deployments (e.g., virtual machines, containerization tools, etc.).
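As an illustration of the test execution tools mentioned above, here is a minimal automated test script. pytest is used purely as an example framework, and the discount function under test is a hypothetical stand-in for real application code.

```python
import pytest

def discount(amount: float, has_card: bool) -> float:
    """Toy implementation under test: discount rate for a purchase."""
    if amount >= 500:
        return 0.15 if has_card else 0.10
    if amount >= 50:
        return 0.10 if has_card else 0.05
    return 0.05 if has_card else 0.0

@pytest.mark.parametrize("amount, has_card, expected", [
    (25, False, 0.0),
    (25, True, 0.05),
    (50, False, 0.05),
    (500, True, 0.15),
])
def test_discount(amount, has_card, expected):
    # The tool runs every parameter set and compares the actual
    # result with the expected one automatically.
    assert discount(amount, has_card) == expected
```

Running pytest on this file executes each parameter set as a separate test and reports its pass/fail verdict automatically.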

6.2 Benefits and Risks of Test Automation FL-6.2.1 (K1)

Recall the benefits and risks of test automation

Simply owning and using a tool does not yet guarantee success. A well-known Grady Booch saying goes: "a fool with a tool is still a fool." Achieving real and lasting benefits from implementing a new tool in an organization always requires additional effort. A tool is just a tool and will not do all the work for the tester. It is like expecting a hammer we have bought to drive nails by itself.

Potential benefits of using tools include:

• Saving time by reducing repetitive manual work (e.g., running regression tests, re-entering the same test data, comparing expected and actual test results, checking code against coding standards).
• Greater consistency and repeatability, which prevent simple human errors (e.g., tests are consistently derived from requirements, test data is created in a systematic way, and tests are executed by the tool in the same order, in the same way, and with the same frequency).
• More objective assessment (e.g., consistent coverage measurement) and the ability to calculate metrics that are too complex for humans to calculate.
• Easier access to test information (e.g., statistics, charts, and aggregated data on test progress, defects, and execution time) to support test management and reporting.
• Reduced test execution time, providing earlier defect detection, faster feedback, and faster time to market.
• More time for testers to design new, stronger, and more effective tests.

There are also certain risks associated with the use of testing tools:

• Unrealistic expectations about the benefits of the tool (including its functionality and ease of use).


• Inaccurate or erroneous estimation of the time, cost, and effort required to implement the tool, maintain test scripts, and change the existing manual testing process.
• Using a test tool when manual testing is more appropriate (e.g., interface usability testing subject to human evaluation).
• Relying on the tool when human critical thinking is needed.
• Dependence on the tool vendor, which may go out of business, retire the tool, sell the tool to another vendor, or provide poor support (e.g., responses to inquiries, updates, and error fixes).
• When using an open-source tool, the tool's project may be abandoned, meaning that no further updates will be available, or its internal components may need to be updated quite frequently as part of the tool's further development.
• Incompatibility between the tool and the platform.
• Failure to comply with regulatory requirements and/or safety standards.

Sample Questions

Question 6.1 (FL-6.1.1, K2)
Which of the following activities should be supported by a test management tool?
A. Test design.
B. Requirements management.
C. Test execution.
D. Defect reporting.
Choose one answer.

Question 6.2 (FL-6.2.1, K1)
Which TWO of the following are benefits associated with using test tools?
A. Tool dependence.
B. Tool vendor dependency.
C. Increasing the repeatability of tests.
D. Mechanical coverage assessment.
E. Cost of maintaining testware higher than estimated.
Choose two answers.

Part III

Answers to Questions and Exercises

Answers to Sample Questions

Answers to Questions from Chap. 1

Question 1.1 (FL-1.1.1, K1)
Correct answer: B
Answer A is incorrect. This is one of the test objectives according to the syllabus.
Answer B is correct. The objective of acceptance testing is to confirm (validation) that the system works as expected, rather than to look for failures (verification).
Answer C is incorrect. This is one of the test objectives according to the syllabus.
Answer D is incorrect. This is one of the test objectives according to the syllabus.

Question 1.2 (FL-1.1.2, K2)
Correct answer: A
Triggering failures (ii) and performing retests, or confirmation testing (iv), are test activities. Debugging is the process of finding, analyzing, and fixing the causes of failures in a component or system, so finding defects in code (i) and analyzing the defects found (iii) are debugging activities. Thus, answer A is correct.

Question 1.3 (FL-1.2.1, K2)
Correct answer: B
Answer A is incorrect. Such a remark can create conflict in the team, and this should not be allowed, because conflict threatens the achievement of project goals.
Answer B is correct. Reporting this deficiency, or including the time it takes to turn an object into gold as one of the story's acceptance criteria, illustrates well the contribution of testing to the development process. This is because it minimizes the risk of developers not taking this requirement into account.
Answer C is incorrect. There is no justification for the immediate reaction of the product owner. Moreover, testing cannot force anyone to do anything. It can only inform about problems.
Answer D is incorrect. Performance efficiency is a nonfunctional software quality characteristic. The problem found is a functional defect, not a nonfunctional one. Also, we do not improve the game's performance by removing this defect.

Question 1.4 (FL-1.2.2, K1)
Correct answer: A
Quality assurance focuses on preventing the introduction of defects by establishing, implementing, and controlling appropriate processes (i), while testing focuses on evaluating software and related products to determine whether they meet specified requirements (iii). Quality assurance does not control the quality of the product being developed (ii). Testing does not focus on removing defects from software (iv). So the correct sentences are (i) and (iii). Thus, the correct answer is A.

Question 1.5 (FL-1.2.3, K1)
Correct answer: B
Answer A is incorrect. This is the definition of an error according to the ISTQB® glossary.
Answer B is correct. According to the ISTQB® glossary, a defect is an imperfection or deficiency in a work product where it does not meet its requirements or specifications.
Answer C is incorrect. This is the definition of a failure according to the ISTQB® glossary.
Answer D is incorrect. A test case is a work product, not a defect in a work product.

Question 1.6 (FL-1.3.1, K2)
Correct answer: D
Answer A is incorrect. This principle states that early testing and defect detection allow us to fix defects early in the SDLC, reducing or eliminating costly changes that would have to be made if the defects were discovered later, such as after the release.
Answer B is incorrect. This principle says that testing is done differently in different business contexts.
Answer C is incorrect. This principle says that you need to modify existing tests and test data and write new tests so that the test suite is constantly ready to detect new defects.


Answer D is correct. This principle says exactly that: "A small number of system components usually contain most of the defects discovered or are responsible for most of the operational failures. This phenomenon is an illustration of the Pareto principle."

Question 1.7 (FL-1.4.1, K2)
Correct answer: C
According to the syllabus, the testability of the test basis is checked during test analysis. Hence, the correct answer is C.

Question 1.8 (FL-1.4.2, K2)
Correct answer: C
Answer A is incorrect. Budget has a significant impact on the test process.
Answer B is incorrect. Standards and norms have a significant impact on the test process, especially in audited projects or critical systems projects.
Answer C is correct. The number of certified testers employed in an organization has no significant impact on the testing process.
Answer D is incorrect. Testers' knowledge of the business domain has a significant impact on the testing process, as it enables more effective communication with the customer and contributes to testing efficiency.

Question 1.9 (FL-1.4.3, K2)
Correct answer: D
Answer A is incorrect. A test progress report is a typical work product of test monitoring and control.
Answer B is incorrect. Information about the current risk level of a product is typical information reported as part of test monitoring.
Answer C is incorrect. If decisions made within test control are documented, it is in this phase.
Answer D is correct. The test completion report is a work product produced during the test completion phase.

Question 1.10 (FL-1.4.4, K2)
Correct answer: A
Answer A is correct. If risks are quantified, test results can be traced to test cases, and these to the risks they cover. If all test cases traced back to a given risk pass, the risk level remaining in the product can be considered to have decreased by the value of that risk.


Answer B is incorrect. Defining an acceptable level of code coverage is an example of establishing an exit criterion. The traceability mechanism has nothing to do with this process.
Answer C is incorrect. Traceability will not help determine the expected outcome of a test case, because traceability does not have the properties of a test oracle.
Answer D is incorrect. Deriving this type of test data is possible by using a proper test technique rather than a traceability mechanism.

Question 1.11 (FL-1.4.5, K2)
Correct answer: D
The test management activities mainly include tasks performed in test planning, test monitoring, test control, and test completion. In particular, this includes coordinating the implementation of the test strategy and test plan (i), creating a test completion report (iii), and deciding on test environment implementation (v). The test activities include tasks that occur mainly during the test analysis, test design, test implementation, and test execution phases. Thus, the tester is responsible for defining test conditions (ii), test automation (iv), and verifying test environments (vi). Thus, the correct answer is D.

Question 1.12 (FL-1.5.1, K2)
Correct answer: C
According to the syllabus, typical generic skills of a good tester include, in particular, analytical thinking (A), domain knowledge (B), and communication skills (D). Programming skill (C) is not a critical attribute of a tester. It is not necessarily needed for good test execution (e.g., for manual or exploratory testing).

Question 1.13 (FL-1.5.2, K1)
Correct answers: B, E
Answer A is incorrect. Typically, developers, not testers, implement and execute component tests.
Answer B is correct. This is one feature of the "whole team" approach, relying on the cooperation of all stakeholders.
Answer C is incorrect. The business representative is not competent to choose the tools for the development team—it is the team that chooses the tools it wants to use, and the formal decision is made by management or, in agile methodologies, by the team itself.
Answer D is incorrect. The client is not competent in nonfunctional test design; this task falls on the testers and developers.
Answer E is correct. Shared responsibility and attention to quality are two of the basic principles of the "whole team" approach.


Question 1.14 (FL-1.5.3, K2)
Correct answer: A
Answer A is correct. Testers have a different perspective on the system under test than developers and avoid many of the cognitive errors of the work products' authors.
Answer B is incorrect. Developers can (and indeed should) test the code they create.
Answer C is incorrect. This is how testers should work, but failures can also be reported constructively by other stakeholders. This is not the reason for independent testing.
Answer D is incorrect. Finding defects should not be seen as criticism of developers, but this has nothing to do with the independence of testing, only with the desire for harmonious cooperation between developers and testers.

Answers to Questions from Chap. 2

Question 2.1 (FL-2.1.1, K2)
Correct answer: D
Answer A is incorrect. In a sequential model like the V-model, and for life-critical software like an autopilot, experience-based test techniques should be a complementary, not the primary, technique.
Answer B is incorrect. The choice of SDLC model does not directly affect whether static tests will be present in the project. Moreover, performing static tests early, especially for life-critical systems like an autopilot, is considered a good practice.
Answer C is incorrect. The V-model is a sequential SDLC model, so there are no iterations. In addition, because of its sequential nature, the running software, even in prototype form, can usually only be available in later phases of the development cycle.
Answer D is correct. In the early phases of sequential SDLC models like the V-model, testers are usually involved in requirements reviews, test analysis, and test design. Executable code is typically developed in later phases, so dynamic testing usually cannot be performed in the early phases of the SDLC.


Question 2.2 (FL-2.1.2, K1)
Correct answer: D
In the V-model, each development phase (the left arm of the model) corresponds to an associated testing phase (the right arm of the model). In the model, this is represented by the horizontal arrows between the phases (e.g., from acceptance testing to the requirements phase; from system testing to the design phase; etc.). Hence, the correct answer is D.

Question 2.3 (FL-2.1.3, K1)
Correct answer: B
Answer A is incorrect. TDD (test-driven development) is about writing low-level component tests that do not use user stories.
Answer B is correct. ATDD (acceptance test-driven development) uses the acceptance criteria of user stories as the basis for test case design.
Answer C is incorrect. In the FDD (feature-driven development) approach, the basis for software development is the defined features, not tests; this approach has nothing to do with ATDD (see the correct answer).
Answer D is incorrect. BDD (behavior-driven development) uses as a test basis a description of the desired behavior of the system, usually in Given/When/Then format.

Question 2.4 (FL-2.1.4, K2)
Correct answer: B
Answer A is incorrect. DevOps has nothing in common with automated test data generation.
Answer B is correct. Activities performed automatically after the developer commits code to the repository, such as static analysis, component testing, or integration testing, provide very quick feedback to the developer on the quality of the committed code.
Answer C is incorrect. This type of activity is possible in model-based testing, for example. The DevOps approach is not about automated test case generation.
Answer D is incorrect. The DevOps approach does not affect the timing of release planning and iteration planning; moreover, even if this were true, it would not be a test-related benefit, but rather a project management benefit.


Question 2.5 (FL-2.1.5, K2)
Correct answer: A
Answer A is correct. One example of the shift-left approach is the use of the test-first approach exemplified by acceptance test-driven development (ATDD).
Answer B is incorrect. The shift-left approach does not distinguish exploratory testing in any way. Rather, the emphasis on using specific test types depends on risk analysis.
Answer C is incorrect. Creating GUI prototypes is not an example of using the shift-left approach, because the creation itself is not related to testing.
Answer D is incorrect. This is an example of a shift-right approach, that is, late testing, after the software has been released to the customer, in order to monitor the quality level of the product in the operational environment on an ongoing basis.

Question 2.6 (FL-2.1.6, K2)
Correct answer: C
Answer A is incorrect. Testers should participate in retrospective meetings by addressing all issues raised at these meetings.
Answer B is incorrect. Testers should participate in all aspects of the retrospective meeting. The role described is more like that of a facilitator.
Answer C is correct. This is the typical activity performed by a tester in a retrospective meeting: to discuss what happened during the completed iteration.
Answer D is incorrect. This is not the purpose of a retrospective meeting. The tester should discuss what happened during the last iteration.

Question 2.7 (FL-2.2.1, K2)
Correct answer: B
Answer A is incorrect. The component integration tests we are dealing with here focus on the interaction and communication between components, not on the components themselves.
Answer B is correct. We want to perform integration testing. The architecture design is the typical test basis for this type of testing, because it usually describes how the various components of the system communicate with each other.
Answer C is incorrect. Risk analysis reports are more useful for system testing than for integration testing.
Answer D is incorrect. Regulations are usually useful for high-level testing and validation, such as acceptance testing.


Question 2.8 (FL-2.2.2, K2)
Correct answer: D
Answers A and C are incorrect. These tests check "what" the system does and are therefore examples of functional testing.
Answer B is incorrect. This is an example of a white-box test, not a nonfunctional test.
Answer D is correct. This is an example of a nonfunctional test or, more precisely, a performance test. This test type checks "how" the system works, not "what" it does.

Question 2.9 (FL-2.2.3, K2)
Correct answer: D
Confirmation testing is performed after a defect has been found and reported as fixed. Since we don't know when a test will trigger a failure, nor do we usually know how long the repair will take, we cannot predict the timing of the confirmation test execution. Thus, it is impossible to accurately schedule these tests in advance. All other test types can be planned in advance, and a schedule for their execution can be put into the test plan.

Question 2.10 (FL-2.3.1, K2)
Correct answer: C
Answer A is incorrect. We do not update the software; we fix it.
Answer B is incorrect. It does not appear from the scenario that we are performing any software migration activities.
Answer C is correct. Software modification is one of the events that trigger maintenance. Repairing a defect is a software modification.
Answer D is incorrect. While this is a maintenance trigger for IoT systems, in this scenario, we want to fix a defect—not introduce new system functionality.


Answers to Questions from Chap. 3

Question 3.1 (FL-3.1.1, K1)
Correct answer: A
Answer A is correct. The document that defines the rules for reviews can also be reviewed. In doing so, it does not have to be reviewed according to the provisions of this document; the review can be conducted based on common-sense criteria.
Answer B is incorrect. The document describes the rules for conducting reviews, but we can review it without following the rules it discusses, just using common sense.
Answer C is incorrect. The fact that a document is not the work product of some specific process does not exclude it from being included in a review.
Answer D is incorrect. Reviews can be applied to any human-understandable document.

Question 3.2 (FL-3.1.2, K2)
Correct answer: B
Answer A is incorrect, because we did not run the code but only analyzed its properties.
Answer B is correct. This is a classic example of the gain from using a static technique—in this case, static analysis.
Answer C is incorrect, because the measurement of cyclomatic complexity, the analysis of this measurement, and the refactoring of code are not managerial activities but technical ones.
Answer D is incorrect, because static analysis is not an example of a formal test technique; such techniques include equivalence partitioning, boundary value analysis, and other black-box or white-box test techniques. The result of static analysis is not a test case design.

Question 3.3 (FL-3.1.3, K2)
Correct answer: C
Answer A is incorrect. Static testing directly detects defects, not failures. It cannot detect failures, since we do not execute the work product under test. Dynamic testing directly detects failures, not defects.
Answer B is incorrect. For example, reviews may be performed at very late stages (e.g., they may check the user documentation), and dynamic tests may start early in the implementation phase.
Answer C is correct. Static testing and dynamic testing have the same goals (see Sect. 1.1.1), such as identifying defects (directly or indirectly, through failures) as early as possible in the development cycle. So, regarding the purpose, there is no difference between these techniques.
Answer D is incorrect. First, reviews usually do not require any programming skills; second, it does not answer the question, which was about the criterion of purpose, not the required skills.

Question 3.4 (FL-3.2.1, K1)
Correct answer: C
Sentence (i) is false; developers implement those features that are required by the business and are part of the iteration. When they complete their tasks, they support other tasks related to the iteration.
Sentence (ii) is true; frequent feedback helps focus attention on the features of greatest value.
Sentence (iii) is false; early feedback may even result in the need for more testing due to frequent or significant changes.
Sentence (iv) is true; users indicate which requirements are skipped or misinterpreted, resulting in a final product that better meets their needs.
Thus, the correct answer is C.

Question 3.5 (FL-3.2.2, K2)
Correct answer: C
Answers A and D are incorrect. These activities are part of the planning phase.
Answer B is incorrect. Collecting metrics is part of the "fixing and reporting" activity.
Answer C is correct. According to the syllabus, this is an activity performed during the review initiation phase.

Question 3.6 (FL-3.2.3, K1)
Correct answer: B
Answer A is incorrect. The review leader's role is overall responsibility for the review, as well as deciding who is to participate in the review and where and when the review is to take place. Thus, the leader has an organizational role, while the facilitator (see the correct answer) has a technical role directly related to conducting review meetings.
Answer B is correct. The moderator (also known as the facilitator) is responsible for ensuring that the review meetings run effectively.
Answer C is incorrect. It may be the author's responsibility to make a presentation or comment on their work product, but the author never steps into the role of moderator.


Answer D is incorrect. The reviewers' job is to substantively review the work product, not to moderate meetings. The moderator is a role that precisely relieves reviewers of the need to write down the observations they make during the review meeting.

Question 3.7 (FL-3.2.4, K2)
Correct answer: D
Understanding the work product (or learning something) is one of the goals of a walk-through. Walk-throughs can take the form of so-called dry runs, so as a result of using this type of review, the team can most effectively understand how the software works and thus more easily discover the cause of a strange failure.

Question 3.8 (FL-3.2.5, K1)
Correct answer: C
Answer A is incorrect. The main purpose of inspection is to find defects. Evaluating alternatives is a more appropriate goal for a technical review.
Answer B is incorrect. Inspections are usually conducted by peers of the author. The presence of management at a review meeting carries the risk of misunderstanding the purpose of the review and, for example, the risk that management will assess the individual participants of the meeting.
Answer C is correct. Training in review techniques is one of the success factors for reviews.
Answer D is incorrect. Measuring metrics helps improve the review process. Moreover, in a formal review such as an inspection, collecting metrics is a mandatory activity.

Answers to Questions from Chap. 4

Question 4.1 (FL-4.1.1, K2)
Correct answer: B
Analysis of the software input domain takes place when using black-box test techniques such as equivalence partitioning and boundary value analysis.


Question 4.2 (FL-4.1.1, K2)
Correct answer: D
Answer A is incorrect. It describes a common feature of white-box test techniques.
Answer B is incorrect. If we want to design test data based on code analysis, we need to have access to the code, so we can do this through white-box testing.
Answer C is incorrect. It describes how coverage is measured in white-box test techniques.
Answer D is correct. The techniques listed in the question are black-box test techniques. According to the syllabus, in black-box test techniques, test conditions, test data, and test cases are derived from a test basis external to the object under test (e.g., requirements, specifications, use cases, etc.). Therefore, test cases will be able to detect discrepancies between these requirements and their actual implementation.

Question 4.3 (FL-4.2.1, K3)
Correct answer: A
The domain is the amount of purchase. There are three equivalence partitions for this domain according to the specification:
• "No discount" partition: {0.01, 0.02, ..., 99.98, 99.99}
• "5% discount" partition: {100.00, 100.01, ..., 299.98, 299.99}
• "10% discount" partition: {300.00, 300.01, 300.02, ...}
Note that the smallest possible amount is 0.01, as the smallest possible positive number.
Answer A is correct. The three values 0.01, 100.99, and 500, each of which belongs to a different partition, cover all three equivalence partitions.
Answer B is incorrect. We have only three partitions, so the minimum set of values taken for testing should have three elements.
Answer C is incorrect. The values 1 and 99 belong to the same partition. This set does not contain a value for which the system should allocate a 10% discount, so the last partition is not covered.
Answer D is incorrect. All the values belong to the "no discount" partition. They are values representing possible discount percentages (0, 5, 10), but we are not analyzing the discount type domain but the input domain (amount of purchases).
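The partitioning can also be written down executably. In this sketch (the function name and partition labels are ours), answer A's three values are checked to cover all three partitions:

```python
def partition(amount: float) -> str:
    # Partition boundaries as derived above (amounts in currency units)
    if amount < 100.00:
        return "no discount"
    if amount < 300.00:
        return "5% discount"
    return "10% discount"

answer_a = [0.01, 100.99, 500]
assert {partition(x) for x in answer_a} == {
    "no discount", "5% discount", "10% discount"
}
```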


Question 4.4 (FL-4.2.1, K3)
Correct answer: C
We are supposed to check two situations: one in which the machine does not give change and one in which it gives change.
Answer A is incorrect. Both scenario 1 and scenario 2 are scenarios in which the machine does not return any change. So we miss a case in which change is given.
Answer B is incorrect. Both scenarios only cover the case in which the machine gives change. We miss a test in which the machine does not give change.
Answer C is correct. In scenario 1, the machine does not give change (the amount inserted is exactly equal to 75c), and in scenario 3, the machine gives 25c change.
Answer D is incorrect, because two scenarios are enough to cover all (two) equivalence partitions—see the justification for the correct answer.

Question 4.5 (FL-4.2.1, K3)
Correct answer: A
To achieve 100% equivalence partitioning "each choice" coverage, we need to cover every card type and every discount. The existing test cases fail to cover only the diamond card and the 5% discount. These two items can be covered with one additional test case: PT05: diamond card, 5%. Thus, the correct answer is A.

Question 4.6 (FL-4.2.2, K3)
Correct answer: D
The domain under consideration is the length of the password, and the equivalence partitions look as follows:
• Password too short: {0, 1, 2, 3, 4, 5}
• Password with correct length: {6, 7, 8, 9, 10, 11}
• Password too long: {12, 13, 14, ...}
The existing test cases achieve 100% 2-value BVA coverage, so they must cover all the boundary values, that is, 0, 5, 6, 11, and 12. To achieve 100% 3-value BVA coverage, we need to cover the following values:
• 0, 1 (for the boundary value 0)
• 4, 5, 6 (for the boundary value 5)
• 5, 6, 7 (for the boundary value 6)
• 10, 11, 12 (for the boundary value 11)
• 11, 12, 13 (for the boundary value 12)


So, in total, the values 0, 1, 4, 5, 6, 7, 10, 11, 12, and 13 should be tested. However, since we know that the values 0, 5, 6, 11, and 12 are already in our test set (because 100% 2-value BVA coverage is achieved), the missing values are 1, 4, 7, 10, and 13. Hence, answer D is correct.

Question 4.7 (FL-4.2.2, K3)
Correct answer: D
Achieving full BVA coverage is infeasible. The numbers of consecutive free washes are all multiples of 10: {10, 20, 30, 40, 50, ...}. But in order to apply the BVA method, the partitions must be contiguous, i.e., they must not have "holes." Therefore, if we wanted to apply BVA to this problem, we would have to derive infinitely many equivalence partitions: {1, ..., 9}, {10}, {11, ..., 19}, {20}, {21, ..., 29}, {30}, {31, ..., 39}, {40}, etc., and this would mean that we would have to execute infinitely many test cases (since we have infinitely many boundary values: 1, 9, 10, 11, 19, 20, 21, 29, 30, 31, etc.).
Note that if we used the equivalence partitioning method, the problem could be solved, because we would have only two partitions: numbers divisible by 10 and other numbers. Therefore, only two tests, such as 9 and 10, would be enough to achieve coverage of the equivalence partitions.

Question 4.8 (FL-4.2.3, K3)
Correct answer: A
Requirements are contradictory if, for a given combination of conditions, we can indicate two different sets of corresponding actions. In our case, the two different actions are "free ride"=YES and "free ride"=NO. To force the value NO, the condition "student"=YES must be satisfied. To force the value YES, either "member of parliament"=YES or "disabled person"=YES must occur.
Answer A is correct. This combination matches both rules R1 and R3, which give contradictory actions.
Answer B is incorrect. This combination only matches rules R1 and R2, which result in the same action.
Answer C is incorrect. This combination only matches column R3, so there can be no contradiction within a single rule.
Answer D is incorrect. It fits neither rule R1, nor R2, nor R3. So this is an example of a missing requirement, but not of contradictory requirements.

Table 1 Combinations of conditions

Combination   Age        Residence   Earnings
1             Up to 18   City        Up to 4000/month
2             Up to 18   City        From 4001/month
3             Up to 18   Village     Up to 4000/month
4             Up to 18   Village     From 4001/month
5             19–40      City        Up to 4000/month
6             19–40      City        From 4001/month
7             19–40      Village     Up to 4000/month
8             19–40      Village     From 4001/month
9             From 41    City        Up to 4000/month
10            From 41    City        From 4001/month
11            From 41    Village     Up to 4000/month
12            From 41    Village     From 4001/month

Question 4.9 (FL-4.2.3, K3)
Correct answer: C
The number of columns in the full decision table is equal to the number of all possible combinations of conditions. Since we have three conditions, with 3, 2, and 2 possible choices, respectively, the number of all possible combinations of these conditions is 3*2*2 = 12. These are listed in Table 1.

Question 4.10 (FL-4.2.4, K3)
Correct answer: A
Since we have three states and four events, there are 3*4 = 12 possible (state, event) combinations. The table from Question 4.10 contains only four of them (i.e., there are only four valid transitions in the machine). Thus, the number of invalid transitions is 12 - 4 = 8. Here they are (given as (state, event) pairs):
1. (Initial, LoginOK)
2. (Initial, LoginError)
3. (Initial, Logout)
4. (Logging, Login)
5. (Logging, Logout)
6. (Logged, Login)
7. (Logged, LoginOK)
8. (Logged, LoginError)

None of these combinations appears in the list of valid transitions given in the task. Another way to show that there are eight invalid transitions is to count the empty cells in the state table, which looks as shown in Table 2.


Table 2 State table for Question 4.10

State\Transition | Login   | LoginOK | LoginError | Logout
Initial          | Logging | –       | –          | –
Logging          | –       | Logged  | Initial    | –
Logged           | –       | –       | –          | Initial

Fig. 1 Paths determined by tests covering all valid transitions (Question 4.11)

Question 4.11 (FL-4.2.4, K3)
Correct answer: B
Note that no two of the following three transitions can occur within a single test case:
• S0 > (E2) > S3
• S1 > (E1) > S4
• S2 > (E2) > S5
This means that we need at least three test cases to cover all valid transitions. In fact, three test cases are enough, and these are:
TC1: S0 > (E1) > S1 > (E2) > S2 > (E2) > S5
TC2: S0 > (E1) > S1 > (E1) > S4 > (E1) > S5
TC3: S0 > (E2) > S3 > (E2) > S4 > (E1) > S5
The paths defined by these test cases are shown in Fig. 1. Note that each transition (arrow) is covered by at least one test case.


Question 4.12 (FL-4.3.1, K2)
Correct answer: D
Answer A is incorrect. The branch coverage criterion subsumes the statement coverage criterion, but not vice versa. For example, for the code:

1. IF (x == 0) THEN
2.   x := x + 1
3. RETURN x

one test case (for x = 0) will result in the execution of statements 1, 2, and 3 and therefore achieve 100% statement coverage, but this test covers only two of the three branches: (1, 2) and (2, 3). The branch (1, 3), which is executed when the input x is different from zero, remains uncovered.
Answer B is incorrect. The code example above shows this. The test case (x = 0) achieves 100% statement coverage but causes only the true outcome of the decision in statement 1. We do not have a test that forces a false outcome of this decision.
Answer C is incorrect. The program above can return any number other than zero. The test (x = 0) achieves 100% statement coverage but only forces the return of the value x = 1.
Answer D is correct. Statement coverage forces the execution of every statement in the code, so in particular, it means executing every statement that contains a defect. Of course, this does not mean triggering every failure caused by these defects, because the execution of an invalid statement may not have any negative effect. For example, the execution of the statement x := a / b will be completely correct as long as the denominator (b) is not equal to 0.

Question 4.13 (FL-4.3.2, K2)
Correct answer: C
Branch coverage requires that tests cover every possible flow of control between executable statements in the code, i.e., all possible branches in the code—both unconditional and conditional. The code under test has a linear structure, and each execution of this code will exercise its statements in the order 1, 2, 3. This means that each time this program is run, both unconditional branches occurring in it will be covered: (1, 2) and (2, 3). Hence, the correct answer is C: one test case with any input data x, y is enough.
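The same argument can be replayed in executable form. Below is a Python rephrasing of the pseudocode from Question 4.12 (an illustrative sketch, not part of the official rationale):

    def f(x):
        if x == 0:      # statement 1 (the decision)
            x = x + 1   # statement 2
        return x        # statement 3

    # One test with x = 0 executes statements 1, 2, and 3,
    # achieving 100% statement coverage...
    assert f(0) == 1

    # ...but the branch taken when the decision is false (x != 0)
    # is never exercised; a second test is needed for 100% branch coverage.
    assert f(7) == 7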


Question 4.14 (FL-4.3.3, K2)
Correct answer: A
Answer A is correct. This is a fundamental property and advantage of white-box test techniques. In this approach, tests are designed directly on the basis of the structure of what is going to be tested (e.g., the source code), so a full, accurate specification is not necessary for the test design itself (except for deriving the expected results).
Answer B is incorrect. White-box techniques do not always require programming skills. They can be used, for example, at the system test level, where the covered structure is, for example, the program menu.
Answer C is incorrect. There is no relationship between the coverage metrics of black-box and white-box techniques.
Answer D is incorrect. There is no direct relationship between white-box coverage and risk, because the risk level depends specifically on the impact of the risk, not just on how many lines of code the function associated with a particular risk has.

Question 4.15 (FL-4.4.1, K2)
Correct answer: D
Answer A is incorrect. The document does not mention boundary values.
Answer B is incorrect. In checklist-based testing, the checklist defines the "positive" features of the software, while the analyzed document talks about possible failures.
Answer C is incorrect. These are not use cases.
Answer D is correct. The document appearing in the question is a list of possible defects or failures. Such lists are used in the technique of fault attacks, a formalized approach to error guessing.

Question 4.16 (FL-4.4.2, K2)
Correct answer: A
Exploratory testing utilizes the knowledge, skills, intuition, and experience of the tester but gives the tester full room for maneuver when it comes to the repertoire of techniques they can use in session-based exploratory testing. Therefore, answer A is correct.
Answers B and C are incorrect—see the rationale for answer A.
Answer D is incorrect. Although formal test techniques are allowed (see the rationale for answer A), the explanation in this answer is wrong. In an exploratory approach, it is not necessary to have a test basis from which to derive test cases, since this is an experience-based test technique.


Question 4.17 (FL-4.4.3, K2)
Correct answer: D
Answer A is incorrect. Although checklists can be organized around nonfunctional testing, this is not the main advantage of using checklists.
Answer B is incorrect. In an experience-based approach such as checklist-based testing, it is impossible to precisely define meaningful coverage measures, especially measures of code coverage.
Answer C is incorrect. Using checklists does not necessarily require expertise (especially in the case of low-level, detailed checklists)—the need for expertise fits exploratory testing better.
Answer D is correct. In the absence of detailed test cases, checklist-based testing can provide a degree of consistency for testing.

Question 4.18 (FL-4.5.1, K2)
Correct answer: D
Answer A is incorrect. The tester cannot decide on the acceptance criteria alone. User stories, including acceptance criteria, are written in cooperation between the product owner, the developer, and the tester.
Answer B is incorrect. This solution does not make sense. First, story planning is not the moment to write tests. Second, acceptance tests must be created on the basis of established and precise acceptance criteria; at this moment, the team is still negotiating these criteria, and it is not yet clear what form they will take.
Answer C is incorrect. The scenario does not say anything about performance, but even if this topic came up in the discussion, the product owner cannot be excluded from it.
Answer D is correct. This is a model example of negotiating a user story by all team members as part of a collaboration-based test approach.

Question 4.19 (FL-4.5.2, K2)
Correct answers: B, C
Answer A is incorrect. The acceptance criteria may address a nonfunctional aspect such as performance, but the term "fast enough" used in this criterion is imprecise and therefore untestable.
Answer B is correct. This is the desired mechanism for offering functionality in this software: the user can only order purchases when registered. This is a precise and testable criterion, directly related to the content of the story.
Answer C is correct. Acceptance criteria can take into account "negative" events, such as a user making an error during the registration process, which may result in a denial of further processing. This is a precise and testable criterion, directly related to the content of the story.


Answer D is incorrect. While it is a reasonable, precise, and testable acceptance criterion, it does not directly address the story from the scenario. This is because it is written from the point of view of the system operator, not that of the e-shop customer.
Answer E is incorrect. This is an example of a rule for writing acceptance criteria, not a specific acceptance criterion for a user story.

Question 4.20 (FL-4.5.3, K3)
Correct answer: B
Test 1 is inconsistent with the business rule. The first condition is met (the time of the longest-held book does not exceed 30 days), and after borrowing two new books, having already borrowed three others, the student will have a total of five books borrowed. This does not exceed the limit from the second condition of the business rule. So the system should allow the loan, but in Test 1, the decision is "does not allow."
Test 2 follows the business rule: the student does not keep any of the four borrowed books for more than 30 days and wants to borrow one new one, so they will have a total of five borrowed books. They will not exceed the limit, so the system allows them to borrow.
Test 3 follows the business rule: the professor has at least one book held too long (Days > 30), so the system cannot allow a book loan.
Test 4 follows the business rule: the professor has a clean account and wants to borrow six books, which is within the limit of ten books. The system should allow the loan.
Hence, only one of the tests is inconsistent with the business rule, so the correct answer is B.
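The business rule from Question 4.20 can also be written as a small executable oracle and run against the tests. The sketch below reconstructs the rule from the rationale above; the limits of five books for students and ten for professors are assumptions read off that rationale, not quoted from the question.

    # Assumed limits, reconstructed from the explanation above.
    LIMITS = {"student": 5, "professor": 10}

    def loan_allowed(role, longest_held_days, held, requested):
        if longest_held_days > 30:                # first condition of the rule
            return False
        return held + requested <= LIMITS[role]   # second condition

    # Test 1: a student holding 3 books (none held longer than 30 days)
    # asks for 2 more. The rule says "allow", so the recorded decision
    # "does not allow" contradicts the rule.
    assert loan_allowed("student", 30, 3, 2) is True
    # Test 3: a professor holds a book for more than 30 days.
    assert loan_allowed("professor", 31, 1, 1) is False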

Answers to Questions from Chap. 5

Question 5.1 (FL-5.1.1, K2)
Correct answer: A
The test plan must be in line with the test strategy, not the other way around. So the test strategy is not a part of the test plan. All other elements (budgetary constraints, the scope of testing, and the risk register) are parts of the test plan. Thus, the correct answer is A.


Question 5.2 (FL-5.1.2, K2)
Correct answer: D
Answer A is incorrect. Detailed risk analysis for user stories is performed during iteration planning, not during release planning.
Answer B is incorrect. Identification of the nonfunctional aspects of the system to be tested is done during iteration planning, not during release planning.
Answer C is incorrect. Estimating the test effort for new features planned for an iteration is done during iteration planning, not during release planning.
Answer D is correct. During release planning, testers are involved in creating testable user stories and their acceptance criteria.

Question 5.3 (FL-5.1.3, K2)
Correct answer: D
Entry criteria include, in particular, availability criteria and the initial quality level of the test object. Therefore, the availability of testers (i) and passing all smoke tests (iv) are entry criteria. In contrast, typical exit criteria are measures of thoroughness. Therefore, the absence of critical defects (ii) and achieving a certain threshold of test coverage (iii) are exit criteria. Thus, the correct answer is D.

Question 5.4 (FL-5.1.4, K3)
Correct answer: B
The team wants to calculate E(5), for which it will need the values of E(4), E(3), and E(2). Since the value of E(4) is unknown, the team must first estimate E(4) and then E(5). According to the effort extrapolation model, we have:

E(4) = (1/3) * (E(3) + E(2) + E(1)) = (1/3) * (18 + 15 + 12) = 15

So E(4) = 15. Now we can extrapolate the effort in the fifth iteration:

E(5) = (1/3) * (E(4) + E(3) + E(2)) = (1/3) * (15 + 18 + 15) = 16

So E(5) = 16. This means that the correct answer is B.
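The extrapolation rule used in Question 5.4 ("the average of the previous three iterations") is easy to script; here is a minimal sketch, with the effort values of the first three iterations taken from the working above:

    def extrapolate(efforts, target):
        """Fill in unknown iterations as the mean of the three preceding ones."""
        for i in range(max(efforts) + 1, target + 1):
            efforts[i] = (efforts[i - 1] + efforts[i - 2] + efforts[i - 3]) / 3
        return efforts[target]

    known = {1: 12, 2: 15, 3: 18}
    print(extrapolate(known, 5))  # E(4) = 15.0 is computed first, then E(5) = 16.0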


Question 5.5 (FL-5.1.5, K3)
Correct answer: C
The first test case to be executed will be the one achieving the highest feature coverage, i.e., TC1, which covers four of the seven features: A, B, C, and F. The second in order will be the one that covers the most of the previously uncovered features (i.e., D, E, G). Each of TC2, TC3, and TC4 covers only one of these additional features, while TC5 covers two additional uncovered features: D and G. Thus, TC5 will be executed second. These first two test cases, TC1 and TC5, cover a total of six of the seven features: A, B, C, D, F, and G. The third test case will be the one that covers the most of the features not covered so far, that is, feature E. This feature is covered only by TC4, so this test case will be executed third. Thus, the correct answer is C. (A code sketch of this greedy ordering appears after Question 5.7.)

Question 5.6 (FL-5.1.6, K1)
Correct answer: C
Answer A is incorrect. Team effort has nothing to do with the concept of the test pyramid.
Answer B is incorrect. The test pyramid does not refer to design tasks but directly to testing at different test levels.
Answer C is correct. The test pyramid illustrates the fact that tests at lower test levels are more granular (more detailed), and therefore, usually, more of them are needed, because each test achieves relatively low coverage. The higher the level, the lower the granularity of the tests, and usually, fewer of them are needed, since a single test at a higher level usually achieves more coverage than a single test at a lower level.
Answer D is incorrect. The test pyramid does not model test effort; it models the granularity and number of tests at each test level.

Question 5.7 (FL-5.1.7, K2)
Correct answer: A
Answer A is correct. According to the testing quadrants model, component (unit) tests are placed in the technology-oriented, team-supporting quadrant, as this quadrant contains automated tests and tests that are part of the continuous integration process. Component tests are typically tests of this kind.
Answers B, C, and D are incorrect—see the rationale for the correct answer.
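Returning to Question 5.5, the "cover the most not-yet-covered features first" strategy is a classic greedy algorithm. The coverage sets below are assumed for illustration only (the actual matrix is given in the question), but they are consistent with the rationale above:

    coverage = {
        "TC1": {"A", "B", "C", "F"},
        "TC2": {"A", "D"},
        "TC3": {"B", "G"},
        "TC4": {"C", "E"},
        "TC5": {"D", "G"},
    }

    covered, order = set(), []
    while len(order) < len(coverage):
        # Pick the test case that adds the most uncovered features.
        best = max((tc for tc in coverage if tc not in order),
                   key=lambda tc: len(coverage[tc] - covered))
        order.append(best)
        covered |= coverage[best]

    print(order[:3])  # ['TC1', 'TC5', 'TC4']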


Question 5.8 (FL-5.2.1, K1)
Correct answer: A
Answer A is correct: risk likelihood and risk impact are independent factors.
Answer B is incorrect: see the rationale for answer A.
Answer C is incorrect: see the rationale for answer A.
Answer D is incorrect: the risk impact should be assessed before the risk occurs.

Question 5.9 (FL-5.2.2, K2)
Correct answer: B
Answer A is incorrect. This is an extreme example of product risk materialization.
Answer B is correct. The occurrence of project risks often results in problems related to delays in project tasks.
Answer C is incorrect. High software maintenance costs are due to maintainability defects in the product; that is, they are a consequence of product risks, not project risks.
Answer D is incorrect. Customer dissatisfaction is due to a product defect, so it is a consequence of product risks, not project risks.

Question 5.10 (FL-5.2.3, K2)
Correct answer: A
Answer A is correct. A technical review to look for possible usability problems associated with the interface seems to be a reasonable idea, as does checking the control flow in the transfer mechanism code by applying branch coverage.
Answer B is incorrect. Component testing may not reveal problems related to the transfer interface—we do not know what the application architecture looks like. Component testing is too low a test level for this.
Answer C is incorrect. Usability is not related to the implementation of the business logic of the application.
Answer D is incorrect. White-box testing does not address the risks related to interface usability.

Question 5.11 (FL-5.2.4, K2)
Correct answer: C
Buying insurance transfers the burden of risk to a third party, in this case the insurer. This is an example of risk transfer, so the correct answer is C.


Question 5.12 (FL-5.3.1, K1)
Correct answer: B
Answer A is incorrect. The residual risk level is a typical metric used in testing, as it expresses the current level of residual risk in a product after the testing cycle.
Answer B is correct. Coverage of requirements by source code has nothing to do with testing. This metric can represent the progress of development work, not of testing.
Answer C is incorrect. The number of critical defects found is directly related to testing.
Answer D is incorrect. The progress of the test environment implementation refers to an important activity performed as part of the testing process, so it is a metric used in testing.

Question 5.13 (FL-5.3.2, K2)
Correct answer: D
Answer A is incorrect. Information about unmitigated risks is typical information contained in a test completion report.
Answer B is incorrect. Deviations from the test plan are typical information contained in a test completion report.
Answer C is incorrect. Information about defects is typical information contained in a test completion report.
Answer D is correct. Testing scheduled for the next reporting period is typical information included in a test progress report. It is not typical information for a test completion report, as this type of report describes a closed, completed scope of work for which there will be no further reporting periods.

Question 5.14 (FL-5.3.3, K2)
Correct answer: B
There is no single best method of communication. For example, formal reports or e-mails will not be useful when the team needs to communicate quickly, frequently, and in real time. On the other hand, face-to-face, verbal communication will be impossible when the team is dispersed and works in many different time zones. The form of communication should always be chosen individually, according to the circumstances, taking into account various contextual factors. Therefore, the correct answer is B.


Question 5.15 (FL-5.4.1, K2)
Correct answer: A
Answer A is correct. The purpose of configuration management is to ensure and maintain the integrity of the component/system, the related testware, and the interrelationships between them throughout the project and software life cycle, so that the activities described in the scenario can be performed.
Answer B is incorrect. Impact analysis can determine the magnitude or risk of a change, but it does not identify the source work products based on the software version.
Answer C is incorrect. Continuous delivery helps automate the software release process, but it does not ensure the integrity and versioning of work products.
Answer D is incorrect. Retrospectives serve process improvement; they do not provide integrity or versioning of work products.

Question 5.16 (FL-5.5.1, K3)
Correct answer: A
Answer A is correct. The report lacks the steps to reproduce the failure. For example, the developer may not know why the report mentions accepting passwords of length 6, 7, and 9, but not of length 8. Steps to reproduce would allow the developer to verify this quickly.
Answer B is incorrect. The version of the product (in the form of the last compilation date) is given in the defect report.
Answer C is incorrect. When we create a defect report using a defect management tool, an "open" status is most likely assigned automatically. In addition, this information is not as crucial as that given in answer A.
Answer D is incorrect. This information is useful to the tester but does not have to be included in the defect report, and it may be of little value to the developer who is responsible for fixing the defect.

Answers to Questions from Chap. 6

Question 6.1 (FL-6.1.1, K2)
Correct answer: B
According to the syllabus (Sect. 6.1), tools that support test management include requirements management tools. Thus, the correct answer is B.


Question 6.2 (FL-6.1.2, K1)
Correct answers: C, D
According to the syllabus, the benefits include, in particular, increased consistency and repeatability of testing (C) and objective evaluation through the use of well-defined operational definitions of measurement (D). Overdependence on the tool (A), dependence on the vendor (B), and errors in estimating the cost of maintaining the tool (E) are risks. Hence, the correct answers are C and D.

Solutions to Exercises

Solutions to Exercises from Chap. 4

Exercise 4.1 (FL-4.2.1, K3)
A) We have two domains: band = {Iron Maiden, Judas Priest, Black Sabbath} and ticket type = {in front of stage, away from stage}. Each element of each domain will be a separate one-element equivalence partition. Thus, the partitioning of the "band" domain is as follows: {Iron Maiden}, {Judas Priest}, {Black Sabbath}; the partitioning of the "ticket type" domain is as follows: {in front of stage}, {away from stage}.
B) There are no invalid partitions in this problem, because the way the band and ticket type are selected makes them impossible: the user selects these values from predefined drop-down lists. Of course, it is possible to consider a situation in which, after submitting the form, the query is intercepted and modified, e.g., the band name is changed to a non-existent one. Here, however, we focus only on purely functional testing, restricting the user-system interaction to the GUI only, and we do not consider advanced application security testing issues.
C) The test suite must cover each of the three partitions of the "band" domain and each of the two partitions of the "ticket type" domain. Since each test case covers one partition from each of these domains, three test cases will be enough, e.g.:
TC1: band = Iron Maiden, ticket type = in front of stage
TC2: band = Judas Priest, ticket type = away from stage
TC3: band = Black Sabbath, ticket type = in front of stage
It should also be pointed out that in each test case, you need to specify the expected output. In our case, it will be the assignment of a ticket of the specified type for a concert by the specified band.
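The "each choice" pairing used in part C can also be generated mechanically by zipping the two partition lists together, reusing a value of the shorter list once it is exhausted. A minimal sketch:

    from itertools import zip_longest

    bands = ["Iron Maiden", "Judas Priest", "Black Sabbath"]
    ticket_types = ["in front of stage", "away from stage"]

    # Reuse the first ticket type once the shorter list runs out.
    tests = [(band, ticket or ticket_types[0])
             for band, ticket in zip_longest(bands, ticket_types)]

    for i, (band, ticket) in enumerate(tests, start=1):
        print(f"TC{i}: band = {band}, ticket type = {ticket}")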


Exercise 4.2 (FL-4.2.1, K3)
A) The input (valid) domain is the natural numbers greater than 1. We can divide this domain into two partitions: prime numbers and composite numbers, namely, {2, 3, 5, 7, 11, 13, ...} and {4, 6, 8, 9, 10, 12, ...}.
B) The interface does not allow the user to enter a value that is not a natural number. Therefore, we can consider that there are no invalid partitions in this problem. However, if we were able, for example, to intercept the message that passes the input value to the system and modify it accordingly, we could then force an invalid value. Whether this is possible, and whether testers choose to take such a "hacking" approach, depends, of course, on a number of factors, in particular the required level of application security. If we consider the problem only from the user's perspective, we can safely assume that the program will operate only on correct, expected values.
C) Since there are no invalid partitions in this problem, two tests are enough to cover all equivalence partitions: one in which we consider a prime number and one in which we consider a composite number, e.g.:
TC1: input is a prime number (e.g., 7)
TC2: input is a composite number (e.g., 12)

Exercise 4.3 (FL-4.2.2, K3)
The domain under analysis is the total amount of purchases. It is a positive number measured to two decimal places (that is, to 1 cent). We need to find those values which, after rounding, become the boundary values for the rounded amount (see Table 1). The smallest possible input value that meets the conditions of the task is $0.01. The value 300 is the largest amount that, after rounding (= 300), gives the maximum boundary value for the 0% discount. The value 300.01 is the smallest amount that, after rounding (= 301), gives the minimum boundary value for the 5% discount, etc. So we have the following test cases:
• TC1: amount = 0.01, expected result: 0% discount.
• TC2: amount = 300, expected result: 0% discount.
• TC3: amount = 300.01, expected result: 5% discount.

Table 1 Boundary values before and after rounding

Discount | Amount rounded up: Minimum | Amount rounded up: Maximum | Amount before rounding: Minimum | Amount before rounding: Maximum
0%       | 1                          | 300                        | 0.01                            | 300
5%       | 301                        | 800                        | 300.01                          | 800
10%      | 801                        | –                          | 800.01                          | –


• TC4: amount = 800, expected result: 5% discount.
• TC5: amount = 800.01, expected result: 10% discount.

Exercise 4.4 (FL-4.2.2, K3)
A) Partitions for the "width" parameter:
• Valid partition: {30, 31, ..., 99, 100}
Partitions for the "height" parameter:
• Valid partition: {30, 31, ..., 59, 60}
Partitions for the "area" (the price of the service depends on its value):
• Valid partition for the price of $450: {900, 901, ..., 1600}
• Valid partition for the price of $500: {1601, 1602, ..., 6000}
The values for "area" are derived from the fact that the minimum dimensions of the image are 30 cm wide and 30 cm high, so the minimum surface area is 30 cm * 30 cm = 900 cm². Similarly, the maximum dimensions are 100 cm wide and 60 cm high, so the maximum area is 100 cm * 60 cm = 6000 cm².
Boundary values:
• For "width": (W1) 30, (W2) 100.
• For "height": (H1) 30, (H2) 60.
• For "area": (A1) 900, (A2) 1600, (A3) 1601, (A4) 6000.
B) We will represent a test case as a pair (w, h), where w and h are the width and height (inputs), respectively. We need to cover eight boundary values with tests: W1, W2, H1, H2, A1, A2, A3, and A4. Note that some boundary values for the area can be obtained by multiplying the width and height boundary values. For example, 900 = 30 * 30, and 6000 = 100 * 60. We can use this to minimize the number of test cases. The test cases and the covered boundary values are shown in Table 2. Note that it is impossible to cover the boundary value A3. This is because 1601 is a prime number, so it cannot be expressed as the product of two numbers greater than or equal to 30.

Table 2 Covered boundary values

TC | Width (input) | Height (input) | Area | Covered width BVs | Covered height BVs | Covered area BVs
1  | 30            | 30             | 900  | W1                | H1                 | A1
2  | 100           | 60             | 6000 | W2                | H2                 | A4
3  | 40            | 40             | 1600 | –                 | –                  | A2


We designed three test cases, TC1, TC2, and TC3, covering seven of the eight identified boundary values. The smallest number belonging to the partition {1601, 1602, ..., 6000} that can be represented as a product of two numbers fulfilling the given constraints is 1610 = 35 * 46. We could add a fourth test case, (35, 46), that exercises this "feasible" boundary value for the partition {1601, 1602, ..., 6000}.

Exercise 4.5 (FL-4.2.3, K3)
A) The conditions occurring in our problem are "points ≥ 85" (possible values: YES, NO) and "number of errors ≤ 2" (possible values: YES, NO).
B) The system can take the following actions:
• Grant a driver's license (YES, NO)
• Repeat the theory exam (YES, NO)
• Repeat the practical exam (YES, NO)
• Additional driving lessons (YES, NO)

C) All combinations of conditions are shown at the top of Table 3 and can be generated using the "tree" method described earlier in this chapter. Since we have two conditions and each of them takes one of two possible values, we have 2*2 = 4 combinations of their values. All combinations are feasible.
D) The full decision table is shown in Table 3. It is easy to see that the various actions follow directly from the provisions of the specification. For example, if the number of points for the theoretical exam is 85 or more, and the candidate has made at most two errors (column 1), this means that a driver's license should be granted (action "grant a driver's license" = YES), and the candidate should not repeat any exams or take additional lessons (other actions = NO).

Table 3 Decision table for the driving test support system

Conditions                  | 1   | 2   | 3   | 4
Points ≥ 85?                | YES | YES | NO  | NO
Errors ≤ 2?                 | YES | NO  | YES | NO
Actions                     |     |     |     |
Grant a driver's license?   | YES | NO  | NO  | NO
Repeat the theory exam?     | NO  | NO  | YES | YES
Repeat the practical exam?  | NO  | YES | NO  | YES
Additional driving lessons? | NO  | NO  | NO  | YES


E) Sample test cases generated from the decision table might look like the following:

Test case 1 (corresponding to column 1)
Name: grant the driver's license.
Pre-conditions: the candidate took the exams for the first time.
Input: theoretical exam score = 85 points, number of errors made = 2.
Expected output: granting of a driver's license, no need to repeat exams, no need to take additional driving lessons.
Post-conditions: candidate marked as a candidate who has already taken the exams.

Test case 2 (corresponding to column 2)
Name: pass the theoretical exam, fail the practical exam.
Pre-conditions: the candidate took the exams for the first time.
Input: theoretical exam score = 93 points, number of errors made = 3.
Expected output: driver's license not granted; the candidate has to repeat the practical exam and does not have to take additional driving lessons.
Post-conditions: candidate marked as a candidate who has already taken the exams.

Test case 3 (corresponding to column 3)
Name: fail the theoretical exam, pass the practical exam.
Pre-conditions: the candidate took the exams for the first time.
Input: theoretical exam score = 84 points, number of errors made = 0.
Expected output: driver's license not granted; the candidate has to repeat the theoretical exam and does not have to take additional driving lessons.
Post-conditions: candidate marked as a candidate who has already taken the exams.

Test case 4 (corresponding to column 4)
Name: fail both exams.
Pre-conditions: the candidate took the exams for the first time.
Input: theoretical exam score = 42 points, number of errors made = 3.
Expected output: driver's license not granted; the candidate has to repeat both the theoretical and practical exams and also has to take additional driving lessons.
Post-conditions: candidate marked as a candidate who has already taken the exams.

Note that the post-conditions are not superfluous here—perhaps the system has a completely different set of behaviors toward candidates who retake the exam. For example, the system could then check whether the candidate has actually taken additional driving lessons, and this would be part of the input to the system.
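Decision tables translate naturally into code, which makes it possible to check each column of Table 3 against an executable oracle. A sketch of the driving-exam logic (for self-study; not part of the exercise solution):

    def exam_decision(points, errors):
        theory_ok = points >= 85
        practical_ok = errors <= 2
        return {
            "grant_license": theory_ok and practical_ok,
            "repeat_theory": not theory_ok,
            "repeat_practical": not practical_ok,
            "additional_lessons": not theory_ok and not practical_ok,
        }

    # Column 1 of Table 3: both exams passed.
    assert exam_decision(points=85, errors=2)["grant_license"]
    # Column 4 of Table 3: both exams failed, so lessons are required.
    assert exam_decision(points=42, errors=3)["additional_lessons"]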


Exercise 4.6 (FL-4.2.3, K3)
In Fig. 4.19, describing the process, the rhombuses represent conditions, and the rectangles represent actions. We have three conditions: "Does the passenger have a gold card?", "Is economy class full?", and "Is business class full?" The possible actions are as follows: "Issue a boarding pass?", "Type of seat?" [economy (E) or business (B)], and "Is the passenger removed from the passenger list?" The corresponding decision table is shown in Table 4.

Table 4 Decision table for Exercise 4.6 (conditions and actions; the table body is not reproduced here)

The actions were assigned based on the diagram in Fig. 4.19. Note that the action "type of seat" is not a Boolean variable—its possible values are E, B, and N/A ("not applicable," meaning that no seat can be allocated, because the passenger is removed from the passenger list). According to the business rules, when a passenger has a gold card and business class is full, they should be assigned a seat in economy class. Columns 1 and 3 correspond to this situation (gold card = YES, business class full = YES). Let us look carefully at column 1. It describes the situation when economy class is also full. Nevertheless, according to the specification, the system tells us to allocate the passenger a seat in this class (see column 1 of Table 4)! We have discovered a serious error in the specification. We do not know how this problem should be solved. Here are some possible solutions:
• Remove another passenger without a gold card from economy class, and assign their seat to the customer under consideration. (The question, however, is what happens if every passenger in economy class has a gold card? Arguably, this is a very unlikely situation, but a possible one—the specification should take it into account.)
• Add an additional condition to the table—if the customer has a gold card and business class is full, consider whether economy class is full. If not, we assign the passenger a seat in this class, as described in column 3. However, if economy class is full, then we need to take another action, such as removing the passenger from the list.

Exercise 4.7 (FL-4.2.4, K3)
A) In the first step, let us analyze what states the system may be in. We can identify the following eight states (they follow directly from the scenario analysis):
• Welcome screen—the initial system state, waiting for card insertion.
• Card validation—the state in which the system verifies the inserted card.
• End—the state to which the system goes after validation if the card is invalid.
• Ask for PIN—the state in which the system asks for the PIN to be entered for the first time.
• Ask for PIN second time—the state in which the system asks for the PIN to be entered for the second time (after the PIN was entered incorrectly the first time).


Table 5 Possible events for each state of the state machine

State                   | Possible events and actions
Welcome screen          | InsertCard
Card validation         | CardOK (transition to Ask for PIN state); InvalidCard (actions: return card, display "card error" message; transition to End state)
End                     | —
Ask for PIN             | PinOK (transition to Logged state); InvalidPIN (action: "PIN error" message; transition to "Ask for PIN 2nd time" state)
Ask for PIN second time | PinOK (transition to Logged state); InvalidPIN (action: "PIN error" message; transition to "Ask for PIN 3rd time" state)
Ask for PIN third time  | PinOK (transition to Logged state); InvalidPIN (actions: "PIN error" message, "card locked" message; transition to "Card blocked" state)
Logged                  | —
Card blocked            | —

• Ask for PIN third time—the state in which the system asks for the PIN to be entered for the third time (after the PIN was entered incorrectly the second time).
• Logged—the state to which the system goes after the PIN is entered correctly the first, second, or third time.
• Card blocked—the state to which the system goes after the PIN is entered incorrectly three times.
Note that in this state transition model, we need to define as many as three states related to waiting for PIN entry, because our state model has no memory—the only representation of the history of past events is the state we are currently in. So, in order to distinguish the number of incorrectly entered PIN codes, we need three states.
Now let us consider the possible events that can occur in our system and the actions the system can take to handle these events. One of the most convenient ways to do this is to analyze the individual states and think about what can happen (based on the specification) while we are in a given state. The results of our analysis are presented in Table 5. Based on Table 5, we can design a state transition diagram. It is shown in Fig. 1.
B) The transition diagram using guard conditions is shown in Fig. 2. Note that by introducing guard conditions, we are able to reduce the number of states. Now we have only one state related to asking for a PIN, and the number of incorrectly entered PINs is remembered in a variable named "attempts." Whether we go from the "Ask for PIN" state to "Card blocked" or "Logged" depends on how many times a wrong PIN was entered. Note that the loop Ask for PIN (InvalidPIN) Ask for PIN


Fig. 1 Transition diagram for PIN verification without using guard conditions

Fig. 2 Transition diagram for PIN verification using guard conditions

can be executed a maximum of two times. This is because each time we execute this loop transition, the value of the "attempts" variable increases by one, and the guard condition allows us to exercise this loop only if this variable has a value less than 2. After the PIN has been entered incorrectly twice, this variable has a value of 2, and at the time of the third failed attempt to enter the PIN, the guard condition is false. Instead, the guard condition on the transition to "Card blocked" becomes true, so after the third failed attempt to enter the PIN, the system does not stay in "Ask for PIN" again but goes to the "Card blocked" state.
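The guard-condition variant of Fig. 2 can be prototyped in a few lines; this sketch (an illustration, not part of the exercise solution) shows how the "attempts" counter replaces the three separate PIN states:

    class PinCheck:
        def __init__(self):
            self.state = "Ask for PIN"
            self.attempts = 0

        def enter_pin(self, pin_ok):
            if self.state != "Ask for PIN":
                raise RuntimeError("no PIN expected in state " + self.state)
            if pin_ok:
                self.state = "Logged"
            elif self.attempts < 2:   # guard: the loop runs at most twice
                self.attempts += 1    # stay in "Ask for PIN"
            else:                     # third failed attempt
                self.state = "Card blocked"

    machine = PinCheck()
    for _ in range(3):
        machine.enter_pin(pin_ok=False)
    print(machine.state)  # Card blocked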


C) Before we move on to test design, let us consider what test conditions we have for each of the two types of coverage. In the case of all states coverage, we have the following items (states) to cover: Welcome screen, Card validation, End, Ask for PIN, Ask for PIN second time, Ask for PIN third time, Logged, and Card blocked. So we need to cover eight coverage items with tests. For valid transitions coverage, we need to cover all the arrows between states. There are nine of them: InsertCard, InvalidCard, CardOK, three different transitions triggered by PinOK, and three different transitions triggered by InvalidPIN. So to achieve full valid transitions coverage, we need to cover nine coverage items with our test cases.
Regarding all states coverage, let us note that in the diagram from Fig. 1, we have three final states. So we will need at least three test cases, since reaching a final state terminates the test execution. Here are three sample test cases that achieve full all states coverage:
TC1: Welcome screen (InsertCard) Card validation (InvalidCard) End
TC2: Welcome screen (InsertCard) Card validation (CardOK) Ask for PIN (PinOK) Logged
TC3: Welcome screen (InsertCard) Card validation (CardOK) Ask for PIN (InvalidPIN) Ask for PIN second time (InvalidPIN) Ask for PIN third time (InvalidPIN) Card blocked
These three test cases cover all states (TC1 covers three of them, TC2 covers two more, and TC3 covers the remaining three), but they do not cover all transitions. The two transitions not covered are:
Ask for PIN second time (PinOK) Logged
Ask for PIN third time (PinOK) Logged
We need to add two new tests that cover these two transitions, such as:
TC4: Welcome screen (InsertCard) Card validation (CardOK) Ask for PIN (InvalidPIN) Ask for PIN second time (PinOK) Logged
TC5: Welcome screen (InsertCard) Card validation (CardOK) Ask for PIN (InvalidPIN) Ask for PIN second time (InvalidPIN) Ask for PIN third time (PinOK) Logged
Again, note that we cannot achieve valid transitions coverage with fewer than five test cases, because we have five transitions directly reaching the final states. The execution of such a transition ends the test case, so no two of these five transitions can be within a single test case.

Exercise 4.8 (FL-4.2.4, K3)
A) The system has three states (S, W, B) and five different events (Silence!, Bark!, Down!, SeesCat, IsPetted). So we have 3*5 = 15 (state, event) combinations. Since we can see from the diagram that there are six valid transitions (six arrows between states), there must be 15 - 6 = 9 invalid transitions:


• (S, Silence!)
• (S, Down!)
• (B, Bark!)
• (B, Down!)
• (B, SeesCat)
• (W, Silence!)
• (W, Bark!)
• (W, IsPetted)
• (W, SeesCat)

B) The valid transitions can be tested using one test case, for example:
S (Bark!) B (Silence!) S (SeesCat) B (IsPetted) W (Down!) S (IsPetted) W
This test case covers all six valid transitions. Since we have nine invalid transitions, we need to add nine test cases, one for each invalid transition. For example, in order to cover the invalid transition (W, Bark!), we can design the test case:
S (IsPetted) W (Bark!) ?
After reaching the W state, we attempt to trigger the (invalid) "Bark!" event. If this is infeasible, or if the system ignores it, we assume that the test passes. If, on the other hand, the "Bark!" event can be invoked and the system changes its state, the test fails, since this is not the expected behavior. Note that we always start in the initial state, so first we need to exercise valid transitions to reach the desired state (in our case, the W state) and then attempt to trigger the invalid transition (in our case, Bark!). Similarly, we design test cases for the remaining eight invalid transitions.

Exercise 4.9 (FL-4.5.3, K3)
Here are some examples of test cases we can design:
• Successful registration with a valid login (not yet used in the system) and a valid password, e.g., login [email protected], password Abc123Def typed twice in both fields; expected result: the system accepts the data and sends an email to [email protected] with a link to activate the account (this test verifies acceptance criteria AC1 and AC5).
• Attempted registration with a syntactically incorrect login (covers acceptance criterion AC1). Test data sets:
– John.Smith@post (part after the @ sign too short)
– JohnSmith-mymail.com (no @ sign)
– @mymail.com (no text before the @ sign)
– [email protected] (two consecutive dots after the @ sign)


• Attempt to register with a valid but already existing login. Expected result: registration denied, no email sent (this test verifies acceptance criterion AC2).
• Attempt to register with a valid login but an invalid password. Expected result: registration denied, no email sent. Test data sets (covering acceptance criteria AC3 and AC4):
– Syntactically correct password, but different in the two password fields (e.g., Abc123Def and aBc123Def)
– Password too short (e.g., Ab12)
– Password too long (e.g., ABCD1234abcd1234)
– Password without digits (e.g., ABCdef)
– Password without capital letters (e.g., abc123)

Solutions to Exercises from Chap. 5

Exercise 5.1 (FL-5.1.4, K3)
The last iteration of planning poker for the optimistic value (a) yielded the values 3, 3, and 5, which, according to the procedure described, means that the experts reached a consensus on the optimistic value. It is 3, as this is the value indicated by the majority of the experts. Similarly, for the most likely value (m), the experts reached a consensus after the first iteration. The result is 5, as two of the three experts indicated this value. For the pessimistic value (b), the experts reached a consensus only in the third iteration. The result is 13, as it was indicated by all the experts. After these poker sessions, the experts had determined the values of the variables to be used in the three-point estimation technique, namely:
• Optimistic value: a = 3
• Most likely value: m = 5
• Pessimistic value: b = 13
Substituting these values into the formula from the three-point estimation technique, we get:

E = (a + 4m + b) / 6 = (3 + 4*5 + 13) / 6 = 36 / 6 = 6

meaning that the estimated effort is 6 person-days, with a standard deviation of

SD = (b - a) / 6 = (13 - 3) / 6 = 10 / 6 ≈ 1.67

This means that the final estimation result is 6 ± 1.67 person-days, i.e., between 4.33 and 7.67 person-days.
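The two formulas can be wrapped in a helper for quick reuse; a minimal sketch:

    def three_point_estimate(a, m, b):
        """PERT three-point estimation: returns (estimate, standard deviation)."""
        return (a + 4 * m + b) / 6, (b - a) / 6

    e, sd = three_point_estimate(a=3, m=5, b=13)
    print(e, round(sd, 2))   # 6.0 1.67
    print(e - sd, e + sd)    # roughly 4.33 to 7.67 person-days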


Exercise 5.2 (FL-5.1.4, K3)
In the completed project, the effort for the design phase was 20 person-days (because 4 people did the work in 5 days). Similarly, for the implementation phase, the effort was 10*18 = 180 person-days, and for testing, 4*10 = 40 person-days. Thus, the total effort was 240 person-days, with the design/programming/testing ratio equal to 1:9:2.
Let x = the number of days of work of the designers and y = the number of days of work of the developers. Then 66 - x - y is the number of days of work of the testers (because the total project is expected to last 66 days). Given the number of designers, developers, and testers in the new project, the effort in the design, implementation, and testing phases is, respectively, 4x, 6y, and 2*(66 - x - y). From the ratio, we have 4x : 6y : 2*(66 - x - y) = 1 : 9 : 2. So 2*4x = 2*(66 - x - y) and 9*4x = 6y, since testing takes twice as much effort as design, and implementation takes nine times as much effort as design. We must now solve the following system of two equations:

8x = 132 - 2x - 2y
36x = 6y

From the second equation, we have y = 6x. Substituting into the first equation, we get 8x = 132 - 2x - 12x, or 22x = 132, from which we calculate

x = 132 / 22 = 6

By substituting x = 6 into the relation y = 6x, we get y = 6*6 = 36. Since x, y, and 66 - x - y represent the number of days spent on design, programming, and testing, respectively, we get the final answer. Using the ratio method, that is, assuming that the effort in the new project will be distributed proportionally to the effort in the completed project:
• We must allocate x = 6 days for design.
• We must allocate y = 36 days for development.
• We must allocate 66 - x - y = 66 - 42 = 24 days for testing.
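Because the system of equations is linear, the computation can be checked with straight arithmetic; a short sketch following the derivation above:

    # 8x = 132 - 2x - 2y   (testing effort is twice the design effort)
    # 36x = 6y             (implementation is nine times the design effort)
    # From the second equation y = 6x, so 8x = 132 - 2x - 12x = 132 - 14x.
    x = 132 / 22          # days of design work      -> 6.0
    y = 6 * x             # days of development work -> 36.0
    testing = 66 - x - y  # days of testing          -> 24.0
    print(x, y, testing)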


Exercise 5.3 (FL-5.1.5, K3)
If we considered only the priorities, the order of test case execution would be as follows: TC001 → TC003 → TC002 → TC004. However, we must also take into account the logical dependencies—we cannot execute TC001 before TC002 and TC003. The logical dependencies force us to start with TC002 (personal data entry), because it alone does not depend on any other test case. We still cannot execute TC001, the one with the highest priority, because it depends not only on TC002 but also on TC003. So we have to execute TC003 in the next step, which will unlock the possibility of executing TC001. At the very end, we run TC004. Thus, the final order is: TC002 → TC003 → TC001 → TC004.

Exercise 5.4 (FL-5.1.5, K3)
If we were to rely solely on the client's priorities, we should implement and test requirements Req2, Req3, and Req6 first (high priority), then Req1 and Req5 (medium priority), and finally Req4 (low priority). However, regardless of the priorities, the first requirement to be implemented and tested must be Req1, as it is the only requirement that does not depend on other requirements. The only requirement that can be implemented and tested next is Req3 (because it is the only one that depends only on the already implemented and tested Req1). With Req1 and Req3 tested, we have to implement and test Req2, as it is the only requirement that depends only on already implemented and tested requirements. Note that up to this point, the priorities have played no role. But now we can implement and test Req4 or Req5. Let us note that among the not-yet-implemented and untested requirements, the requirement with the highest priority for the customer is Req6, and it depends on Req4. So we implement and test Req4 first, in order to "unlock" the possibility of implementing and testing the highly prioritized Req6 as soon as possible. At the very end, we implement and test Req5. The final order of requirements is as follows: Req1, Req3, Req2, Req4, Req6, Req5. This order takes into account the customer's priorities while also respecting the necessary logical relationships between the requirements.

Exercise 5.5 (FL-5.5.1, K3)
The defect report should contain at least the information described in Table 6.

Table 6 Defect report content

Unique identifier                                              | 34.810
Title, summary of the defect reported                          | Incorrect calculation of the amount to be paid
Date of report (= date of error discovery)                     | 07.09.2023
Author                                                         | Carol Beer
Description of the defect to allow for reproduction and repair | Test case PT003
Actual result                                                  | $48
Expected result                                                | $47.50
Priority for defect removal                                    | Normal
A description of the nonconformity to help determine its cause | It seems that the system rounds the resulting amount up to the full $

Part IV

Official Sample Exam

This part of the book contains a sample exam. It is the official sample exam published by ISTQB®. The duration of the exam is 60 minutes, or 75 minutes if the exam is not taken in the candidate's native language. The original ISTQB® sample exam document contains the exam questions in the order of the corresponding learning objectives in the syllabus. This may make it easier to answer some of the questions. In the actual exam, the questions are arranged in random order. Therefore, in this chapter, the sample exam questions are placed in random order to best reflect the reality of the exam.

Exam Set A

Question #1 (1 point)
You are testing a system that calculates the final course grade for a given student. The final grade is assigned based on the final result, according to the following rules:
• 0–50 points: failed.
• 51–60 points: fair.
• 61–70 points: satisfactory.
• 71–80 points: good.
• 81–90 points: very good.
• 91–100 points: excellent.
You have prepared the following set of test cases:

Test case | Final result | Final evaluation
TC1       | 91           | Excellent
TC2       | 50           | Failed
TC3       | 81           | Very good
TC4       | 60           | Fair
TC5       | 70           | Satisfactory
TC6       | 80           | Good

What is the 2-value boundary value analysis (BVA) coverage for the final result that is achieved with the existing test cases?
a) 50%
b) 60%
c) 33.3%
d) 100%
Select ONE option.


Question #2 (1 point)
Which of the following is a benefit of early and frequent feedback?
a) It improves the test process for future projects.
b) It forces customers to prioritize their requirements based on agreed risks.
c) It is the only way to measure the quality of changes.
d) It helps avoid requirements misunderstandings.
Select ONE option.

Question #3 (1 point)
Your favorite bicycle daily rental store has just introduced a new Customer Relationship Management system and asked you, one of their most loyal members, to test it. The implemented features are as follows:
• Anyone can rent a bicycle, but members receive a 20% discount.
• However, if the return deadline is missed, the discount is no longer available.
• After 15 rentals, members get a gift: a T-Shirt.
The decision table describing the implemented features looks as follows:

Conditions      | R1 | R2 | R3 | R4 | R5 | R6 | R7 | R8
Being a member  | T  | T  | T  | T  | F  | F  | F  | F
Missed deadline | T  | F  | T  | F  | T  | F  | F  | T
15th rental     | F  | F  | T  | T  | F  | F  | T  | T
Actions         |    |    |    |    |    |    |    |
20% discount    |    | X  |    | X  |    |    |    |
Gift T-shirt    |    |    | X  | X  |    |    | X  | X

Based ONLY on the feature description of the Customer Relationship Management system, which of the above rules describes an impossible situation?
a) R4
b) R2
c) R6
d) R8
Choose ONE answer.

Question #4 (1 point)
You need to update one of the automated test scripts to be in line with a new requirement. Which process indicates that you create a new version of the test script in the test repository?
a) Traceability management.
b) Maintenance testing.
c) Configuration management.
d) Requirements engineering.
Select ONE option.


Question #5 (1 point)
Which of the following is a characteristic of experience-based test techniques?
a) Test cases are created based on detailed design information.
b) Items tested within the interface code section are used to measure coverage.
c) The techniques heavily rely on the tester's knowledge of the software and the business domain.
d) The test cases are used to identify deviations from the requirements.
Select ONE option.

Question #6 (1 point)
You test a system whose life cycle is modeled by the state transition diagram shown below. The system starts in the INIT state and ends its operation in the OFF state.

What is the MINIMAL number of test cases to achieve valid transitions coverage?
a) 4
b) 2
c) 7
d) 3
Select ONE option.

Question #7 (1 point)
You received the following defect report from the developers, stating that the anomaly described in this defect report is not reproducible:

Application hangs up
2022-May-03—John Doe—Rejected
The application hangs up after entering "Test input: $ä" in the Name field on the new user creation screen. Tried to log off, log in with the test_admin01 account, same issue. Tried with other test admin accounts, same issue. No error message received; the log (see attached) contains a fatal error notification. Based on test case TC-1305, the application should accept the provided input and create the user. Please fix with


high priority; this feature is related to REQ-0012, which is a critical new business requirement.

What critical information is MISSING from this defect report that would have been useful for the developers?
a) Expected result and actual result.
b) References and defect status.
c) Test environment and test item.
d) Priority and severity.
Select ONE option.

Question #8 (1 point)
How do testers add value to iteration and release planning?
a) Testers determine the priority of the user stories to be developed.
b) Testers focus only on the functional aspects of the system to be tested.
c) Testers participate in the detailed risk identification and risk assessment of user stories.
d) Testers guarantee the release of high-quality software through early test design during the release planning.
Select ONE option.

Question #9 (1 point)
You work in a team that develops a mobile application for food ordering. In the current iteration, the team decided to implement the payment functionality. Which of the following activities is a part of test analysis?
a) Estimating that testing the integration with the payment service will take 8 person-days.
b) Deciding that the team should test if it is possible to properly share payment between many users.
c) Using boundary value analysis (BVA) to derive the test data for the test cases that check the correct payment processing for the minimum allowed amount to be paid.
d) Analyzing the discrepancy between the actual result and expected result after executing a test case that checks the process of payment with a credit card and reporting a defect.
Select ONE option.

Question #10 (1 point)
Which tool can be used by an agile team to show the amount of work that has been completed and the amount of total work remaining for a given iteration?
a) Acceptance criteria.
b) Defect report.
c) Test completion report.
d) Burndown chart.
Select ONE option.


Question #11 (1 point)
Which of the following statements BEST describes the acceptance test-driven development (ATDD) approach?
a) In ATDD, acceptance criteria are typically created based on the given/when/then format.
b) In ATDD, test cases are mainly created at component testing and are code-oriented.
c) In ATDD, tests are created, based on acceptance criteria, to drive the development of the related software.
d) In ATDD, tests are based on the desired behavior of the software, which makes it easier for team members to understand them.
Select ONE option.

Question #12 (1 point)
Which item correctly identifies a potential risk of performing test automation?
a) It may introduce unknown regressions in production.
b) Sufficient efforts to maintain testware may not be properly allocated.
c) Testing tools and associated testware may not be sufficiently relied upon.
d) It may reduce the time allocated for manual testing.
Select ONE option.

Question #13 (1 point)
Which TWO of the following options are the exit criteria for testing a system?
a) Test environment readiness.
b) The ability to log in to the test object by the tester.
c) Estimated defect density is reached.
d) Requirements are translated into given/when/then format.
e) Regression tests are automated.
Select TWO options.

Question #14 (1 point)
Which of the following is NOT a benefit of static testing?
a) Having less expensive defect management due to the ease of detecting defects later in the SDLC.
b) Fixing defects found during static testing is generally much less expensive than fixing defects found during dynamic testing.
c) Finding coding defects that might not have been found by only performing dynamic testing.
d) Detecting gaps and inconsistencies in requirements.
Select ONE option.


Question #15 (1 point)
Which of the following skills (i–v) are the MOST important skills of a tester?
i. Having domain knowledge.
ii. Creating a product vision.
iii. Being a good team player.
iv. Planning and organizing the work of the team.
v. Critical thinking.

a) ii and iv are important; i, iii, and v are not.
b) i, iii, and v are important; ii and iv are not.
c) i, ii, and v are important; iii and iv are not.
d) iii and iv are important; i, ii, and v are not.
Select ONE option.

Question #16 (1 point)
Which of the arguments below would you use to convince your manager to organize retrospectives at the end of each release cycle?
a) Retrospectives are very popular these days, and clients would appreciate it if we added them to our processes.
b) Organizing retrospectives will save the organization money because end user representatives do not provide immediate feedback about the product.
c) Process weaknesses identified during the retrospective can be analyzed and serve as a to-do list for the organization's continuous process improvement program.
d) Retrospectives embrace five values including courage and respect, which are crucial to maintain continuous improvement in the organization.
Select ONE option.

Question #17 (1 point)
In your project, there has been a delay in the release of a brand-new application and test execution started late, but you have very detailed domain knowledge and good analytical skills. The full list of requirements has not yet been shared with the team, but management is asking for some test results to be presented. Which test technique fits BEST in this situation?
a) Checklist-based testing.
b) Error guessing.
c) Exploratory testing.
d) Branch testing.
Select ONE option.


Question #18 (1 point)
Your team uses the three-point estimation technique to estimate the test effort for a new high-risk feature. The following estimates were made:
• Most optimistic estimation: 2 person-hours.
• Most likely estimation: 11 person-hours.
• Most pessimistic estimation: 14 person-hours.
What is the final estimate?
a) 9 person-hours.
b) 14 person-hours.
c) 11 person-hours.
d) 10 person-hours.
Select ONE option.

Question #19 (1 point)
Which of the following is NOT true for white-box testing?
a) During white-box testing, the entire software implementation is considered.
b) White-box coverage metrics can help identify additional tests to increase code coverage.
c) White-box test techniques can be used in static testing.
d) White-box testing can help identify gaps in requirements implementation.
Select ONE option.

Question #20 (1 point)
The reviews being used in your organization have the following attributes:
• There is the role of a scribe.
• The main purpose is to evaluate quality.
• The meeting is led by the author of the work product.
• There is individual preparation.
• A review report is produced.
Which of the following review types is MOST likely being used?
a) Informal review.
b) Walkthrough.
c) Technical review.
d) Inspection.
Select ONE option.


Question #21 (1 point)
You are testing a simplified apartment search form which has only two search criteria:
• Floor (with three possible options: ground floor; first floor; second or higher floor).
• Garden type (with three possible options: no garden; small garden; large garden).
Only apartments on the ground floor may have gardens. The form has a built-in validation mechanism that will not allow you to use the search criteria which violate this rule. Each test has two input values: floor and garden type. You want to apply equivalence partitioning (EP) to cover each floor and each garden type in your tests.
What is the MINIMAL number of test cases to achieve 100% EP coverage?
a) 3
b) 4
c) 5
d) 6
Select ONE option.

Question #22 (1 point)
Which of the following is NOT an example of the shift-left approach?
a) Reviewing the user requirements before they are formally accepted by the stakeholders.
b) Writing a component test before the corresponding code is written.
c) Executing a performance efficiency test for a component during component testing.
d) Writing a test script before setting up the configuration management process.
Select ONE option.

Question #23 (1 point)
Which test activity does a data preparation tool support?
a) Test monitoring and control.
b) Test analysis and design.
c) Test implementation and execution.
d) Test completion.
Select ONE option.


Question #24 (1 point)
Your test suite achieved 100% statement coverage. What is the consequence of this fact?
a) Each instruction in the code that contains a defect has been executed at least once.
b) Any test suite containing more test cases than your test suite will also achieve 100% statement coverage.
c) Each path in the code has been executed at least once.
d) Every combination of input values has been tested at least once.
Select ONE option.

Question #25 (1 point)
Consider the following rule: "for every SDLC activity there is a corresponding test activity." In which SDLC models does this rule hold?
a) Only in sequential SDLC models.
b) Only in iterative SDLC models.
c) Only in iterative and incremental SDLC models.
d) In sequential, incremental, and iterative SDLC models.
Select ONE option.

Question #26 (1 point)
Consider the following test categories (1–4) and agile testing quadrants (A–D):
1. Usability testing.
2. Component testing.
3. Functional testing.
4. Reliability testing.
A. Agile testing quadrant Q1: technology facing, supporting the development team.
B. Agile testing quadrant Q2: business facing, supporting the development team.
C. Agile testing quadrant Q3: business facing, critique the product.
D. Agile testing quadrant Q4: technology facing, critique the product.
How do the test categories (1–4) map onto the agile testing quadrants (A–D)?
a) 1C, 2A, 3B, 4D.
b) 1D, 2A, 3C, 4B.
c) 1C, 2B, 3D, 4A.
d) 1D, 2B, 3C, 4A.
Select ONE option.


Question #27 (1 point)
Which types of failures (1–4) fit which test levels (A–D) BEST?
1. Failures in system behavior as it deviates from the user's business needs.
2. Failures in communication between components.
3. Failures in logic in a module.
4. Failures in not correctly implemented business rules.
A. Component testing.
B. Component integration testing.
C. System testing.
D. Acceptance testing.
a) 1D, 2B, 3A, 4C.
b) 1D, 2B, 3C, 4A.
c) 1B, 2A, 3D, 4C.
d) 1C, 2B, 3A, 4D.
Select ONE option.

Question #28 (1 point)
Which of the following BEST describes the way acceptance criteria can be documented?
a) Performing retrospectives to determine the actual needs of the stakeholders regarding a given user story.
b) Using the given/when/then format to describe an example test condition related to a given user story.
c) Using verbal communication to reduce the risk of misunderstanding the acceptance criteria by others.
d) Documenting risks related to a given user story in a test plan to facilitate the risk-based testing of a given user story.
Select ONE option.

Question #29 (1 point)
Which of the following BEST describes the concept behind error guessing?
a) Error guessing involves using your knowledge and experience of defects found in the past and typical errors made by developers.
b) Error guessing involves using your personal experience of development and the errors you made as a developer.
c) Error guessing requires you to imagine that you are the user of the test object and to guess errors the user could make interacting with it.
d) Error guessing requires you to rapidly duplicate the development task to identify the sort of errors a developer might make.
Select ONE option.


Question #30 (1 point)
Which TWO of the following tasks belong MAINLY to a testing role?
a) Configure test environments.
b) Maintain the product backlog.
c) Design solutions to new requirements.
d) Create the test plan.
e) Report on achieved coverage.
Select TWO options.

Question #31 (1 point)
You are testing a mobile application that allows users to find a nearby restaurant based on the type of food they want to eat. Consider the following list of test cases, priorities (i.e., a smaller number means a higher priority), and dependencies:

Test case number   Test condition covered   Priority   Logical dependency
TC 001             Select type of food      3          None
TC 002             Select restaurant        2          TC 001
TC 003             Get direction            1          TC 002
TC 004             Call restaurant          2          TC 002
TC 005             Make reservation         3          TC 002

Which of the following test cases should be executed as the third one?
a) TC 003.
b) TC 005.
c) TC 002.
d) TC 001.
Select ONE option.

Question #32 (1 point)
You have been assigned as a tester to a team producing a new system incrementally. You have noticed that no changes have been made to the existing regression test cases for several iterations and no new regression defects were identified. Your manager is happy, but you are not. Which testing principle explains your skepticism?
a) Tests wear out.
b) Absence-of-errors fallacy.
c) Defects cluster together.
d) Exhaustive testing is impossible.
Select ONE option.


Question #33 (1 point)
During a risk analysis, the following risk was identified and assessed:
• Risk: Response time is too long to generate a report.
• Risk likelihood: medium; risk impact: high.
• Response to risk:
  – An independent test team performs performance testing during system testing.
  – A selected sample of end users performs alpha and beta acceptance testing before the release.
What measure is proposed to be taken in response to this analyzed risk?
a) Risk acceptance.
b) Contingency plan.
c) Risk mitigation.
d) Risk transfer.
Select ONE option.

Question #34 (1 point)
Consider the following user story: "As an Editor, I want to review content before it is published so that I can assure the grammar is correct" and its acceptance criteria:
• The user can log in to the content management system with the "Editor" role.
• The editor can view existing content pages.
• The editor can edit the page content.
• The editor can add markup comments.
• The editor can save changes.
• The editor can reassign to the "content owner" role to make updates.
Which of the following is the BEST example of an ATDD test for this user story?
a) Test if the editor can save the document after deleting the page content.
b) Test if the content owner can log in and make updates to the content.
c) Test if the editor can schedule the edited content for publication.
d) Test if the editor can reassign to another editor to make updates.
Select ONE option.


Question #35 (1 point)
You are testing a user story with three acceptance criteria: AC1, AC2, and AC3. AC1 is covered by test case TC1, AC2 by TC2, and AC3 by TC3. The test execution history had three test runs on three consecutive versions of the software as follows:

       Execution 1   Execution 2   Execution 3
TC1    (1) Failed    (4) Passed    (7) Passed
TC2    (2) Passed    (5) Failed    (8) Passed
TC3    (3) Failed    (6) Failed    (9) Passed

Tests are repeated once you are informed that all defects found in the test run are corrected and a new version of the software is available. Which of the above tests are executed as regression tests?
a) Only 4, 7, 8, 9
b) Only 5, 7
c) Only 4, 6, 8, 9
d) Only 5, 6
Select ONE option.

Question #36 (1 point)
How is the whole team approach present in the interactions between testers and business representatives?
a) Business representatives decide on test automation approaches.
b) Testers help business representatives to define the test strategy.
c) Business representatives are not part of the whole team approach.
d) Testers help business representatives to create suitable acceptance tests.
Select ONE option.

Question #37 (1 point) Which of these statements is NOT a factor that contributes to successful reviews? a) Participants should dedicate adequate time for the review. b) Splitting large work products into small parts to make the required effort less intense. c) Participants should avoid behaviors that might indicate boredom, exasperation, or hostility to other participants. d) Failures found should be acknowledged, appreciated, and handled objectively. Select ONE option.


Question #38 (1 point)
Which of the following options shows an example of test activities that contribute to success?
a) Having testers involved during various software development life cycle (SDLC) activities will help detect defects in work products.
b) Testers try not to disturb the developers while coding, so that the developers write better code.
c) Testers collaborating with end users help improve the quality of defect reports during component integration and system testing.
d) Certified testers will design much better test cases than non-certified testers.
Select ONE option.

Question #39 (1 point)
Which of the following statements describes a valid test objective?
a) To prove that there are no unfixed defects in the system under test.
b) To prove that there will be no failures after the implementation of the system into production.
c) To reduce the risk level of the test object and to build confidence in the quality level.
d) To verify that there are no untested combinations of inputs.
Select ONE option.

Question #40 (1 point)
Which of the following factors (i–v) have a SIGNIFICANT influence on the test process?
i. The SDLC.
ii. The number of defects detected in previous projects.
iii. The identified product risks.
iv. New regulatory requirements.
v. The number of certified testers in the organization.
a) i and ii have significant influence; iii, iv, and v have not.
b) i, iii, and iv have significant influence; ii and v have not.
c) ii, iv, and v have significant influence; i and iii have not.
d) iii and v have significant influence; i, ii, and iv have not.
Select ONE option.

Additional Sample Questions

ISTQB®'s general rule for publishing sample questions requires that at least one sample question be published for each learning objective. Thus, if there are more learning objectives than exam questions, questions covering learning objectives not included in the sample exam set are published separately as supplementary questions. This section contains the official supplementary questions.

Question #A1 (1 point)
You were given a task to analyze and fix causes of failures in a new system to be released. Which activity are you performing?
a) Debugging.
b) Software testing.
c) Requirement elicitation.
d) Defect management.
Select ONE option.

Question #A2 (1 point) In many software organizations, the test department is called the Quality Assurance (QA) department. Is this sentence correct or not, and why? a) It is correct. Testing and QA mean exactly the same thing. b) It is correct. These names can be used interchangeably because both testing and QA focus their activities on the same quality issues. c) It is not correct. Testing is something more; testing includes all activities with regard to quality. QA focuses on quality-related processes. d) It is not correct. QA is focused on quality-related processes, while testing concentrates on demonstrating that a component or system is fit for purpose and to detect defects. Select ONE option.


Question #A3 (1 point)
A phone ringing in a neighboring cubicle distracts a programmer, causing him to improperly program the logic that checks the upper boundary of an input variable. Later, during system testing, a tester notices that this input field accepts invalid input values. Which of the following correctly describes an incorrectly coded upper bound?
a) The root cause.
b) A failure.
c) An error.
d) A defect.
Select ONE option.

Question #A4 (1 point)
Consider the following testware:

Test Charter #04.018
Session time: 1 h
Explore: Registration page
With: Different sets of incorrect input data
To discover: Defects related to accepting the registration process with the incorrect input

Which test activity produces this testware as an output?
a) Test planning.
b) Test monitoring and control.
c) Test analysis.
d) Test design.
Select ONE option.

Question #A5 (1 point) Which of the following is the BEST example of how traceability supports testing? a) Performing the impact analysis of a change will give information about the completion of the tests. b) Analyzing the traceability between test cases and test results will give information about the estimated level of residual risk. c) Performing the impact analysis of a change will help in selecting the right test cases for regression testing. d) Analyzing the traceability between the test basis, the test objects, and the test cases will help in selecting test data to achieve the assumed coverage of the test object. Select ONE option.


Question #A6 (1 point)
Which of the following BEST explains a benefit of independence of testing?
a) The use of an independent test team allows project management to assign responsibility for the quality of the final deliverable to the test team.
b) If a test team external to the organization can be afforded, then there are distinct benefits in terms of this external team not being so easily swayed by the delivery concerns of project management and the need to meet strict delivery deadlines.
c) An independent test team can work separately from the developers, need not be distracted with project requirement changes, and can restrict communication with the developers to defect reporting through the defect management system.
d) When specifications contain ambiguities and inconsistencies, assumptions are made on their interpretation, and an independent tester can be useful in questioning those assumptions and the interpretation made by the developer.
Select ONE option.

Question #A7 (1 point)
You are working as a tester in the team that follows the V-model. How does the choice of this software development lifecycle (SDLC) model impact the timing of testing?
a) Dynamic testing cannot be performed early in the SDLC.
b) Static testing cannot be performed early in the SDLC.
c) Test planning cannot be performed early in the SDLC.
d) Acceptance testing can be performed early in the SDLC.
Select ONE option.

Question #A8 (1 point)
Which of the following are advantages of DevOps?
i. Faster product release and faster time to market.
ii. Increases the need for repetitive manual testing.
iii. Constant availability of executable software.
iv. Reduction in the number of regression tests associated with code refactoring.
v. Setting up the test automation framework is inexpensive since everything is automated.
a) i, ii, and iv are advantages; iii and v are not.
b) iii and v are advantages; i, ii, and iv are not.
c) i and iii are advantages; ii, iv, and v are not.
d) ii, iv, and v are advantages; i and iii are not.
Select ONE option.


Question #A9 (1 point)
You work as a tester in a project on a mobile application for food ordering for one of your clients. The client sent you a list of requirements. One of them, with high priority, says: "The order must be processed in less than 10 seconds in 95% of the cases."
You created a set of test cases in which a number of random orders were made, the processing time was measured, and the test results were checked against the requirements. What test type did you perform?
a) Functional, because the test cases cover the user's business requirement for the system.
b) Nonfunctional, because they measure the system's performance.
c) Functional, because the test cases interact with the user interface.
d) Structural, because we need to know the internal structure of the program to measure the order processing time.
Select ONE option.

Question #A10 (1 point)
Your organization's test strategy suggests that once a system is going to be retired, data migration shall be tested. As part of what test type is this testing MOST likely to be performed?
a) Maintenance testing.
b) Regression testing.
c) Component testing.
d) Integration testing.
Select ONE option.

Question #A11 (1 point)
The following is a list of the work products produced in the SDLC:
i. Business requirements.
ii. Schedule.
iii. Test budget.
iv. Third-party executable code.
v. User stories and their acceptance criteria.
Which of them can be reviewed?
a) i and iv can be reviewed; ii, iii, and v cannot.
b) i, ii, iii, and iv can be reviewed; v cannot.
c) i, ii, iii, and v can be reviewed; iv cannot.
d) iii, iv, and v can be reviewed; i and ii cannot.
Select ONE option.


Question #A12 (1 point)
Decide which of the following statements (i–v) are true for dynamic testing and which are true for static testing.
i. Abnormal external behaviors are easier to identify with this testing.
ii. Discrepancies from a coding standard are easier to find with this testing.
iii. It identifies failures caused by defects when the software is run.
iv. Its test objective is to identify defects as early as possible.
v. Missing coverage for critical security requirements is easier to find and fix.
a) i, iv, and v are true for static testing; ii and iii are true for dynamic testing.
b) i, iii, and iv are true for static testing; ii and v are true for dynamic testing.
c) ii and iii are true for static testing; i, iv, and v are true for dynamic testing.
d) ii, iv, and v are true for static testing; i, iii, and iv are true for dynamic testing.
Select ONE option.

Question #A13 (1 point)
Which of the following statements about formal reviews is TRUE?
a) Some reviews do not require more than one role.
b) The review process has several activities.
c) Documentation to be reviewed is not distributed before the review meeting, with the exception of the work product for specific review types.
d) Defects found during the review are not reported since they are not found by dynamic testing.
Select ONE option.

Question #A14 (1 point)
What task may management take on during a formal review?
a) Taking overall responsibility for the review.
b) Deciding what is to be reviewed.
c) Ensuring the effective running of review meetings and mediating, if necessary.
d) Recording review information such as review decisions.
Select ONE option.

Question #A15 (1 point) A wine storage system uses a control device that measures the wine cell temperature T (measured in °C, rounded to the nearest degree) and alarms the user if it deviates from the optimal value of 12, according to the following rules: • If T = 12, the system says, “Optimal temperature.” • If T < 12, the system says, “Temperature is too low!” • If T > 12, the system says, “Temperature is too high!” You want to use the 3-value boundary value analysis (BVA) to verify the behavior of the control device. A test input is a temperature in °C provided by the device.


What is the MINIMAL set of test inputs that achieves 100% of the desired coverage?
a) 11, 12, 13
b) 10, 12, 14
c) 10, 11, 12, 13, 14
d) 10, 11, 13, 14
Select ONE option.

Question #A16 (1 point)
Which of the following statements about branch testing is CORRECT?
a) If a program includes only unconditional branches, then 100% branch coverage can be achieved without executing any test cases.
b) If the test cases exercise all unconditional branches in the code, then 100% branch coverage is achieved.
c) If 100% statement coverage is achieved, then 100% branch coverage is also achieved.
d) If 100% branch coverage is achieved, then all decision outcomes in each decision statement in the code are exercised.
Select ONE option.

Question #A17 (1 point)
You are testing a mobile application that allows customers to access and manage their bank accounts. You are running a test suite that involves evaluating each screen, and each field on each screen, against a general list of user interface best practices derived from a popular book on the topic that maximizes attractiveness, ease of use, and accessibility for such applications. Which of the following options BEST categorizes the test technique you are using?
a) Black-box.
b) Exploratory.
c) Checklist-based.
d) Error guessing.
Select ONE option.

Question #A18 (1 point) Which of the following BEST describe the collaborative approach to user story writing? a) User stories are created by testers and developers and then accepted by business representatives. b) User stories are created by business representatives, developers, and testers together. c) User stories are created by business representatives and verified by developers and testers.


d) User stories are created in a way that they are independent, negotiable, valuable, estimable, small, and testable.
Select ONE option.

Question #A19 (1 point)
Consider the following part of a test plan: "Testing will be performed using component testing and component integration testing. The regulations require demonstrating that 100% branch coverage is achieved for each component classified as critical."
Which part of the test plan does this part belong to?
a) Communication.
b) Risk register.
c) Context of testing.
d) Test approach.
Select ONE option.

Question #A20 (1 point)
Your team uses planning poker to estimate the test effort for a newly required feature. There is a rule in your team that if there is no time to reach full agreement and the variation in the results is small, rules like "accept the number with the most votes" can be applied. After two rounds, the consensus was not reached, so the third round was initiated. You can see the test estimation results in the table below.

          Team members' estimations
Round 1   21  13  13  34  13   8   2
Round 2    2   8   8  34  13   8   5
Round 3    5   8  13  13  13  13   8

Which of the following is the BEST example of the next step?
a) The product owner has to step in and make a final decision.
b) Accept 13 as the final test estimate as this has most of the votes.
c) No further action is needed. Consensus has been reached.
d) Remove the new feature from the current release because consensus has not been reached.
Select ONE option.

Question #A21 (1 point) Which of the following is NOT true regarding the test pyramid? a) The test pyramid emphasizes having a larger number of tests at the lower test levels. b) The closer to the top of the pyramid, the more formal your test automation should be.


c) Usually, component testing and component integration testing are automated using API-based tools.
d) For system testing and acceptance testing, the automated tests are typically created using GUI-based tools.
Select ONE option.

Question #A22 (1 point)
During risk analysis, the team considered the following risk: "The system allows too high a discount for a customer." The team estimated the risk impact to be very high. What can one say about the risk likelihood?
a) It is also very high. High risk impact always implies high risk likelihood.
b) It is very low. High risk impact always implies low risk likelihood.
c) One cannot say anything about risk likelihood. Risk impact and risk likelihood are independent.
d) Risk likelihood is not important with such a high risk impact. One does not need to define it.
Select ONE option.

Question #A23 (1 point)
The following list contains risks that have been identified for a new software product to be developed:
i. Management moves two experienced testers to another project.
ii. The system does not comply with functional safety standards.
iii. System response time exceeds user requirements.
iv. Stakeholders have inaccurate expectations.
v. Disabled people have problems when using the system.
Which of them are project risks?
a) i and iv are project risks; ii, iii, and v are not project risks.
b) iv and v are project risks; i, ii, and iii are not project risks.
c) i and iii are project risks; ii, iv, and v are not project risks.
d) ii and v are project risks; i, iii, and iv are not project risks.
Select ONE option.

Question #A24 (1 point) Which of the following is an example of how product risk analysis influences thoroughness and scope of testing? a) The test manager monitors and reports the level of all known risks on a daily basis so the stakeholders can make an informed decision on the release date. b) One of the identified risks was “Lack of support of open-source databases,” so the team decided to integrate the system with an open-source database. c) During the quantitative risk analysis, the team estimated the total level of all identified risks and reported it as the total residual risk before testing.


d) Risk assessment revealed a very high level of performance risks, so it was decided to perform detailed performance efficiency testing early in the SDLC.
Select ONE option.

Question #A25 (1 point)
Which TWO of the following options are common metrics used for reporting on the quality level of the test object?
a) Number of defects found during system testing.
b) Total effort on test design divided by the number of designed test cases.
c) Number of executed test procedures.
d) Number of defects found divided by the size of a work product.
e) Time needed to repair a defect.
Select TWO options.

Question #A26 (1 point)
Which of the following pieces of information contained in a test progress report is the LEAST useful for business representatives?
a) Impediments to testing.
b) Branch coverage achieved.
c) Test progress.
d) New risks within the test cycle.
Select ONE option.

Exam Set A: Answers

Question #1 FL-4.2.2 (K3)
Correct answer: a
There are 12 boundary values for the final result values: 0, 50, 51, 60, 61, 70, 71, 80, 81, 90, 91, and 100. The test cases cover six of them (TC1: 91; TC2: 50; TC3: 81; TC4: 60; TC5: 70; and TC7: 51). Therefore, the test cases cover 6/12 = 50%.
a) Is correct.
b) Is not correct.
c) Is not correct.
d) Is not correct.
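The coverage arithmetic above is easy to check mechanically. The following minimal Python sketch (ours, not part of the official answer) recomputes the boundary value coverage from the values listed in the justification:

```python
# Boundary values of the result partitions, as listed in the justification above.
boundary_values = {0, 50, 51, 60, 61, 70, 71, 80, 81, 90, 91, 100}

# Values exercised by the test cases (TC1: 91, TC2: 50, TC3: 81,
# TC4: 60, TC5: 70, TC7: 51), per the justification above.
tested_values = {91, 50, 81, 60, 70, 51}

covered = boundary_values & tested_values
print(f"{len(covered)}/{len(boundary_values)} boundary values covered "
      f"({len(covered) / len(boundary_values):.0%})")
# -> 6/12 boundary values covered (50%)
```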

Question #2 FL-3.2.1 (K1) Correct answer: d a) Is not correct. Feedback can improve the test process, but if one only wants to improve future projects, the feedback does not need to come early or frequently. b) Is not correct. Feedback is not used to prioritize requirements. c) Is not correct. The quality of changes can be measured in multiple ways. d) Is correct. Early and frequent feedback allows for the early communication of potential quality problems. Question #3 FL-4.2.3 (K3) Correct answer: d a) Is not correct. A member without a missed deadline can get a discount and a gift T-Shirt after 15 bicycle rentals. b) Is not correct. A member without a missed deadline can get a discount but no gift T-Shirt until they rented a bicycle 15 times.


c) Is not correct. Non-members cannot get a discount, even if they did not miss a deadline yet. d) Is correct. No discount as a non-member that has also missed a deadline, but only members can receive a gift T-Shirt. Hence, the action is not correct. Question #4 FL-5.4.1 (K2) Correct answer: c a) Is not correct. Traceability is the relationship between two or more work products, not between different versions of the same work product. b) Is not correct. Maintenance testing is about testing changes; it is not related closely to versioning. c) Is correct. To support testing, configuration management may involve the version control of all test items. d) Is not correct. Requirements engineering is the elicitation, documentation, and management of requirements; it is not closely related to test script versioning. Question #5 FL-4.1.1 (K2) Correct answer: c a) Is not correct. This is a common characteristic of white-box test techniques. Test conditions, test cases, and test data are derived from a test basis that may include code, software architecture, detailed design, or any other source of information regarding the structure of the software. b) Is not correct. This is a common characteristic of white-box test techniques. Coverage is measured based on the items tested within a selected structure and the technique applied to the test basis. c) Is correct. This is a common characteristic of experience-based test techniques. This knowledge and experience include expected use of the software, its environment, likely defects, and the distribution of those defects used to define tests. d) Is not correct. This is a common characteristic of black-box test techniques. Test cases may be used to detect gaps within requirements and the implementation of the requirements, as well as deviations from the requirements. Question #6 FL-4.2.4 (K3) Correct answer: d “test” and “error” transitions cannot occur in one test case. Neither can both “done” transitions. This means we need at least three test cases to achieve transition coverage. For example: TC1: test, done TC2: run, error, done TC3: run, pause, resume, pause, done


Hence:
a) Is not correct.
b) Is not correct.
c) Is not correct.
d) Is correct.
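Transition coverage can be checked by taking the union of the transitions each test case exercises. The sketch below is illustrative only: the transition identifiers, including the two distinct "done" transitions (written here as "done#1" and "done#2"), are assumptions reconstructed from the justification above, not taken from the original state diagram.

```python
# Assumed transition identifiers (reconstructed, not from the original diagram).
all_transitions = {"test", "run", "error", "pause", "resume", "done#1", "done#2"}

# The three example test cases from the justification, as transition sequences.
test_cases = {
    "TC1": ["test", "done#1"],
    "TC2": ["run", "error", "done#2"],
    "TC3": ["run", "pause", "resume", "pause", "done#1"],
}

covered = set()
for path in test_cases.values():
    covered.update(path)

print(f"Transition coverage: {len(covered)}/{len(all_transitions)}")
# -> Transition coverage: 7/7, so three test cases suffice.
```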

Question #7 FL-5.5.1 (K3) Correct answer: c a) Is not correct. The expected result is “the application should accept the provided input and create the user.” The actual result is “The application hangs up after entering ‘Test input. $ä.’” b) Is not correct. There is a reference to the test case and to the related requirement, and it states that the defect is rejected. Also, the defect status would not be very helpful for the developers. c) Is correct. We do not know in which test environment the anomaly was detected, and we also do not know which application (and its version) is affected. d) Is not correct. The defect report states that the anomaly is urgent and that it is a global issue (i.e., many, if not all, test administration accounts are affected) and states the impact is high for business stakeholders. Question #8 FL-5.1.2 (K1) Correct answer: c a) Is not correct. Priorities for user stories are determined by the business representative together with the development team. b) Is not correct. Testers focus on both functional and nonfunctional aspects of the system to be tested. c) Is correct. According to the syllabus, this is one of the ways testers add value to iteration and release planning. d) Is not correct. Early test design is not part of release planning. Early test design does not automatically guarantee the release of quality software. Question #9 FL-1.4.1 (K2) Correct answer: b a) Is not correct. Estimating the test effort is part of test planning. b) Is correct. This is an example of defining test conditions, which is a part of test analysis. c) Is not correct. Using test techniques to derive coverage items is a part of test design. d) Is not correct. Reporting defects found during dynamic testing is a part of test execution.


Question #10 FL-5.3.3 (K2) Correct answer: d a) Is not correct. Acceptance criteria are the conditions used to decide whether the user story is ready. They cannot show work progress. b) Is not correct. Defect reports inform about the defects. They do not show work progress. c) Is not correct. Test completion report can be created after the iteration is finished, so it will not show the progress continuously within an iteration. d) Is correct. Burndown charts are a graphical representation of work left to do versus time remaining. They are updated daily, so they can continuously show the work progress. Question #11 FL-2.1.3 (K1) Correct answer: c a) Is not correct. It is more often used in behavior-driven development (BDD). b) Is not correct. It is the description of test-driven development (TDD). c) Is correct. In acceptance test-driven development (ATDD), tests are written from acceptance criteria as part of the design process. d) Is not correct. It is used in BDD. Question #12 FL-6.2.1 (K1) Correct answer: b a) Is not correct. Test automation does not introduce unknown regressions in production. b) Is correct. Wrong allocation of effort to maintain testware is a risk. c) Is not correct. Test tools must be selected so that they and their testware can be relied upon. d) Is not correct. The primary goal of test automation is to reduce manual testing. So, this is a benefit, not a risk. Question #13 FL-5.1.3 (K2) Correct answer: c, e a) Is not correct. Test environment readiness is a resource availability criterion; hence, it belongs to the entry criteria. b) Is not correct. This is a resource availability criterion; hence, it belongs to the entry criteria. c) Is correct. Estimated defect density is a measure of diligence; hence, it belongs to the exit criteria. d) Is not correct. Requirements translated into a given format result in testable requirements; hence, it belongs to the entry criteria.


e) Is correct. Automation of regression tests is a completion criterion; hence, it belongs to the exit criteria. Question #14 FL-3.1.2 (K2) Correct answer: a a) Is correct. Defect management is not less expensive. Finding and fixing defects later in SDLC is more costly. b) Is not correct. This is a benefit of static testing. c) Is not correct. This is a benefit of static testing. d) Is not correct. This is a benefit of static testing. Question #15 FL-1.5.1 (K2) Correct answer: b i. Is true. Having domain knowledge is an important tester skill. ii. Is false. This is a task of the business analyst together with the business representative. iii. Is true. Being a good team player is an important skill. iv. Is false. Planning and organizing the work of the team is a task of the test manager or, mostly in an Agile software development project, the whole team and not just the tester. v. Is true. Critical thinking is one of the most important skills of testers. Hence b is correct. Question #16 FL-2.1.6 (K2) Correct answer: c a) Is not correct. Retrospectives are more useful for identifying improvement opportunities and have little importance for clients. b) Is not correct. Business representatives are not giving feedback about the product itself. Therefore, there is no financial gain to the organization. c) Is correct. Regularly conducted retrospectives, when appropriate follow-up activities occur, are critical to continual improvement of development and testing. d) Is not correct. Courage and respect are values of Extreme Programming and are not closely related to retrospectives.


Question #17 FL-4.4.2 (K2)
Correct answer: c
a) Is not correct. This is a new product. You probably do not have a checklist yet, and test conditions might not be known due to missing requirements.
b) Is not correct. This is a new product. You probably do not have enough information to make correct error guesses.
c) Is correct. Exploratory testing is most useful when there are few known specifications, and/or there is a pressing timeline for testing.
d) Is not correct. Branch testing is time-consuming, and your management is asking about some test results now. Also, branch testing does not involve domain knowledge.

Question #18 FL-5.1.4 (K3)
Correct answer: d
In the three-point estimation technique, E = (optimistic + 4 × most likely + pessimistic) / 6 = (2 + (4 × 11) + 14) / 6 = 60 / 6 = 10. Hence d is correct.

Question #19 FL-4.3.3 (K2)
Correct answer: d
a) Is not correct. The fundamental strength of white-box test techniques is that the entire software implementation is taken into account during testing.
b) Is not correct. White-box coverage measures provide an objective measure of coverage and provide the necessary information to allow additional tests to be generated to increase this coverage.
c) Is not correct. White-box test techniques can be used to perform reviews (static testing).
d) Is correct. This is the weakness of the white-box test techniques. They are not able to identify the missing implementation, because they are based solely on the test object structure, not on the requirements specification.

Question #20 FL-3.2.4 (K2)
Correct answer: b
Considering the attributes:
• There is a role of a scribe: specified for walk-throughs, technical reviews, and inspections; thus, the reviews being performed cannot be informal reviews.
• The purpose is to evaluate quality: the purpose of evaluating quality is one of the most important objectives of a walk-through.


• The review meeting is led by the author of the work product: this is not allowed for inspections and is typically not done in technical reviews. A moderator is needed in walk-throughs and is allowed for informal reviews.
• Individual reviewers find potential anomalies during preparation: all types of reviews can include individual reviewers (even informal reviews).
• A review report is produced: all types of reviews can produce a review report, although informal reviews do not require documentation.
Hence b is correct.

Question #21 FL-4.2.1 (K3)
Correct answer: b
"Small garden" and "large garden" can go only with "ground floor," so we need two test cases with "ground floor," which cover these two "garden-type" partitions. We need two more test cases to cover the two other "floor" partitions and the remaining "garden-type" partition of "no garden." We need a total of four test cases:
TC1 (ground floor, small garden).
TC2 (ground floor, large garden).
TC3 (first floor, no garden).
TC4 (second or higher floor, no garden).
a) Is not correct.
b) Is correct.
c) Is not correct.
d) Is not correct.
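The minimal number of test cases can also be found mechanically. The sketch below (ours, for illustration) enumerates the combinations allowed by the form's validation rule and searches exhaustively for the smallest set that covers every floor partition and every garden-type partition. Note that a naive greedy pick is not enough here: starting with (ground floor, no garden) wastes the only floor that may carry a garden and leads to five test cases instead of four.

```python
from itertools import combinations, product

floors = ["ground floor", "first floor", "second or higher floor"]
gardens = ["no garden", "small garden", "large garden"]

# Validation rule: only ground-floor apartments may have a garden.
valid = [(f, g) for f, g in product(floors, gardens)
         if f == "ground floor" or g == "no garden"]

partitions = set(floors) | set(gardens)

def minimal_ep_suite():
    # Exhaustive search for the smallest set of valid combinations that
    # covers every floor partition and every garden-type partition.
    for size in range(1, len(valid) + 1):
        for subset in combinations(valid, size):
            if partitions <= set().union(*map(set, subset)):
                return subset
    return None

suite = minimal_ep_suite()
print(len(suite), suite)  # -> 4 test cases, matching answer b
```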

Question #22 FL-2.1.5 (K2) Correct answer: d a) Is not correct. Early review is an example of the shift-left approach. b) Is not correct. TDD is an example of the shift-left approach. c) Is not correct. Early nonfunctional testing is an example of the shift-left approach. d) Is correct. Test scripts should be subject to configuration management, so it makes no sense to create the test scripts before this process is set up. Question #23 FL-6.1.1 (K2) Correct answer: c a) Is not correct. Test monitoring involves the ongoing checking of all activities and comparison of actual progress against the test plan. Test control involves taking the actions necessary to meet the test objectives of the test plan. No test data are prepared during these activities. b) Is not correct. Test analysis includes analyzing the test basis to identify test conditions and prioritize them. Test design includes elaborating the test


conditions into test cases and other testware. Test data are not prepared during these activities.
c) Is correct. Test implementation includes creating or acquiring the testware necessary for test execution (e.g., test data).
d) Is not correct. Test completion activities occur at project milestones (e.g., release, end of iteration, test level completion), so it is too late for preparing test data.

Question #24 FL-4.3.1 (K2)
Correct answer: a
a) Is correct. Since 100% statement coverage is achieved, every statement, including the ones with defects, must have been executed and evaluated at least once.
b) Is not correct. Coverage depends on what is tested, not on the number of test cases. For example, for the code "if (x==0) y=1", one test case (x=0) achieves 100% statement coverage, but two test cases (x=1) and (x=2) together achieve only 50% statement coverage.
c) Is not correct. If there is a loop in the code, there may be an infinite number of possible paths, so it is not possible to execute all the possible paths in the code.
d) Is not correct. Exhaustive testing is not possible (see the seven testing principles section in the syllabus). For example, for the code "input x; print x", any single test with an arbitrary x achieves 100% statement coverage but covers only one input value.

Question #25 FL-2.1.2 (K1)
Correct answer: d
a) Is not correct.
b) Is not correct.
c) Is not correct.
d) Is correct; this rule holds for all SDLC models.
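Stepping back to Question #24: the counterexample in justification b) can be made concrete with a hand-instrumented version of the snippet "if (x==0) y=1". This is our own illustrative sketch; the instrumentation labels are arbitrary.

```python
executed = set()  # which of the snippet's two statements have run

def snippet(x):
    executed.add("if x == 0")   # statement 1: the decision itself
    if x == 0:
        executed.add("y = 1")   # statement 2: only runs when x == 0
        y = 1

def statement_coverage(test_inputs):
    executed.clear()
    for x in test_inputs:
        snippet(x)
    return len(executed) / 2

print(statement_coverage([0]))     # one test case   -> 1.0 (100%)
print(statement_coverage([1, 2]))  # two test cases  -> 0.5 (50%)
```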

Question #26 FL-5.1.7 (K2) Correct answer: a Usability testing is in Q3 (1—C). Component testing is in Q1 (2—A). Functional testing is in Q2 (3—B). Reliability testing is in Q4 (4—D). Hence a is correct.


Question #27 FL-2.2.1 (K2)
Correct answer: a
The test basis for acceptance testing is the user's business needs (1D). Communication between components is tested during component integration testing (2B). Failures in logic can be found during component testing (3A). Business rules are the test basis for system testing (4C). Hence a is correct.

Question #28 FL-4.5.2 (K2)
Correct answer: b
a) Is not correct. Retrospectives are used to capture lessons learned and to improve the development and testing process, not to document the acceptance criteria.
b) Is correct. This is the standard way to document acceptance criteria.
c) Is not correct. Verbal communication does not allow the acceptance criteria to be physically documented as part of a user story (the "card" aspect in the 3C's model).
d) Is not correct. Acceptance criteria are related to a user story, not a test plan. Also, acceptance criteria are the conditions that have to be fulfilled to decide if the user story is complete. Risks are not such conditions.

Question #29 FL-4.4.1 (K2)
Correct answer: a
a) Is correct. The basic concept behind error guessing is that the tester tries to guess what errors may have been made by the developer and what defects may be in the test object based on past experience (and sometimes checklists).
b) Is not correct. Although testers who used to be developers may use their personal experience to help them when performing error guessing, the test technique is not based on prior knowledge of development.
c) Is not correct. Error guessing is not a usability technique for guessing how users may fail to interact with the test object.
d) Is not correct. Duplicating the development task has several flaws that make it impractical, such as the tester having equivalent skills to the developer and the time involved to perform the development. It is not error guessing.


d) Is not correct. This is a managerial role.
e) Is correct. This is done by the testers.

Question #31 FL-5.1.5 (K3)
Correct answer: a
Test TC 001 must come first, followed by TC 002, to satisfy dependencies. Afterward, TC 003 to satisfy priority, and then TC 004, followed by TC 005. Hence:
a) Is correct.
b) Is not correct.
c) Is not correct.
d) Is not correct.
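The ordering argument can be written as a small scheduling loop. This is our illustrative sketch using the table from Question #31: at each step, run the highest-priority test case whose dependency has already been executed.

```python
# (priority, dependency) per test case, from the table in Question #31.
tests = {
    "TC 001": (3, None),
    "TC 002": (2, "TC 001"),
    "TC 003": (1, "TC 002"),
    "TC 004": (2, "TC 002"),
    "TC 005": (3, "TC 002"),
}

order = []
while len(order) < len(tests):
    # Tests whose dependency (if any) has already been executed.
    ready = [t for t, (prio, dep) in tests.items()
             if t not in order and (dep is None or dep in order)]
    # Among the runnable tests, pick the highest priority (smallest number).
    order.append(min(ready, key=lambda t: tests[t][0]))

print(order)
# -> ['TC 001', 'TC 002', 'TC 003', 'TC 004', 'TC 005']; the third is TC 003.
```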

Question #32 FL-1.3.1 (K2)
Correct answer: a
a) Is correct. This principle means that if the same tests are repeated over and over again, eventually, these tests no longer find any new defects. This is probably why the tests all passed in this release as well.
b) Is not correct. This principle refers to the mistaken belief that just finding and fixing a large number of defects will ensure the success of a system.
c) Is not correct. This principle says that a small number of components usually contain most of the defects.
d) Is not correct. This principle states that testing all combinations of inputs and preconditions is not feasible.

Question #33 FL-5.2.4 (K2)
Correct answer: c
a) Is not correct. We do not accept the risk; concrete actions are proposed.
b) Is not correct. No contingency plans are proposed.
c) Is correct. The proposed actions are related to testing, which is a form of risk mitigation.
d) Is not correct. Risk is not transferred but mitigated.

Question #34 FL-4.5.3 (K3)
Correct answer: a
a) Is correct. This test covers two acceptance criteria: one about editing the document and one about saving changes.
b) Is not correct. Acceptance criteria cover the editor activities, not the content owner activities.
c) Is not correct. Scheduling the edited content for publication may be a nice feature, but it is not covered by the acceptance criteria.


d) Is not correct. The acceptance criteria cover reassigning from an editor to the content owner, not to another editor.

Question #35 FL-2.2.3 (K2)
Correct answer: b
Because TC1 and TC3 failed in Execution 1 [i.e., test (1) and test (3)], test (4) and test (6) are confirmation tests. Because TC2 and TC3 failed in Execution 2 [i.e., tests (5) and (6)], test (8) and test (9) are also confirmation tests. TC2 passed in Execution 1 [i.e., test (2)], so test (5) is a regression test. TC1 passed in Execution 2 [i.e., test (4)], so test (7) is also a regression test. Hence b is correct.

Question #36 FL-1.5.2 (K1)
Correct answer: d
a) Is not correct. The test automation approach is defined by testers with the help of developers and business representatives.
b) Is not correct. The test strategy is decided in collaboration with the developers.
c) Is not correct. Testers, developers, and business representatives are part of the whole team approach.
d) Is correct. Testers will work closely with business representatives to ensure that the desired quality levels are achieved. This includes supporting and collaborating with them to help them create suitable acceptance tests.

Question #37 FL-3.2.5 (K1)
Correct answer: d
a) Is not correct. Adequate time for individuals is a success factor.
b) Is not correct. Splitting work products into small adequate parts is a success factor.
c) Is not correct. Avoiding behaviors that might indicate boredom, exasperation, etc. is a success factor.
d) Is correct. During reviews one can find defects, not failures.

Question #38 FL-1.2.1 (K2)
Correct answer: a
a) Is correct. It is important that testers are involved from the beginning of the software development lifecycle (SDLC). It will increase understanding of design decisions and will detect defects early.
b) Is not correct. Both developers and testers will have more understanding of each other's work products and how to test the code.


c) Is not correct. If testers can work closely with system designers, it will give them insight as to how to test.
d) Is not correct. Testing will not be successful if legal requirements are not tested for compliance.

Question #39 FL-1.1.1 (K1)
Correct answer: c
a) Is not correct. It is impossible to prove that there are no defects anymore in the system under test. See testing principle 1.
b) Is not correct. See testing principle 7.
c) Is correct. Testing finds defects and failures, which reduces the level of risk and at the same time gives more confidence in the quality level of the test object.
d) Is not correct. It is impossible to test all combinations of inputs (see testing principle 2).

Question #40 FL-1.4.2 (K2)
Correct answer: b
i. Is true. The SDLC has an influence on the test process.
ii. Is false. The number of defects detected in previous projects may have some influence, but this is not as significant as i, iii, and iv.
iii. Is true. The identified product risks are one of the most important factors influencing the test process.
iv. Is true. Regulatory requirements are important factors influencing the test process.
v. Is false. The number of certified testers in the organization has no significant influence on the test process.
Hence b is correct.

Additional Sample Questions—Answers

Question #A1 FL-1.1.2 (K2)
Correct answer: a
a) Is correct. Debugging is the process of finding, analyzing, and removing the causes of failures in a component or system.
b) Is not correct. Testing is the process concerned with planning, preparation, and evaluation of a component or system and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose, and to detect defects. It is not related to fixing causes of failures.
c) Is not correct. Requirement elicitation is the process of gathering, capturing, and consolidating requirements from available sources. It is not related to fixing causes of failures.
d) Is not correct. Defect management is the process of recognizing, recording, classifying, investigating, resolving, and disposing of defects. It is not related to fixing causes of failures.

Question #A2 FL-1.2.2 (K1)
Correct answer: d
a) It is not correct. See justification d.
b) It is not correct. See justification d.
c) It is not correct. See justification d.
d) Is correct. Testing and quality assurance are not the same. Testing is the process consisting of all software development lifecycle (SDLC) activities, both static and dynamic, concerned with planning, preparation, and evaluation of a component or system and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose, and to detect defects. Quality assurance is focused on establishing, introducing, monitoring, improving, and adhering to the quality-related processes.


Question #A3 FL-1.2.3 (K2) Correct answer: d a) Is not correct. The root cause is the distraction that the programmer experienced while programming. b) Is not correct. Accepting invalid inputs is a failure. c) Is not correct. The error is the mistaken thinking that resulted in putting the defect in the code. d) Is correct. The problem in the code is a defect. Question #A4 FL-1.4.3 (K2) Correct answer: d The testware under consideration is a test charter. Test charters are the output from test design. Hence d is correct. Question #A5 FL-1.4.4 (K2) Correct answer: c a) Is not correct. Performing the impact analysis will not give information about completeness of tests. Analyzing the impact analysis of changes will help to select the right test cases for execution. b) Is not correct. Traceability does not give information about the estimated level of residual risk if the test cases are not traced back to risks. c) Is correct. Performing the impact analysis of the changes helps in selecting the test cases for the regression test. d) Is not correct. Analyzing the traceability between the test basis, test objects, and test cases does not help in selecting test data to achieve the assumed coverage of the test object. Selecting test data is more related to test analysis and test implementation, not traceability. Question #A6 FL-1.5.3 (K2) Correct answer: d a) Is not correct. Quality should be the responsibility of everyone working on the project and not the sole responsibility of the test team. b) Is not correct. First, it is not a benefit if an external test team does not meet delivery deadlines, and second, there is no reason to believe that external test teams will feel they do not have to meet strict delivery deadlines. c) Is not correct. It is bad practice for the test team to work in complete isolation, and we would expect an external test team to be concerned with changing project requirements and communicating well with developers. d) Is correct. Specifications are never perfect, meaning that assumptions will have to be made by the developer. An independent tester is useful in that they can


challenge and verify the assumptions and subsequent interpretation made by the developer. Question #A7 FL-2.1.1 (K2) Correct answer: a a) Is correct. In sequential development models, in the initial phases, testers participate in requirement reviews, test analysis, and test design. The executable code is usually created in the later phases, so dynamic testing cannot be performed early in the SDLC. b) Is not correct. Static testing can always be performed early in the SDLC. c) Is not correct. Test planning should be performed early in the SDLC before the test project begins. d) Is not correct. Acceptance testing can be performed when there is a working product. In sequential SDLC models, the working product is usually delivered late in the SDLC. Question #A8 FL-2.1.4 (K2) Correct answer: c i. Is true. Faster product release and faster time to market is an advantage of DevOps. ii. Is false. Typically, we need less effort for manual tests because of the use of test automation. iii. Is true. Constant availability of executable software is an advantage. iv. Is false. More regression tests are needed. v. Is false. Not everything is automated and setting up a test automation framework is expensive. Hence c is correct. Question #A9 FL-2.2.2 (K2) Correct answer: b a) Is not correct. The fact that the requirement about the system’s performance comes directly from the client and that the performance is important from the business point of view (i.e., high priority) does not make these tests functional, because they do not check “what” the system does, but “how” (i.e., how fast the orders are processed). b) Is correct. This is an example of performance testing, a type of nonfunctional testing. c) Is not correct. From the scenario, we do not know if interacting with the user interface is a part of the test conditions. But even if we did, the main test objective of these tests is to check the performance, not the usability.


d) Is not correct. We do not need to know the internal structure of the code to perform the performance testing. One can execute performance efficiency tests without structural knowledge.

Question #A10 FL-2.3.1 (K2)
Correct answer: a
a) Is correct. When a system is retired, this can require testing of data migration, which is a form of maintenance testing.
b) Is not correct. Regression testing verifies whether a fix accidentally affected the behavior of other parts of the code, but now we are talking about data migration to a new system.
c) Is not correct. Component testing focuses on individual hardware or software components, not on data migration.
d) Is not correct. Integration testing focuses on interactions between components and/or systems, not on data migration.

Question #A11 FL-3.1.1 (K1)
Correct answer: c
Only third-party executable code cannot be reviewed. Hence the correct answer is c.

Question #A12 FL-3.1.3 (K2)
Correct answer: d
i. These behaviors are easily detectable while the software is running. Hence, dynamic testing shall be used to identify them.
ii. This is an example of deviations from standards, which is a typical defect that is easier to find with static testing.
iii. If the software is executed during the test, it is dynamic testing.
iv. Identifying defects as early as possible is the test objective of both static testing and dynamic testing.
v. This is an example of gaps in the test basis traceability or coverage, which is a typical defect that is easier to find with static testing.
Hence d is correct.

Question #A13 FL-3.2.2 (K2)
Correct answer: b
a) Is not correct. In all types of reviews, there is more than one role, even in informal ones.
b) Is correct. There are several activities during the formal review process.


c) Is not correct. Documentation to be reviewed should be distributed as early as possible.
d) Is not correct. Defects found during the review should be reported.

Question #A14 FL-3.2.3 (K1)
Correct answer: b
a) Is not correct. This is the task of the review leader.
b) Is correct. This is the task of the management in a formal review.
c) Is not correct. This is the task of the moderator.
d) Is not correct. This is the task of the scribe.

Question #A15 FL-4.2.2 (K3)
Correct answer: c
There are three equivalence partitions: {..., 10, 11}, {12}, and {13, 14, ...}. The boundary values are 11, 12, and 13. In three-point boundary value analysis, for each boundary we need to test the boundary and both its neighbors, so:
• For 11, we test 10, 11, 12.
• For 12, we test 11, 12, 13.
• For 13, we test 12, 13, 14.
Altogether, we need to test 10, 11, 12, 13, and 14.
a) Is not correct.
b) Is not correct.
c) Is correct.
d) Is not correct.
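The partition arithmetic above is easy to script. The following minimal Python sketch is our own illustration, not part of the exam set; the function name three_value_bva is invented:

```python
# Minimal sketch of 3-value boundary value analysis for the partitions
# {..., 10, 11}, {12}, {13, 14, ...}: each boundary value is tested
# together with both of its neighbors.

def three_value_bva(boundary_values):
    """Return the sorted set of test inputs for 3-value BVA."""
    inputs = set()
    for b in boundary_values:
        inputs.update({b - 1, b, b + 1})  # the boundary and both neighbors
    return sorted(inputs)

print(three_value_bva([11, 12, 13]))  # -> [10, 11, 12, 13, 14]
```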

Question #A16 FL-4.3.2 (K2)
Correct answer: d
a) Is not correct. In this case, one test case is still needed, since there is at least one (unconditional) branch to be covered.
b) Is not correct. Covering all unconditional branches does not imply covering all conditional branches.
c) Is not correct. 100% branch coverage implies 100% statement coverage, but not the other way around. For example, for an IF decision without an ELSE, one test is enough to achieve 100% statement coverage, but it achieves only 50% branch coverage.
d) Is correct. Each decision outcome corresponds to a conditional branch, so 100% branch coverage implies 100% decision coverage.
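The IF-without-an-ELSE argument in c) can be made concrete. The sketch below uses a hypothetical function of our own (apply_discount is invented for illustration):

```python
def apply_discount(price, is_member):
    if is_member:            # decision with two outcomes: True and False
        price = price * 0.9  # only statement that depends on the decision
    return price

# One test with is_member=True executes every statement (100% statement
# coverage) but takes only the True branch, i.e., 50% branch coverage.
assert apply_discount(100, True) == 90.0
# A second test is needed to cover the untaken False branch:
assert apply_discount(100, False) == 100
```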


Question #A17 FL-4.4.3 (K2)
Correct answer: c
a) Is not correct. The book provides general guidance; it is not a formal requirements document, a specification, or a set of use cases, user stories, or business processes.
b) Is not correct. While the list could be treated as a set of test charters, it more closely resembles a list of test conditions to be checked.
c) Is correct. The list of user interface best practices is a list of test conditions to be systematically checked.
d) Is not correct. The tests are not focused on failures that could occur, but rather on knowledge of what is important for the user in terms of usability.

Question #A18 FL-4.5.1 (K2)
Correct answer: b
a) Is not correct. Collaborative user story writing means that all stakeholders create the user stories collaboratively, to obtain a shared vision.
b) Is correct. Collaborative user story writing means that all stakeholders create the user stories collaboratively, to obtain a shared vision.
c) Is not correct. Collaborative user story writing means that all stakeholders create the user stories collaboratively, to obtain a shared vision.
d) Is not correct. This is the list of properties that each user story should have, not a description of the collaboration-based approach.

Question #A19 FL-5.1.1 (K2)
Correct answer: d
a) Is not correct. The paragraph contains information on test levels and exit criteria, which are part of the test approach.
b) Is not correct. The paragraph contains information on test levels and exit criteria, which are part of the test approach.
c) Is not correct. The paragraph contains information on test levels and exit criteria, which are part of the test approach.
d) Is correct. The paragraph contains information on test levels and exit criteria, which are part of the test approach.

Question #A20 FL-5.1.4 (K3)
Correct answer: b
a) Is not correct. Estimation should be a team activity, not something overruled by one team member.
b) Is correct. If the test estimates are not identical but the variation in the results is small, a rule such as "accept the number with the most votes" can be applied.


c) Is not correct. There is no consensus yet, as some say 13 and others say 8.
d) Is not correct. A feature should not be removed only because the team cannot agree on the test estimates.

Question #A21 FL-5.1.6 (K1)
Correct answer: b
a) Is not correct. The test pyramid emphasizes having a larger number of tests at the lower test levels.
b) Is correct. It is not true that test automation should be more formal near the top of the pyramid.
c) Is not correct. Usually, component testing and component integration testing are automated using API-based tools.
d) Is not correct. For system testing and acceptance testing, the automated tests are typically created using GUI-based tools.

Question #A22 FL-5.2.1 (K1)
Correct answer: c
a) Is not correct. Risk impact and risk likelihood are independent.
b) Is not correct. Risk impact and risk likelihood are independent.
c) Is correct. Risk impact and risk likelihood are independent.
d) Is not correct. We need both factors to calculate the risk level.
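To see why both factors are needed, consider a small worked example. The 1-to-5 scales and the multiplicative rule below are common conventions for quantitative risk assessment, not something the syllabus mandates:

```python
# Illustrative convention: score likelihood and impact on a 1-5 scale
# and take their product as the risk level.

def risk_level(likelihood, impact):
    """Neither factor alone determines the risk level; both are needed."""
    return likelihood * impact

# Same impact, different likelihoods -> different risk levels:
print(risk_level(likelihood=2, impact=4))  # 8
print(risk_level(likelihood=5, impact=4))  # 20
```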

Question #A23 FL-5.2.2 (K2)
Correct answer: a
i. Project risk.
ii. Product risk.
iii. Product risk.
iv. Project risk.
v. Product risk.
Hence a is correct.

Question #A24 FL-5.2.3 (K2)
Correct answer: d
a) Is not correct. This is an example of a risk monitoring activity, not of risk analysis.
b) Is not correct. This is an example of an architectural decision, not related to testing.
c) Is not correct. This is an example of quantitative risk analysis and is not related to the thoroughness or scope of testing.
d) Is correct. This shows how risk analysis impacts the thoroughness of testing (i.e., its level of detail).


Question #A25 FL-5.3.1 (K1)
Correct answers: a, d
a) Is correct. The number of defects found is related to the quality of the test object.
b) Is not correct. This is a measure of test efficiency, not of test object quality.
c) Is not correct. The number of test cases executed tells us nothing about quality; the test results might.
d) Is correct. Defect density is related to the quality of the test object.
e) Is not correct. Time to repair is a process metric; it tells us nothing about product quality.

Question #A26 FL-5.3.2 (K2)
Correct answer: b
a) Is not correct. Impediments to testing can be high level and business-related, so this is an important piece of information for business stakeholders.
b) Is correct. Branch coverage is a technical metric used by developers and technical testers. This information is of no interest to business representatives.
c) Is not correct. Test progress is project-related, so it may be useful for business representatives.
d) Is not correct. Risks impact product quality, so this may be useful for business representatives.
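As a worked illustration of the defect density metric discussed in #A25 (the numbers are invented):

```python
# Defect density relates the number of defects found to the size of the
# test object, here measured in thousands of lines of code (KLOC).
defects_found = 150
size_kloc = 30
defect_density = defects_found / size_kloc
print(defect_density)  # 5.0 defects per KLOC
```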

References

1. ISO/IEC/IEEE 29119-1 - Software and systems engineering - Software testing - Part 1: Concepts and definitions, 2022.
2. ISO/IEC/IEEE 29119-2 - Software and systems engineering - Software testing - Part 2: Test processes, 2021.
3. ISO/IEC/IEEE 29119-3 - Software and systems engineering - Software testing - Part 3: Test documentation, 2013.
4. ISO/IEC/IEEE 29119-4 - Software and systems engineering - Software testing - Part 4: Test techniques, 2021.
5. ISO/IEC 25010 - Systems and software engineering - Systems and software quality requirements and evaluation (SQuaRE) - System and software quality models, 2011.
6. ISO/IEC 20246 - Software and systems engineering - Work product reviews, 2017.
7. ISO 31000 - Risk Management, 2018.
8. "ISTQB Certified Tester - Foundation Level Syllabus v4.0," 2023.
9. "ISTQB Exam Structure and Rules," 2021.
10. G. Myers, The Art of Software Testing, John Wiley and Sons, 2011.
11. A. Roman, Thinking-Driven Testing: The Most Reasonable Approach to Quality Control, Springer Nature, 2018.
12. J. Buxton and B. Randell, Eds., Software Engineering Techniques: Report on a Conference Sponsored by the NATO Science Committee, p. 16, 1969.
13. Z. Manna and R. Waldinger, "The logic of computer programming," IEEE Transactions on Software Engineering, vol. 4, no. 3, pp. 199-229, 1978.
14. B. Boehm, Software Engineering Economics, Prentice Hall, 1981.
15. A. Endres, "An Analysis of Errors and Their Causes in System Programs," IEEE Transactions on Software Engineering, vol. 1, no. 2, pp. 140-149, 1975.
16. B. Beizer, Software Testing Techniques, Van Nostrand Reinhold, 1990.
17. C. Kaner, J. Bach, and B. Pettichord, Lessons Learned in Software Testing: A Context-Driven Approach, Wiley, 2011.
18. "UML 2.5 - Unified Modeling Language Reference Manual," 2017. [Online]. Available: www.omg.org/spec/UML/2.5.1.
19. "Acceptance test plan." [Online]. Available: https://ssdip.bip.gov.pl/fobjects/download/13576/zalacznik-nr-1-do-opz-szablon-pta-pdf.html.
20. V. Stray, R. Florea, and L. Paruch, "Exploring human factors of the agile software tester," Software Quality Journal, vol. 30, no. 1, pp. 1-27, 2021.



21. L. Crispin and J. Gregory, Agile Testing: A Practical Guide for Testers and Agile Teams, Pearson Education, 2008.
22. R. Pressman, Software Engineering: A Practitioner's Approach, McGraw Hill, 2019.
23. T. Linz, Testing in Scrum: A Guide for Software Quality Assurance in the Agile World, Rocky Nook, 2014.
24. G. Adzic, Specification by Example: How Successful Teams Deliver the Right Software, Manning Publications, 2011.
25. D. Chelimsky et al., The RSpec Book: Behaviour Driven Development with RSpec, Cucumber, and Friends, The Pragmatic Bookshelf, 2010.
26. M. Gärtner, ATDD by Example: A Practical Guide to Acceptance Test-Driven Development, Pearson Education, 2011.
27. G. Kim, J. Humble, P. Debois, and J. Willis, The DevOps Handbook, IT Revolution Press, 2016.
28. "ISTQB Certified Tester - Advanced Level Syllabus - Test Analyst," 2021.
29. "ISTQB Certified Tester - Advanced Level Syllabus - Technical Test Analyst," 2021.
30. "ISTQB Certified Tester - Advanced Level Syllabus - Security Tester," 2016.
31. C. Jones and O. Bonsignour, The Economics of Software Quality, Addison-Wesley, 2012.
32. S. Reid, "Software Reviews using ISO/IEC 20246," 2018. [Online]. Available: http://www.stureid.info/wp-content/uploads/2018/01/Software-Reviews.pdf.
33. S. Nazir, N. Fatima, and S. Chuprat, "Modern Code Review Benefits - Primary Findings of a Systematic Literature Review," in ICSIM '20: Proceedings of the 3rd International Conference on Software Engineering and Information Management, 2020.
34. T. Gilb and D. Graham, Software Inspection, Addison-Wesley, 1993.
35. M. Fagan, "Design and Code Inspection to Reduce Errors in Program Development," vol. 15, no. 3, pp. 182-211, 1975.
36. P. Johnson, Introduction to Formal Technical Reviews, University College of London Press, 1996.
37. D. O'Neill, "National Software Quality Experiment: A Lesson in Measurement 1992-1997," 23rd Annual Software Engineering Workshop, NASA Goddard Space Flight Center, 1998.
38. K. Wiegers, Peer Reviews in Software: A Practical Guide, Addison-Wesley Professional, 2001.
39. E. van Veenendaal, The Testing Practitioner, UTN Publishers, 2004.
40. ISO 26262 - Road Vehicles - Functional Safety, 2011.
41. C. Sauer, "The Effectiveness of Software Development Technical Reviews: A Behaviorally Motivated Program of Research," IEEE Transactions on Software Engineering, vol. 26, no. 1, 2000.
42. "ISTQB Glossary of testing terms." [Online]. Available: http://glossary.istqb.org.
43. R. Craig and S. Jaskiel, Systematic Software Testing, Artech House, 2002.
44. L. Copeland, A Practitioner's Guide to Software Test Design, Artech House, 2004.
45. T. Koomen, L. van der Aalst, B. Broekman, and M. Vroon, TMap Next for Result-Driven Testing, UTN Publishers, 2006.
46. P. Jorgensen, Software Testing: A Craftsman's Approach, CRC Press, 2014.
47. P. Ammann and J. Offutt, Introduction to Software Testing, Cambridge University Press, 2016.
48. I. Forgács and A. Kovács, Practical Test Design: Selection of Traditional and Automated Test Design Techniques, BCS, The Chartered Institute for IT, 2019.
49. G. O'Regan, Concise Guide to Software Testing, Springer Nature, 2019.
50. A. Watson, D. Wallace, and T. McCabe, "Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric," U.S. Dept. of Commerce, Technology Administration, NIST, 1996.
51. B. Hetzel, The Complete Guide to Software Testing, John Wiley and Sons, 1998.
52. J. Whittaker, How to Break Software, Pearson, 2002.
53. J. Whittaker and H. Thompson, How to Break Software Security, Addison-Wesley, 2003.
54. M. Andrews and J. Whittaker, How to Break Web Software: Functional and Security Testing of Web Applications and Web Services, Addison-Wesley Professional, 2006.


55. J. Whittaker, Exploratory Software Testing: Tips, Tricks, Tours, and Techniques to Guide Test Design, Addison-Wesley, 2009.
56. C. Kaner, J. Falk, and H. Nguyen, Testing Computer Software, Wiley, 1999.
57. E. Hendrickson, Explore It!: Reduce Risk and Increase Confidence with Exploratory Testing, The Pragmatic Bookshelf, 2013.
58. B. Brykczynski, "A survey of software inspection checklists," ACM SIGSOFT Software Engineering Notes, vol. 24, no. 1, pp. 82-89, 1999.
59. J. Nielsen, "Enhancing the explanatory power of usability heuristics," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Celebrating Interdependence, 1994.
60. A. Gawande, The Checklist Manifesto: How to Get Things Right, Metropolitan Books, 2009.
61. M. Cohn, User Stories Applied: For Agile Software Development, Addison-Wesley, 2004.
62. B. Wake, "INVEST in Good Stories, and SMART Tasks," 2003. [Online]. Available: https://xp123.com/articles/invest-in-good-stories-and-smart-tasks/.
63. G. Adzic, Bridging the Communication Gap: Specification by Example and Agile Acceptance Testing, Neuri Limited, 2009.
64. D. Jackson, M. Thomas, and L. Millett, Eds., Software for Dependable Systems: Sufficient Evidence?, Committee on Certifiably Dependable Software Systems, National Research Council, 2007.
65. S. Kan, Metrics and Models in Software Quality Engineering, Addison-Wesley, 2003.
66. L. Westfall, The Certified Software Quality Engineer Handbook, ASQ Quality Press, 2009.
67. M. Cohn, Succeeding with Agile: Software Development Using Scrum, Addison-Wesley, 2009.
68. B. Marick, "Exploration through Example," 2003. [Online]. Available: http://www.exampler.com/old-blog/2003/08/21.1.html#agile-testing-project-1.
69. K. Schwaber and M. Beedle, Agile Software Development with Scrum, Prentice-Hall, 2002.
70. R. Needham, "Operational experience with the Cambridge multiple-access system," in Computer Science and Technology, Conference Publication, 1969.
71. C. R. Pandian, Applied Software Risk Management: A Guide for Software Project Managers, Auerbach Publications, Boca Raton, 2007.
72. E. van Veenendaal, Ed., Practical Risk-Based Testing: The PRISMA Approach, UTN Publishers, The Netherlands, 2012.

Index

A
Acceptance criteria (AC), 56, 232
Acceptance test-driven development (ATDD), 77, 92
Acceptance testing, 110
Ad hoc review, 162
All states coverage, 203
All transitions coverage, 204
Alpha testing, 112
Anomaly, 146
Author (reviews), 150

B
Behavior-driven development (BDD), 77, 90
Beta testing, 112
Black-box testing, 122
Black-box test technique, 51, 172
Boehm's curve, 43
Boundary value analysis, 187
Branch, 214
Branch coverage, 214
Branch testing, 214–217
Buddy check, 154
Bug, see Defect
Burn-down chart, 291

C
Change request, 60
Checklist, 226
Checklist-based review, 162
Checklist-based testing, 226
Collaboration-based test approach, 229
Component integration testing, 103
Conditional branch, 214
Configuration management, 292
Confirmation bias, 64
Confirmation testing, 124
Continuous delivery, 94
Continuous deployment, 94
Continuous integration, 94
Contractual acceptance testing, 111
Control directive, 55
Coverage, 47, 60
Coverage-based prioritization, 270
Coverage item, 51, 57, 181, 193, 198, 203, 214

D
Dashboard, 291
Debugging, 29
Decision table minimization, 198
Decision table testing, 194
Defect, 29, 36
Defect management, 295
Defect masking, 183
Defect report, 56, 59, 295
Defects clustering, 44
DevOps, 92–96
DevOps pipeline, 94
Domain-driven design, 77
Driver, 52, 58, 100
Dynamic testing, 135


E
Each choice coverage, 184
Early testing, 43
Entry criteria, 54, 258
Equivalence partitioning (EP), 177
Error, 36
Error guessing, 220
Error hypothesis, 171
Estimating, 260
Estimation based on ratios, 261
Exhaustive testing, 42
Exit criteria, 54, 258
Experience-based test technique, 51, 173, 219
Exploratory testing, 223
Extrapolation, 262
Extreme programming, 77, 88

F
Facilitator, 150
Failure, 29, 36, 39
False negative, 39
False positive, 39
Fault, see Defect
Feature-driven development (FDD), 77
Formal review, 145
Full transition table, 202
Functional requirement, 115
Functional testing, 115

G
Gantt chart, 268
Guard condition, 200

I
Impact analysis, 129
Incremental model, 77
Individual review, 146
Informal review, 154
Inspection, 155
Integration strategy, 105
Integration testing, 103
Iteration planning, 257
Iterative model, 77, 78

K
Kanban, 77, 84

L
Lean IT, 77
Legal compliance acceptance testing, 112

M
Maintenance testing, 127
Manager (reviews), 150
Metric, 287
Mistake, see Error
Mock object, 52, 58
Moderator, see Facilitator

N
Non-functional testing, 117
N-switch coverage, 204

O
Operational acceptance testing, 111

P
Pair review, 154
Pareto rule, 44
Peer review, 152
Perspective-based reading, 163
Planning poker, 262
Product risk, 280
Project risk, 279
Prototyping model, 77, 82

Q
Quality, 34
Quality assurance (QA), 36
Quality control (QC), 36

R
Recorder, see Scribe
Regression testing, 125
Release planning, 257
Requirements, 50
Requirements-based prioritization, 270
Retrospective, 97
Review, 143
Reviewer, 151
Review leader, 151
Review meeting, 149
Review process, 145
Risk, 55, 279
Risk analysis, 281
Risk assessment, 282
Risk-based prioritization, 270
Risk-based testing, 278
Risk control, 284
Risk identification, 281
Risk impact, 279
Risk level, 279
Risk likelihood, 279
Risk management, 277
Risk matrix, 282
Risk mitigation, 284
Risk monitoring, 286
Risk register, 54
Role-based review, 163
Root cause, 40

S
Scenarios and dry runs, 163
Scribe, 150
Scrum, 77, 83
Sequential model, 77
Service virtualization, 52, 58
Shift-left, 43, 96
Simulator, 52, 58
Skills, 35, 64
Software development lifecycle (SDLC), 76, 87
Spiral model, 77, 81
Statement coverage, 214
Statement testing, 212
State table, 202
State transition diagram, 200, 202
State transition testing, 199
Static analysis, 134
Static testing, 134
Structural coverage, 120
Stub, 52, 58, 100
Subsumption, 218
System integration testing, 103
System testing, 107

T
Task board, 291
Team velocity, 257
Technical review, 154
Test analysis, 49
Test approach, 253
Test automation, 307
Test basis, 49
Test case (TC), 51, 57, 59
  high level, 57
  low level, 58
Test completion, 52
Test completion report, 60, 288
Test condition, 50, 56
Test control, 48, 286
Test data, 51, 57, 58
Test design, 51
Test-driven development (TDD), 77, 89
Test effort, 259
Test environment, 52, 57, 58
Tester, 63
Test execution, 52
Test execution schedule, 58, 268
Test first, 88
Test harness, 100
Test implementation, 51
Testing, 27
Testing independence, 67
Testing principles, 41
Testing quadrants, 276
Test level, 99, 122
Test log, 59
Test manager, 62
Test monitoring, 48, 286
Test object, 27, 99
Test objective, 28
Test plan, 48, 54, 253
Test planning, 48, 253
Test procedure, 51, 58
Test process, 47, 53
Test progress report, 55, 288
Test pyramid, 275
Test result, 39
Test script, 58
Test set, 58
Test strategy, 255
Test technique, 171
Test type, 99, 122
Testware, 54
Three-point estimation, 265
3-value BVA, 189
Tools, 307
Traceability, 60
2-value BVA, 189

U
Unconditional branch, 214
Unified process (UP), 77, 81
Use case, 208
Use case testing, 208
User acceptance testing (UAT), 111
User story (US), 229
User story points, 264


V
Validation, 27, 46
Valid transitions coverage, 203
Verification, 27, 46
V model, 77, 80

W
Walkthrough, 154
Waterfall model, 77, 79
White-box testing, 120
White-box test technique, 51, 172, 211–219
Whole team approach, 66
Wideband Delphi, 262
Work Breakdown Structure (WBS), 260
Work product, 54, 135

Z
0-switch coverage, 203