Practical Network Security An Auditee's Guide to Zero Findings By Neha Saxena

FIRST EDITION 2018 Copyright © BPB Publications, INDIA ISBN : 978-93-87284-60-9

All Rights Reserved. No part of this publication can be stored in a retrieval system or reproduced in any form or by any means without the prior written permission of the publishers.

LIMITS OF LIABILITY AND DISCLAIMER OF WARRANTY The Author and Publisher of this book have tried their best to ensure that the programmes, procedures and functions described in the book are correct. However, the author and the publishers make no warranty of any kind, expressed or implied, with regard to these programmes or the documentation contained in the book. The author and publisher shall not be liable in any event for any damages, incidental or consequential, in connection with, or arising out of the furnishing, performance or use of these programmes, procedures and functions. Product names mentioned are used for identification purposes only and may be trademarks of their respective companies/owners and are duly acknowledged. Distributors:

BPB PUBLICATIONS

20, Ansari Road, Darya Ganj New Delhi-110002 Ph: 23254990/23254991

MICRO MEDIA Shop No. 5, Mahendra Chambers, 150 DN Rd. Next to Capital Cinema, V.T. (C.S.T.) Station, MUMBAI-400 001

Ph: 22078296/22078297 BPB BOOK CENTRE 376 Old Lajpat Rai Market, Delhi-110006 Ph: 23861747 DECCAN AGENCIES 4-3-329, Bank Street Hyderabad-500195 Ph: 24756967/24756400 Published by Manish Jain for BPB Publications, 20, Ansari Road, Darya Ganj, New Delhi-110002 and Printed by Repro India Pvt Ltd, Mumbai

Dedication Dedicated to my dearest Husband Udayan, without whom this book would never have become a reality. Thank you for your constant support and companionship.

About the Author The author is currently teaching at Symbiosis International (Deemed University) as guest faculty and working as a freelance security consultant with various organizations. She has previously worked with HP Singapore, Etihad Airways Abu Dhabi, Quadrant Risk Management Dubai and Noor Islamic Bank Dubai as Information Security Officer (ISO), Senior Consultant and Team Lead. Her recently concluded projects include ISO 27001 audit preparation for a Dubai government subsidiary and a process gap assessment at a bank in Abu Dhabi.

During her tenure at various jobs she wore many hats, including Pen Tester, Application Security Assessor, Security Trainer, ISO 27001 Implementer etc. Later on, she moved to leading the Audit and Compliance team. Currently she enjoys the thrill of the challenges posed by different types of security and teaching assignments, as well as the flexibility of working as a freelancer. She takes each project as an opportunity to learn new things, experience new environments and meet interesting people around the world.

She holds a Master's degree in Computer Applications from Symbiosis International (Deemed University). She currently resides with her family in Pune, India. When not working, she indulges in reading books, watching movies and paranormal/fantasy TV series, yoga and meditation.

Acknowledgement There are so many people who are directly and indirectly responsible for this book. First of all, I want to thank my family, who truly inspired me throughout the journey of writing this book. Thank you Dad, Mom, Juhi and Harshit for your constant support. Special thanks to Harshit for providing his quality input on network technologies and also writing a couple of sections. I can't thank my mother-in-law enough for taking away all my responsibilities and allowing me to do what I wanted to do, and also my sister-in-law Shivani, who consistently reminded me to complete my book when I had to take a break in between due to other assignments. I feel so blessed to have such a great family.

Special thanks to my mentors and managers in various jobs during my more than eleven years of tenure. I can easily say that I have been lucky to have such great managers. It's said that real learning starts on the job, which is completely true in my case. With all my heart I want to thank Ramesh PS, Leslie Moller, Siva Sathyamurthy and Ramesh Vempali. Some of you have literally mentored me and some of you gave me encouragement and space to work in my own way. Special thanks to my teachers Harshad Gune Sir and Atul Kahate Sir (Symbiosis Institute of Computer Studies and Research) for encouraging me and guiding me in the right direction.

Good friends are hard to come by and my friends always encouraged me to write this book. I know they are as happy as I am to see this book. Thank you dear Dinesh, Seema, Sayantan, Preeti, Smita, Sundanda, Roli, Roopa, Divya and all my friends for being there in my life. I am extremely grateful for your presence.

Thanks to BPB Publications for releasing the book. The team has been instrumental in bringing it to its current shape. Images Courtesy: Designed by Freepik

Introduction Welcome to Practical Network Security: An Auditee's Guide to Zero Findings. I have been working as a pen-tester, security assessor, ISO 27001 implementer and auditor during my various jobs in Dubai and Singapore. I was ruthless in giving findings to technical teams and was always amazed at the number of findings I identified over and over. A lot of findings were never fixed, or the process to rectify the root issue was not followed, and this left me disappointed and frustrated. It felt like an endless job to identify findings year on year which were never going to be fixed, and we could never be confident about the security posture of the organization.

It was only when I worked in Singapore that the tables were turned. Instead of working as an auditor, I was given the responsibility of facilitating audits. The organization was obligated to have multiple third-party security audits every year. I had the herculean task of getting the previous year's audit findings closed and managing the current year's audits. I took this assignment very reluctantly as I had never done it before. I was happy with my simple job of auditing and giving out findings, but in the end it turned out to be a blessing in disguise.

I analyzed all the past year's findings and tried to identify the real root cause of each of them. I had endless discussions with the systems/network/database teams, and then I realized their pain and the root cause of it all.

Technical teams did not understand the big picture of what security processes were in place. A few had not even heard of the Security Policy or ISMS policy, and even if they were aware of it, they did not fully realize how it affected them in their day-to-day jobs. They already had so much pressure to do their own jobs; security audits and findings were seen as inconveniences.

I tried to put myself in their shoes to understand their processes and realized that they were following a lot of security controls unknowingly, as a part of their own processes; however, they were not able to explain and provide appropriate evidence during audits. Hence, they were receiving a lot of findings. I was able to close all the past year's findings by telling technical teams exactly what to provide as evidence. Also, for all upcoming audits, I prepared the technical teams on how to go through security audits. I attended the auditor's interviews and helped translate his requirements into the technical terminology our teams were used to. With that, together we were able to bring process audit findings down to just one medium finding and thousands of technical findings down to below 100 across the environment. That was a success for the whole organization, and management as well as the client were pleasantly surprised with this development. We were all applauded and awarded for bringing down the number of findings drastically compared to previous years. That got me thinking that so many organizations are going through this rough phase and are not able to handle it properly because security teams and technical teams are always struggling to coordinate effectively with each other. There's always accountability shunning, blame games and pointing fingers at each other.

I have seen the same story at various organizations where I worked or consulted. If only the teams understood that they are all working on the same side and that things can be worked out without contention, it would be a lot easier to handle the real enemy. This book is my effort to help network teams all over the world understand essential security concepts and terms to help them go through audits smoothly. This book uses examples and scenarios from the day-to-day life of a network professional to help him understand that a lot of these things are already being done by him. It's just about creating and managing the evidence properly for audits. This book covers just the right amount of security essentials required and leaves the complicated part aside. I have also incorporated potential audit questions related to each topic and the right way to answer them. Just as the law requires proof to convict anyone, an audit requires evidence to support the answers. Do remember that even if security policies are being followed diligently, the absence of evidence may still be a finding. Guidance on collecting and creating evidence is also incorporated in this book. A lot of tips and tricks, along with real-world scenarios, sample spreadsheets and sample procedures, are also included wherever required.

I strongly believe that 'a picture is worth a thousand words'; therefore, wherever required, I have included pictures and tried to imitate real conversations. I have tried to make this book an easy read-through without any complicated security jargon and terminology. I have tried to explain security concepts and processes in the easiest way possible. Also, the breadth and depth of the concepts are kept to the bare minimum required for network professionals to understand.

Please note that this book is not a reference manual or go-to book for all security-related terms. This book is a practical handbook for handling security in day-to-day tasks to minimize findings in audits. The purpose of this book is to develop a security culture, where security is understood rather than misunderstood, and taken as just another task rather than a major inconvenience.

Who should read this book
IT Heads, Network Managers, Network Planning Engineers, Network Operations Engineers or anybody interested in understanding holistic network security.

How to use this book?
This book is solely focused on the aspects of information security that network professionals (network engineers, managers and trainees) need to deal with for different types of audits.

The book is divided into 4 sections for easy reference. Chapters can be considered sub-sections of these major sections. If you are confident about a particular section, you can jump directly to the next section; however, I still recommend going through this book in an orderly manner as the chapters slowly build upon previous ones.

1) First Section - Information Security Basics explains all the required security concepts in detail. It also ventures into the threat paradigm that we are facing today and the relevant security controls that are available. Only the threats and controls that are relevant to network infrastructure in some way or the other are explained.

2) Second Section - Securing the Network focuses on network security design aspects and how policies influence network design decisions. This section also covers assets and good practices for securing the network and assets in the initial phases.

3) Third Section - Secure Operations is all about incorporating security into network operations. Operations are what network professionals do day in, day out. Secure operations means planning and implementing security in day-to-day tasks. It also includes creating and preserving evidence for audit purposes. Potential audit questions from each area are also incorporated in this section.

4) Fourth Section - Managing Audits is the real test. The first three sections provide information and prepare you for the audit, while this section explains the different audits and the preparations required from the auditor and the auditee. It gives a brief overview of how an audit works, what processes are followed and what the auditee should do in each audit phase. It basically focuses on what kind of information has to be provided to auditors to be able to pass the audit with flying colors.

Assumptions
There are a couple of assumptions made for writing this book:

If budget were not a constraint, there are a lot of tools that could be used; however, I have kept it simple by using spreadsheets wherever possible.

The reader is aware of commonly used technologies in a corporate environment, like the Intranet, Outlook etc.

You might find a couple of topics being repeated in the book. This is to emphasize the importance of the topic in a particular chapter's context.

Characters and Icons explained

I have used a lot of characters and icons to make the material easier to get through. I myself never liked too much text in a book, and I'm a big fan of people who incorporate interesting graphics in their books. I hope you enjoy these graphics as much as I enjoyed creating them.

Main Leads

Meet our Auditee: He represents the network security team and is quite an expert in his subject matter; however, when it comes to security, he gets a little lost. He has recently joined and is trying to figure out how security affects his day-to-day job. He has also been given the responsibility of handling upcoming audits, which is adding to his already stressed mind.

Meet our Security Consultant:

He represents the security team, understands the security awareness level of the whole organization and has been assigned the responsibility of facilitating audits. He wants to handle everything without much confrontation with the technical teams. He already has a plan in mind and is constantly working with the Auditee to achieve the common goal.

Meet our Auditor:

She is a very experienced third-party auditor who has audited this organization before and has a general idea of where to look for findings. She is not expecting this audit to be any different from the previous ones, but she is probably in for a surprise.

Supporting Characters have been introduced as and when required.

Icons

In the context of the topic, how the relevant areas are practically managed in organizations, especially from what I have seen

In the context of the topic, anything of importance to take note of

In the context of the topic, tips and tricks on how to manage the tasks easily

In the context of the topic, what the auditor's expectations are

Table of Contents
Chapter 1: Basics of Information Security
Chapter 2: Threat Paradigm
Chapter 3: Information Security Controls
Chapter 4: Decoding Policies, Standards, Procedures & Guidelines
Chapter 5: Network Security Design
Chapter 6: Know your assets
Chapter 7: Implementing Network Security
Chapter 8: Secure Change Management
Chapter 9: Vulnerability and Risk Management
Chapter 10: Access Control
Chapter 11: Capacity Management
Chapter 12: Log Management
Chapter 13: Network Monitoring
Chapter 14: Information Security Audit
Chapter 15: Technical Compliance Audit
Chapter 16: Penetration Testing

Section 1 – Information Security Basics

This section covers the absolute basics of Information security required to understand the rest of the book.

The following chapters are covered in this section:

Chapter 1: Fundamental Security Concepts
Chapter 2: Threats and Threat Paradigm
Chapter 3: Information Security Controls

This section covers the Confidentiality, Integrity and Availability concepts and explains what is meant by protecting/securing an asset. It also delves into the threats that threaten the smooth functioning of business and the controls available to keep these threats at bay.

Chapter 1 Basics of Information Security

1.1 Why Information Security

We are living in an information age. Everything we know of has turned digital; our finances, business strategies and physical facilities are all centered around information. Even on the personal front, money in our account is just a number on a banking portal, we WhatsApp our friends rather than writing letters, we read newspapers online and express ourselves on social media, and we even check the time on our mobiles. The information age has permeated all aspects of our lives. We can love it or hate it, but we can't get away from it.

If so much of our lives depends on information, we will have to learn to protect this information, if not today then tomorrow. It does not matter what profession we are in; we need to safeguard our information from falling into the wrong hands.

The newspaper headlines below give you enough reasons why you need to have security controls in your organization. These are some of the most notorious security breaches that made headlines in the last two years. Successful breaches have resulted in major financial and reputational losses, loss of customers and even the closure of whole organizations.

As we all know, whenever a new technology evolves, the capability of misusing it also comes along. It's part and parcel of the digital age, and we can't accept only the beneficial part and completely ignore the evil twin. With technology advancement, hackers are also growing at a rapid pace in terms of numbers as well as skills. Financial gain is the main motivator for developing different kinds of attacks. Hackers are now even providing Ransomware as a Service. The user may not know anything about hacking; he just needs to buy the service, infect whoever he wants to and extort money. Some percentage of the ransom amount goes to the original hacker (in this case the Ransomware service provider). We have to strategize and be prepared for the evil twin while using the technology. Today all organizations either have an active online presence or are in the process of having one. With so much exposure, organizations are responsible for protecting themselves and their customers' information. Most countries now have some form of information security laws in place that require organizations to keep customers' information safe and secure at all times. E.g. according to Indian IT law, any organization storing customer personal information including name/address/Aadhar card number (UID) etc. is liable to secure this information.

1.2 What is Information Security

Information Security can be understood in a nutshell with this simple diagram:

1.3 Goals

What are the goals of information security? In other words, what aspects of information assets need to be secured in order to ensure information protection? There are three major aspects:

Confidentiality: Unauthorized people can't access the information asset. The confidentiality aspect protects the privacy of data. It ensures that information assets can only be accessed by intended or authorized personnel, and it prevents unauthorized people or attackers from getting access to data at any point of time. Data or information has two states: at rest or in motion. At rest, data is sitting on storage media like servers/tapes; in motion, it is being transferred over the network, internally or externally. E.g. router configuration files are only accessible to authorized employees of the network department and are stored securely so that no one except authorized personnel can access the files.

At rest, data can be protected by encryption and access control. In motion, data can be protected by encrypting it or sending it through an encrypted channel like an SSL VPN. We will understand more about these controls in Chapter 3.
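
To illustrate protecting data at rest, here is a minimal sketch in Python using the third-party cryptography library's Fernet recipe (symmetric encryption). The file name router-config.txt is a hypothetical example, not something prescribed by any policy.

# Minimal sketch: encrypting a file at rest with symmetric encryption.
# Assumes the third-party 'cryptography' package is installed (pip install cryptography);
# 'router-config.txt' is a hypothetical file name.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the key itself must be stored securely, e.g. in a key vault
fernet = Fernet(key)

with open("router-config.txt", "rb") as f:
    plaintext = f.read()

ciphertext = fernet.encrypt(plaintext)

with open("router-config.txt.enc", "wb") as f:
    f.write(ciphertext)

# Only someone holding the key can recover the original content.
assert fernet.decrypt(ciphertext) == plaintext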

Integrity: Unauthorized people can’t modify information assets.

Integrity protects the accuracy and reliability of data. It prevents data from being accidentally or intentionally modified by unauthorized users. Integrity protection provides a way to implement authorized changes and prevent any unauthorized changes.

E.g. only HR payroll staff should be authorized to modify the payroll database, and no one else should have modification rights to it. Only authorized people in a bank should be able to change a customer's account balance. Unauthorized access to or modification of a customer's account will lead to a loss of credibility for the bank. Often, an integrity check uses a hash function to ensure data remains unchanged at rest and after transit.

We will understand more on integrity controls and hashing in Chapter 3.
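
A minimal sketch of such an integrity check, using Python's standard hashlib module; the file name backup.tar is a hypothetical example.

# Minimal sketch: verifying file integrity with a SHA-256 hash (Python standard library).
# 'backup.tar' is a hypothetical file name.
import hashlib

def sha256_of(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the hash before storing or transmitting the file...
expected = sha256_of("backup.tar")

# ...and recompute it later; any mismatch means the data was modified.
if sha256_of("backup.tar") != expected:
    print("Integrity check failed: file has been modified")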

Availability: Authorized people should have access to information assets whenever they need them. Availability provides protection for the use of a resource in a timely and effective manner. It ensures resources are available to authorized personnel whenever they need them. Often, availability-protection controls support sufficient bandwidth and efficient processing as deemed necessary by the organization or situation.

When availability is protected, users can perform their tasks productively and customers can access services provided by the organization without any hindrance. If availability is violated, employees may not be able to perform their work effectively and customers may not be able to access the organization's services.

Availability can be violated through the destruction or modification of a resource, overloading of a resource host, interference with communications to a resource host, or prevention of a client from being able to communicate with a resource host. E.g. if internet routers are non-functional for 5 hours, the internet capability of the whole organization may be impacted, which leads to the loss of precious man-hours for the company.

An attack on or violation of information asset availability is known as a Denial of Service (DoS).

Some of the technologies or concepts that focus on protecting availability include redundancy, fault tolerance, capacity management and patching. We will understand these in detail in Chapter 3.
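
As a small illustration of monitoring availability, here is a hedged sketch of a periodic service health check in Python (standard library only); the host name, port and check interval are hypothetical values.

# Minimal sketch: a periodic availability check for a network service.
# 'intranet.example.local', port 443 and the 60-second interval are hypothetical.
import socket
import time

def is_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    if not is_reachable("intranet.example.local", 443):
        print("ALERT: service unreachable - investigate or fail over to the standby")
    time.sleep(60)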

1.4 Methods

Methods focus on aspects of information security planning.

Prevent: The first priority of any information security plan is to prevent any breach of the confidentiality, integrity and availability of information. Most of the security investment goes into deploying prevention techniques. Organizations have to understand threats, risks to information and ways to prevent threats from materializing.

Network Intrusion prevention system (NIPS), Firewalls, Passwords, MAC address filtering etc. are some examples of prevention techniques.

Detect:

As we are all aware, it's simply not possible to stop each and every attack, so our next goal is to detect an attack as soon as possible. There have been instances when an attacker was able to infiltrate the network and remain there for up to a few months without detection. Prolonged exposure gives attackers ample time to sit through, analyze and extract meaningful and sensitive information from the network. Timely detection of an attack in progress can largely minimize the impact of a successful attack. With the advent of crypto-mining malware, attackers aim to stay stealthily in the network for as long as possible to utilize its resources (computational power and bandwidth).

Log Analysis, Network Intrusion Detection System (NIDS), Closed circuit TVs (CCTV), Motion detection cameras, Security Audits etc. are some examples of detection techniques.
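
To make log analysis a little more concrete, here is a minimal, hedged sketch in Python that flags repeated failed SSH logins in a syslog-style auth log; the log path and the threshold of 5 attempts are hypothetical choices, not recommended values.

# Minimal sketch: flagging repeated failed SSH logins from an auth log.
# '/var/log/auth.log' and the threshold of 5 are hypothetical choices.
import re
from collections import Counter

FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

failures = Counter()
with open("/var/log/auth.log") as log:
    for line in log:
        match = FAILED.search(line)
        if match:
            failures[match.group(1)] += 1

# Source IPs with many failed attempts may indicate a brute-force attack in progress.
for ip, count in failures.items():
    if count > 5:
        print(f"Possible brute-force from {ip}: {count} failed logins")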

Response: Whether or not the detection process was effective, once it is obvious that the organization is under threat, responding appropriately to the situation is the next goal of information security. Response focuses on minimizing and containing the damage, which may include shutting down systems or disconnecting victim systems from the network. It also focuses on business continuity if primary servers or the network have to be disconnected, which may mean operating from a secondary site or moving to manual processes. Server and data recovery is also part of this phase.

Once the attack is stopped and business continuity is dealt with, damage assessment and a thorough investigation are required to trace back the source of the attack, the intermediary attack points and the extent of the damage caused. Professional forensic investigators may be required for this phase.

The next phase is to correct the mistakes so that such an incident can never happen again.

Network intrusion prevention system (NIPS), Business continuity and disaster recovery methods, Forensic tools are some examples of response techniques.

1.5 Tools

Tools focus on the means or resources to be used for protecting information security. People, process and technology are the ways by which prevention, detection and response techniques can be deployed to protect the confidentiality, integrity and availability of information.

People:

Security is not achieved by security professionals alone. Each and every person included in a business process has to act individually and collectively to create a successful security plan.

The responsibility for security lies with the owners and custodians of the system. We will understand more about owners and custodians later in this chapter.

The human factor is the most vulnerable and most exploited factor in successful attacks. People are considered the weakest link in a security plan. An organization's success with its security strategy will largely depend on the security awareness and training of each and every individual.

Security personnel, Network Manager, CTO, Network Engineer and Network Operator are some examples of the people component of security tools.

Process: Everything in an organization is governed by processes. The various components of a particular process may be executed by a different person, team or technology. Multiple teams at different stages will be involved in completing each task.

Even if individuals have the motivation and intent to behave securely, they would individually not know how to act collectively to prevent, detect and respond to an attack unless there is a preplanned, documented process with roles and responsibilities defined for each team.

Security is not the responsibility of only the security team or security personnel. The security team's responsibility is to weave security into existing organizational processes and make strategic use of technology in support of security goals.

Examples of simple processes are commissioning a router, commissioning/decommissioning a user on the network etc.

Technology:

Technology includes all the tools, applications, devices and infrastructure components that are required for business. In today's technology-dependent world, technology becomes the core part of any security planning. In fact, most enterprises think of security only in terms of protecting technology; however, technology is just a part of the solution, not the complete solution. Technology is greatly influenced by how it is used by processes and people. Technology is capable of being secure only if it's correctly configured and the people using it do not try to bypass security controls.

Desktops, Laptops, Routers, Firewalls, Servers, NIPS, Databases, Broadband connections etc. are some examples of Technology.

1.6 Beyond Confidentiality Integrity Availability (CIA)

While CIA remains the major protection aspect of information, a few more aspects have evolved over time.

Non-Repudiation

Non-Repudiation is to ensure that users cannot refute/deny sending or receiving a transaction, correspondence or contract. If an individual sends an email and later denies having done so, this is an act of repudiation. Non-repudiation becomes extremely important in online transactions and e-contracts, as users can easily deny initiating or signing them if non-repudiation controls are not implemented. In the real world, it is similar to sending registered mail to ensure guaranteed delivery as well as the recipient's acknowledgement. Similarly, a legal document typically requires a witness's signature so that the person who signs cannot deny having done so. Digital signatures, notarization and encryption are used to provide non-repudiation.
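
A minimal, hedged sketch of a digital signature in Python using the third-party cryptography library; Ed25519 is just one possible algorithm choice, and the message text is a hypothetical example.

# Minimal sketch: signing and verifying a message, a building block of non-repudiation.
# Assumes the third-party 'cryptography' package is installed; Ed25519 is one possible choice.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # kept secret by the signer
public_key = private_key.public_key()        # shared with anyone who must verify

message = b"Approve change request CR-1234"  # hypothetical message
signature = private_key.sign(message)

# Anyone holding the public key can verify who signed the message;
# the signer cannot later deny having produced a valid signature over it.
try:
    public_key.verify(signature, message)
    print("Signature valid: the signer cannot repudiate this message")
except InvalidSignature:
    print("Signature invalid or message altered")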

Authenticity

Authenticity is the assurance that a message, transaction, or other exchange of information is from the source it claims to be from. Authenticity involves proof of identity. We can verify authenticity through authentication. The process of authentication usually involves more than one "proof" of identity (although one may be sufficient). The proof might be something a user knows, like a password; something a user has, like a key card or certificate; or something a user is. Modern (biometric) systems can provide this last kind of proof. Biometric authentication methods include things like fingerprint scans, hand geometry scans, or retinal scans.
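
As a small illustration of the "something a user knows" factor, here is a hedged sketch of verifying a password against a stored salted hash using only Python's standard library; the password and iteration count are hypothetical examples.

# Minimal sketch: verifying a password against a stored, salted PBKDF2 hash.
# In practice the salt and hash would come from a user store; values here are hypothetical.
import hashlib
import hmac
import os

def hash_password(password, salt):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

# At enrolment: store the salt and the derived hash, never the plain password.
salt = os.urandom(16)
stored_hash = hash_password("correct horse battery staple", salt)  # hypothetical password

# At login: derive the hash again and compare in constant time.
def authenticate(attempt):
    return hmac.compare_digest(hash_password(attempt, salt), stored_hash)

print(authenticate("wrong password"))                # False
print(authenticate("correct horse battery staple"))  # True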

Accountability

Accountability ensures the proper identification of authorized users and the recording of an audit trail of their activities in the environment. The audit trail is the log or history of all system activities in chronological order, providing evidence of data transformation at every stage. The audit trail gives evidence of the data life cycle right from inception through different levels of processing to the final report generation and storage. In the event of an attack, strong accountability ensures that management is able to pinpoint the users who accessed the data at every stage. Audit trails are also considered evidence in all legal investigations. Accountability is managed through strong access controls and audit logs.
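
A minimal, hedged sketch of an application-level audit trail using Python's standard logging module; the log file name, user names and actions are hypothetical examples.

# Minimal sketch: writing a chronological audit trail with Python's logging module.
# The file path, user names and actions below are hypothetical examples.
import logging

audit = logging.getLogger("audit")
handler = logging.FileHandler("audit_trail.log")
handler.setFormatter(logging.Formatter("%(asctime)s user=%(user)s action=%(message)s"))
audit.addHandler(handler)
audit.setLevel(logging.INFO)

def record(user, action):
    """Append a timestamped entry tying an action to an identified user."""
    audit.info(action, extra={"user": user})

record("nsharma", "viewed payroll record 1042")
record("jdoe", "modified firewall rule FW-17")
# The resulting log file provides evidence of who did what, and when.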

1.7 Responsibility of Information Security

This is sometimes the most conflicted and misunderstood aspect of dealing with Information Security. The common opinion is that the Information Security department is responsible for overall security in the entire organization; however, the truth is entirely different.

Information Owner: Accountability for information security lies with the 'Owner' of the information. The information owner is usually a member of management who is in charge of a specific business unit, and who is ultimately responsible for the protection and use of a specific subset of information. E.g. Chief Financial Officer, Chief Technology Officer, Chief Information Officer, VPs, Heads of Department etc. The information owner has 'due care' responsibilities and thus will be held responsible for any negligent act that results in the corruption or disclosure of the data.

Information Custodian: The Information owner, who obviously has enough on her plate, delegates responsibility for the day-to-day maintenance of the data protection mechanisms to the Information 'Custodian'. The custodian is responsible for maintaining and protecting the data. This role is usually filled by the IT department (e.g. Database Administrators, Network Administrators, Server Administrators etc.) and the duties include implementing and maintaining security controls; performing regular backups of the data; periodically validating the integrity of the data; restoring data from backup media; retaining records of activities; and fulfilling the requirements specified in the company's security policy, standards, and guidelines that pertain to information security and data protection. We will read more about Asset Owner and Custodian in Chapter 6.

Information Security Department: Information Security works as an independent unit. Their responsibilities include, but are not limited to:

1. Laying down policies, procedures and guidelines for information protection
2. Weaving information security into existing business processes
3. Creating information security awareness among employees
4. Deploying and maintaining security equipment and tools

1.8 Perspective of Information Security

As organizations became more and more dependent on IT, information became a key success criterion. Most management agrees that information is critical enough to justify reasonable investment in its protection. Streamlining information security with key business goals comes under the umbrella of the Information Security department. The Information Security department cannot work in silos, and collaboration with each and every business function is important to achieve business goals. However, the Information Security department is viewed as a counterproductive department by all other departments. The Information Security department is seen as:

■ Pointing fingers at my hard work
■ Identifying problems which are negligible
■ Adding more time to already complex processes
■ Delaying projects
■ Making life difficult by adding so many unnecessary access controls etc.

This is only because people feel information security is the responsibility of the Information Security department. It's like not locking your flat because there's a guard at the building entrance. This approach is not healthy and will not help in executing a successful security plan.

A successful security plan can be implemented only if everyone understands that security is their responsibility. Just like people take responsibility for company-provided laptops/cars/mobile devices, in the same way the responsibility for securing the information handled on their laptops or mobile devices, the information created or accessed by them, the network devices they own or handle, the servers they own or handle, the traffic that passes through their devices etc. also lies with them.

The Information Security department should be seen as facilitators who can provide guidance through policies and procedures to ensure information protection. They are the go-to people when teams have doubts about how they should be protecting their information. Information Security teams are supposed to help everyone in protecting their assets. Responsibility for information protection still lies with the Owner, and flows down to the custodian.

Chapter 2 Threat Paradigm

2.1 Threat Paradigm

Now that we have understood some core concepts of Security, let’s understand who and what our enemies are and who we are fighting against in this security battle.

It’s always important to gauge your enemy’s strength before deploying your defenses. If your enemy is weak, you don’t need strong defenses but if your enemy is strong, you need to invest in strong countermeasures.

In this chapter, we will learn about our enemies, how they attack and what could be the impact of their attack on business. Although we will cover all types of threats; our focus will remain on threats specific to IT and Network Infrastructure.

Let's look into some basic definitions before we delve into the various threats organizations are facing today.

Threat

Threats are basically the enemies we are fighting. A threat is anything (person or thing) which can cause an adverse impact on the organization's business and business processes. The impact can be a loss of integrity, confidentiality and/or availability of information.

Some examples of threats to business are malware, earthquakes, hackers etc.

Vulnerability

Vulnerability is a weakness which a threat can exploit to cause harm to the organization. It's an inherent weakness that a threat can utilize to cause a loss of Confidentiality, Integrity & Availability (CIA). We can understand vulnerability with the following analogy: an earthquake is a threat, and if a building does not have earthquake-resistant measures, then that is the vulnerability the earthquake can exploit. From a network perspective, there can be the following types of vulnerabilities:

■ Network misconfiguration
■ Network design flaws
■ Inherent protocol weaknesses
■ Inherent encryption protocol/algorithm weaknesses
■ Operating system weaknesses
■ Access control misconfigurations etc.

Risk

Risk is the potential of a threat to exploit a vulnerability, along with its probability. It is basically the product of the impact on the organization of the threat exploiting the vulnerability and the probability of the threat exploiting that vulnerability.

There may be a threat which can exploit a vulnerability and cause great damage to the organization, but if the probability of this scenario is low then the risk is low as well. Continuing with our analogy: the building may be vulnerable to earthquakes, but if it's not in a seismic zone and there have been no prior instances of an earthquake in the last 100 years, then the probability of the threat materializing reduces to a great extent, and hence the risk to the organization is not much.
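
A minimal, hedged sketch of this risk-as-a-product idea in Python; the 1-to-5 rating scales and the example values are hypothetical illustrations, not a prescribed methodology.

# Minimal sketch: risk expressed as the product of impact and probability.
# The 1-5 scales and the example ratings below are hypothetical illustrations.
def risk_score(impact, probability):
    """Both inputs rated 1 (very low) to 5 (very high); a higher score means higher risk."""
    return impact * probability

# Earthquake analogy from the text: high impact, but very low probability
# because the building is outside the seismic zone.
earthquake = risk_score(impact=5, probability=1)     # 5  -> low overall risk

# A misconfigured internet-facing firewall: high impact and quite likely to be found.
open_firewall = risk_score(impact=5, probability=4)  # 20 -> high overall risk

print(earthquake, open_firewall)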

Attack

When a threat is activated or triggered to exploit a vulnerability, it is called an attack.

Attacks can be either active or passive. A passive attack is a slow attack which runs in the background without creating any noise for detection. It works in stealth mode without creating any anomaly which could alert the monitoring system to an attack in progress. Such attacks normally sniff packets on the network, record sensitive information and send it to the attacker to be used in active attacks. Active attacks, as the name implies, are direct attacks on the systems and network. These attacks generate noise and can be detected by monitoring systems. E.g. active password cracking, denial of service attacks, system enumeration etc.

We will cover more about threats, vulnerabilities and risks in upcoming chapters.

2.2 Attackers or Threat agents

According to Wikipedia, "Threat Agent or Actor is used to indicate an individual or group that can manifest a threat."

It is imperative to understand who would want to attack the organization and steal or manipulate the information within. They can be competitors, terrorists, identity thieves or hackers. The word hacker is used liberally; however, there are different types of hackers.

2.3 Threat Motivation

Unless a threat is a natural calamity, it will have some motivation or the other. Thieves plan before executing their heist to get the maximum benefit out of it. The benefit they are looking for is mostly financial. Hackers also work based on motivation. The motivation can be financial gain, reputational damage (mostly of competitors), theft of information, a social cause (for hacktivists), state-sponsored espionage/sabotage, vengeance (mostly for insiders) etc. When identifying threats for an organization or a department, threat motivation has to be taken into consideration. Threat motivation for a bank will be very different from that for a government electricity board or an e-commerce start-up.

2.4 Threat Impact

To understand the risk of a threat, we need to know what damage it can cause to the organization in case it materializes. Organizations are facing various threats with different levels of impact; however, the damage can be classified into three major categories:

■ Loss of Availability

Loss of availability means loss of service or loss of information to valid users. This could be caused by a natural calamity, deliberate hacker sabotage, theft or deletion of information, or denial of service attacks. E.g. a bank's ATMs/branches being unavailable, a website being unavailable due to a hacker's sabotage etc.

■ Loss of Confidentiality

Loss of confidentiality means that sensitive organizational information is leaked or stolen and is available outside the organization. It may be available to the public, competitors or a handful of hackers. Hackers may try to blackmail the organization, or competitors may make use of this information to harm the organization. If the information is available in the public domain, then anyone can use it as they want. The organization may also face legal/reputational/financial damage depending on the information leaked.

For example, Theft of social security numbers or Aadhar card details of customers

Theft of credit card numbers along with CVV

The RSA token algorithm was stolen and used by attackers to attack various banks. Ultimately RSA replaced the tokens worldwide, causing a huge financial impact to the organization.

Dating site Ashley Madison's users' information was published on the internet. It not only caused major rifts in the personal and social lives of the users, but extortionists also scammed these people, extorting around $200 worth of bitcoin from them.

■ Loss of Integrity

Loss of integrity implies the validity and credibility of the information in question can no longer be trusted as it has been modified by an attacker. A company’s digital records can be remotely manipulated to fit the needs of the attacker.

E.g. Diginotar (a Dutch certificate authority) had a security breach resulting in the issuing of fake PKI certificates. Within a couple of months, the company declared bankruptcy and closed down.

Loss of Confidentiality, Integrity & Availability (CIA) can result in the following direct losses to the organization:

■ Reputation Loss

An organization’s reputation is built through years of hard work and an attack can tarnish that image. Once the credibility of the organization is damaged it is sometimes very hard to recover. This is especially true in the case of financial and security organizations.

E.g. if a security company is hacked, people will be wary of its services. HBGary, a security auditing firm, was hacked in 2011. Its CEO's emails and other information were publicly posted on the internet. In the aftermath, the CEO had to resign and potential customers of the firm had second thoughts about working with it. The company survived this attack, but with permanent scars.

■ Legal Disputes

Loss of information can also lead to legal issues for the organization. In case customers' or other partners' information has been violated, they can take legal action against the organization for breach of trust.

■ Financial Loss

Organizations may have to endure financial losses in the form of penalties or compensation. Organizations can also lose financially in terms of the cancellation of upcoming deals or a decrease in share prices.

E.g. RSA had to replace millions of tokens worldwide after a security breach.

2.5 Types of Attacks

Organizations face a plethora of threats. These include Trojans, viruses, worms, social engineering and DoS, amongst many others.

New threats seem to be popping up every day.

In the following section we will discuss the various categories of attacks that organizations have to protect themselves from, and their impact dimensions.

A few attacks can cross over from generic to targeted and vice versa. E.g. malware can be utilized as a generic threat and can also be used in targeted attacks.

These categories do not have fixed boundaries. A lot of attacks can be combined to form one single targeted attack.

2.5.1 Generic Attacks

Generic attacks do not target a specific demographic, individual or industry. Generic threat agents always exist and anyone who does not have security protection on their computers can be a victim.

Let’s look at some common generic attacks organizations and individuals are facing today:

2.5.1.1 Natural Disasters

Natural disasters can occur anywhere, anytime, and are a major threat to datacenters and the people working there. They mostly cause loss of availability. Typical natural disasters are:

1. Flooding: Flooding is especially a concern in flood-prone areas. Datacenters in such locations should be properly elevated, amongst other precautions; otherwise a flood can cause irreparable damage.
2. Tornadoes/Hurricanes: Tornadoes can cause power outages and can disrupt communication channels. They may sometimes damage the buildings as well.

3. Earthquake: Earthquakes can cause massive damage to buildings and lead to loss of life and property. An earthquake-hit facility can be out of commission for a long time.
4. Lightning: Lightning storms cause electrical surges and can impact all kinds of electronic equipment.
5. Volcanic Eruptions: Volcanic eruptions can cause massive damage to buildings and people. Ash clouds can leave an entire area unusable.

2.5.1.2 Environmental Threats

Environmental threats are conditions in the datacenter or office environment which can interrupt or damage servers and the network. Typical environmental threats are:
1. Temperature and Humidity: Servers and other network equipment need optimal temperature and humidity levels to function. If these levels are not maintained, they can result in major outages.
2. Power Outage: Network entities require a clean, steady flow of electricity. Frequent fluctuations and surges can cause permanent damage to internal circuits.
3. Fire and Smoke: Fire and smoke can cause irreparable damage to data facilities. Fire is a threat to both equipment and personnel.

4. Water Damage: Water damages all electronic equipment by causing short circuits. Flooding can be caused by old or damaged water pipelines, drainage lines or improper waterproofing. When designing or selecting a datacenter, all these factors should also be taken care of.

5. Chemical, Radiological or Biological Threats: These kinds of threats can cause massive damage to the people working in facilities. Radiological and chemical discharges can cause damage to equipment as well.

2.5.1.3 Malware

Malware (Malicious + Software) is a program that is intended to harm the computer, network or data. Earlier, malware creators were mostly technical students who created malware just to test their capabilities; however, now the motivation is moving from curiosity to financial gain. Malware is created to extort users, extract confidential information to steal money, create a bot army etc. This section will cover the various types of malware and the kind of threat they pose to the organization.

2.5.1.3.1 Viruses

Viruses are pieces of malicious code, usually piggybacking on other software, designed to execute a specific function for the attacker on the victim's machine. Their activation can be time-based or event-based depending on the attacker's motives (they generally require end-user activation). They can self-replicate and infect other machines on the network, and also mutate to avoid anti-malware software.

Based on the sophistication of the attacker, viruses can perform a variety of attacks on the victim's machine including data theft, deletion and corruption, and may even replicate so rapidly that they perform a kind of DoS attack, denying resources to end users.

Viruses either attach themselves to certain files or attach themselves to memory locations in the boot sector of a hard drive.

2.5.1.3.2 Worms

Just like their biological counterparts, worms are self-replicating pieces of malicious code designed to exploit known vulnerabilities and jam or slow down the network. Unlike viruses, they don't require end-user activation.

Once a host on a network is infected, worms are programmed to attempt to infect other hosts on the same network independently by exploiting vulnerabilities in the network. Worms generally have the following components:

Vulnerability to exploit
A worm begins its cycle by exploiting a known vulnerability on a system using some exploit mechanism (exe file, Trojan horse etc.).

Propagation mechanism

Once it has successfully infected a host, it is programmed to replicate itself and to find new potential targets to infect, thus starting the cycle again without human intervention.

Payload
The payload is the malicious code that executes the action the attacker originally designed the worm for. An example would be creating a back door in the infected system.

2.5.1.3.3 Spyware

Spyware, as its name suggests, is simply used to spy on infected users. Depending on what the spyware was created for, it is typically used to collect user information and periodically transmit it back to the attacker. Less malicious forms of spyware are used by some advertising agencies to gain demographic information about potential customers.

However, fully malicious spyware is used to perform identity theft and credential theft. It is used for financial gain and to collect user information, ranging anywhere from monitoring web browsing activity to monitoring keystrokes, the names of launched applications, messages and emails sent or received, and even audio and video surveillance without the knowledge of the victim.

2.5.1.3.4 Adware

Adware is any software used to push ads to users, with or without their consent, usually in the form of pop-up advertisements. Adware, like spyware, collects users' browsing information and pushes targeted ads based on their activities.

2.5.1.3.5 Scareware

Scareware is used to scare and scam unsuspecting users. It usually tries to convince users that their systems are infected and provides links to purchase fake anti-virus software. E.g. if you were an Apple iPhone 6 user, a pop-up message would say that your Apple iPhone 6 is infected.

2.5.1.3.6 Trojan horses

A Trojan horse is a piece of malicious code that attaches itself to software which is usually completely legitimate. Once attached, Trojans provide attackers with backdoors into an otherwise secured environment. It is important to note that any software can carry a Trojan horse attached to it, and all malicious activities can be hidden under the garb of legitimate software.

Common uses of Trojan horses are listed below:

■ Trojan horses are generally used by attackers to create zombies to be used for DDOS attacks.

■ Providing remote access to the attacker. ■ Key logging.

■ Trojans can also be used to target the security apparatus and disable firewalls and anti-virus software.

2.5.1.3.7 Ransomware

Ransomware is dangerous malware that encrypts the hard disk and completely takes over an infected system. It demands a payment as ransom to provide the decryption keys. Payment is often demanded in common cryptocurrencies like Bitcoin, Ripple etc. Ransomware attacks are on the rise, and recently we have seen WannaCry, NotPetya and CryptoLocker ransomware wreaking havoc across the world. To avoid being a victim of ransomware, we should avoid risky behavior online and always maintain a current backup of our data. Also, anti-malware software should be regularly updated.

2.5.1.3.8 Rootkits

Rootkits have been in play since the early 1990s. Traditionally, a rootkit is used for privilege escalation. In other words, once an attacker has gained user-level access to a system, he installs the rootkit to escalate his privileges to root or admin level. Root-level access to a system increases the risk of other systems in the network being infected as well. Attackers these days have designed very sophisticated rootkits which make detection extremely difficult. Earlier rootkits used to run in the background as standalone processes and hence could be detected by looking at memory processes and monitoring outbound communication.

The newer, more sophisticated rootkits affect the system at kernel level and are much harder to detect as they mask their activities. Rootkits have been known to encrypt their outbound communication and piggyback their services on existing ports (without interrupting normal traffic on the port) so that anomalies can't be detected by anti-malware software. A rootkit might consist of programs that view traffic and keystrokes, alter existing files to escape detection, or create a back door on the system. Removing rootkits can be tricky, as along with the rootkit, the malware the rootkit is using should also be removed. The only definitive way to get rid of a rootkit is to completely format the system.

2.5.1.3.9 Fileless Malware

A recent malware trend is fileless malware. As there are no corrupted files in the system, it is extremely difficult for signature-based anti-malware systems to detect these infections. Fileless malware uses Windows system programs like PowerShell and Windows Management Instrumentation (WMI) to run commands and infect other systems. Because these programs are trusted Windows programs, their malicious commands are ignored by most security tools. PowerShell and WMI are both powerful tools for carrying out administrative tasks. Once a machine is infected, the malware can rapidly move laterally and infect other machines, as both tools (PowerShell and WMI) can be invoked remotely.

2.5.1.3.10 Cryptocurrency-mining malware

The newest trend in malware is moving towards mining cryptocurrencies. The boom in cryptocurrencies promises a much greater Return on Investment (ROI) than even ransomware, and unsurprisingly attackers are moving to exploit it. Cryptocurrency mining requires bandwidth and processing power. Anyone can mine cryptocurrencies using their computer's bandwidth and processing power. Fileless malware infects multiple computers (personal and corporate) and starts mining cryptocurrencies on those computers. This way the attacker benefits without even using his own processing power and bandwidth. Benefits can grow exponentially if a whole bot army is given the job of mining cryptocurrencies.

2.5.2 Targeted Attacks

Attacks that are aimed at a particular organization or individual for a competitive edge or financial or political gain are termed targeted attacks. They can be either physical or technical.

2.5.2.1 Physical Threats

Attackers can gain unauthorized physical access to organizations through methods like tailgating or social engineering. Once an attacker has accessed a restricted area such as the datacenter or server room, or even an internal office area, it can lead to a whole lot of problems. The attacker in this case basically has free rein and can cause maximum damage. The threats include theft, vandalism, misuse, wire-tapping etc.

2.5.2.2 Technical Threats

Technical threats are carried out by professional or novice hackers for purposes of revenge or financial, competitive or political gain. Please note that an attack can be a combined attack as well, i.e. both physical and technical. An attacker may need physical access to the network to accomplish a part of his attack.

Technical threats can be of different types:
■ Hacking attacks
■ Denial of Service attacks
■ Wireless Network attacks
■ Specialized attacks

Let's look at each attack type in detail. It's imperative for network professionals to understand all types of technical attacks in order to prevent them.

In this chapter, we are covering only different types of threats and attacks you should be aware of. Controls to mitigate these threats will be covered in coming chapters.

2.5.2.2.1 Hacking Attacks The Attacker’s Process: Attackers follow a fixed methodology. To beat a hacker, you have to think like one, so it’s important to understand the methodology. The steps a hacker follows can be broadly divided into five phases, which include pre-attack and attack phases: 1. Performing Reconnaissance. 2. Scanning and enumeration.

3. Gaining access. 4. Privilege Escalation. 5. Covering tracks and placing backdoors. Let’s look at each of these phases in more detail so that you better understand the steps.

Let's understand the types of attacks an organization can face during each of the above attack phases: 2.5.2.2.1.1 Reconnaissance attacks Reconnaissance is the first step in the hacking process. In this phase attackers collect information about the organization which can be used later in the attack. Information gathering is done by going through the organization's website, social media sites, job postings etc. The type of information attackers look for typically includes website DNS records, mail server IPs, employee information, senior management details, and the technologies/applications used within the organization.

The information gathering above is done passively, without involving employees or the organization at all. Another form of reconnaissance is slightly active in nature, where social engineering is used to exploit human tendencies and natural instincts. In a social engineering attack, an attacker tries to convince an employee either to perform an unauthorized action or to reveal confidential information. It is interesting to note that most real-world attacks are either completely or partly based on social engineering. Various reconnaissance and social engineering attacks are covered below: Shoulder surfing Shoulder surfing, as the name suggests, is peeping over someone's shoulder to gain access to information that is not meant for one's eyes. This could allow the attacker to learn passwords or see sensitive data they are not supposed to see. Shoulder surfing is often countered by separating worker groups of different sensitivity levels behind locked doors. Dumpster diving Dumpster diving is done as part of reconnaissance to obtain valuable information about the target organization. The attacker digs through the organization's trash for any valuable information available, which could lead to a more effective social engineering attack.

All documents must be shredded or incinerated before being trashed, and no storage media should be dumped in regular trash. Phishing Phishing is an attack which aims to gather sensitive user information such as usernames, passwords, credit card details or other personal information by impersonating a trustworthy entity like a bank or a merchant. Phishing is the electronic equivalent of face-to-face social engineering and is generally carried out via email and fake duplicate websites.

Spear Phishing Spear phishing is a targeted form of phishing wherein the attack vectors are determined based on the targeted user. Unlike ordinary phishing, the messages are crafted for one or a few individuals rather than being a blind broadcast. An example of spear phishing:

A hacker may sit in a café offering free Wi-Fi close to the organization he wants to target and start recording traffic on the network to identify packets of interest. An employee of the target organization visits the café and connects his computer or mobile to the café's Wi-Fi. The attacker learns the employee's areas of interest by capturing his search strings.

Say the employee was browsing for used car deals; the attacker can quickly craft a phishing email with fake used-car offers and send it to his email id. When the employee clicks on the link, the attacker can direct him to malicious websites or install Trojans to gain entry to the environment. Whaling

Whaling is also a targeted form of phishing, but it specifically targets high-value individuals such as CEOs, CFOs or celebrities. Messages tailored to their needs and interests are sent to them. These people are targeted for maximum benefit, either by stealing financial information or by blackmailing them for money. Vishing Vishing is a form of phishing carried out over the voice or VoIP (Voice over IP) network. It is basically a social engineering attack done via phone rather than face to face. Vishing is difficult to trace if done over a VoIP network or through international calls. Tailgating

Tailgating refers to gaining access to restricted areas by sneaking in behind an authorized person after they have entered their credentials. It is also known as piggybacking. Impersonation Impersonation refers to an imposter claiming to be someone else. An attacker assumes the identity of an authorized person to use his power and authority. Impersonation is generally part of social engineering. Pretexting is a form of impersonation wherein the attacker describes a false event and uses it as a pretext for the social engineering attack. Hoaxes A hoax, as the name suggests, is the supply of false information to the target to convince them to do something which will reduce their IT security. It is a form of social engineering.

It generally takes the form of an email warning of some imminent threat and asking the target to perform certain tasks to protect themselves, while actually goading them into compromising their security. The victim may be asked to change configuration settings or delete essential configuration/system files.


2.5.2.2.1.2 Scanning and Enumeration attacks

Scanning and enumeration attacks are used to gather more information and build an overall picture of the organization's network and defenses. The purpose of these attacks is to identify the various ports and services running on the organization's network which can be used as entry points into the organization. Vulnerabilities present in various systems are also identified in this

phase. In the medieval era, when an army attacked a strong fort, it would first identify all entry points into the castle and determine the strength of the rival army. Scanning and enumeration attacks are roughly the same thing in the digital world.

Various scanning and enumeration attacks are covered below: Ping Sweep Ping sweep is a basic network scanning mechanism to figure out live hosts in a given IP address range. ICMP echo requests are sent to all specified addresses and an echo reply from the host implies that the host is active. Port Scan

Once an attacker has figured out the live hosts on the network, the next step is to determine which ports are active on those hosts and to enumerate the services running on them. Probes look for DNS servers, email servers, gateway IPs and services such as HTTP, Telnet and FTP. Operating system fingerprinting is also performed at this stage to identify the different types of systems in the network. The purpose of this attack is to map the network and plan the next stage of the attack.
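The port scanning idea can be sketched in a few lines of Python. This is only an illustrative, minimal TCP connect scan; the target address (a documentation-range IP) and the port list are placeholders, and real attackers use much faster and stealthier tools such as Nmap.

import socket

# Hypothetical target and a handful of commonly probed ports (placeholders).
target = "192.0.2.10"
ports = [21, 22, 23, 25, 53, 80, 443, 3389]

for port in ports:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1)
    # connect_ex() returns 0 when the TCP handshake completes, i.e. the port is open.
    if s.connect_ex((target, port)) == 0:
        print(f"{target}:{port} is open")
    s.close()

Each open port found this way tells the attacker which service to fingerprint and attack next.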

Sniffing

Packet sniffers are applications which can tap into and analyze the packet flow in a network. An attacker uses such applications to intercept network traffic and sniff out useful information for subsequent attacks. If an attacker has successfully tapped into the network, he has access to a great deal of valuable information and can map out the entire network by studying the traffic flows. If Telnet or rlogin is used to access UNIX systems over the network, even usernames and passwords pass in clear text.
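As a rough illustration, the Scapy library (assuming it is installed and the script is run with administrator/root rights) can capture and summarize traffic in a couple of lines; the interface name below is a placeholder.

from scapy.all import sniff

# Capture ten packets from a hypothetical interface and print a one-line summary of each.
packets = sniff(iface="eth0", count=10)
for pkt in packets:
    print(pkt.summary())

This is essentially what a packet sniffer does; commercial and open-source tools simply add filtering, decoding and storage on top of the same capture step.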

Spoofing Spoofing refers to the masking or falsification of data. In terms of network traffic, spoofing means changing the source address of a packet so as to hide its origin. By spoofing the source of the packet, the attacker also redirects the responses to the spoofed address.

Spoofing is also common in phishing attacks via email. There are many forms of spoofing and it can be used to redirect response packets, bypass traffic filters, perform social engineering etc.

Vulnerability Assessment Vulnerability assessment scans identify weaknesses in the systems which can be later exploited. Vulnerability assessment identifies the

versions of services running on the system, patch levels and overall weaknesses in the systems. Tools like Nessus/Nexpose can perform deep scanning for vulnerabilities. They can also inform about exploits availability for identified vulnerabilities.

2.5.2.2.1.3 Gaining Access Attacks

This phase uses the information collected in earlier phases to exploit the systems. Attackers run exploits against the identified vulnerabilities or introduce rogue elements to capture useful information. Authentication attacks are also performed at this stage to crack passwords and gain access to different users' accounts. Various attacks used to gain access to the network are covered below: Man in the Middle (MiTM)

In a MiTM attack, the attacker inserts himself in the middle of a transaction, either between a client and a server or within an internal network. By doing so, the attacker can intercept the information being exchanged between the client and the server. For example, an attacker may place himself between a browser and a web server and gain access to any information entered by the client, such as passwords and credit card details. This kind of attack is called web spoofing and is carried out through DNS or hyperlink spoofing.

Another example would be an attacker intercepting traffic in a network and fooling both parties into believing that they are communicating with each other. This type of attack is possible due to the connection-oriented nature of TCP, where both parties go through a three-way handshake to establish a connection.

The attacker appears to be the client to the server and the server to the client. The attacker can choose to alter data or merely pass it along. This attack is common in Telnet and wireless networks.

■ TCP Hijacking TCP hijacking is a MiTM attack in which an attacker hijacks an existing TCP connection between trusted parties. Broadly, the attacker monitors the session to learn the TCP sequence numbers in use, desynchronizes or silences one of the parties (for example by flooding it), and then injects spoofed packets with the expected sequence numbers so that the other party continues the session with the attacker instead of its trusted peer.

Domain Name System (DNS) Poisoning

DNS is used to translate internet domain names into computer-understandable IP addresses; for users it is easier to remember Microsoft.com than 189.64.98.134. DNS servers maintain a DNS cache, or domain name table, which stores domain names and their corresponding IP addresses for quick access. Domain name translation happens at various levels, and the table is also maintained at various levels: host level, organization level, ISP level and so on.

For example, when somebody accesses Microsoft.com, the computer first checks its own DNS table for an entry; if it finds nothing, it queries the organization's DNS server. If the organization's server also has no entry, it queries the ISP's DNS. Once the IP address is received from the ISP, it is cached in the organization's DNS server and in the computer's DNS cache for direct access next time. Furthermore, DNS servers keep exchanging information with other DNS servers so that more and more domain names are cached for quicker access. DNS poisoning is done by exploiting vulnerabilities in the DNS implementation to modify the DNS cache: the IP address recorded for Microsoft.com can be changed to point to a malicious website. The impact of this attack depends on the level at which the DNS cache was poisoned. If it was done at ISP level, it will flow down to all users of that ISP as well as to any DNS server exchanging information with the ISP's DNS server.
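The translation step that DNS poisoning abuses can be seen with Python's standard library; the resolver consults the local hosts file and the configured DNS servers, which is exactly the chain an attacker tries to poison. The domain name is purely illustrative.

import socket

# Resolve a name the same way applications do: hosts file first, then the configured DNS servers.
print(socket.gethostbyname("example.com"))  # prints whatever IP address the resolver returns

If any cache along that chain has been poisoned, the call above happily returns the attacker's IP address instead of the real one.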

Address Resolution Protocol (ARP) poisoning The ARP protocol works at the OSI Data Link Layer (Layer 2) and functions as a translator of IP addresses to MAC addresses. Each device maintains an ARP table of IP addresses and their corresponding MAC addresses, called the ARP cache, for effective delivery of packets. An attacker can send spoofed ARP reply packets to modify a victim's ARP cache, and all the computers connected to the same switch may also save the modified MAC address in their respective ARP caches. Three types of attack can be conducted once ARP is poisoned:

■ Sniffing: The attacker will most likely try to mimic the gateway router. He can send out gratuitous ARP packets to all hosts in the network, poisoning their caches. The hosts will now forward all traffic meant for the gateway to the attacker, who can sniff the traffic traversing through him. If the attacker wants to remain undetected, he can forward the packets on to the actual gateway so that the victims remain unaware. ■ MITM: In the above setting, if the attacker decides to alter packets before forwarding them to the actual gateway, it becomes a Man in the Middle attack.

■ DOS: The attacker may also decide to drop some or all packets, effectively causing a DOS. Likewise, if the attacker supplies a fake MAC address for the gateway IP address, communication will be forwarded to that non-existent address, effectively blocking all communication. CAM Overflow/MAC Flooding A switch is basically a smart hub. Instead of forwarding frames through all ports, as a traditional hub does, a switch maintains a CAM (Content Addressable Memory) table storing MAC addresses associated with physical ports along with VLAN information. The switch populates this table in real time as traffic flows through it, learning the source MAC addresses present in frames and storing them against physical ports.

Since the switch has a finite memory there is a limit to how many entries can be made in a CAM table. If an attacker has connected to a single or multiple ports of the switch he can generate bogus frames containing fake MAC addresses, flooding the CAM table of the switch. Once the limit is reached the switch reverts back to behaving like a hub and starts forwarding all future frames through all ports. The attacker can then sniff all traffic passing through the switch and perform a variety of MITM and DOS attacks. In case of VLANs being used, the attacker may sniff the traffic of a single VLAN but still DOS the entire switch as the CAM table is common for all VLANs. Hence this attack compromises the confidentiality and availability of services to the end user.

Most switches offer port security features which limit the number of MAC addresses learnt on a physical port to a required number while still learning them dynamically. Static MAC address mappings can also be applied on most switches; in this case the network admin has to manually configure users' MAC addresses against ports. This method is considered more secure but increases the overhead for the admin. VLAN Hopping Attack

To understand VLAN hopping, it is important to understand the concept of VLANs and 802.1q encapsulation. VLANs, or virtual LANs, are a means of segregating traffic in a switched environment. In a purely switched network, two parties must be in the same VLAN to communicate; inter-VLAN communication can only take place with the help of a Layer 3 device. In large networks deploying multiple switches, the links between switches are configured as trunk ports, which allow traffic of multiple VLANs to pass through them.

In this type of attack, the attacker double-tags the packets over an 802.1q trunked interface. The first tag is that of the native VLAN of the switch the attacker is connected to. The second

fake tag will be of the target’s VLAN. The second switch on the packet’s path will read this false tag and forward the packet to the victim. Please note that this is a unidirectional attack.

To avoid switch spoofing attacks, configure all non-trunk ports to access mode; this prevents an attacker from forming an undesirable trunk link. To avoid double-tagging attacks, configure all user access VLANs separately and do not use the native VLAN at all. The native VLAN id can be changed to an unused VLAN id (the default native VLAN id is 1). Spanning Tree Attack

The goal of the Spanning Tree Protocol (STP) is to avoid loops over redundant links in a switched environment. Layer 3 packets carry a TTL field which is decremented every time a packet traverses a router, which helps avoid loops at Layer 3. There is no inherent mechanism at Layer 2 to avoid loops, which can lead to broadcast storms.

In an STP environment all switches negotiate a root bridge i.e. a root switch based on priority. This election can be influenced by the administrator. All other switches pick a root port which is closest to the root bridge in terms of cost and put all other redundant links in blocked state. The election process is done by

exchanging BPDU (Bridge Protocol data unit) packets amongst switches. If the root bridge goes down the process starts again and a new root bridge is elected and the STP topology reconverges.

In an STP attack the attacker has a rogue switch in the network or a machine mimicking a switch with a negotiated trunk with a live switch. In this scenario if the attacker advertises a superior BPDU with priority 1 the rogue switch can become the root bridge giving the attacker access to all traffic in the network. Also the attacker can keep the entire network in a constant state of reelection by using a minimal max age for the crafted packets and not sending BPDUs within that time. Hence the attacker can use STP to sniff user data and cause a Denial of Service attack.

For this type of attack to be possible the switch must accept BPDUs from the port the attacker is connected to. Hence STP should be disabled on all access ports.

VLAN Trunking Protocol (VTP) Attack

The VLAN Trunking Protocol is used in large networks to propagate VLAN configuration across all switches in a VTP domain via trunk links. If an attacker has an active trunk link with a live switch and the network is running VTP, the consequences are grave. By default VTP has no authentication, so the attacker can generate VTP packets and possibly delete the entire VLAN configuration in the network. The repercussions of this would be catastrophic.

VTP should be used with MD5 authentication to ensure packet integrity. Pharming

Attackers host replicas of well-known websites, and if a user is not careful he may end up providing his credentials to the malicious copy. Pharming attacks are used to bring users to these malicious websites.

Pharming is done either by changing the hosts file of the user's computer, adding an entry that maps the domain name to a malicious IP address, or by exploiting vulnerabilities in a DNS server to direct users to the malicious website instead of the original one.

Because the malicious website looks like the original, the user may give out his credentials on it. This attack can target multiple users at a time if it uses DNS poisoning to modify the DNS name table and send users to the fake website instead of the original one. Routing Information Protocol (RIP) Attack

RIP allows routers to communicate with each other to determine the best routes for sending packets; based on that information they keep updating their routing tables. An attacker can send a malicious packet claiming that his device offers the shortest path available. Based on that packet, the router will update its routing table and send data to the attacker's machine.

An attacker can also spoof the address of the device to which data is normally sent based on the routing table, so that data is sent to the attacker's machine instead of the original router. Password Attacks

Passwords are the most common method of authentication worldwide. Their simplicity also makes them the target of numerous attacks. Guessing someone's password based on his birthdate, spouse's name, kid's name, pet, car number plate or phone number is the simplest form of attack, and it still works today even though people are relatively more aware of the need for strong passwords.

Most websites and applications require authentication, but it is extremely difficult to remember each and every password, so users tend to repeat the same password everywhere. In a corporate scenario, a user has to remember passwords for the various applications he deals with and tends to keep the same or a similar password for all of them. This is risky because if one password is compromised, all the other applications also become vulnerable.

To avoid these situations, organizations use Single Sign-On, where once a user logs in he is given access to all other applications without logging in again. This convenience comes with a risk: if the master password is compromised, the attacker gains access to every application the user can reach. Let's see what kinds of attacks a password is susceptible to:

1. Dictionary attack: Users like to use real words as passwords. In a dictionary attack, the attacker uses a custom word list including the most common passwords, dictionary words, user/organization specific words etc. This word list is then run against the password file obtained from the victim's system (a minimal sketch follows this list). If the password is a real word, it can easily be cracked with this attack. 2. Brute Force: The statement 'all passwords are crackable' is not an exaggeration. Passwords are ultimately combinations of the letters, numbers and symbols available on the keyboard. These combinations are finite, and hence, given enough time and computing power, any password can be cracked. Security strives to make password cracking so difficult and time-consuming that the attacker

loses interest and moves on to easier targets. Brute force attempts to crack passwords by trying all permutations and combinations. This is a time- and resource-intensive process; however, with the computational power available today, simple passwords can easily be cracked in a matter of hours or a couple of days.

3. Hybrid: A hybrid attack is a combination of the dictionary and brute force attacks. It takes a custom dictionary as a base and adds different prefixes/suffixes to those words (e.g. rohanxyz123). It also tries common substitutions people use, like @ for a, 3 for e etc.

4. Rainbow Tables: Passwords are stored either as hashes or encrypted with different algorithms. The attacks above identify the hashing/encryption algorithm used, convert the word list with the same algorithm and then match it against the password file; most of the time goes into hashing the word list. Rainbow tables ease this job: pre-computed hashes of huge numbers of possible words and passwords are available to anyone interested in cracking passwords. Although these tables are large, they make password cracking a matter of hours; many passwords shorter than 14 characters can be cracked within hours through this attack. 5. Replay: Rather than attacking the password file, an attacker with access to the network can capture authentication traffic and replay it to the server to gain unauthorized access.
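A minimal sketch of the dictionary attack is shown below, assuming an unsalted MD5 password hash; the stolen hash and the word list are made up for illustration. It also shows why hashing the word list is the expensive step that rainbow tables pre-compute.

import hashlib

# Hypothetical stolen hash of a user's password (unsalted MD5, for illustration only).
stolen_hash = hashlib.md5(b"summer2018").hexdigest()

# A tiny custom word list; real attacks use lists with millions of entries.
wordlist = ["password", "welcome1", "summer2018", "rohanxyz123"]

for word in wordlist:
    # Hash each candidate with the same algorithm and compare against the stolen hash.
    if hashlib.md5(word.encode()).hexdigest() == stolen_hash:
        print("Password cracked:", word)
        break

Salting and deliberately slow password-hashing algorithms exist precisely to make this loop expensive and pre-computed tables useless.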

2.5.2.2.1.4 Privilege escalation attacks

Once an attacker has gained entry inside the network, he can try to escalate his privileges by performing privilege escalation attacks. There are various types of privilege escalation attacks that an attacker can use.

Buffer overflow attacks were commonly used on Windows systems to run arbitrary code with system privileges. Privilege escalation attacks are also common against web applications: if an application does not handle authentication and authorization well, privilege escalation is possible. There are two types of privilege escalation: Lateral privilege escalation: the attacker accesses a user account or feature with privileges similar to those of the account he currently controls. For example, an attacker who has compromised a user's account on a banking site is able to download the account statement of another customer simply by changing the account number. Vertical privilege escalation: the attacker accesses a user account or feature with more privileges than the account he currently controls. For example, suppose the user management feature of a website is only meant for administrative accounts; the feature is not visible to a normal user account, yet entering the feature's direct URL opens the page for the normal user. 2.5.2.2.1.5 Cover the Tracks

Once the goal of the hack is achieved, attackers cover their tracks to maintain their anonymity and to ensure that the attack investigation does not lead back to them. They do so by deleting logs and data.

They might also upload some backdoors to gain entry at a future instance. Some attackers even harden the systems after uploading backdoors so that other hackers can’t compromise the system and only they maintain exclusive control via backdoor.

2.5.2.2.2 Denial of Service Attacks Unlike the hacking attacks covered in the previous section, DOS attacks are intended to crash or bring down a system. They do not require sophisticated hacking skills to succeed; it is basically a game of packet volume. If more packets are sent to a system than it is capable of handling, it may crash. The purpose of DOS attacks is to deny access to valid users, and the loss of availability may ultimately translate into loss of reputation, money or customers for the organization. Denial of service is often used against government websites to make them unavailable to legitimate users, and DOS attacks are also mounted by competitors to bring down a major webcast or product launch event.

With time, systems have incorporated features to handle DOS attacks, but new variants are emerging at the same pace. The Distributed Denial of Service (DDOS) attack is one such variant: rather than being conducted from one source, the attack is launched simultaneously from multiple sources against a single victim. These sources may be unaware that they are being used in an attack; such hosts are called zombies or bots. The zombie software lies dormant on users' computers and waits for the master's instructions, on which all the zombies attack the victim's site. When an attack comes from so many fronts, it is difficult for any system to distinguish legitimate from illegitimate connections.

Over time, botmasters have created zombie armies (botnets), and these armies are up for hire: anyone can use them however they want by paying the botmaster.

Bots are not used just for DDOS attacks to bring down the victims. It’s a delivery mechanism which can be used for variety of attacks.

Once a machine has been infected, it can be used to infect other machines, scan for vulnerabilities in the network and download payload or instructions from the control servers to spy or damage the host.

One such malware currently on the rise is known as VPNFilter. At the time of writing this book it has infected more than 500,000 routers and IoT devices around the world. The malware includes a sniffer which captures login credentials and other useful information, and attackers can make use of it in multiple ways. It also has a kill switch which can be triggered to wipe the malware along with the device's memory; if triggered, this can disconnect the victim devices from the internet and in many cases render them unusable. It is difficult to be completely DOS-proof; however, there are many contingency and high-availability controls that can be used to counter these attacks. Companies like Cloudflare also offer DDOS protection by adding an extra layer in front of the website, which absorbs DDOS traffic while allowing valid queries through.

2.5.2.2.2.1 ICMP Attacks

The ICMP protocol is commonly used for checking connectivity and troubleshooting. A ping command with an IP address checks whether the device is online: an ICMP echo request packet is sent to the device and an ICMP echo reply is received by the sender. There are various ways DOS attacks can be mounted with ICMP: Ping Flood: A stream of ICMP echo request packets is sent to the victim's IP address, clogging the victim's inbound and outbound connections with receiving and replying to the packets. Tools like hping or Scapy can be used to mount this attack, and the same attack can be carried out in DDOS form where all the zombie machines send ICMP echo requests to the victim machine. Ping of Death: A normal ICMP echo request payload is 56 bytes, whereas an IP packet can be at most 65,535 bytes. A specially crafted ICMP packet bigger than 65,535 bytes is sent to the victim machine; since such a large packet cannot be transmitted whole, it is fragmented and sent in pieces, and when the victim device tries to reassemble it, the device crashes. Most vendors have issued patches for this vulnerability.

Smurf An ICMP echo request packet is sent to the broadcast IP address of another network, with the victim's IP address spoofed as the source address. When the router receives the packet, it is broadcast within that network and all devices reply with an echo reply packet to the victim's IP address. If the number of packets is huge, this can overwhelm both the victim's system and the network receiving the broadcasts.
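For reference, the single echo request/reply exchange that these flooding attacks multiply can be reproduced with Scapy (assuming it is installed and run with root privileges); the destination address is from the documentation range.

from scapy.all import IP, ICMP, sr1

# Send one ICMP echo request and wait up to two seconds for the echo reply.
reply = sr1(IP(dst="192.0.2.10") / ICMP(), timeout=2, verbose=False)
print("Host is up" if reply is not None else "No reply received")

A ping flood is simply this request sent as fast as possible, from one machine or from thousands of zombies at once.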

2.5.2.2.2.2 TCP Attacks

TCP is a connection-oriented protocol which uses the TCP handshake to establish a connection between two parties. The client sends a TCP SYN packet to request a connection, the server replies with a SYN/ACK packet to acknowledge the request, and the client sends an ACK packet to confirm the server's acknowledgment. Once this three-way handshake is completed, the client and server start

interacting. Attackers misuse this TCP handshake in various ways to confuse a device in order to overwhelm or crash it. Attackers also exploit the TCP data transmission process to perform DOS attacks.
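The handshake itself is easy to observe with Scapy (assuming root privileges; the address and port are placeholders): send a SYN and check whether a SYN/ACK comes back.

from scapy.all import IP, TCP, sr1

# Step 1 of the handshake: send a SYN to a hypothetical web server.
syn = IP(dst="192.0.2.10") / TCP(dport=80, flags="S")
reply = sr1(syn, timeout=2, verbose=False)

# Step 2: a SYN/ACK (flag bits 0x12) means the server has reserved a half-open connection.
if reply is not None and reply.haslayer(TCP) and reply[TCP].flags == 0x12:
    print("Server answered with SYN/ACK")
# A SYN flood simply repeats step 1 at volume and never sends the final ACK of step 3.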

Some examples of TCP attacks are covered below:

The classic example is the SYN flood, in which the attacker sends a continuous stream of SYN packets (often with spoofed source addresses) and never completes the handshake, exhausting the server's table of half-open connections so that legitimate clients cannot connect. Just like ICMP and TCP, other protocols such as UDP, DNS, SNMP, HTTP and NTP can also be used to flood the victim's network. Most of these attacks use maliciously crafted packets with a spoofed source address (the victim's); when the destination responds, the replies overwhelm the victim's network. Most of the attacks listed above can be conducted from a single machine, but they are far more devastating in distributed mode. 2.5.2.2.3 Wireless Attacks

Wireless technologies have provided a comfortable, portable and fuss-free environment to work in. They give employees the freedom to work from anywhere in the office space, from the canteen to the conference rooms, with no need to look for cables to connect to. Employees can move around with their laptops while staying connected to the network. This mobility has created the culture of 'work from anywhere': employees no longer need specific desks to work in the organization.

As any other technology, wireless also comes with its own threats and vulnerabilities. Wireless infrastructure is mostly susceptible to below threats: 2.5.2.2.3.1 Rogue Access Points

Users connect to authorized access points (APs) in the organization's infrastructure. A rogue access point is an

unauthorized AP which users may connect to by mistake. Every smartphone is a potential rogue access point, as they all come with a hotspot feature, although a smartphone's signal strength is limited; a rogue access point set up with malicious intent needs very good signal strength for users to connect to it. Once a user connects to the rogue AP, the attacker gains access to all the information passing through it.

When a rogue access point is given the same SSID as the organization's wireless network, it is called an Evil Twin. If the Evil Twin's signal is stronger than the organization's AP, laptops may prefer to connect to it, giving the attacker access to users' information or allowing him to launch Man in the Middle (MiTM) attacks. Even an organization without a wireless network is still susceptible to the rogue access point threat, as a malicious user can simply set up a guest network with the organization's name and tempt users to connect to it. It is fairly easy to identify rogue access points these days: Cisco and other wireless technology companies offer rogue AP detection and even automatic prevention. 2.5.2.2.3.2 Eavesdropping Eavesdropping is literally listening to a conversation not meant for you. In the wireless context it means scanning through various wireless networks and listening to network communication wherever possible.

Due to the architecture of wireless networks, they have to be advertised for users to connect to them, but this advertising also makes them visible to attackers looking for a network to hack.

Wardriving: Wardriving is scanning through the available wireless networks and looking for a vulnerable or interesting network to hack into. Organizations may decide to disable the broadcast of their SSIDs so that only users who are aware of the network's existence can connect to it; however, SSID scanner tools can still find hidden networks.

Once a vulnerable AP is identified, attackers can connect to it and listen to all the conversations on the network. Cafés and malls provide open hotspots for customers; an attacker can connect to any of these easily available hotspots and listen to the communication. Bluesnarfing: Bluesnarfing involves stealing data from a Bluetooth-enabled phone or device. An attacker scans for Bluetooth-enabled devices, pairs with one and extracts information from it.

It can be avoided by keeping Bluetooth disabled. It has also been patched in all latest devices.

Near Field Communication (NFC) Attacks: NFC also uses radio waves for communication. It allows communication between devices in close proximity without any physical connection. Attackers try to

capture the communication to extract sensitive information. NFC is normally used for File transfer and also for payments. 2.5.2.2.3.3 Key Cracking

All the encryption schemes used in wireless technologies can be attacked in some way; some just take far more time and resources than others. WEP uses RC4 encryption with a 24-bit Initialization Vector (IV). An IV is basically a random number used to strengthen encryption; however, a small or clear-text IV is a weakness which attackers can exploit. If an attacker captures enough packets, and therefore enough IVs, he can crack the key, and with the tools available today this can be done in minutes. WPA and WPA2 use stronger encryption which cannot be cracked with statistical data the way WEP can. Attackers normally target pre-shared keys to attack WPA, as they are like passwords and can be brute forced. WPS was introduced to simplify the wireless setup process: by pressing a button and entering a PIN, a wireless device can be set up for the user. However, this also introduced a vulnerability, allowing attackers to brute force the PIN and add their own device to the network. WPS is extremely vulnerable and should be disabled.

Jamming: Wireless technologies use radio frequencies for communication, and radio frequencies are vulnerable to interference. An AP uses a single channel for communication; if two APs on the same or nearby channels are physically close, their signals can interfere with each other. If enough interference is created, wireless communication can be hampered. Newer wireless standards are, however, more difficult to jam.

A replay attack is carried out by retransmitting captured packets to the AP, either to get into the device undetected or as an aid to key cracking. 2.5.2.2.4 Advanced Attacks

2.5.2.2.4.1 Advanced Persistent Threat (APT) An APT attack is a targeted attack in which a determined hacker breaches an organization with the sole intention of stealing information or intellectual property. The attacker's goal is not to do quick damage, deface a website or mount a DOS attack and leave; rather, the goal is to stay inside the organization for as long as possible and slowly and steadily gain access to sensitive information. These attacks require skilled hackers who keep working on different exploits once inside the organization's network to ultimately obtain financial, intellectual property or government data. The attack is typically carried out in five phases.

Unlike traditional hacking attacks where the attacker gets in and leaves, in an APT the hacker can stay inside the organization for months to complete his tasks. For an attacker to spend so much time and so many resources on an attack, the rewards have to be extremely lucrative: financial gain, competitive advantage, state-sponsored spying on government entities and so on.

Preventing and detecting an APT attack takes serious security planning and infrastructure. The attacker may try every trick up his sleeve once he is inside the network. Granular access control and network zoning can help keep the attacker at bay even after he has gained unauthorized entry, and network monitoring, data loss prevention and correlation tools may be required to detect such sophisticated attacks. All prevention and detection techniques, along with skilled personnel, have to work in tandem to detect an ongoing APT attack. The damage caused by an APT attack will depend on the security infrastructure of that

organization. In the coming chapters, we will learn how to incorporate security into all basic networking tasks to strengthen the infrastructure and combat APTs. The 2011 RSA attack was an APT attack which exploited a zero-day vulnerability in an Adobe application; sensitive information was staged internally and then extracted.

Chapter 3 Information Security Controls 3.1 Information Security Controls

Security controls are counter measures deployed to avoid, minimize and respond to any breach in confidentiality, integrity or availability of Information.

National Institute of Standards and Technology (NIST) defines three major categories of controls:

Administrative/Management Controls: These are the policies, laws, regulations, guidelines, procedures that govern the overall security plan. For example, government policies may require some basic controls to be in place if organizations are collecting personal information of customers. Information Security Policy, Network security policy, Router commissioning procedure are some examples of administrative controls.

Technical/Logical Controls: These are application and system (hardware/software) controls. Antivirus, anti-spyware, firewalls, IDPS and maker/checker application controls are some examples of technical controls.

Physical Controls: Physical controls are put in place to secure a location, accommodating sensitive assets and information. Office

keys, doors, barricades, guards, watch dogs are some examples of physical security controls.

3.1.1 Layering of Controls (Defense in Depth)

Defense-in-depth means coordinated use of multiple security controls in a layered approach. A multi-layered defense system minimizes the probability of successful penetration and compromise because an attacker would have to get through several different types of protection mechanisms to gain access to the critical assets.

The types of controls to be implemented must map to the organization's threat paradigm. The number of layers of controls should also be defined based on the criticality of the assets. The rule of thumb is: the more critical the asset, the more layers of protection must be put in place.

3.1.2 Functionality of Controls Administrative, technical, and physical controls can be further classified based on functionality. The different functionalities of

security controls are preventive, detective, corrective, deterrent, recovery and compensating. There can be more functional classifications, but in this book we will look at these six types of controls:

Deterrent: Intended to discourage a potential attacker. Warns the attacker of potential discovery and penalty. Example: CCTV, Security Guard, Banners.

Preventive: Intended to prevent an incident from occurring. Restricts access to sensitive information. Example: Firewalls, IPS, Security Guard.

Detective: Helps identify an incident's activities and potentially an intruder. Alerts the concerned team to take appropriate action. Example: IDS, CCTV, Security Guard.

Corrective: Fixes components or systems after an incident has occurred. Remedies the situation which allowed the activity, or restores the last good configuration. Example: Backup files.

Recovery: Intended to bring the environment back to regular operations. Repairs damage and stops any further damage. Example: Hot or Warm Site.

Compensating: Controls that provide an alternative measure offering a similar level of protection to the original control; controls that can be used instead of more desirable or costlier controls as a workaround. Example: rather than investing in a central log management system, an organization may decide to review the logs of critical devices daily and review high-level logs of other devices weekly.

Based on their functionality, controls can be placed in different layers for the protection of critical assets. Some controls offer multiple functionalities; CCTV, for example, is a deterrent as well as a detective control.

Ideally when designing security structure of an environment, it is most productive to use a preventive model and then use detective, corrective and recovery mechanisms to support this model.

Basically, any incident should be stopped before it starts; if not, it must be detected, preferably before it causes any damage. It may not be feasible to try to prevent everything, but whatever cannot be prevented should be detected quickly. In the worst case, if both strategies fail, corrective measures should be taken to repair the damage and business should continue based on the Business Continuity Plan (BCP). Recovery actions can then restore business to normalcy.

3.2 Examples of Information Security Controls Let’s take a look at various controls we can use for Network Security.

3.2.1 Administrative Controls 3.2.1.1 Administrative – Preventive

This section covers various administrative controls that are preventive in nature, i.e., administrative pre-emptive controls to prevent threat from materializing.

3.2.1.1.1 Security Policies Security policies and standards define the organization's commitment to information security. Policies consist of overarching security principles and regulations that the organization has to abide by; policy statements are mandatory. Policies should cover all aspects of information and asset protection across the life cycle, i.e. information creation, information/asset classification and labeling, information storage, information/asset access controls, information/asset transportation outside the environment and disposal of information/assets when no longer required. The policy gives an overview of organizational security requirements and depends on various standards and procedures to provide practical implementation steps. More on this is covered in Chapter 4. Standards and Procedures are supporting documents to the overall Security Policy. Ideally each and every statement of the policy should be explained with appropriate attributes and values in the standards, and the implementation steps for these attributes/values should be covered in the procedures.

3.2.1.1.2 Business Continuity and Disaster Recovery Plans

Business Continuity Plans (BCP) are proactive business operations continuity plans for all kinds of threats situations that can impact the organization.

A BCP identifies the various business processes and focuses on the most essential ones which must keep functioning for the business to run. The purpose of BCP is to ensure that the most critical business processes remain functional even when a threat materializes.

Apart from business continuity, BCP also covers damage assessment, responding to threats (evacuation of personnel, use of fire extinguishers etc.), resuming normal business operations and disaster recovery.

Disaster Recovery (DR) is the part of BCP which focuses on protecting organizational information in the face of disaster, especially when the whole facility and its assets are rendered unusable, as in an earthquake or tsunami. The DR plan focuses on bringing business processes online with minimum downtime after a disaster and running operations until normal operations and facilities resume. DR is more technology oriented, while BCP focuses on overall business processes. 3.2.1.1.3 Employee Training

Employee security awareness is an extremely important step in maintaining strong security. Unaware employees can be a greater threat than an external attacker and can cause real damage to the organization.

Employees should be regularly educated about the latest security threats and their countermeasures. It is important to educate employees about information security policies and the acceptable usage policies for IT assets. They should be trained on desktop security, strong passwords, malware identification and response procedures, sensitive information handling procedures, hoaxes, phishing and social engineering. Customized training should be conducted for employees depending on their roles and privileges; e.g. network administrators and system administrators should be trained to deal with advanced security threats. The mode of training can be instructor-led sessions, mandatory online training, posters, quizzes, pamphlets, e-zines etc. Training should be repeated at regular intervals, as people tend to forget it over time. 3.2.1.1.4 Segregation of Duties

Segregation of duties ensures that one person is not responsible for a whole process, i.e. various people are involved at different stages to complete it. The process is ideally broken into various sub-processes, with a different person responsible for completing each sub-process. This administrative control ensures that no single person can control the whole process, which minimizes the possibility of fraud and error.

E.g. person operating on network devices should not be the same person auditing and reporting on the same devices as there’s

clearly a conflict of interest.

3.2.1.2 Administrative – Detective

This section covers various administrative controls that help in detecting frauds/attacks. 3.2.1.2.1 Mandatory Vacation

Mandatory vacation is an administrative control that sends people on a forced, uninterrupted vacation. It is a great tool for identifying fraud and errors, and it also helps in training people on various tasks and eliminating dependencies on a single employee.

A malicious employee will resist leaves as their fraud can be uncovered by people taking over in their absence.

Mandatory vacations are a win-win for organizations and employees: employees relax during the holidays, which can make them more productive when they come back, and the organization gets a chance to identify any malicious activity.

3.2.1.2.2 Security Audits Reviews and audits measure the effectiveness of security controls. If security controls are implemented but not maintained properly, they may not serve their desired purpose.

Annual or Bi-annual audits can identify: Gaps in organization’s processes.

Security compliance against standards & policies. Vulnerabilities that can be exploited.

Audit can also check security awareness of employees and recommend if there’s a need of generic or specialized trainings.

Audits can be performed by internal security teams or third party auditors. It is important for internal security teams to act as independent auditors and perform a non-biased audit.

3.2.1.2.3 Job Rotation

Job rotation means swapping employees from one role to another after fixed intervals. This administrative control serves dual purpose:

Helps in detection of fraud/noncompliance and errors of an employee when other person takes over.

Training employees on various tasks to avoid dependence on a single employee. A situation where an employee leaves the organization and there is no one to take over his tasks, can be

avoided by Job rotations. Redundancy of people asset is created through this process especially for sensitive roles like network administrator.

Just the awareness of someone taking over the work after fixed duration, can deter employees from malicious tasks for fear of detection.

3.2.2 Technical Controls 3.2.2.1 Technical – Deterrent

This section covers the technical controls that deter/discourage an attacker from mounting an attack. They are technical equivalents of “Beware of Dogs” signboard at the entrance.

3.2.2.1.1 Banners

Banners are legal statements shown to users during logon, informing them that unless they are explicitly authorized they are entering the system illegally. In a few countries, attackers cannot be prosecuted if they were not warned before entering the system.

From a security point of view, banners should not give away organization- or device-specific information. From a legal perspective, the text should be carefully crafted with the legal team, considering the laws of the country.

Generally a Banner can contain all or part of below information: Notifying that only authorized people are supposed to access the system. Notifying that unauthorized access attempts are punishable by law.

Notifying that all activity is monitored and logged and can be used as evidence in court of law.

3.2.2.2 Technical – Preventive This section covers technical pre-emptive controls that help in preventing an attack. 3.2.2.2.1 Access Control - Authentication, Authorization and Accounting (AAA)

Goal of access control is to disallow all unauthorized access and explicitly allow only authorized users to access organizational resources.

Implementing an effective access control process is a double-edged sword: too much access may lead to unauthorized activities and accidental or intentional breaches, while too little access may reduce efficiency and productivity. The goal is to balance both risks and create an efficient access control process which can

identify authorized users, allow them required access and log their activities. Access control can be divided in three phases:

Authentication deals with identity of users and validating that they are indeed what they claim to be. Authentication also ensures that unauthenticated users are disallowed from using the system. Common ways to authenticate are Passwords, Biometrics and Hardware/Software Token. Authorization deals with assigning rights to users based on their job profile. Authorization also ensures that users cannot escalate their assigned privileges. Most commonly used authorization approach is role based approach. Accounting deals with logging all user activities with time stamp.

Access Control is discussed in Chapter 10 in detail. AAA Server and Protocols

AAA services to authenticate users, assign the relevant privileges and record user activities are implemented by different vendors in different ways. Such central servers, which keep a database of user ids, passwords and access provisioning criteria, are commonly known as TACACS servers, RADIUS servers, Cisco ACS, AAA servers etc., but they are all essentially AAA servers. A router or switch acts as a client and forwards the user's login request to the AAA server for

verification. Rather than locally authenticating and authorizing a user, network devices can use the centralized service provided by the AAA server. This arrangement is more flexible, scalable and secure than locally adding and removing users on each and every device. AAA servers are based on the Remote Authentication Dial-In User Service (RADIUS) and Terminal Access Controller Access Control System Plus (TACACS+) protocols. RADIUS

RADIUS is an AAA (Authentication, Authorization and Accounting) protocol. It provides all three services. RADIUS authenticates users, decides their authorization level and logs their activities. RADIUS clients receive authentication request from the user and forward it to RADIUS server for verification. RADIUS server replies back with authentication status and authorization level to RADIUS clients. RADIUS client to server communication utilizes UDP protocol. RADIUS is commonly deployed in Remote access including VPN, Dial up and Terminal services as well as Perimeters defenses (Perimeter firewall/ router).

Below diagram is a typical RADIUS implementation:

TACACS+

Like RADIUS, the TACACS+ protocol is also an AAA protocol providing authentication, authorization as well as accounting. TACACS+ is similar to RADIUS but uses TCP instead of RADIUS's UDP transport. 3.2.2.2.2 Route Authentication Route updates are exchanged among routers to keep their routing tables updated. Earlier versions of routing protocols did not provide any mechanism to authenticate a router before accepting routing information from it, so an attacker could introduce a rogue device and send malicious or wrong updates to a router.

Recent versions of routing protocols do provide router authentication feature. For example, Routing Information Protocol (RIP) V2

RIP V2 uses clear text password based authentication between RIP routers. RIP updates are accepted only with the authentication password. Cisco offers MD5 hashed password in addition to clear text passwords.

Open Shortest Path First (OSPF) V2

OSPF V2 uses an MD5 hash-based message authentication code (MD5-HMAC) for authentication between routers. MD5-HMAC is more secure than plain hashing because it uses a cryptographic key along with the hash, thus providing origin authentication and integrity protection.
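The MD5-HMAC idea is easy to demonstrate with Python's standard library: both routers share a key, each routing update travels with its HMAC, and the receiver recomputes and compares the value. The key and update text below are placeholders.

import hashlib
import hmac

shared_key = b"ospf-area0-secret"            # pre-shared key configured on both routers (placeholder)
update = b"route 10.1.0.0/16 via 10.0.0.2"   # the routing update being protected (placeholder)

# The sender attaches the HMAC; the receiver recomputes it with the same key and compares.
sent_tag = hmac.new(shared_key, update, hashlib.md5).hexdigest()
recomputed = hmac.new(shared_key, update, hashlib.md5).hexdigest()
print("Update accepted" if hmac.compare_digest(sent_tag, recomputed) else "Update rejected")

An attacker who does not know the shared key cannot produce a valid HMAC, so forged or tampered updates are rejected.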

3.2.2.2.3 Encryption

Encryption is a process of hiding messages in a way that only authorized users can see the message. Encryption essentially encodes a message and only authorized parties with a particular mechanism can decode the message.

During World War 2, Germany communicated with its military units via radio messages. These messages were encrypted, and anyone, including the Allies, could hear them; however, only those who had an Enigma machine along with the correct decryption settings could decrypt them. Enigma was used to encode the messages as well.

To understand Encryption better, we need to understand all the terms related to encryption.

Plaintext is the original, readable message, while ciphertext is the scrambled output produced by encryption. A cipher, or algorithm, is the mathematical procedure used to transform plaintext into ciphertext and back, and the key is the secret value fed into that algorithm. Encryption converts plaintext into ciphertext; decryption reverses the process using the appropriate key.

There are two types of Encryption:

Symmetric Key Encryption
Asymmetric Key Encryption

3.2.2.2.3.1 Symmetric Key Encryption

It is also known as private-key or secret-key encryption/cryptography. A single key is used for both encryption and decryption.

For example, when encrypting files on a hard disk or in a database, the user keeps the corresponding key on the same device; however, when encrypted files or messages are transmitted to a different location, both the sender and the recipient need a copy of the shared secret key.

Symmetric key encryption provides strong protection when large keys are used; however, even the strongest key provides no protection unless it is kept secret, just as an extremely complicated banking password is of no use once it has been compromised. Secure key exchange is the main challenge in symmetric key encryption. Keys have to be transmitted in a way that they cannot be intercepted: if the key is sent via the same medium as the message, anyone sniffing the message obtains both the message and the key. The key has to be exchanged via out-of-band communication such as courier, voice or SMS, i.e. any medium other than the one carrying the message.

Symmetric key encryption is considerably faster than asymmetric key encryption, owing to the design of its algorithms and the use of a single key to both encrypt and decrypt data.

3.2.2.2.3.2 Asymmetric Key Encryption

It is also known as public-key cryptography. Public-key cryptography uses a key pair, a private key and a public key, to encrypt and decrypt a communication. The private key must be kept private and secure, while the public key is openly distributed. Private and public keys are created as a pair through an algorithmic function. Anything encrypted with a public key can be decrypted only by the corresponding private key; similarly, anything encrypted with a private key can only be decrypted by the corresponding public key. Asymmetric key encryption solves the key exchange problem, as the public key is always available for use while the private key is kept secret by the user; however, asymmetric encryption is more resource intensive than symmetric encryption.
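As a rough illustration of the two approaches, the sketch below uses the third-party Python cryptography package (an assumption; it must be installed separately). Fernet stands in for symmetric encryption with a single shared key, and RSA with OAEP padding stands in for asymmetric encryption with a key pair; it is not meant to prescribe particular algorithms or key sizes.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

message = b"transfer INR 10,000 to account 1234"   # hypothetical message

# --- Symmetric: one shared secret key encrypts and decrypts ---
secret_key = Fernet.generate_key()                  # both parties must hold this same key
token = Fernet(secret_key).encrypt(message)
assert Fernet(secret_key).decrypt(token) == message

# --- Asymmetric: public key encrypts, only the private key can decrypt ---
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()               # can be distributed openly
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(message, oaep)
assert private_key.decrypt(ciphertext, oaep) == message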

3.2.2.2.4 Hashing

A hash is also known as a checksum or message digest. It is the output obtained by passing any file/message/text through a hashing algorithm. The process is known as hashing, and the output is always a fixed-length string called a hash. Hashing algorithms are available in the public domain. Hashing is not encryption; rather, it creates a digital fingerprint of a particular file or message. If the file or message is tampered with even slightly, the hash changes completely.

Hashing is used to ensure that a message or file is not tampered with in transit or at rest. It is also used as a means of providing origin authentication (when combined with a key or signature) and confirmation of the original content.

Upon receipt of the file/message, the user can compute the hash of the received content and compare it with the original hash value. If the hash values differ, the file/message is not the original one.

Hashes can also be created for system files after configuration. Any corruption due to malware or a security breach can then be detected during periodic comparisons against the initial hash values.

Note: Unlike encryption, hashing is an irreversible process; the original content cannot be recreated from the hash. Passwords are normally stored as hashes on operating systems, which ensures that the passwords cannot simply be reverse engineered from the stored hash values.

As visible in the above image, a hash function creates a fixed-length value (256 bits in the above picture) for a character string such as a password, as well as for any other file, irrespective of its size.
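A minimal sketch of this idea, using Python's standard hashlib module; the file name and expected value are placeholders.

import hashlib

def file_hash(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Even a one-character change produces a completely different hash.
print(hashlib.sha256(b"P@ssw0rd1").hexdigest())
print(hashlib.sha256(b"P@ssw0rd2").hexdigest())

# Integrity check on receipt: recompute and compare with the hash published by the sender.
# expected = "..."                                  # value published by the sender (placeholder)
# assert file_hash("received_file.bin") == expected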

The strength of hashing is directly proportional to the length of the hash string. Some commonly used hash algorithms with their hash lengths are given below:

MD5 – 128 bits
SHA-1 – 160 bits
SHA-256 – 256 bits
SHA-512 – 512 bits

3.2.2.2.5 Public Key Infrastructure (PKI) and Certificates

Public key infrastructure is built around a centralized body that is trusted by all parties. This centralized body issues, validates and revokes users' certificates, and these certificates become the identity of the user. Every certificate is associated with a key pair: the private key serves as proof of the user's identity, while the public key is available to everyone who wants to communicate securely with the user.

These centralized bodies are called Certificate Authorities (CA). An organization can create its own certificate authority or use global certificate authorities such as Comodo, VeriSign etc. Certificate authorities created internally are trusted only within the organization. Internal certificates can be used for devices that communicate only with other internal devices; however, devices communicating with customers or external parties should ideally use certificates from a globally recognized certificate authority.

PKI Process

PKI provides confidentiality, integrity, authentication and non-repudiation. It can enable SSL transmission, secure VPN access, two-factor authentication, wireless security etc. in the environment. The complete PKI process is detailed below. For two users (Jack and Jill) to communicate, they should trust the same Certificate Authority and hold certificates issued by it.

Sender's Process

For Jack to send an encrypted message to Jill, he first encrypts the message using Jill's public key from her PKI certificate. To ensure the message is not tampered with in transit, Jack computes a hash of the message using one of the hashing algorithms (MD5/SHA-1/SHA-2). He then encrypts the hash with his private key, creating a digital signature. This is also called signing the message, the digital equivalent of physically signing a document; it provides authenticity, as only Jack is supposed to have his private key. The complete package with the encrypted message, Jack's certificate and the digital signature is then sent to Jill.

Receiver's Process

Jill uses a 4-step process to decrypt the message and verify its integrity and authenticity. The 1st step is to verify Jack's certificate with the Certificate Authority, to ensure that it is valid and not revoked; this ensures nobody is impersonating Jack with an invalid certificate. The 2nd step is to decrypt the message using her private key.

The 3rd step is to compute the hash of the decrypted message using the same algorithm used by Jack. The 4th step is to decrypt the digital signature using Jack's public key and extract the message hash from it. Jill then compares the two hashes to ensure the message has not been tampered with.
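The sketch below mirrors that flow with the third-party Python cryptography package (an assumption): Jack encrypts with Jill's public key and signs with his own private key; Jill decrypts with her private key and verifies the signature with Jack's public key. The keys are generated in place as stand-ins for the key pairs that would normally live inside CA-issued certificates, and the certificate-validation step against the CA is omitted. Note that sign() hashes the message internally before encrypting the digest.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

jack = rsa.generate_private_key(public_exponent=65537, key_size=2048)   # sender's key pair
jill = rsa.generate_private_key(public_exponent=65537, key_size=2048)   # receiver's key pair

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
message = b"meet at noon"                                               # hypothetical message

# Sender: encrypt for Jill, sign as Jack.
ciphertext = jill.public_key().encrypt(message, oaep)
signature = jack.sign(message, pss, hashes.SHA256())

# Receiver: decrypt with her private key, then verify the signature with Jack's public key.
recovered = jill.decrypt(ciphertext, oaep)
try:
    jack.public_key().verify(signature, recovered, pss, hashes.SHA256())
    print("signature valid; message:", recovered.decode())
except InvalidSignature:
    print("message or signature has been tampered with")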

3.2.2.2.6 Digital Signature

Digital signature is basically an electronic variant of hand written signature. It provides integrity, authentication and non-repudiation with proof of origin.

The sender of a message signs it using his or her private key. This provides unforgeable proof that the sender did indeed send the message.

Non-repudiation is unique to asymmetric systems because the private (secret) key is not shared. Remember that in a symmetric system both parties share the same secret key, and therefore either party can deny sending the message. Digital signatures attempt to guarantee the identity of the person sending the data from one point to another. The digital signature acts as an electronic signature used to authenticate the identity of the sender and to ensure the integrity of the original content (that it hasn't been changed). Digital signatures are used for banking transactions, legal documents, contract papers, e-commerce etc., and are a legal and valid substitute for paper-based signatures. Once a document, email or transaction is digitally signed, the sender cannot refute sending or acknowledging it. A digital signature using asymmetric encryption (specifically, public-key cryptography, where a key pair of a public key and a private key is used) operates as follows:

Signing Process

Jack writes a message. He then computes a hash of the message. He uses his private key to encrypt the hash.

He attaches the encrypted hash to the message. The message along with encrypted hash is sent to Jill.

Verification Process

Jill strips off the encrypted hash (the digital signature) from the message.

She uses Jack’s public key to decrypt his digital signature and extract the message hash. She then computes hash of the message herself. She compares the two hash values. If Hash values are same, digital signature is successfully verified and she can be sure of Jack’s identity and message integrity. 3.2.2.2.7 SSH Encryption Secure Shell (SSH) is another good example of an end-to-end encryption protocol. SSH is a secure replacement for common Internet protocols such as FTP and Telnet as well as several Unix R-tools, including rlogin, rcp, rexec, and rshell. There are two versions of SSH available today. SSH - V1 (which is now considered insecure) supports the DES, 3DES, IDEA, and Blowfish algorithms. SSH - V2 dropped support for DES and IDEA but added support for several other algorithms.

SSH provides authentication, encryption and integrity checks to client server communication. Telnet and FTP transfer credentials and other sensitive information in clear text which makes them susceptible to interception via packet sniffing. SSH mitigates the risk of disclosure through encryption. In addition, it also provides client and server authentication as well as integrity checks to ensure data is not tampered with during transmission. It’s a good practice to disable SSH - V1 support and only use SSH - V2.

3.2.2.2.8 Secure Socket Layer/Transport Layer Security (SSL/TLS) Encryption

SSL/TLS uses public key encryption to authenticate the server to the client, and optionally the client to the server. SSL relies on a combination of symmetric and asymmetric cryptography. When a user accesses a website, the browser retrieves the web server's certificate and extracts the server's public key from it. Public key cryptography is also used to establish a session key, which is encrypted and transmitted to the server; all further communication is then encrypted with this session key. This approach allows SSL to use the advanced functionality of asymmetric cryptography while encrypting and decrypting the vast majority of the data exchanged using the faster symmetric algorithm.

SSL forms the basis for a newer security standard, the Transport Layer Security (TLS) protocol, specified in RFC 2246. TLS is quickly surpassing SSL in popularity. SSL and TLS both support server authentication (mandatory) and client authentication (optional).
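As a small, hedged illustration of this handshake from the client side, Python's standard ssl module can be used; the host name below is a placeholder, and certificate validation relies on the system's default trusted CA store.

import socket
import ssl

hostname = "example.com"                             # placeholder server name
context = ssl.create_default_context()               # loads trusted CAs, enables hostname checking

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        # The TLS handshake (certificate check, key exchange, session-key setup) happens here.
        print("protocol:", tls.version())            # e.g. 'TLSv1.3'
        print("cipher suite:", tls.cipher())
        print("server certificate subject:", tls.getpeercert().get("subject"))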

The following graphic shows the SSL connection establishment steps:

3.2.2.2.9 IP Security Protocol Suite (IPSec)

IPSec is the de-facto standard protocol which works at OSI Network layer to provide holistic security (Confidentiality, integrity, message authentication and access control) to messages in transit. It’s an end to end security protocol which can be used in peer to peer mode, network to network mode or network to peer mode. It’s commonly used in securing VPN communication as well as in Firewalls. Being an open standard, it’s vendor neutral and various vendors support IPSec for secure communication of information across networks.

IPSec is basically a collection of protocols that work together to provide security to data in transit. The following protocols make up the IPSec protocol suite:

Authentication Header (AH): Provides message integrity, message origin authentication and non-repudiation.

Encapsulation Security Payload (ESP): Provides message integrity, message origin authentication and message encryption.

Security Association (SA): Before a session is established between two communicating parties, they negotiate connection parameters, i.e. authentication algorithms, encryption algorithms, hashing algorithms, their key sizes etc. Security associations are established using the Internet Security Association and Key Management Protocol (ISAKMP).

IPSec operates in two different modes:

Transport Mode: Mostly used for peer-to-peer communication. Only the message payload is encrypted or authenticated.

Tunnel Mode: Mostly used for network-to-network communication. The whole message, including header and payload, is encrypted, and a new header is then created for traversal.

3.2.2.2.10 Network Time Protocol (NTP)

NTP uses UDP port 123 to allow client devices to synchronize their time with an NTP server. A central NTP server should be installed, and all network devices should sync their time with that central server only.

This configuration ensures that logs and their time stamps are accurate and that effective correlation can take place. If the devices of an organization are not synced with a central server and each uses its own time, then the logs will be worthless in any investigation and cannot be presented as evidence in a court of law. Imagine 40 network devices sending logs to a central server but with their times not synced: the central server cannot perform an accurate correlation. NTP V3 adds an authentication capability for time updates, so that a device authenticates its NTP server before accepting time updates; this ensures the device only connects to an authorized NTP server.

3.2.2.2.11 Virtual Private Networks

Virtual Private Networks (VPNs) provide a secure way of communicating network to network, or user to network, over a public network, i.e. the Internet. A VPN is transparent to the user, and communication works seamlessly, as if working within the same LAN. A specialized VPN gateway, or a firewall with VPN capabilities, can be utilized to establish VPN connections between networks (site to site) or between a network and a user.

VPNs are used to connect various branches of an organization spanning across the globe. VPNs are also used to connect remote users to the organizational network from anywhere. They are also commonly used to provide access to vendors and third parties to specific areas of the network.

When a user requests a connection with VPN gateway, a session key is negotiated after user authentication and traffic is encrypted using the same session key. User access to the network is restricted through user profile created in the VPN gateway. User

does not get unlimited access to the organizational network; he can access only the allowed network resources.

When VPN connections are established between two different networks, users can directly access the allowed network resources without any special connection request; it is seamless to the user. The above diagram is an example of a site-to-site VPN. VPNs use two main protocols to create a tunnel over the Internet: L2TP and L2F. Confidentiality, integrity and authenticity are provided with the use of IPSec along with the tunneling protocol, i.e. L2TP/IPSec. For authentication and authorization, VPNs generally rely on Remote Authentication Dial-In User Service (RADIUS), Lightweight Directory Access Protocol (LDAP) or Terminal Access Controller Access Control System Plus (TACACS+).

Benefits of VPNs

A VPN provides a cost-effective and secure solution for organizations to connect with remote users and branches across the globe. Sensitive information exchanged between branches can be securely passed over a public network. Remote users can access the organizational network easily and securely from anywhere.

Granular access control can be implemented based on user profiles and requirements. Access for third-party vendors can also be restricted based on least privilege.

3.2.2.2.12 ACLs on Routers

Routers work at Layer 3 of the OSI model. They direct traffic, based on IP address, to the internal network or to other routers if the traffic is not destined for the local network. Routers are also capable of basic packet filtering based on configured rules.

Each rule is a combination of match criteria and a corresponding action. The match criteria inspect packet headers to identify the type of traffic, while the action is either a Permit or a Deny statement. Rules on routers work in a linear fashion, where each and every packet is compared against the rule list in order. If a packet matches Rule #1, it is handled accordingly; if not, it is matched against Rule #2. The process goes on until the packet reaches the end of the list. At the end, there is normally a 'Deny All' rule. This sequence ensures that only expected and required packets enter the network, and the rest are denied.

This sequence can also be reversed by explicitly denying non-required packets and including a 'Permit All' rule at the end.
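A minimal Python sketch of this first-match evaluation, with a hypothetical rule list and an implicit deny at the end; real router ACL syntax and matching options are of course far richer.

import ipaddress

# Hypothetical rules: (action, protocol, source network, destination port), checked top-down.
ACL = [
    ("permit", "tcp", "0.0.0.0/0", 443),     # allow HTTPS from anywhere
    ("deny",   "tcp", "0.0.0.0/0", 23),      # explicitly block telnet
    ("permit", "udp", "10.0.0.0/8", 53),     # allow DNS only from the internal range
]

def evaluate(packet, acl=ACL):
    src = ipaddress.ip_address(packet["src"])
    for action, proto, network, dport in acl:
        if packet["proto"] == proto and packet["dport"] == dport and src in ipaddress.ip_network(network):
            return action                     # first matching rule decides
    return "deny"                             # nothing matched: the implicit 'Deny All'

print(evaluate({"proto": "tcp", "src": "203.0.113.9", "dport": 443}))   # permit
print(evaluate({"proto": "tcp", "src": "203.0.113.9", "dport": 22}))    # deny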

Apart from filtering the traffic, access control lists are also used to identify certain traffic using matching criteria such as voice traffic. Action such as assigning high priority can then be taken on identified traffic.

Access lists are applied on router interfaces. For each interface, an access list can be applied to inbound (packets entering the interface) or outbound (packets leaving the interface) connections. Access lists can also help in identifying spoofing attacks to an extent; for example, if a packet received on an external-facing router has a private IP as its source address, it could be a spoofed packet. Access lists can also be configured to block reconnaissance and network enumeration attacks by denying the specific ICMP and UDP traffic used by network scanners to map the network. Network enumeration and reconnaissance attacks are covered in detail in Chapter 16.

3.2.2.2.13 Firewall

As the name suggests, a firewall acts as a wall between a trusted and an untrusted network. It allows or denies traffic based on preconfigured rules. A firewall can be software based or hardware based; however, organizations normally use a dedicated hardware appliance as the firewall. A dedicated device that runs only essential functions and a minimal kernel ensures a minimized attack surface. A basic firewall is an OSI Layer 4 device with awareness of transport layer parameters, i.e. protocol type, port numbers etc. A firewall can control traffic based on specific protocols, specific hosts, specific locations etc. There are two categories of firewalls:

Network Firewalls: Regulate and Monitor traffic entering or leaving a network/subnetwork.

Host Firewalls: Regulate and Monitor traffic entering or leaving a host/computer. Firewalls have been evolving at a rapid rate. They have come a long way from simple packet filtering firewalls to deep packet inspection firewalls used currently.

The various types of firewalls are as follows:

1st Generation Packet Filtering Firewalls: Packet filtering firewalls control packets by source/destination address or traffic type, such as the Telnet or HTTP protocols. Access control lists decide what action should be taken once the traffic matches the rule criteria. If traffic does not match any rule criteria, it is dropped.

2nd Generation Stateful Firewalls:

Packet filtering firewalls could perform simplistic filtering based on protocols, ports and IP addresses; however, they had no concept of traffic context or state. A TCP handshake is required for session establishment, but attackers could manipulate TCP packets to send various kinds of connection packets without completing the handshake. Too many connection requests without any subsequent response could overwhelm a packet filtering firewall and lead to a device crash. For example:

The attacker may send a TCP SYN packet as a connection request. As part of the handshake, the firewall sends back a TCP SYN-ACK packet and waits for the subsequent acknowledgement from the attacker.

The attacker keeps this request open and sends another TCP SYN packet, and so on. This leads to too many half-open connections on the firewall, each waiting for the attacker to finish the handshake.

If the firewall keeps waiting for these requests, then at some point its buffer overflows and the device crashes. To tackle this issue, stateful inspection was introduced. Stateful inspection is session aware and keeps a tab on ongoing sessions within the firewall. State information is stored in a state table, which normally has the following columns: source address, destination address, port number and state status. The state status could be connection establishment, session in progress or termination. Each new packet is compared against the state table to check that its status is as expected.

Stateful firewalls also define a time threshold that sets how long to wait for a subsequent response from the sender. If no response is received within the stipulated time, the firewall ends that connection.

3rd Generation Firewalls – Application Layer Firewalls:

A newer trend in stateful inspection is the addition of an application layer packet inspection capability, referred to as deep packet inspection. An application firewall deploys an inspection engine

that analyzes protocols at the application layer to compare vendor-developed profiles of benign protocol activity against observed events to identify deviations.

This allows a firewall to allow or deny access based on how an application is running over the network. For instance, application level firewalls can identify application attacks like cross site scripting and SQL injections, which can be used to deface web pages and extract database information from the web server. These are sophisticated attacks and most of the hacks we hear are application attacks. Application firewalls assess many common protocols including HTTP, database [SQL], email [SMTP, Post Office Protocol (POP), and Internet Message Access Protocol (IMAP)], Voice over IP (VoIP), and eXtensible Markup Language (XML).

4. Next Gen Firewalls (NGFW): Next Gen Firewalls are one step ahead of Application firewalls. Along with traditional firewall capabilities, next gen firewalls can include a combination of following functionalities:

Intrusion detection/prevention system.

SSH and SSL inspection.

Deep Packet Inspection.

Web application Firewall.

Malware detection.

Unified Threat Management (UTM): Next gen firewalls (NGFW) are sometimes called Unified Threat Management (UTM) appliances. A UTM appliance is a single device with the above-stated functionalities. The main advantage is the ease of troubleshooting and maintenance, as a single console can be used to configure and monitor the various security capabilities.

Main disadvantage of using UTM device is single point of failure. A single device handling multiple functionalities should have enough processing power and memory to manage all tasks efficiently.

Some organizations prefer UTM approach to secure their network while others manage dedicated devices for each functionality.

5. Application Proxy Gateway: As the name suggests, proxy gateways act as a proxy for the internal network. They work at the application layer and have all the features of an application firewall along with a proxy agent.

For example, when a user tries to connect to an FTP server behind a proxy gateway, the application gateway acts as a proxy for the FTP server. It validates the connection, inspects the packet and validates authentication if required, then creates another session with the FTP server and sends the request. The user's request is not sent directly to the server; it is broken into two connection requests, one from the user to the proxy gateway and another from the proxy gateway to the server. Similarly, when the server responds, two connections are used: one from the server to the proxy gateway and another from the proxy gateway to the user. This whole process is transparent to the user.

3.2.2.2.14 Network Access Control

Another common requirement for firewalls at the edge of a network is to perform health checks for incoming connections from remote users and allow or disallow access based on those checks. This checking, commonly called network access control (NAC), network access protection (NAP) or network admission control, allows access based on the user's credentials and the results of "health checks".

Health checks typically consist of verifying one or more of the following:

Updated signatures of anti-malware software.

Updated version of personal firewall software. Configuration settings of anti-malware and personal firewall software (e.g. they are enabled).

Elapsed time since the previous malware scan.

Patch level of the operating system and selected applications.

Security configuration of the operating system and selected applications.

These health checks require agent software on the user's system that is controlled by the firewall. If the user has acceptable credentials but the device does not pass the health check, the user and device may get only limited access to the internal network, for remediation purposes.

3.2.2.2.15 Intrusion Detection and Prevention System (IDPS)

An IDPS system is configured to passively listen to network traffic to detect any anomalies. Earlier IDPS systems were handling network communication but now they are also managing wireless communication.

An IDPS can detect an incident, alert the administrator and also respond to that incident. Incidents can range from malware detection and malicious file transfers to violations of security or acceptable use policies and protocol anomalies. Originally, IDS were configured to passively scan network traffic, whereas IPS works in active in-line mode (i.e. traffic passes through the device). IDPS systems are normally placed behind firewalls at each entry point of the network.

IDPS response to malicious traffic could be blocking network traffic from the source, changing firewall configuration, resetting the connection, dropping the packet etc.

Two categories of IDPS systems are used in organizations:

1. Network based IDPS: Analyzes in-line network traffic to identify any potential malicious activity.

2. Host based IDPS: Listen to traffic going in and out of a computer to identify any incidents e.g. malicious file transfer, unauthorized port opening etc.

The efficacy of IDPS systems depends on their fine tuning. As each environment is different, an IDPS has to be fine-tuned to that particular environment. Specific traffic may be malicious for one organization while the same traffic is required for business purposes in another. As a result, out-of-the-box IDPS systems create a lot of false positives (normal traffic raised as an incident) and false negatives (malicious traffic not raised as an incident) until they are fine-tuned and configured for the specific organizational environment.

Types of IDPS systems: 1. Signature Based IDPS:

These are most commonly used IDPS systems. They analyze potential incidents by matching traffic signature with pre-defined rules set in the device.

Rules are mostly configured by default by the vendor, but they can also be configured manually. Along with the matching criteria, a subsequent response is also defined in the rule. IDPS devices should remain updated with the latest threat signatures to counter new threats. Even though rules are configured by default, they should be fine-tuned for organization-specific traffic. Signature-based IDPS work well for known threats; however, they are completely ineffective against unknown threats.

2. Anomaly Based IDPS:

An anomaly is defined as anything that is not normal. This type of IDPS raises an alert for any suspicious behavior that does not match the normal traffic profile, which differs from organization to organization. Initially, the IDPS is placed inside the network in observation/learning mode, where it monitors normal traffic at all times; the duration of learning varies from organization to organization. Once a normal traffic profile is created, each and every packet is matched against this profile. Along with the normal traffic profile, custom rules can also be created on the IDPS and matched against traffic.

These IDPS systems have proven effective against unknown attacks: any new exploit or malware will create suspicious activity on the network that does not match the learned normal profile.

The challenge with such systems is that existing malicious activity will be considered normal if it was present during profile creation.
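A toy Python sketch of the underlying idea: learn a baseline from a set of observations and flag samples that deviate too far from it. The numbers and the three-standard-deviation threshold are arbitrary assumptions; commercial products build far richer traffic profiles.

import statistics

# Hypothetical learning period: connections per minute observed on a link.
baseline = [112, 98, 120, 105, 99, 118, 110, 104, 95, 101]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observed, threshold=3.0):
    """Flag samples deviating more than `threshold` standard deviations from the learned profile."""
    return abs(observed - mean) > threshold * stdev

for sample in (107, 240):
    print(sample, "-> anomaly" if is_anomalous(sample) else "-> normal")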

3. Stateful Protocol Analysis based IDPS:

These IDPS systems use protocol definition to identify any deviation from the normal protocol behavior. They raise an alert if any deviation is observed in network traffic which does not match with defined protocol behavior.

These IDPS systems are mostly equipped with generic protocol definition profiles provided by the vendor. In stateful inspection, the IDPS keeps track of the state of the session at the application and network levels. For example, a TCP handshake requires specific flags in the packets, in a specific order, for the handshake to be successful, and the IDPS tracks the session state based on these flags. These IDPS systems are extremely resource intensive and may not work well with proprietary protocols.

3.2.2.2.16 Data Loss Prevention (DLP)

Data loss prevention addresses the insider threat, ensuring sensitive information is not disclosed by end users accidentally or deliberately. DLP products can be software based or dedicated hardware based.

DLP can be deployed on network to filter data streams of data in motion or DLP can be deployed on hosts directly to protect data usage on end points. Depending on security requirements, DLP can also be deployed in Hybrid Mode (Network and Host) in an organization.

Business rules are created on DLP system to classify and protect information based on their classification. Rules are created around data; i.e. if data matches with defined criteria (confidential Information) then take corresponding action (Monitor/Block). Rules

can be applied on all methods of sharing information i.e. removable devices, Emails, Upload on Public sharing websites etc.

For example, if an employee tries to send confidential information to an email address that is not white-listed, or tries to copy sensitive information to a USB device, the action is denied. DLP effectively protects information at the highest level, specifically against insider threats. Because it works directly on content, DLP is also known as content monitoring and prevention, data leak prevention etc.
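The rule-matching idea can be sketched in a few lines of Python; the patterns and actions below are hypothetical, and real DLP products combine pattern matching with document fingerprinting, dictionaries and file metadata.

import re

# Hypothetical classification rules: (pattern, action), checked in order.
RULES = [
    (re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"), "block"),    # card-number-like strings
    (re.compile(r"(?i)\bconfidential\b"), "monitor"),                     # classification label in text
]

def inspect(outbound_text):
    for pattern, action in RULES:
        if pattern.search(outbound_text):
            return action
    return "allow"

print(inspect("Please find attached the CONFIDENTIAL pricing sheet"))    # monitor
print(inspect("card 4111 1111 1111 1111, exp 12/29"))                     # block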

3.2.2.3 Technical – Detective

This section covers technical controls that help in detecting an ongoing attack. If preventive controls fail, detective controls can help in identifying the attack source and methodology, to minimize damage and block future instances.

3.2.2.3.1 Audit Logs

Logging is a critical detective control, which helps in detecting a breach while it is happening and also provides forensic support after an event. For logs to be effective, it is essential that all devices in the network are time synchronized; otherwise event correlation will not be possible.

All network devices allow central logging to a syslog server. These logs can be analyzed separately, either manually or by SIEM tools; more on this is covered in Chapters 12 and 13. To properly capture the essence of activities on a device, at least the following categories should be logged:

Access control logs, including authentication success/failure, authorization success/failure, user activities, privileged user activities etc.

Device logs, including events, errors, critical conditions (shutdown, restart, crash), capacity utilization (CPU, memory, bandwidth etc.) etc.

Traffic logs, including permitted traffic, denied traffic, suspicious traffic (outbound file uploads, malicious file downloads), VPN

network sessions and traffic, wireless network sessions and traffic etc.

Log management is much more than just configuring logging on devices. Confidentiality and integrity have to be ensured while transferring logs over the network; ideally, this traffic should be isolated to a management VLAN and encrypted in transit and in storage. Additionally, stored logs should be protected via access control and cryptographic hashes to prove integrity, and regularly backed up. Unless logs are correlated and analyzed, the true advantage of logging is not achieved. Automated scripts or SIEM tools should be used to understand what is really happening in the organization, and should be configured to generate alerts based on specific criteria.
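For applications and scripts, forwarding events to the central syslog server can be as simple as the Python sketch below (the server address is a placeholder; note that plain UDP syslog is unencrypted, so in line with the points above it should be confined to a management VLAN or wrapped in a secure transport).

import logging
import logging.handlers

# Forward events to the central syslog server (placeholder address, standard UDP port 514).
handler = logging.handlers.SysLogHandler(address=("syslog.example.local", 514))
handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))

log = logging.getLogger("edge-router-01")            # hypothetical device/application name
log.setLevel(logging.INFO)
log.addHandler(handler)

log.info("login success user=admin src=10.1.2.3")
log.warning("acl deny tcp 203.0.113.9 -> 10.0.0.5:23")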

3.2.2.3.2 Network Monitoring System

A network monitoring system is deployed for health monitoring of all the systems on the network. It is a detective control to check the availability of network components. The monitoring system checks CPU utilization, memory utilization, storage status, availability of a device/computer and other parameters to maintain a health profile of each device.

A monitoring system works by probing all the connected hosts and devices on the network for various parameters. Depending on the vendor,

it generally raises an alert when one or more monitored parameters cross defined thresholds, for example when a device becomes unreachable or when CPU, memory or storage utilization reaches warning or critical levels. Monitoring systems work as an early warning system, allowing problems to be troubleshot before they cause real damage to the organization. Sometimes a machine crashes for hardware reasons; however, most of the time a crash can be averted if steps are taken in time, based on monitoring alerts.

Depending on the vendor, various protocols are used to verify different parameters; the most common are ping and SNMP. Ping is used to check the availability of a host on the network, as well as the transmission time and the loss of packets

during the transmission. It gives a clear idea of traffic congestion on the network.
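A minimal availability probe along these lines can be scripted; the sketch below shells out to the system ping command (Linux-style flags, hypothetical host list) and would be scheduled to run periodically, raising alerts on failure.

import subprocess

HOSTS = ["10.0.0.1", "10.0.0.2", "core-switch-01"]   # hypothetical device list

def is_reachable(host, count=2, timeout_s=2):
    """Return True if the host answers ICMP echo; flags are for the Linux ping utility."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

for host in HOSTS:
    print(host, "up" if is_reachable(host) else "DOWN - raise alert")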

SNMP is used to check various health parameters, including CPU utilization, memory utilization etc. SNMP V3 is the most secure version available for use. Earlier versions provided authentication only by means of a community string (password), which made them vulnerable if a strong community string was not used or if it was compromised. SNMP provides a lot of useful information to hackers, therefore it is imperative to secure this communication. SNMP V3 provides encryption, authentication as well as integrity protection for SNMP communication.

More details on this topic are available in Chapter 13.

3.2.2.4 Technical – Corrective

This section covers technical controls that help in quickly correcting a situation and bringing systems and the network back to normalcy.

3.2.2.4.1 Network Configuration Files Backup

Network configuration files of all network devices should be secured and routinely backed up. In case of crash, they should be readily available to bring back the device as soon as possible. Network configuration files contain sensitive information and if they get compromised, then attacker will have all the information

required to mount the attack. It’s extremely important to secure these files with limited access control. A lot of times, configuration files are extracted for troubleshooting, audits etc. Due care should be taken in protecting those copies as well. Configuration files may also have passwords which should ideally be removed from the file copies. A lot of Network monitoring systems provide the functionality to save configuration files in a specific location. From there these files can be backed up automatically however one should ensure that even in backup they should either be encrypted or access controlled. Very limited people should have access to those files and local copies on administrator’s laptops should be completely avoided as security controls may not apply on those copies.

Also avoid using a TFTP server to back up configuration files, as the TFTP protocol does not provide authentication controls and the data can be sniffed over the network. Use SFTP or another secure method to transmit backup copies to the storage server for safe keeping.

3.2.2.4.2 Business Continuity Planning

In today's connected world, businesses may face financial and reputational losses if their services are rendered unavailable for some duration. Business continuity planning is done for such scenarios, to ensure the business is up and running when a crisis hits.

Business continuity planning is done by identifying critical services of the organization which should be available to ensure business is running. All supporting services and resources are also identified which are required for these critical services to run. All risks and threats are identified to these critical services and all disaster scenarios are brainstormed. Once risk assessment is done, controls are identified and implemented based on business services criticality. Maximum controls are implemented for core business services and their supporting services while adequate controls are implemented for other support services.

Human life is paramount, and all necessary steps for the safety of human life should be considered in the business continuity plan. In general, a business continuity plan should address the points below:

1. Facilities Recovery or re-routing

In a crisis situation, the business continuity manager should perform a damage assessment and decide whether the current facility can be made workable or the base has to be moved to another location. The business continuity plan should name alternative locations to which operations can be moved, along with all necessary arrangements, i.e. utilities (electricity/water), network connectivity etc.

2. Redundancy

For critical services to be available even in a crisis, redundancy has to be planned at every level. Redundancy for devices and servers should be planned through clustering and fault tolerance. In clustering, devices are grouped together to balance the load among themselves and perform effectively even at peak traffic; if one node becomes unavailable, the other nodes take over seamlessly. Fault tolerance should be applied at each level, e.g. at least two vendors for network connectivity, two sources of basic utilities like electricity etc. Switches and routers should be configured in active-passive or active-active mode, where if one device goes down another takes over immediately. In storage, Redundant Array of Independent Disks (RAID) solutions should be used to duplicate data across disks so that if one disk crashes, the data remains available. Backup copies of data should also be kept at two physically different locations. People redundancy should also be considered. All single points of failure should be eliminated, in terms of both technology and people resources.

3. User Training

The most important aspect of business continuity is user training. In a crisis situation, if users are not trained, panic will spread. Users are trained through mock drills and awareness trainings so that they know what has to be done in a crisis. Users should be able to follow the business continuity plan step by step, so that business services are up and running as soon as possible in the event of a crisis.

3.2.2.5 Technical – Recovery

This section covers the controls utilized in dire situations. Sometimes there are unavoidable or unexpected situations that organizations could not have prepared for; in such situations, recovery controls are used to minimize business impact and maintain business continuity.

3.2.2.5.1 Disaster Recovery Planning

Disaster recovery planning is an important aspect of business continuity planning. Business continuity takes into account all kinds of risks and outages; however, a disaster is normally declared when services are rendered unavailable due to a natural or human-induced disaster. Disaster recovery mainly focuses on the technical recovery supporting business-critical processes, including the servers and network infrastructure on which those processes depend.

Disaster recovery planning is done by identifying the assets/services supporting business-critical functions. Each of these assets/services is then assigned a Recovery Time Objective (RTO) and a Recovery Point Objective (RPO). RTO specifies how much time is allowed for recovery of a crashed service; it can also be called the maximum tolerable outage. For example, if the RTO for a specific service is 0, then real-time replication of devices and data to another site is needed; however, if the RTO for a service is 72 hours, then backing up once every 2 days should be enough. RTO also dictates the type of Service Level Agreements (SLA) required with third-party service providers; for instance, an SLA of zero downtime has to be agreed with data centre service providers, and even with telecommunication providers, for assets with an RTO of 0.

RPO specifies how much data an organization can afford to lose before business continuity is hampered. For example, if the RPO of an asset/service is 2 hours, then that data should be backed up every two hours; if the RPO is 72 hours, then a backup once every 72 hours is sufficient. RPO therefore dictates the frequency of backup. Financial transactions may require every last bit of data, and their RPO can be a few seconds. Once the RPO/RTO is defined for each asset/service, disaster recovery planning is done: for a few services real-time replication may be required, while for others backups may be sufficient. A documented disaster recovery plan should be created with details of each service, its RTO/RPO, the arrangement of the alternative site, the responsibilities of each department and individual, contact details of service providers etc.

In case the original site is unusable, one of the following types of site can be chosen to continue operations.

1. Hot Site: A hot site is mostly a replica of the original site, with all the necessary services and the latest backups, and can be operational within minimal time if the original site is down. A hot site is equipped with all essential hardware and software; all users have to do is connect to the hot site rather than the original site, and they can start working normally. Hot sites are the most expensive to maintain, as they effectively double the expenditure.

2. Warm Site: A warm site has basic infrastructure like network connectivity, phone lines, electricity etc. It may also have servers and computers, but they are not kept operational as at a hot site; they can be made operational within some time frame. It is a less costly option than a hot site; however, business operations are compromised for the period required to bring up the warm site.

3. Cold Site: A cold site provides bare minimum space and facilities; the organization is expected to procure everything essential to set up the facility. It is the cheapest of all the alternatives and also the most time consuming. When deciding on alternative sites, the organization should consider its availability requirements as well as the budget.

3.2.3 Physical Controls

3.2.3.1 Physical – Deterrent

This section covers physical controls that discourage attackers from attacking, e.g. the presence of a traffic inspector or CCTV camera deters traffic offenders.

3.2.3.1.1 Fences

Surrounding the whole campus with fences is a good deterrent. Fences eliminate multiple uncontrolled entry points and help create one or a few entry points (as required) in an otherwise open area, and limited entry points make it easier to implement good security controls. The higher the fence, the better the security it can provide. Along with height, another factor to consider is light: in the absence of proper lighting, fences can be compromised. Depending on the level of deterrence required, the fence material (rope, barbed wire, concrete, wood etc.) should be selected. Proper monitoring through CCTV cameras or regular patrols may also be required if the area covered is quite big.

3.2.3.1.2 Watch Dogs

Properly trained dogs can prove to be a good deterrent.

3.2.3.1.3 Security Guards

Security guards are not only deterrents but also work as a preventive, detective and corrective control. The presence of security guards can easily deter an intruder. Security guards can prevent intrusion and trespassing, and can also raise an alert for any suspicious activity. Security guards are normally trained to apprehend the thief or suspicious person until the authorities arrive.

Security guards should be selected based on their skills, and background checks should be done to ensure no unscrupulous person is recruited as a guard.

3.2.3.1.4 External lighting and cameras

CCTV cameras covering the grounds, combined with brightly lit spaces, can be a good deterrent. In the absence of proper lighting, other deterrents like cameras and fences may not be as effective. Due care should be taken when lighting the areas; entry and exit points should be brightly lit. Positioning of cameras is equally important: all entries and exits of sensitive locations should be covered by CCTV cameras, leaving no blind spots. There should be people monitoring and managing those cameras at all times.

3.2.3.2 Physical – Preventive

This section covers physical controls that help in preventing attacks/threats from materializing.

3.2.3.2.1 Mantrap

A mantrap is used to ensure only one person can enter at a time. It prevents tailgating (where an unauthorized person piggybacks on an authorized person and gains physical access). A mantrap can be created in several ways; the idea is to create two entry points, one after another, which are either continuously monitored or have just enough space for one person at a time. Mantraps are normally used to prevent unauthorized access to highly sensitive areas like datacenter entrances, cash or precious jewel handling areas etc.

3.2.3.2.2 Power generator/backup

A long electricity outage can prove extremely harmful to the organization. All electronic systems, building facilities, elevators and escalators depend on stable electrical power, the loss of which can seriously damage electronic systems, and datacenters have to be cooled continuously. All professional datacenters normally account for this calamity and have Uninterruptible Power Supply (UPS) units and also power generators in case the outage lasts a long time and the UPS batteries are used up. All servers and devices are also connected to dual power supplies so that they seamlessly transition to the other supply in case of an outage; there is no downtime in such a transition.

3.2.3.2.3 Physical Access Control

There are several ways to grant physical access to a secure facility, and these can generally be categorized into three factors:

Something you know: PIN, password.

Something you are: fingerprint, facial scan, retina scan.

Something you have: smart card, identity card, token, digital certificate.

3.2.3.2.4 Identity cards

Identity cards should be mandatory for offices as well as datacenters. Smart-card-based access control should be placed all over the office. Physical access cards should be provided based on

location and departments. One card should not be given to access all locations.

If the datacenter is in a shared building, then ideally two cards should be provided: one for the building and another that can access only the specific datacenter location. Employees should wear their cards at all times, preferably where they can be seen by the guard. Cards can be colour coded for easier detection of intrusion. An employee's access should be promptly removed before or on his last day or termination; access removal should be done on a priority basis to ensure the employee cannot enter the premises based on his good relations with the security staff. Access should be automated (card or biometrics) rather than relying completely on the security guard, as there is always a human factor involved, especially if the employee used to work there earlier.

3.2.3.2.5 Door access controls

For sensitive locations, locks can also be used; without explicit authorization, no one should be able to enter those locations. Locks can be wireless, number locks, swipe-card based or biometric.

Wireless lock detects card when it comes close to the receiver. Swipe card has to be swiped on the receiver. User has to input a numeric password to gain access to number lock. This password should be kept confidential and only required people should have access to it.

■ Biometrics: Biometric technologies identify a person using distinctive physical traits such as fingerprint, retina, hand geometry, face recognition, voice etc.

Fingerprint and hand geometry are commonly used biometrics for physical access control. They can be used in conjunction with other factors like number locks and smart cards or they can be used as a standalone control. When deciding a biometric device for access control, some factors should be considered:

Local culture: Sometimes a country's local culture may not be comfortable with a specific biometric; e.g. in Gulf countries, where women cover their faces, facial recognition may not be an acceptable control.

Ease of use: During peak times, the biometric should work swiftly and efficiently; employees should not have to stand in a queue for entry/exit.

Error ratio: All biometrics are prone to some kind of error ratio; care should be taken to select a device with least amount of error ratio. False positives and false negatives can negatively impact productivity or compromise security.

The latest lock technologies implement multi-factor authentication for physical entry; it can be any combination of passwords, biometrics and physical access cards.

3.2.3.2.6 Heating, Ventilation and Air Conditioning (HVAC)

When planning datacenter facilities, HVAC has to be properly managed. Servers and other devices produce a huge amount of heat and require a balanced HVAC system. Proper guidelines are available for maintaining HVAC in datacenters, and levels should be constantly monitored and maintained.

Devices and servers can go bad with too little cooling as well as too much cooling: too little cooling can cause overheating and static discharge, while too much cooling can cause condensation on the devices. Air quality and pressure are also important for datacenters, as dust and corrosive gases can clog or deteriorate electronic devices. Even if the datacenter is in a managed facility, the organization should monitor and constantly review reports of HVAC levels in the datacenter.

3.2.3.2.7 Fire Suppression

Fire is the most unpredictable threat; it can be caused by many things at any point in time. The trick is to automate the system so that if fire breaks out, it is detected and controlled as quickly as possible.

Fire suppression systems should be installed in offices as well as datacenters, in accordance with the country's fire safety regulations. Fire safety drills should be conducted at regular intervals, as the safety of human life is paramount. Users should be trained not to bring inflammable substances into offices, and dedicated smoking areas should be created if required. Fire/smoke detection sensors and fire extinguishers should be installed everywhere, and employees should be trained to use them when required.

The latest ultrasonic detection systems can detect fire, smoke and sudden rises in temperature, and can alert the monitoring team. Fire has four main elements: temperature, fuel, oxygen and the chemical reaction. To put out a fire, one or more of these elements has to be taken out. Water-based extinguishers fight temperature and oxygen, while carbon dioxide based extinguishers take out oxygen; soda acid based extinguishers take out the fuel, and gas-based extinguishers interfere with the chemical reaction of the fire. An organization can use an automatic system to detect and suppress fire: sensors can be connected to HVAC systems, and whenever smoke or fire is detected, the alarm is activated and the suppression system is triggered. The most commonly used automated fire suppression systems are:

■ Water Based: As the name suggests, it uses water to suppress fire; however, in many cases water can cause equal if not greater damage to the systems, e.g. in art galleries, datacenters etc.

Advanced water-based systems have two stages: when the first sensor element melts, the system releases highly pressurized air held in the pipes to extinguish the fire, and only if the air is unable to put out the fire is water released. With this method, water is not released for false alarms or small fires.

■ Gas Based: Gas-based systems work best in enclosed spaces. When a fire is detected by the sensors, an alarm is activated so that people clear out of the area, and then an inert gas is released to take oxygen out of the equation. It puts out the fire easily and quickly. This system works best for datacenters, as no damage is caused to the electronic equipment. The following gases/agents can be used:

Argon Gas Suppression System or Argon Clean Agent Systems.
FM 200 Clean Agent System.

Novec 1230 Clean Agent Systems.
FE-13 Clean Agent Systems.
Carbon Dioxide (CO2) Clean Agent Systems etc.

3.2.3.2.8 Physical Security of Network Devices

When designing your network, utmost care should be taken over the placement of network devices. In smaller organizations, routers and switches are sometimes placed in guest areas, reception areas, storage rooms and even pantries.

A secure location should be planned for all network devices. Ideally, they should be in the datacenter along with the other servers. In office spaces, they should be placed in racks secured in a locked room that only the required people have access to. For Wi-Fi access points, location is extremely important: if Wi-Fi routers are placed in a corner of the office, signals may leak outside. They should be placed in central locations inaccessible to normal users; many organizations place Wi-Fi access points in false ceilings or mount them on the roof.

3.2.3.3 Physical – Detective

This section covers physical controls that help in detecting an ongoing attack.

3.2.3.3.1 Motion Sensors

Motion sensors alert monitoring teams to intrusions or suspicious activities based on changes in pressure, temperature, light, sound, infrared etc. They should be fine-tuned to minimize false positives and negatives.

3.2.3.3.2 Sensors and Alarms

HVAC sensors monitor environmental factors and alert monitoring teams if thresholds reach the warning/critical levels defined in the system. Even slight changes in environmental factors can damage electronic devices, therefore sensors should be properly monitored and errors quickly rectified.

3.2.3.3.3 Closed-Circuit Television

CCTV works as a deterrent as well as a detective control. When properly monitored, malicious activities can even be thwarted via the alert mechanism and corresponding corrective action. Cameras should be placed to cover all sensitive areas and entry/exit points.

CCTV cameras should be placed to get an overall view; however, care should be taken that they do not invade people's privacy, e.g. in ATM rooms the camera should be placed to capture the face of the user, not his card or PIN number.

For audit purposes, it is a good idea to create an inventory of all the controls used in your environment. The sample spreadsheet given below can be a good starting point.

Section 2 - Securing the Network

This section covers the essentials of network security implementation.

Following Chapters are covered in this section:

Chapter 4: Decoding Policies Standards Procedures & Guidelines

Chapter 5: Network Security Design

Chapter 6: Know your assets

Chapter 7: Implementing Network Security

This section delves into the what's and how's: what to protect, and how to protect it in the design itself, based on policies and standards. It starts with understanding policies and standards, then moves to identifying your assets, securely designing your network and, finally, a secure process for introducing new assets to the network. It also covers the best practices to be followed for different network assets.

Chapter 4 Decoding Policies Standards Procedures & Guidelines

4.1 Documents Hierarchy

Every organization requires documented policies and standards to define organizational rules and to standardize organizational processes; however, policies and standards are used for much more:

Enforcing organization’s rules and regulations.

Better decision making.

Clearing doubts and solving legal disputes.

Compliance with regulatory authorities (such as TRAI for telecom and RBI for banking sector).

Compliance with industry security standards (such as ISO-27001/ PCI-DSS/HIPAA).

Security audits.

In this section, we will specifically discuss about Security policies, standards, procedures and guidelines and understand how they all fit together.

The Security Policy is the major document, which represents management's commitment to protecting the organization's assets. It contains high level statements which cover overall security requirements. Although the policy provides direction, it is not enough in itself and needs supporting documents like standards, guidelines and procedures to provide a detailed implementation strategy. The information security policy drives organizational security strategy. The policy is supported by standards, which in turn are supported by procedures and guidelines. Standards provide specific action statements based on the policy. Procedures and guidelines further detail the action statements to maintain consistency across the environment.

Document creation and maintenance is a gradual process. Different companies follow different hierarchical structure. Smaller companies may just write detailed policies to cover everything, however as organizations expand and mature, document structure and hierarchy also mature towards a more granular approach.

4.2 Policy

The information security policy is the document that top level management endorses to communicate their security commitment and obligation. It dictates the role of security in the organization and the organization's security requirements. The policy provides a high level view of the protection requirements of the organization's assets (including information, people, process and technological assets). It also defines a framework of compliance with legal and regulatory requirements. Policy focuses on 'What to do', rather than 'How to do it', which is left to supporting documents. The Security Policy should ideally be created by the Information Security team; however, input from Legal, Human Resources, IT and various Business Units should be invited and incorporated as required. If the

policy is created in silos and enforced directly on all departments, there may be a few aspects that are overlooked or a few aspects that are too stringent for the business to work effectively. Information security exists to support business and not to hinder it.

Policy should be easy to understand, without technical jargon in it. It should cover most, if not all, of the organization's key security areas.

Normally a Master Information Security Policy is created to give an overall requirement statement, and several Sub-Policies are created to cater to specific areas/audiences/processes. The Master Policy ideally has broad statements of the organization's information protection goals. It should define the regulatory compliance that the organization is required to follow (Government Laws, PCI-DSS/HIPAA etc.). Sub-Policies are granular policies dedicated to a specific area and audience. These policies are an effective way to organize information for users; to get specific information directly, rather than going through one detailed policy, e.g. Computer Security Policy, Network Security Policy, Personnel Security Policy, Security Management Policy, Physical Security Policy, Acceptable Use Policies etc.

Some examples of common policies available in organizations are: Acceptable use Policies (Email, Internet, Company Laptop etc.). Network Security Policy. Physical Security Policy.

Access Control Policy. Risk Management Policy. Asset Management Policy. Operations Management Policy.

Clear Desk and Clear Screen Policy. Cryptographic Management Policy. Remote Access Policy. Wireless Communication Policy. Mobile Device Policy.

We will mostly focus on Network Security Policy in this book.

Acceptable use policies are written specifically for normal users. These policies dictate the acceptable way of using company resources. E.g. Acceptable use policy for emails defines rules for using company emails, such as using a company specific disclaimer and signature, no forwarding of chain emails, no personal emails etc.

Acceptable use policies should be written for all company services that users avail, such as emails, mobile devices, wireless, internet, company laptop etc. 4.3 Standards

Standards are mandatory documents as they describe how to implement security policies. Security policies are somewhat vague and depend on supporting documents. Standards are the detailed documents that describe the attributes required to support policy statements.

For example, Policy may state - Audit trails should be enabled on all critical devices.

If a network professional tries to decode this statement, he will have several questions like:

What are critical devices in our environment?

Audit trails are enabled but what level of audit trail should be enabled to satisfy the policy?

All these questions should be answered in the Standard. For this example, there could be Asset/Information Classification standard, which will define the classification criteria of the organizational assets. Network security standard should specify the level of audit trail to be enabled.

Standards provide consistency to organizational processes. If an organization is supporting 1000 routers and all are configured differently, it's going to be a nightmare for security management as well as network operations. Standards define what is acceptable and what is not, so at any point in time anybody can refer to the document to configure a router, without deviating from the allowed configurations. There are commonly two different types of standards:

4.3.1 Technical Standards

As the name suggests, technical standards explain the allowed values or configuration for a specific technology. For example, Network Security Standard defines common attributes to be used on all network devices however Router Security Standard defines router specific attributes to be used on all organizational routers. When drafting technical standards, remember to include different stages of device/technology life cycle. Broadly these can be categorized in three major phases:

1. Initiation/Pre-Production Phase

a. Initial setup and upgrade. b. Hardening and Lockdown.

c. Functionality Testing. d. Technical Compliance Audit. 2. Operation/Production Phase

a. Change Management. b. Periodic Vulnerability assessment. c. Periodic Upgrades. d. Periodic Backup e. Monitoring. 3. Disposition/Post Production Phase

a. Media Sanitization. b. Destruction of media.

4.3.2 General Standards

General standards define generic standards for the whole organization or specific departments e.g. Physical security standard, Asset classification and Labeling standard, Password standard etc.

4.4 Procedures and Guidelines

4.4.1 Procedures Procedure is at the lowest level in document chain. Procedures are more detailed than policies, standards and guidelines documents.

These explain the step by step process to implement clauses, defined in policies and standards. Procedures include screenshots and graphics to elaborate the process. Procedures are mostly drafted separately for different technologies. E.g. Password Procedure for Active Directory and Network devices will be different however they will be based on Password standard. 4.4.2 Guidelines

Guidelines are the collection of suggestions to implement policies and standards. They are not mandatory in nature.

For example, password guidelines may suggest ways to create a strong password, such as: think of a song you like, take the first letter of each word, and replace 'e' with '3' and 'a' with '@'.

Security Hardening guidelines are the most common guidelines used in organizations.

4.5 Document Format All the documents including Policies, Standards, Guidelines and Procedures should follow organization specific format. Generally, there's a template available which should be used within the organization. Important things to note are Version control and Document History.

Some organizations also create specific document management guidelines. These guidelines help in standardizing document format throughout the organization. Even if such guidelines are not available, each document should at least have a name, version, document history, contributor's name and approval authority details. Documents once created, should be approved by relevant authorities e.g. Network Security standard may be jointly approved by Security Head and Network Head; however, Network Security procedure may be approved by Network Head only. Depending on your organization’s structure, authority will differ. Document approvers are document owners and it is their responsibility to ensure that documents are managed according to organization’s guidelines.

A Sample Network Security Standard information page is shown here. Important terms to understand about document format are:

Version: When a document is initially created, it is given version number 0.1 and subsequent changes keep incrementing the version by 0.1, which means consequent versions will be 0.2, 0.3 and so on. When the document is approved, it is given version number 1.0 irrespective of the current version. For minor amendments/documents undergoing revisions, the version number is incremented by 0.1 (1.1, 1.2 ...). When these changes are finalized, the version number is incremented by 1.0, eliminating the figures after the decimal (2.0, 3.0 and so on). The version number provides instant information about the document, e.g. version 5.5 would mean there have been 5 major revisions and approvals and 5 minor amendments after the last major approval. Status: The status of the document reflects its current state, i.e. Draft/Final. If the document has been reviewed and modified and is in the process of approval, the status should reflect a Draft version. If the document is approved by the concerned authority, then the status should reflect the Final version.

Modification History: Modification history is maintained to keep track of changes in the document, over a period of time. History should provide a clear picture of what was added/deleted/modified with clause or section details. History should also record dates of approval of modifications and approval authority details. There should be a defined process to update documents. Documents should not be updated at random. Document changes can also be done through formal change management procedure, where justification behind the changes is formally recorded.

Documents should be reviewed at least annually or as specified in Information Security Policy. Even if there are no changes required in the document, modification history should record that document was reviewed, however no changes were required. Review process should undertake any new security requirements, policy changes, infrastructure changes, changes approved by change management/architectural review board and vendor recommendations.
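Pulling these elements together, a document information page might look like the following (all names, versions and history entries here are purely illustrative):

Document Name: Network Security Standard
Version: 2.1
Status: Draft
Owner/Approver: Security Head and Network Head
Modification History:
0.1 - Initial draft
1.0 - First approved release
1.1 - Logging requirements updated in Section 3.4
2.0 - Annual review; changes approved and finalized
2.1 - Draft amendment under review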

4.6 Decoding Policies and Standards 4.6.1 Step 1: Identify relevant policies and note down relevant statements

There are various ways an organization may decide to write an Information Security Policy. Ideally policies should be written by the Security team and distributed to/discussed with the main stakeholders. Stakeholders like Legal, HR, Network Team/System Team and Business Units can decide if a policy statement holds good in the current infrastructure or whether there will be a huge investment/resource requirement just to fulfil some statements in the policy.

Organization has to take a call if such an investment is justified as opposed to the potential risk. In such cases, those statements can be marked as future updates and may not be included in final policy. For these future updates, compensatory controls may be considered.

There are no hard and fast rules on how policies should be written but normally organizations follow one or hybrid of below techniques. 1. Granular Policies Granular Policies are created for security aspects of various technical and business units. In most of the cases, corresponding standards are created for each policy.

2. Detailed Information Security Policy

Detailed Information Security Policy covers all technical and business security aspects. This is one stop document for all

security needs. Separate standards are created for different technical and business units.

3. High Level Security Policy A very high level security policy is drafted which covers common security aspects across technical and business units. Details are included in sub - policies which are drafted for each specific area. Corresponding standards may be created for each sub-policy.

When trying to look for relevant policies, it's important to look at all the policies and identify relevant points. The network policy may not contain all the relevant points that are required for creating comprehensive standards and procedures.

4.6.1.1 Sample Policy Decoding Exercise

Information Security Policy of XYZ Ltd.

Human Resource

Physical and Environmental Security

Network Security

Data Backup Security

Logical Access Control

Properly prepared notes will guide you in drafting the related network security standard and procedure. The notes illustrated above are just an example of how to decode policies. Your notes will differ depending on the infrastructure and architecture of the environment. For example, if your organization does not have a central logging server for logs or a centrally managed access control server for network devices, your notes will be different to accommodate compensating controls.

4.6.2 Step 2: Identify gaps between policies and standards/Drafting standard

If your organization has existing Network standards, then you need to analyze them for accuracy and completeness; however, if your organization does not have network standards then you might have to draft one from scratch. 4.6.2.1 Identifying gaps The notes created above can guide you to perform a gap assessment between standards and policies. You can use the notes as a checklist to assess standards. Standards should ideally cover all the points in your notes and add relevant explanation/value to policy statements. If there's any gap, then the standards should be updated. 4.6.2.2 Drafting a Standard A Standard should not be so detailed as to include specific commands for each task (covered in Procedures/Guidelines) and it shouldn't be as high level as a Policy. It should explain the policy statements and bring down their ambiguity. A basic Standard may have the following components: Purpose: Purpose of the standard. Scope: Applicability of this standard.

Standard Statements: Statements detailing policy requirements in context of specific technology/business unit. Enforcement: Consequence of not following the standard.

Exceptions: Authority to approve exceptions to the standard.

4.6.2.3 Sample Router Security Standard Snippet

Purpose: - Purpose of this document is to standardize router configuration. Scope: This standard is applicable to XYZ head office and datacenter. Statements: ➢ Pre-Production Phase:

Routers have to be labeled, tagged and inventoried based on the Asset Classification Standard.

Routers are to be configured with XYZ organization standard build.

Routers to be assigned IP addresses within 192.168.3.0 -192.168.3.255 range.

Routers are to be configured and locked down in the test environment. ➢ Operation Phase: Routers to be updated to the latest stable version/n-1 version. Change management to follow the Secure Change Management Guidelines. Access control to be managed according to the Access Control Standard.

Enforcement: Anyone violating above statements will be subjected to disciplinary action. Exceptions: Any exceptions to above statements should be approved by head of networking department. Reference Documents:

Asset Classification Standard. Secure Change Management Guidelines. Access Control Standard.

Network devices these days handle communication at multiple layers; for example, firewalls can handle layer 3, layer 4 and layer 7 filtering. It is recommended to use the maximum security features, but a lot of times network administrators like to keep things clean for troubleshooting. So the firewall is used essentially as a layer 4 device and layer 7 features might not be enabled. Layer 7 filtering can be handled by a network intrusion prevention system (NIPS) or a separate unified threat management system (UTM).

In audits though, it can become an issue as a security feature is not enabled and capability of a device is not used in totality. If Standards/Hardening guides also do not include enabling of such feature, auditor can still identify a finding that Hardening guides need improvement along with network configuration.

To avoid such scenarios, include a statement in the network security standard which clearly mentions that routers will be used as layer 3 devices and firewalls will be used as layer 4 devices. Any upper layer filtering will be handled by NIPS/UTM (depending on your network).

Refer to existing standards rather than including all details in all standards e.g. you can refer to Password Management standard, when defining Password management for network devices. You can mention that “Passwords management to be done in accordance with Password Management Standard”. Referring to the right document is important as it maintains consistency across the environment. Referring to the standards also helps when one standard is modified. Modification can only be done at one place and rest of the documents will automatically refer to the updated document. 4.6.2.4 Drafting a Router Security Standard Exercise

01 Purpose

This document describes security aspects of routers in the production environment of XYZ Network.

02 Scope

Document is applicable to XYZ’s datacenter network, head office network infrastructure and network personnel.

03 Standard Statements

3.1 Router Physical Security


3.2 Router Access Control


3.3 Router Accreditation


3.4 Router Management


3.5 Router Disaster Recovery and Business Continuity


04 Enforcement

Anyone violating security standards may be subjected to disciplinary action.

05 Exception

Exceptions to be signed by CTO.

4.6.3 Step 3: Draft Guidelines/Procedures for Policies and Standards 4.6.3.1 Drafting Procedures Once Standards are created, Procedures are drafted to provide detailed steps required for anyone to implement Policy and standard statements without going through all the documents again and again. Well defined procedures provide implementation consistency throughout the Network.

Procedures are the most commonly used documents by technical personnel. It is important that they contain detailed step by step processes to complete a task or statement in standard. Once network procedures are implemented properly, devices are automatically compliant with policies and standards. Therefore, it is the job of Network management team to ensure that procedures are accurate and complete. Procedures can include step by step processes, flowcharts, graphics and screenshots for the ease of implementer. Let's take an example of our sample standard statement and create a Procedure statement.

Logs: Router log level is enabled at Notification level. Logs are sent directly to the central logging server. Procedure statement:

! Enable logging
hostname(config)# logging on

! Enable service timestamping of logs
hostname(config)# service timestamps debug datetime localtime
hostname(config)# service timestamps log datetime msec localtime
hostname(config)# clock timezone UTC 5

! Enable and configure logging to logging server
hostname(config)# no logging console
hostname(config)# no logging monitor
hostname(config)# logging trap 5
hostname(config)# logging source-interface loopback 0
hostname(config)# logging 192.168.1.24

4.6.3.2 Drafting Guidelines Like procedures, guidelines are also detailed documents; however, they are not mandatory in nature. Guidelines are supporting documents for implementing policies and standards. Essentially they are a collection of suggestions for a specific task, e.g. Password creation guidelines, Router hardening guidelines etc. From a security perspective, the network team commonly uses hardening guidelines in the environment. Hardening guidelines are also known as benchmarks, security configuration checklists, baselines, best practices etc. Hardening guidelines are basically a list of instructions to configure systems to a particular security level (minimum security baseline). A hardening guide provides command level instructions to secure particular devices/solutions. These are used to lock down the devices in the pre-production phase.

Hardening guides are device specific. Ideally for each make of device, there should be separate hardening guide e.g. Cisco router hardening guide, Juniper router hardening guide etc. Hardening guides should be approved and agreed upon, after that they can be used for configuring devices. Devices are used by various different organizations with varied security requirements. That is the reason devices come with a lot of features/protocols to be able to serve different needs of different type of industries. However not all the features are

required by a single organization, therefore it's a good idea to lock down devices based on the features required by the organization. Hardening guides act as a one stop shop for all security related configurations to be done on any device before introducing it to the production environment. Operation and design decisions are not part of hardening. Hardening guides contain explicit commands to enter in the device console in order to achieve a specific objective. Some objectives may require a series of commands, which should be explicitly stated. Hardening guides are also used as checklists to ensure the minimum baseline level of security is always intact, especially after major upgrades/changes. To draft hardening guides/checklists you don't have to start from scratch. Hardening guides are normally created by IT vendors, governments and other organizations. Many common hardening guidelines, such as the CIS benchmarks, are freely available. These guidelines can be customized according to your environment. E.g. the CIS Security Configuration Benchmark for Cisco IOS has a lot of settings for user access control, however if Cisco

ACS takes care of that part in your environment, then you would not require access control statements in Cisco router hardening guide as access controls statements will be part of Cisco ACS hardening. Generic components of device hardening guides are:

1. Legal Warning It's recommended to display a legal warning for anyone connecting to your network. Legal warning should establish that unauthorized access is punishable by law and all connections are monitored.

The legal banner should be written according to state/country rules or as advised by the Legal Team. It should be displayed every time someone connects to the router. The legal warning is important for legal cases, to establish that the perpetrator was pre-warned that his actions would be prosecuted. 2. Reduce Attack Surface Secure interfaces All unused interfaces, physical ports and console ports should be disabled and shut down when not in use. Interfaces that are not in use and are inappropriately protected pose a big threat to the entire network.

If console ports are not appropriately protected, anyone who manages to get physical access can shut down the device. If console ports are required, then they should be aptly protected physically and logically (via access controls). Disable IP Proxy-ARP and IP directed broadcast on relevant interfaces. Enable appropriate port security on all switch ports individually. Disable Unnecessary protocols Devices come with various protocols, and allow various versions of these protocols, to suit the organization's needs. Open protocols and services are entry gates for attackers. Finger services, UDP small and TCP small services should be disabled. Common services to disable are daytime, chargen, echo, discard, IP BootP server, MOP, IP Source-Route, IP identd, http, ntp, cdp etc. NTP, if required, should be enabled on specific ports to communicate only with authorized peers. Clear text protocols like Telnet, SNMP etc. should be disabled and replaced with their secure alternatives. Use the most recent protocol version as a rule of thumb. Newer versions of most protocols provide more security features compared to older ones, e.g. SNMP v3 provides authentication and encryption (when configured in "AuthPriv" mode).
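As an illustration only, the following Cisco IOS style commands show one way clear text management protocols could be replaced with their secure alternatives (the domain name, group name, user name and passwords are placeholders, not values mandated by this book):

! Replace Telnet with SSH for device management
hostname(config)# ip domain-name example.local
hostname(config)# crypto key generate rsa modulus 2048
hostname(config)# ip ssh version 2
hostname(config)# line vty 0 4
hostname(config-line)# transport input ssh
hostname(config-line)# exit

! Replace clear text SNMP with SNMP v3 in AuthPriv mode
hostname(config)# snmp-server group NMS-GROUP v3 priv
hostname(config)# snmp-server user nms-user NMS-GROUP v3 auth sha AuthPass123 priv aes 128 PrivPass123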

When choosing a routing protocol, go for the most secure option available. RIP v2, OSPF v2 and BGP v4 provide peer authentication and integrity checks along with routing capabilities. 3. Change Defaults Network devices come with certain default configurations, i.e. default username and password, default SNMP read/read-write community strings etc. All these defaults should be deleted, and stronger user credentials and community strings should be used. 4. Enable strong Authentication Network device native authentication should be configured in a secure manner. Complex passwords should be created with letters, numbers and special characters. Configure password settings in accordance with the organization's password standard.

Passwords should be encrypted/hashed using strongest method available.

Cisco allows passwords to be encrypted with Cisco type 7 encryption or hashed with MD5. Cisco type 7 encryption can easily be decrypted with tools found through a simple Google search. The MD5 hash can also be broken, however with a little more effort. As MD5 hashing is stronger than Cisco type 7 encryption, hash all passwords with MD5.
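For instance, on a Cisco IOS router the difference can be illustrated as follows (the passwords shown are placeholders):

! Avoid: 'enable password' with service password-encryption stores the
! password using reversible Cisco type 7 encryption
hostname(config)# enable password WeakPass123
hostname(config)# service password-encryption

! Prefer: 'enable secret' stores an MD5 hash of the password
hostname(config)# enable secret Str0ngP@ssw0rd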

It's imperative that password confidentiality is maintained by creating multiple unique roles and user credentials on the device. Generic accounts require password sharing within the team, which is a threat to accountability as well as to the confidentiality of the password.

Passwords can also be stolen from backup copies of configuration files or network sniffer dumps. Network configuration files should be aptly protected in storage with encryption and access control. Use of network sniffers should be limited to specific users, and data captured by sniffers should be deleted securely if not required. If required, the data should be protected with encryption and access controls. Use of encrypted protocols (SSH/HTTPS) for managing network devices can minimize the risk of network sniffing. Rather than relying on native authentication, a centralized TACACS+/Cisco ACS server should be used to implement Authentication, Authorization and Accountability (AAA). Access control related configuration can be part of those servers' hardening guides.
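A minimal sketch of pointing a Cisco IOS device at a central TACACS+ server for AAA might look like this (the server name, IP address and shared key are placeholders):

hostname(config)# aaa new-model
hostname(config)# tacacs server ACS-01
hostname(config-server-tacacs)# address ipv4 192.168.3.250
hostname(config-server-tacacs)# key SharedSecretKey
hostname(config-server-tacacs)# exit
hostname(config)# aaa authentication login default group tacacs+ local
hostname(config)# aaa authorization exec default group tacacs+ local
hostname(config)# aaa accounting exec default start-stop group tacacs+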

5. Prevent spoofing attacks

Proxy ARP

Proxy ARP is a way by which a device can respond to ARP requests on behalf of another device. However, this feature can be misused by an attacker by spoofing the device's MAC address. Hosts can

forward traffic to the attacker's address unknowingly. To avoid this attack, disable Proxy ARP on relevant interfaces. Source Routing

The IP protocol allows the source/host to decide the network path a packet should take. In a networked environment, specialized devices such as routers decide the path a packet will take, based on its destination. IP source-routed packets can be used by attackers to gain more information about the network. Unless specifically needed, IP source routing should be disabled. ICMP Echo requests ICMP echo requests (Ping/Traceroute) are extensively used for troubleshooting; however, they are also used by attackers to detect live hosts. ICMP echo requests should be blocked at least on external devices so that devices are not visible to casual hackers or script kiddies.
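In a hardening guide, these anti-spoofing recommendations might be expressed with Cisco IOS style commands such as the following (interface names and the ACL number are placeholders):

! Disable IP source routing globally
hostname(config)# no ip source-route

! Disable Proxy ARP on a specific interface
hostname(config)# interface GigabitEthernet0/0
hostname(config-if)# no ip proxy-arp
hostname(config-if)# exit

! Block inbound ICMP echo requests on the external interface
hostname(config)# access-list 110 deny icmp any any echo
hostname(config)# access-list 110 permit ip any any
hostname(config)# interface GigabitEthernet0/1
hostname(config-if)# ip access-group 110 in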

Session Time out A session timeout should be configured for all console and Virtual TTY sessions so that locked resources can be freed when not in use.
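For example, an idle timeout of 10 minutes (a placeholder value; use whatever your standard mandates) could be applied to console and VTY lines on a Cisco IOS device as follows:

hostname(config)# line console 0
hostname(config-line)# exec-timeout 10 0
hostname(config-line)# exit
hostname(config)# line vty 0 4
hostname(config-line)# exec-timeout 10 0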

Unicast Reverse Path Forwarding Unicast Reverse Path Forwarding (µRPF) is a way to determine the validity of the source address. µRPF performs a routing table lookup for the source path and incoming interface. It determines whether a packet is using the same path a sender would normally use to reach the destination. If it is not a valid path, then the packet is considered spoofed and dropped. The µRPF option should be enabled on routers and firewalls. 6. Prevent Denial of Service attacks A DoS attack can be caused by misusing the TCP 3-way handshake technique, in which the sender sends a SYN packet, the device responds with a SYN-ACK and the sender completes the connection with a final ACK.

An attacker can manipulate this process by bombarding the device with just one type of packet. The device sends a response and waits for the sender to finish the TCP handshake process. However, the attacker does not finish the process, causing a lot of open connections at the router end. The open connections finally overwhelm the router processor, leading to a crash.

Configure a timeout for TCP connections which discards open connections and frees up the memory if a response is not received in a stipulated time, e.g. 10 seconds. ICMP Redirects and ICMP Unreachable ICMP redirect and ICMP unreachable messages are sent back to the sender to provide the status of their packet; however, an attacker can use this function and bombard the device with requests which require generation of either ICMP redirect or ICMP unreachable messages. If these requests overwhelm the system, it can crash. To avoid this situation, disable ICMP redirects and ICMP unreachable messages. IP Directed Broadcast

An IP directed broadcast is a carefully constructed ICMP echo request packet sent to the IP broadcast address of a network/subnet. The packet uses a spoofed source address. When these broadcast requests are responded to by the network, they can overwhelm the source address, and a continuous stream of such broadcast packets can also overwhelm the organization's network devices.

This way an attacker can attack an IP address (the spoofed source address) using the organization's network bandwidth, which adversely impacts the organization's network.
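As a sketch, a hardening guide could capture these DoS related recommendations with Cisco IOS style interface commands such as the following (the interface name is a placeholder):

hostname(config)# interface GigabitEthernet0/1
! Enable unicast reverse path forwarding on the interface
hostname(config-if)# ip verify unicast source reachable-via rx
! Do not generate ICMP redirect or unreachable messages
hostname(config-if)# no ip redirects
hostname(config-if)# no ip unreachables
! Do not forward directed broadcasts
hostname(config-if)# no ip directed-broadcast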

4.6.3.2.1 Sample Snippet of Hardening Guide

1. Remove all default settings: If there are any factory default configurations or user accounts, remove or disable them. 2. Set Banner: Set the banner to the company specific warning that you have agreed with the Security/Legal Team. A standard banner will probably look like the one below: "WARNING: Access to this device is restricted to authorized personnel only. If you are not an authorized user, disconnect now. Any attempts to gain unauthorized access will be prosecuted to the fullest extent of the law." 3. Disable all non-essential/clear text protocols:

Telnet, Finger, CDP, IP BootP, MOP, IP Source Route, FTP, HTTP
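On a Cisco IOS router, the explicit commands corresponding to this list might look like the following (shown only as an illustrative sketch; FTP handling depends on the platform and is omitted here):

! Disable Telnet on the VTY lines by allowing only SSH (see the SSH example earlier)
hostname(config)# line vty 0 4
hostname(config-line)# transport input ssh
hostname(config-line)# exit
! Disable finger, CDP, BootP, source routing and the embedded HTTP server
hostname(config)# no ip finger
hostname(config)# no cdp run
hostname(config)# no ip bootp server
hostname(config)# no ip source-route
hostname(config)# no ip http server
! Disable MOP on each Ethernet interface
hostname(config)# interface GigabitEthernet0/0
hostname(config-if)# no mop enabled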

4.6.4 Step 4: Self Audit and Exception Handling

Ideally all network configurations should be according to policies, standards, procedures and guidelines. Vendors and security research organizations provide tools to audit configurations against recommended practices; Microsoft Baseline Security Analyzer and Microsoft Security Compliance Manager are examples of such audit tools. These tools can be customized according to the environment, and configurations can be checked against approved hardening guides. A self-audit can be conducted to ensure compliance to policies and standards; however, in the real world there are always discrepancies. For business requirements, there may be a necessity to violate some statements on a few devices. If violations are only required for a few devices, they are called exceptions. You can formally file for an exception if there's a procedure/form available with the Security team; otherwise document the exception and get a written approval from the information owner. The procedure will define the authority to approve the exception; it is generally the information owner or the security head or both. An exception filing should include at least the following: √ Policy/Procedure clause for which exception is sought.

√ Scope of exception i.e. List of devices/servers/computers/users. √ Business justification of exception.

√ Expiry date of exception. A sample Risk Exception/Acceptance Form may look like below:
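An illustrative layout for such a form, based on the fields listed above (all field names and entries are only examples), is:

Risk Exception/Acceptance Form
Policy/Standard clause for which exception is sought: ____________________
Scope of exception (devices/servers/computers/users): ____________________
Business justification: ____________________
Compensating controls (if any): ____________________
Expiry date of exception: ____________________
Requested by / Date: ____________________
Approved by (Information Owner / Security Head) / Date: ____________________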

Signed off Forms should be kept for audit purpose. For quick reference and tracking of risk exceptions, a spreadsheet can be maintained. We will see sample spreadsheet later in Chapter 9.

Chapter 5 Network Security Design

5.1 Network Security Design Principles

Till now we have learned about security goals and methods. We have also discussed the potential threats that can hamper the business and the various controls available to us to combat them.

However, it is important to understand the correct way to design your network using those controls, to ensure maximum security. Network Security design decides which controls will be required in the organization and how they will be implemented in the environment.

A well-thought-out design based on the organization's security requirements lays the foundation of the Organizational Security Plan. A scalable, efficient and cost effective security system can be deployed if the design incorporates basic design principles.

In this chapter, we will understand these principles in detail. Security design principles are important when deploying new solutions. Also over a period of time we need to ensure that design principles are not violated, when making changes to the existing infrastructure.

5.2 Defense in Depth

Defense in depth principle defines a layered security structure to achieve the ultimate goal of protecting organization’s data. At each layer a combination of physical, technical and administrative controls are implemented. Combination of controls includes deterrent, preventive, detective and recovery controls so that breach can be detected and responded to in that layer itself.

In the real world each layer is further layered to decrease the likelihood of a security breach occurring. To get to the data, an attacker will have to peel away different layers, and also layers within layers. For example, at the perimeter there can be layering of routers, firewalls and IDS/IPS systems to block and detect any attack pattern before it enters the network.

Also, when similar controls are layered, they should ideally come from different vendors, for example using Juniper firewalls as perimeter firewalls and Cisco firewalls in the DMZ. This approach helps ensure that if an attacker is exploiting a vulnerability present in the Juniper devices, he will get stuck at the Cisco firewall layer.

Basic idea behind layered approach is: First deterring the attacker from attacking physically or logically.

If he continues, then prevent and delay him from causing any real harm by putting various roadblocks at each layer. If attack can’t be prevented then attack should be detected and responded to as soon as possible (Logging, Monitoring, SIEM, IDS/ IPS) ideally before he reaches data.

If everything fails, then there should be alternative arrangements for business to continue (BCP and DR plans) 5.3 Security Zones And Network Segmentation A Network security zone is a specific part of network, where assets have similar/same security requirements in terms of: Confidentiality, Integrity and Availability. Accessibility and Access control. Logging and Monitoring.

In the design phase itself, network security zones should be defined appropriately. By definition Security zone is a network segment with well-defined perimeter and controlled entry and exit points. Only information related to servers/devices inside the segment can enter or exit a security zone.

Security zones can be created based on: Assets with same criticality levels. Assets with same functionality (Email cluster servers). Assets providing services to internet (Web Application server).

Assets providing services to Trusted Third parties/Vendors. Assets used for administration of network (Logging/Monitoring/ Managing). Assets used by employees (Workstations).

Assets used in Wireless Network. Assets used by Third parties, on site. Assets used in Development/User Acceptance Testing Labs. Assets used in Remote access.

Common examples of security zones are: Demilitarized Zone (Servers that need direct access to Internet are kept in an isolated zone separate from internal network). Wireless zone. Remote access.

Third Party. UAT Labs (Testing area). Employee Zone.

Management Zone (Device Management).

Zones should be grouped together and assigned a trust level to control inter-trust zone communication. Low trust level areas are the zones available to large number of users including unauthorized users e.g. Internet, Wireless, remote access zones etc. DMZ can be Medium trust level zone while core server area in datacenter can be High trust areas.

Communication from a higher trust level to a lower trust level is normally allowed, while traffic from a lower trust level to a higher trust level is either restricted completely or has to go through stringent scrutiny by firewall/IDS/IPS/DLP controls etc. Firewall access control lists/IPS are used to control traffic between different trust zones. Dedicated firewall devices, firewall functions in IPS devices,

and access control lists in network routers and switches decide where network traffic can go and where it cannot.
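As a simple sketch, an access list on a zone boundary device might permit only the traffic a lower trust zone legitimately needs into a higher trust zone (the addresses, port and ACL number below are placeholders):

! Allow the DMZ web server to reach only the internal database server on its service port
hostname(config)# access-list 120 permit tcp host 192.168.10.5 host 192.168.20.10 eq 1433
! Deny and log everything else from the DMZ subnet to the internal subnet
hostname(config)# access-list 120 deny ip 192.168.10.0 0.0.0.255 192.168.20.0 0.0.0.255 log
hostname(config)# interface GigabitEthernet0/2
hostname(config-if)# ip access-group 120 in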

To implement Network security zones an optimized IP addressing schema should be used which is flexible enough to accommodate future requirements. IP Addressing schema can be designed according to Location of assets, Function of assets, Assets criticality levels.

Network segmentation can be done at Layer 2 with the help of switches. A router will be required to route traffic between two different Virtual LANs, and firewalls will be required to control the communication between two zones. To monitor traffic, additional controls like IDS/IPS/DLP etc. can be deployed between different zones.

5.3.1 Benefits Of Security Zones

1. Compartmentalization of resources: All servers/devices are grouped together to provide optimum security depending on requirements. 2. Reduced attack surface: Unnecessary services and ports can be blocked at entry level to reduce attack surface. Any traffic not relevant to specific security zone should be filtered at entry level.

3. Reducing scope of audit: If only specific services are under regular audit requirements, such services can be compartmentalized to reduce the scope of the audit. This security zone can have additional controls based on audit requirements, e.g. PCI-DSS requires specific controls for financial information protection, therefore all required devices can be clubbed in one zone and appropriate controls can be applied to that zone. 4. Protecting critical information: Critical servers and devices can be grouped together and assigned a high trust level to ensure protection of critical information. For audit purposes, we can refine the control matrix spreadsheet created in Chapter 3 to create a zone based inventory of controls. With a zone based inventory we can identify the control gaps present in different zones. The purpose is to have physical, administrative and technical controls in all the zones, with special attention to low trust zones. Also we should ideally have deterrent, preventive, detective and recovery controls in all the zones.

In Chapters 15 & 16, we will discuss more on the usage of the below spreadsheet, to show auditors the various safeguards existing in a specific zone. This spreadsheet can thus help in reducing the number of findings or their severity ratings.

Hint: There's no right and wrong answer. You can fill up above spreadsheet as per your organization. 5.4 Secure Remote Access

Today organizations do not work traditionally i.e. from a specific location. They are global entities with users spread across the world. Organizations have to enable users to work from anywhere without losing productivity.

For enabling these facilities for user, remote access is used by the organizations. Remote access works seamlessly for user; experience is more or less same as being in the office LAN network. Remote access can be enabled from one specific location to internal network (site to site) or for an individual (anywhere in the world) to internal network (client to site).

Important factors to consider for securing remote access are:

Authentication and Authorization of remote user.

Secure Communication Link between client to site and site to site. Restricting access to resources for remote users.

Log and Monitor the communication traffic.

Verifying security posture of client. Remote access management will be explained in detail in Chapter 10.

5.5 Secure Third Party Access Organizations today are walking the thin line between meeting customers' expectations and being compliant with security processes. Vendors or third parties are contracted for their technical expertise to support the environment, especially when there's an issue. At the time of crisis, organizations can be extremely accommodating to a vendor's requests, even if it violates security policy and standards.

It's imperative to understand that vendors are talented people trained to resolve issues quickly; however, they may not be favorably disposed to your organization's policies. Accountability for

security compliance still lies with the organization not with the vendors. It is recommended to follow best practices in order to avoid security violations. Some of the best practices include:

Include organizational security requirements in the contract or while renewing the contract. That way, vendors are legally bound to follow organizational security practices when dealing with the assets. The organization's security policies and processes should seamlessly flow to the vendor, and the vendor should comply with them. The organization should reserve the right to audit vendors against the security policies to ensure their compliance. Vendors should also sign Non-Disclosure or Confidentiality agreements. Define allowed remote access methodologies in the contract itself. By standardizing remote access methods, you can implement security controls and keep a tab on vendor activity. It is easy to provide desktop sharing to vendors when a crisis arises, but it will bypass all security controls in the process and leave organizational critical assets completely at the disposal of vendors. Define access control processes for vendors in the initial stages. Vendors may ask for administrative accounts for all their technicians or only one account to be shared by all. However, the organization should provide unique identities for each user and very limited administrative accounts. Disallow password sharing in the contract itself, with penalties defined.

Vendor access should be segregated in separate zone with permissions only on required devices/servers. Vendor area should be physically and logically segregated with least privileges onsite as well.

Vendor communication traffic should be subjected to Firewall/ IDPS controls. This traffic should be logged and monitored through SIEM/Traffic monitoring applications. Organization’s users should also be trained in communicating with vendors. Ideally there should be named people dealing with Vendors. They should be responsible for managing, sending/ receiving any information to and from vendors.

I have seen violation of security practices by both vendors and customers. Vendor technicians don't take enough care of credentials provided to them and share them with other technicians easily. Organization's users also send out log files, configuration files, sniffer dumps etc. to vendors when troubleshooting an issue, without encrypting the files. These files contain network IP addresses and other information and should be at least encrypted with basic encryption. These files should also be reviewed for confidential information, credentials etc. before sending to Vendors.

5.6 Least Privilege Least privilege principle states that users should have just enough privileges on IT resources necessary to perform their job. Full administrative access should be limited to very few people and custom roles should be created for different types of job profiles.

Applying this principle appropriately reduces the risk to a great extent. For example, if a third party is provided minimum privileges on network devices, then the impact of misuse of those privileges is greatly reduced; however, if they are granted administrative accounts, the impact of misuse will be higher. 5.7 Segregation Of Duties The segregation of duties principle states that one individual should not own any process completely, e.g. planning, implementing, monitoring and auditing. At every level there should be checks and balances to prevent and detect any misuse of privileges. Segregation of duties ensures detection of fraud or malpractice. Segregation of duties also ensures there's no conflict of interest between two responsibilities, e.g. the person/team responsible for implementing changes on a network router being also responsible for monitoring network traffic. In case there's a breach of network security due to improper change implementation, the concerned person/team can try to cover that up even if detected. Similarly, logging should be automatic and real time. Administrators should

not be able to tamper or delete logs from central logging server and logging team should be separate from administration team. Network administration process is normally divided into 3 categories to be managed by different teams/personnel. Engineering/Planning: responsible for planning network architecture /device testing.

Operations: maintaining devices, change management and upgrades

Security: Security compliance auditing/monitoring.

These 3 processes should be performed by three different teams/ personnel. These teams should be trained enough to take over other's tasks after regular intervals. This is known as Job Rotation. Job rotation is an administrative control which helps to identify any fraud/malicious practice that administrators may be involved in. Administrators require administrative privileges to perform their job but these privileges can also be misused, without any check on them. 5.8 Encryption Goal of encryption is to make data unusable even if attacker manages to gain access to it. Policy or Regulations should be considered before planning for encryption. If Policy requires encryption of data at rest and in transit, then information flow

should be properly studied to ensure encryption is implemented at each level, i.e. network level, database level etc. Data can transit over different network channels like wireless, wired, VPN, Internet etc., therefore planning should incorporate all the different channels and the type of technology to be used on each channel. E.g. if the encryption plan includes the usage of SSL certificates, then a decision should be taken whether to use the organization's CA or a third party CA's certificates. (More on SSL certificates is covered in Chapter 3.) A lot of organizations use self-signed certificates for securing internal communication, which is not a good practice as attackers can spoof such certificates to lure users into accepting fake certificates.

In network applications, encryption can be provided either via End to End or Link encryption.

5.8.1 End To End Encryption

End to end encryption is higher level encryption between two applications. As the name suggests, it is end to end encryption from one user/application to another user/application. Also in this encryption users/applications have the flexibility to choose what gets encrypted and how. While transmitting packets, header information is not encrypted, only the payload is encrypted therefore each hop does not require a decryption key. However, if the packet is captured by an attacker, he can make out the source and destination of the packet. 5.8.2 Link Encryption

Link encryption is a low level encryption which is transparent to users. The whole packet including payload, header and routing information is encrypted and passed through the channel. Packet is decrypted at each hop to read header information which makes it vulnerable if hop is compromised, however link encryption is faster than end to end.

Organizations have to carefully weigh pros and cons of both the methods and decide which method to use based on security requirements. Both the encryption methods can be used together to provide higher level of security.

5.9 High Availability Availability has become a major concern for organizations these days as businesses have become global and highly competitive. E.g. if

an ecommerce website is not working even for a little bit of time, it may lose a lot of potential customers to the competition.

Network infrastructure should be able to support business demands for high availability. High availability principle states that there should be no single point of failure in the whole infrastructure i.e. all the components (servers/devices/communication links) used between user and service providing systems should be in redundant/fault tolerant configuration. This configuration ensures that when one device fails other can take over seamlessly. For example, a single switch to manage any network segment represents a single point of failure. There should be alternative way to reach that network segment in case of failure of that switch.

Due care should be taken to eliminate all single point of failures. Ideally architecture should be designed in a way to make it resistant to any failure however if failure occurs, systems should still be able to perform even if efficiency is reduced.

Also ensure that systems can manage the extra load in case of failure of one device. For example, in redundant configuration if devices are running at 80% capacity each, in the event of failure other device will simply overload and fail. 5.10 Network Access Control (NAC)

Network access control software primarily performs pre-admission checks before a device is granted permission on the network.

These pre-admission checks are based on defined security policies. This software is also called Network Admission Control (NAC) software.

Before allowing any host to connect to the organization’s network, NAC software can check for various conditions like User Authentication and Authorization.

Anti-Virus signature update status. Operating system patch level. Host intrusion prevention system.

Firewall enable status. Antispyware enable and signature update status. Once the conditions are met according to the specified policies, the host is allowed to connect to the network. NAC can also decide what a host can do once it is connected to the network, i.e. what resources it can access. For example, a network administrator logging in remotely may not be allowed to access the network device management interface; however, the same administrator will be allowed access to management interfaces when logging in locally.

If conditions are not met based on policies, then NAC can either deny access so that user can prepare the system according to policies or NAC can facilitate users to connect to relevant servers to update signatures and patches. Only when policy conditions are met, NAC can allow access to network.

NAC can be enabled for wired users, wireless users, remote access users, VPN users etc. It is more beneficial to remote users as their security posture is hard to control. NAC should also be utilized for Third Party users and visitors connecting to internal network. 5.11 Security Of Test Environment We know test environments by different names like Staging, User Acceptance Testing (UAT), Training environments etc. Organization may have one or more such environments depending on their requirements. Testing environments should be properly defined in Design phase. Security has to be incorporated in Test environment as well. Major security considerations are:

5.11.1 Isolation All the testing environments should be physically and logically isolated from each other, from the user environment and from the production environment. Even if physical separation is not possible, they

should be logically separated. Ideally internet access should be disabled, however if it's a business requirement, internet access should be tightly controlled with firewall and IPS devices. Even Wireless connection should be separate for user and testing environments. All devices permanently in use in testing environment should follow the same security standards as production network. Security of testing environment is as critical as production environments.

If production is tightly controlled, attackers can try to get in test environments if they are not secured enough. Most of the times test environments are replicas of production environment and attacker can gain a lot of understanding about production environment from test environment.

5.11.2 Access Control

Physical and logical access control should be present in testing environments. Only authorized personnel with a genuine need should be allowed access. Ideally, if there are multiple testing environments, different users should be permitted in each environment for segregation of duties. Just as in the production environment, authorized access to these environments should be controlled, monitored and reviewed.

5.11.3 Documented Procedures

There should be formal documented procedures for transitioning from one phase to another (Development to Testing, or Testing to UAT). At each transition there should be a sign-off to ensure all relevant tasks for that phase have been completed, and a final sign-off should be obtained when all test phases are complete and the solution is ready to move to Production.

5.11.4 Data

Organizations often tend to use production data in the test environment for ease of simulation. This practice puts the data at great risk, because the testing environment does not have as many controls as production and attackers may try to obtain information from there, e.g. a production router configuration file used to configure a router in the testing environment. Production data should never be used in the testing environment; if there is a genuine requirement to use it for testing, it should be scrubbed/garbled before usage.

The same principle of non-duplication of data should be applied between different testing environments as well.
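As a minimal illustration of scrubbing, the sketch below masks a few sensitive fields before a production extract is handed to testers. The field names and masking rules are assumptions for illustration; real scrubbing should follow the organization's own data handling standard.

import hashlib

def scrub_record(record: dict) -> dict:
    """Return a copy of a record with sensitive fields masked/garbled."""
    scrubbed = dict(record)
    # Assumed field names; adapt to the actual schema.
    if "customer_name" in scrubbed:
        scrubbed["customer_name"] = "TEST USER"
    if "account_number" in scrubbed:
        # Replace with a stable but meaningless token so joins still work.
        token = hashlib.sha256(scrubbed["account_number"].encode()).hexdigest()[:12]
        scrubbed["account_number"] = f"TEST-{token}"
    if "ip_address" in scrubbed:
        scrubbed["ip_address"] = "192.0.2.1"   # documentation (TEST-NET-1) range
    return scrubbed

print(scrub_record({"customer_name": "A. Sharma",
                    "account_number": "001122334455",
                    "ip_address": "10.20.30.40"}))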

Chapter 6 Know Your Assets

6.1 Identifying Assets

Before planning protection measures, an organization has to understand what needs to be protected. It needs to take stock and create an inventory of all its assets. The inventory also helps in classifying assets on the basis of their criticality to the business, and classification in turn helps in determining what kind of protection different types of assets require.

As a network professional, you may not be responsible for all company assets, but you will be responsible for the assets under your purview. You will understand the extent of your responsibilities in the coming sections.

6.2 Different Types of Assets

For the purpose of classification and inventory, organizational assets can be grouped into five categories.

6.2.1 Information Assets

An information asset is a collection of data organized as one unit. Data is anything that is entered or stored in a computer system, but data by itself is not information; when data is structured, organized and presented in a meaningful manner, it becomes information. Any information of the organization can be classified as an information asset if it has value, i.e. if its loss would cause financial, business, legal or reputational impact to the business.

As part of the network team, you need to look at the information that is created, processed, transmitted and stored on your departmental assets.

Information asset categories include, but are not limited to:

Databases: Information about your customers, personnel, production, sales, marketing and finances is generally stored in databases. A database is a collection of linked tables where information is stored; applications access this information to produce meaningful results.

Soft copies: Policies, standards, financial reports etc. created by the organization, or contracts signed with other entities, need to be protected and secured.

Archive: Old business information that may be required by law to be retained for a certain period of time.

Business Continuity and Disaster Recovery Plans: These plans are developed to provide guidance during a crisis and maintain continuity of business. In their absence, decisions during a crisis will be ad hoc.

Printed hard copies: Printed network diagrams, printed circulars, passbooks, memos, contracts, schedules, contact forms and lists etc. also fall under information assets.

6.2.2 Software Assets

As the name suggests, software assets are the applications used to operate computer hardware and business processes. There are broadly two categories of software assets: system software and application software. If you visualize software assets as a layered architecture, system software is the interface between the computer hardware and the application software, and application software is the interface between the end user and the system software.

Application software: Application software performs business functions and implements business logic. It is important to protect these applications, as a breach may cause severe repercussions. A lot of application software is used off the shelf, while specialized business applications are customized or created from scratch. Some examples of application software are SAP, Tally and word processors.

System software: System software works as an interface between hardware and application software, transforming application commands into machine commands. Operating systems such as Windows, Linux, iOS and Android fall under this category.

Most of the software in this category is available off the shelf and is configured according to the organization's policies and procedures.

6.2.3 Physical Assets

These are the tangible assets.

Computer equipment and peripherals: Routers, firewalls, servers, desktops, laptops, NIPS etc.

Networking equipment: Modems, routers, EPABXs and fax machines.

Storage media: Magnetic tapes, DVDs, removable disks.

Service equipment: Power supplies, air conditioners, MPLS and data connections.

Furniture: Server racks.

6.2.4 Services Assets

These are intangible assets that the organization consumes as services and that are valuable for business activities:

Computing services that the organization has outsourced.

Communication services such as voice communication, data communication, value-added services, wide area networks etc.

Environmental conditioning services such as heating, lighting, air conditioning and power.

6.2.5 People Assets

Information is accessed and handled by people from within the organization as well as by external entities for business purposes. For the asset inventory it is necessary to identify such people, both inside and outside the organization. The people assets shall include roles handled by:

Permanent employees.

Contract employees.

Contractors and their employees.

6.3 Asset Responsibility

Asset Owner

As we understood in Chapter 1, the data owner is responsible for the protection of assets under his purview. Asset protection requirements include, but are not limited to:

Asset classification and inventory.

Define and implement security requirements as per classification.

Define and implement backup requirements as per classification.

Define and implement physical and logical access control as per classification.

Define secure operation guidelines as per classification.

Review security compliance and fix any gaps.

Define and implement business continuity.

The asset owner is therefore overall responsible for securing his assets at all levels, and he is accountable and answerable if a breach occurs.

Asset Custodian

The asset owner in turn delegates responsibility to the asset custodian for maintaining daily operations. The asset custodian is responsible for implementing, reviewing and verifying the security requirements defined by the asset owner. His responsibilities include, but are not limited to:

Implement and maintain security controls based on asset classification, security policy, standards and guidelines.

Back up systems and secure backup copies.

Approve, modify and reject physical and logical access to assets.

Follow defined guidelines for operations such as monitoring, logging, change management, incident management etc.

Review and fix any security gaps identified in audits.

Follow periodic restoration drills as mandated by Business Continuity Plans.

6.4 Asset Valuation

It is important to classify assets in order to provide them adequate protection as per the policy. Too much protection is as harmful as too little, and a balance has to be maintained between asset protection and company finances. If stringent controls are applied to all assets, company resources are wasted; if very loose controls are applied to all assets, the critical assets are left unprotected.

If given a choice, would you opt for gold or bronze? Malicious people are likewise looking for the gold in your organization, and that is what you need to protect most. There is no point protecting bronze and leaving gold unprotected.

Without valuation, gold (critical assets) cannot be differentiated from bronze (normal assets). Critical assets need more protection, but unless assets are evaluated, how would an organization know which assets are critical?

The organization should have documented procedures and defined criteria for evaluating assets. Without defined criteria, all asset owners may end up valuing their assets as business critical and the whole purpose of the evaluation will be defeated.

6.4.1 Step 1: Information Asset Classification and Handling

This process is also known as determining the sensitivity levels of information assets, as it mainly focuses on the confidentiality of information. Information assets can be assigned various sensitivity levels depending on the organization. Military organizations may require more than five levels (public, internal, sensitive, confidential, highly confidential, secret etc.) to describe the sensitivity of their information, but most organizations do not need that many; a three or four tier structure is more than enough, and more sensitivity levels only increase the complexity of information asset management. It is the responsibility of the information asset owner to classify all the information assets under his purview.


Most organizations also define information asset handling guidelines based on classification, providing instructions on labeling and handling various types of information. Handling guidelines are important for audits, as the auditor will be interested in finding out whether they are being followed. A typical handling guideline maps each classification level to rules for labeling, storage, transmission and disposal.
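As a rough illustration, the sketch below maps a hypothetical three-tier classification scheme to handling rules in Python. The levels and rules here are assumptions for illustration only; real guidelines come from the organization's own classification policy.

# Hypothetical classification levels and handling rules, for illustration only.
HANDLING_GUIDELINES = {
    "Public": {
        "labeling": "No label required",
        "transmission": "May be sent unencrypted",
        "disposal": "Normal disposal",
    },
    "Internal": {
        "labeling": "Mark 'Internal Use' in header/footer",
        "transmission": "Internal email only; do not post externally",
        "disposal": "Shred hard copies",
    },
    "Confidential": {
        "labeling": "Mark 'Confidential' and name the owner",
        "transmission": "Encrypt attachments; send passwords out of band",
        "disposal": "Shred hard copies; securely wipe media",
    },
}

def handling_rules(classification: str) -> dict:
    """Look up how a document of a given classification must be handled."""
    return HANDLING_GUIDELINES[classification]

print(handling_rules("Confidential")["transmission"])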

If employees have not read these guidelines, they are bound to make mistakes. Most violations happen in this area, and violations become habitual if employees are not trained frequently on information asset handling. Managers should also keep an eye out for violations and educate employees when they see any.

Standards and Procedures are supposed to be password protected according to Information handling guidelines. Are they password protected everywhere they are kept? (Check shared folders/ SharePoint/your laptops etc.)

Are documents labeled as ‘Internal Use’ lying around on Printer or employee’s desk?

Are you or your team sending logs/IP addresses to vendors or external parties via emails?

Are you or your team encrypting logs before sending to external parties?

Do you and your team know if vendors have signed an NDA with your organization?

Are you or your team giving out sensitive information on support calls? Are you or your team checking the information in an internal email trail before sending it out to third parties?

6.4.2 Step 2: Asset Risk Valuation

6.4.2.1 Determine CIA Values

In Step 1 we looked at information asset classification and handling based on sensitivity levels; however, to determine the criticality of an asset, integrity and availability have to be considered as well. The asset relevance/criticality value gives a complete picture of the asset's importance to business processes. For example, an ongoing merger draft document is highly confidential, yet its impact on day-to-day business processes may not be high.

To value any asset, including information, non-information and people assets, you need to identify its Confidentiality, Integrity and Availability (CIA) values. There is no hard and fast rule for defining the evaluation criteria; every organization, depending on its business, can have a different definition. Most organizations define asset criticality criteria in their Risk Management Procedures/Guidelines documents.

For non-information assets (IT hardware, software, network communication, storage media and services), the evaluation criteria should consider at least two things:

Dependability, reliability and replicability of the asset itself.

Confidentiality, integrity and availability of the information that is created, processed, stored or transmitted by the asset.

For people assets, ratings should be defined based on:

Confidentiality, integrity and availability of the information processed by the asset.

Dependability and expendability of the asset.

A sample Asset Criticality Matrix assigns Low, Medium and High criticality levels based on the confidentiality, integrity and availability of an asset.

6.4.2.2 Calculate overall asset relevance/criticality level

Once the asset's CIA values have been determined from the asset criticality matrix, a final asset criticality value can be derived. There can be multiple ways to arrive at the final value, depending on the organization's Risk Management Procedure/Framework.

The Risk Management Procedure/Framework defines the various criticality levels and the methodology for determining them. Most organizations follow a qualitative risk management methodology, where High/Medium/Low levels are used. We will learn more about risk management in Chapter 9; the calculated asset criticality level ultimately feeds into risk management.

For example, taking the maximum of the three ratings (with High mapped to 3):

Confidentiality = High = 3
Integrity = High = 3
Availability = High = 3

Maximum (C, I, A) = 3, i.e. the asset criticality level is High.
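A minimal sketch of this calculation in Python, assuming a qualitative Low/Medium/High scale mapped to 1/2/3 and a "maximum of CIA" rule; other frameworks may average or weight the values instead.

LEVELS = {"Low": 1, "Medium": 2, "High": 3}
NAMES = {v: k for k, v in LEVELS.items()}

def asset_criticality(confidentiality: str, integrity: str, availability: str) -> str:
    """Derive overall criticality as the maximum of the three CIA ratings."""
    score = max(LEVELS[confidentiality], LEVELS[integrity], LEVELS[availability])
    return NAMES[score]

# A highly confidential merger draft: C=High, I=Medium, A=Low -> still High overall.
print(asset_criticality("High", "Medium", "Low"))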

Once the asset criticality value is obtained, the definition of that value should be checked against the Risk Management Procedure/Framework. These definitions give business meaning to the values obtained.


Let's take some sample assets and calculate their criticality levels based on the asset criticality matrix given above. A few assets are given for your information; fill in the rest based on your company environment.

Once assets are evaluated, you have to identify the security requirements for each type of asset.

Basic security requirements such as hardening or lockdown are applicable to virtually all devices; however, critical and highly critical equipment may require additional controls for protection. These requirements are defined in policies and standards, e.g. policy clauses that specify security requirements for highly critical and critical assets.


Explore all relevant policies, standards and procedures to identify specific requirements based on asset criticality; decoding policies and standards is discussed in detail in Chapter 4. Once you have classified and evaluated your assets and implemented the required controls, you can breathe easy, as you have done all that is required. At the end, your asset inventory should capture fields such as asset name, owner, custodian, location, classification and criticality for every asset under your purview, for example as in the sketch below.
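A minimal sketch of an inventory record, assuming a handful of common fields; the exact columns should follow the organization's asset management standard and CMDB schema.

from dataclasses import dataclass

@dataclass
class AssetRecord:
    # Illustrative inventory fields; extend as your policy requires.
    asset_id: str
    name: str
    asset_type: str        # e.g. physical, software, information
    owner: str
    custodian: str
    location: str
    classification: str    # sensitivity label, e.g. Internal, Confidential
    criticality: str       # Low / Medium / High
    status: str            # e.g. Production, Pre-Production, Lab, Disposed

core_router = AssetRecord("NW-0001", "Core Router A", "physical",
                          "Head of IT", "Network Team", "DC-1",
                          "Confidential", "High", "Production")
print(core_router)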

Most organizations keep their asset inventory in specialized software, a Configuration Management Database (CMDB). If it is filled in properly, every department can simply request a list of assets under its purview from the CMDB. In my experience, however, the asset database is often not managed properly: even though the software provides fields for a lot of details, not all of them are filled in by the asset department, especially criticality ratings.

In such cases a manual asset inventory can be maintained for each department. This is for your own reference, to ensure that you are complying with organizational policies and standards. Ideally you should have a copy of the assets under your purview and make sure all the required fields are keyed in. You can give feedback to the asset team so they incorporate all the details in the database itself, or keep an updated copy with all the required details for audit purposes.

6.5 Asset Classification/Rating Review

Asset criticality review: Assets go through different phases of their lifecycle; their status may keep changing, e.g. Pre-Production, Production, In Service, Faulty, or Moved to a different location. It is important to regularly review the asset inventory and change asset criticality if required. The review interval can be defined by policy, but it should be done at least annually.

If a router is moved from production to a lab environment, it may not need the same level of protection as earlier; however, if its criticality level is still High and the required controls are not present, auditors may mark it as a finding.

Information sensitivity review: Information sensitivity changes more quickly than asset criticality, so it should be reviewed more frequently and the status modified as required. The sensitivity review schedule should be fixed, but it should also take into account major events such as a product launch or merger, as such events may demand a change in the sensitivity levels of related documentation.

Merger information of two companies may be confidential until it is announced to the public. Once it is in the public domain, it should be treated as public information and the previous labeling and handling procedures will no longer be applicable.

6.6 Audit Requirement

Prepare the documents and evidence below in advance for audits:

Asset inventory with criticality and sensitivity levels.

Evidence of handling information as described in the information handling procedures, such as:

Emails sent to vendors with encrypted attachments.

Passwords sent out of band (SMS).

Ensuring NDAs are signed by vendors before divulging confidential information (copies of signed NDAs / confirmation sought from Legal).

Chapter 7 Implementing Network Security

So far we have learned about threats that can harm our environment and the controls we can implement to guard our infrastructure. We have seen how to decode policies, standards and hardening guides and implement security controls based on them, learned basic network security design principles and ways to deploy various controls to provide reasonable security to all assets, and understood how to classify assets by sensitivity and criticality and plan their protection according to the security requirements defined in policies and standards.

In this chapter we will put everything learned so far into perspective and see how to incorporate security from the very beginning. We may not have been present when the organization was in its nascent stage and the infrastructure was built from scratch; however, since all assets have a lifecycle (they are created/procured, configured, used and disposed of), the addition of new assets to existing infrastructure is a constant process. Assets may be added to scale up the existing infrastructure or to replace existing ones.

This chapter covers the first few steps of the lifecycle, where assets are procured, configured, tested and then introduced to the Production/LIVE environment. It is essential to incorporate security at this initial stage to ensure that new vulnerabilities are not introduced into the production environment along with the assets. The mantra of a secure system is to remove all existing vulnerabilities from the environment and to ensure new vulnerabilities are not introduced whenever the environment changes.

7.1 Introducing Assets to the Production Environment

Most organizations have a defined process for introducing assets to the production environment. The overall solution is approved by an architecture board or similar committee, which focuses on the architectural changes the solution will introduce and whether those changes comply with organizational policies, standards and the network design.

Once the solution is approved by the architecture board, the various components are procured. After acquisition they should be inventoried, classified and labeled as required by policies and standards. These assets should then be hardened based on organizational standards and hardening guides. After hardening and patching, assets should be tested for security compliance and vulnerabilities to ensure there are no lapses in security. After this review, the device is ready for operational configuration and testing.

Once configuration is complete, the assets should be tested in the testing environment; various levels of testing, including unit, integration and user acceptance testing, should be carried out. After user acceptance testing, the asset is ready to be introduced to the production environment. The section below provides a generic pre-production checklist that should be followed before assets are introduced to production. Along with the pre-production checklist, industry best practices should be followed before allowing an asset into production; best practices for network design and for different types of networks and devices are included in the Best Practices sections.

7.2 Pre-Production Check List

Classification

Labeling

Testing:
Understand the impact on the overall security posture due to the introduction of the new device.
Ensure the device is fulfilling its purpose in the environment.
Ensure the device is able to manage the load it is expected to handle in the production environment.

Latest patches

Hardening

Vulnerability and compliance checks

Get a sign-off/acknowledgement on the above steps, before going to production, for audit purposes.

7.3 Best Practices for Network Design


7.4 Best Practices for Firewall


Access Control (AAA)


Management

Rule Set

Firewalls operate on a first-match basis, so it is important to order rules properly. The rule set should end with a "deny all and log" rule to ensure all denied traffic is recorded and can be examined to identify attack patterns.
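To illustrate first-match evaluation and the final deny-all-and-log rule, here is a minimal sketch in Python; the rule structure and the example rules are assumptions for illustration, not any vendor's syntax.

# Each rule: (source, destination, port, action). "any"/None act as wildcards.
RULES = [
    ("any",      "dmz-web", 443,  "permit"),
    ("internal", "dmz-ftp", 22,   "permit"),
    ("any",      "any",     None, "deny-and-log"),   # final catch-all rule
]

def evaluate(src: str, dst: str, port: int) -> str:
    """Return the action of the first rule that matches (first match wins)."""
    for rule_src, rule_dst, rule_port, action in RULES:
        if rule_src in (src, "any") and rule_dst in (dst, "any") \
                and rule_port in (port, None):
            if action == "deny-and-log":
                print(f"LOG: denied {src} -> {dst}:{port}")
            return action
    return "deny-and-log"   # defensive default; normally unreachable

print(evaluate("internet", "dmz-web", 443))    # permit
print(evaluate("internet", "internal", 3389))  # hits the deny-all-and-log rule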



Ingress filtering should be enabled on edge/perimeter firewalls. The filter inspects and blocks inbound traffic with the following source addresses:

Private, multicast and reserved IP ranges:
1. Class A: 10.0.0.0 - 10.255.255.255
2. Class B: 172.16.0.0 - 172.31.255.255
3. Class C: 192.168.0.0 - 192.168.255.255
4. Class D (multicast): 224.0.0.0 - 239.255.255.255
5. Class E (reserved): 240.0.0.0 - 254.255.255.254

Unrouteable and illegal addresses:
1. 255.255.255.255
2. 127.0.0.0
3. 0.0.0.0

Your own internal network IP range (such packets arriving from outside are spoofed).
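A quick sketch of an ingress check using Python's ipaddress module; the blocked ranges mirror the list above, and the internal range 10.0.0.0/8 is an assumption standing in for your own internal addressing.

import ipaddress

# Source ranges that should never appear on inbound traffic at the perimeter.
BLOCKED_SOURCES = [ipaddress.ip_network(net) for net in (
    "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",   # private (also internal here)
    "224.0.0.0/4", "240.0.0.0/4",                      # multicast and reserved
    "255.255.255.255/32", "127.0.0.0/8", "0.0.0.0/8",  # unrouteable/illegal
)]

def drop_inbound(source_ip: str) -> bool:
    """Return True if a packet with this source address should be dropped."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in BLOCKED_SOURCES)

print(drop_inbound("192.168.1.20"))  # True  - spoofed private source
print(drop_inbound("203.0.113.9"))   # False - ordinary public address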

Egress filtering should be enabled on edge/perimeter firewalls: only traffic originating from the internal IP source range should be allowed, and the rest should be dropped and logged.

All permit rules should have a business justification and be configured with appropriate source, destination and port numbers. They should be restricted to the relevant zones, e.g. HTTP traffic forwarded to the web server zone, FTP traffic forwarded only to the FTP server zone, and so on; HTTP traffic destined for any zone other than the web server zone should be dropped.

Rules with "any" source or "any" destination should not be created; it is best to be specific about source and destination. Rules should also be specific about ports rather than allowing any port on the destination.

Rules should not conflict with each other.

Rules opening ports for clear-text or malicious services, such as the telnet port or common ports utilized by worms, should not be created.

Loose source routing and strict source routing should be blocked by the firewall.

ICMP echo requests and replies, ICMP broadcast requests and UDP echo requests should be blocked. Outgoing ICMP unreachable and time exceeded messages should be blocked.

DMZ outbound connections to the external interface should be restricted. DMZ inbound connections to the internal interface should not be allowed.

A description should be provided for every rule, indicating the business reason and the Change Request number.

High Availability


7.5 Best Practices for Routers and Switches

Access Control (AAA)

Management

Rule Set

VLAN Management

Routing Updates

High Availability

7.6 Best Practices for VPN

Access Control

Encryption

7.7 Best Practices for Wireless Network

Access Control & Encryption

Section 3 - Secure Operations

This section covers how to integrate security in daily operations of Network teams.

Following Chapters are covered in this section:

Chapter 8: Secure Change Management

Chapter 9: Vulnerability and Risk Management

Chapter 10: Access Control

Chapter 11: Capacity Management

Chapter 12: Log Management

Chapter 13: Network Monitoring

This section covers all operational tasks that the network team is directly or indirectly responsible for, and how security can be incorporated within each task. Once security is included in the design phase, only the operations phase needs to be taken care of to maintain the security posture.

Chapter 8 Secure Change Management

8.1 Change Management

Infrastructure and services constantly go through change to keep up with business requirements. Any modification to existing infrastructure, and any new deployment, has to go through a formal change management process. Change management is a fairly mature process in most organizations: a change advisory board is responsible for approving or rejecting changes based on specific criteria.


The goal of change management is to minimize the risk of any change and to return to a secure state after the change is implemented.

There are two major threats that we deal with during a change management process:

Accidental misconfiguration: configurations that were accidentally skipped or introduced during a change.

Deliberate misconfiguration: additional, deliberate configuration changes made while access was provided for another change. In smaller companies deliberate configuration may not need the pretext of a change, but in bigger organizations access to the production environment is usually available only to implement an authorized change. From an audit and security perspective, assuming all components of the network have passed through pre-production security assessment, the only place a violation can occur is during change implementation.

If, even after following a proper change management process, device misconfigurations, unexpected ports and services, access control violations etc. are encountered again and again during audits, then the change management process itself needs to be reviewed.

The good news is that if we fix the issues in the change management process, we have won half the battle. We still need to fight newly discovered threats and vulnerabilities, but we can be sure that we are on the right track.

The change management process itself is out of this book's scope; however, we will cover the security aspects to be incorporated in each phase of the process to ensure a smooth change rollout, and we will see how to avoid the common mistakes that lead to audit findings.

8.2 Secure Change Management Process

8.2.1 Initiation Phase

Drivers of change can be regular updates of firmware, operating systems or applications, incorporation of new business functionality, fixing of audit findings etc. There should always be a valid business justification for the change. The change is initiated, planned and discussed among all stakeholders; if the stakeholders agree and support the change, the testing phase begins. Only once testing is completed, and it is ensured that the change will not impact the organization negatively, does the formal change management process begin.

8.2.1.1 Supporting a Change

Once a change is initiated, support/approval is requested from all stakeholders. From the network perspective, the network team's support will be required whether the network team originates the change or another team's change touches network assets. In both cases, an impact assessment should be conducted by the network team to understand the impact of the change on the overall network security posture.

An impact assessment can start by asking relevant questions:

Security Policy/Standard/Hardening Guide Compliance

Does this change require a device configuration change? If yes, will it violate any hardening configuration? If yes, do you need to raise a risk exception, or is there a workaround to be implemented?

Does this change require any changes to an access control list? If yes, is it opening any new services or ports? Are these services clear text or encrypted? Are these services allowed on the network?

Will this change cause any violation of a security policy or standard? Do you need to raise a risk exception, or is there a workaround to be implemented?

Capacity Management

What will be the impact on traffic? If additional traffic is expected, can the existing devices handle it?

Is capacity management of other components, i.e. storage, processing, memory etc., taken care of?

Access Control

Who will be permitted through these ports? Can access be restricted to a specific network segment?

If the change modifies the initial secure and stable state of your devices/servers, you should raise a risk acceptance. The risk should be accepted by the asset/information owner, as the change is initiated to fulfil a business requirement. A sample Risk Exception Form was discussed in Chapter 4.

Allowed Ports and Services

It is good practice to keep a network diagram marked with the allowed ports. If the allowed ports on your network are too many to mark on a diagram, then create a list of open ports in the various network segments.

Whenever you are requested to open a new service or port, ask the questions mentioned above. If a risk exception is accepted, add the services and ports to your list for future reference; you can always refer to this diagram or list for any new change request.

Remember, a risk exception can be raised only in the case of a security violation. Most requests to open services/ports are required for business functions without violating any security standard; in that case just mark your network diagram or update your list. Also include the change request number in your services and ports list, to indicate the reason the port was opened.

It is also good practice to record the change request number in the comments section of the access control list, so that any change can be traced back to its source. A minimal example of such a list is sketched below.
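A minimal sketch of an allowed ports and services list keyed by network segment, with the change request number recorded per entry; the segment names, ports and CR numbers are made up for illustration.

# Each entry: (destination segment, port/protocol, service, change request, justification)
ALLOWED_SERVICES = [
    ("DMZ-Web",  "443/tcp",  "HTTPS to public web servers", "CR-1042", "Customer portal"),
    ("DMZ-FTP",  "22/tcp",   "SFTP to partner exchange",    "CR-1107", "Partner file transfer"),
    ("Internal", "1433/tcp", "MS SQL from app to DB zone",  "CR-1188", "Core application"),
]

def services_for_segment(segment: str):
    """Return allowed entries for one network segment (useful during audits)."""
    return [entry for entry in ALLOWED_SERVICES if entry[0] == segment]

for entry in services_for_segment("DMZ-Web"):
    print(entry)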

8.2.2 Preparation Phase

8.2.2.1 Testing the Change

A change should be appropriately tested before being commissioned on the production network.

There can be one or multiple testing environments in an organization depending on requirements; testing environments are explained in detail in Chapter 5.

Just as new solutions and devices go through testing phases, all changes should go through the same phases of testing. Changes should be tested for impact on hardware performance, service integration, load balancing, business functions, service level agreements, and the security posture of the asset and environment.

There should be a documented procedure to follow during the testing phases. At the end of each phase there should be a test completion form signed off by the testers, confirming that all relevant testing procedures were conducted as documented; these forms will be required for change request approval as well as for audit. User acceptance testing (UAT) is normally the last testing phase, where users ensure that the change has not impacted user experience or functionality. The different phases of testing ensure that the change is compatible with the existing environment. All upgrades, patches and additions/deletions to device configurations should be properly tested before being introduced to the production environment, to avoid any surprises.

The auditor will be interested in:

Isolation of the Testing, UAT and Production environments.

Segregation of duties among people operating in the different environments.

No data replication between these environments.

A proper transition process from Testing to UAT and from UAT to Production.

User acceptance for each and every change.

Make sure you have enough evidence to prove the above points. Ideally there should be a list of things to be verified in the testing environments, and at the end of testing, user acceptance should be required to prove that the change has been successfully tested.

8.2.3 Planning Phase

8.2.3.1 Creating request for change

Most organizations have a formal change process in place. Change requestors are supposed to fill in a form, which normally covers:

Business justification for the change.

Step-by-step change implementation procedure.

Impact assessment of the change on the asset and the environment.

Roll-back procedures.

Testing results/certifications.

Schedule of change implementation and downtime duration, if any.

Names and contact details of stakeholders.

Communication plan to inform stakeholders.

The change request is then sent to all stakeholders and to the change advisory board. All stakeholders perform their own impact assessment and then accept or reject the change. The change advisory board checks that all change management procedures have been duly followed and that the impact assessment has been completed by all stakeholders; if everything is in order it accepts the change, otherwise it rejects it. The network team should ask the questions given in Section 8.2.1.1 irrespective of whether they are the change originator or a supporter. It is important to check for any security standard or policy violation affecting departmental assets at this point, before deciding on accepting, rejecting or raising the change as a risk exception.

Once the change is approved, tasks are assigned to each implementation team and all stakeholders are informed about the upcoming change.

8.2.4 Implementation Phase

8.2.4.1 Implementing change

This is the most important phase of change management; all the pre-planning will be in vain if anything goes wrong here. Utmost care should be taken while implementing changes to ensure that no unauthorized changes are made along with the authorized ones while the device is being accessed. Only the prescribed steps should be followed, to avoid any inconsistency or unwanted change in the secured environment.

8.2.4.2 Monitoring change

While change is being implemented, it should be monitored very closely:

To ensure only authorized activities are carried out.

To ensure there is no adverse impact on the overall network and security posture.

Post-implementation checks should be carried out to ensure there is no functional, service-level or security impact. Even with utmost care, things can go wrong during implementation, as the testing environment is never exactly the same as production; changes should therefore be monitored for the period of time specified in the change management process. If there is any negative impact during the monitoring phase, the implementers should follow the precise roll-back steps; the roll-back procedure ensures that the system returns to its original stable state.

If there was an incident following a change, make a note of it; auditors will be very interested in finding out how it was handled. An incident following a change indicates that the change management process is not as effective as it should be. If the change management team has already worked on the root cause and implemented corrective actions, the auditor may not highlight it as a finding.

But if only that particular incident was resolved, without any corrective action in the change management process, the auditor will definitely pinpoint the issue as a finding.

8.2.5 Post Implementation Phase

Once the change is implemented successfully, all relevant documents should be updated. Any lessons learned should be noted down in the respective documents for future reference. If new ports or services were opened, make sure your services and ports list is updated.

You can also create a spreadsheet to track the changes done to your assets over time. If an auditor questions you about a particular configuration on any device, you can easily trace it back to the change details. Over time we tend to forget why and when specific changes were made; that is why a simple spreadsheet update can go a long way for an audit.

Such a spreadsheet might, for example, contain the following kinds of columns:
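A rough sketch of such a tracking sheet, assuming columns like Date, Change Request, Device, Change Summary and Implemented By; the column names and the example row are illustrative only.

import csv

COLUMNS = ["Date", "Change Request", "Device", "Change Summary", "Implemented By"]

rows = [
    ["2018-03-12", "CR-1042", "fw-edge-01",
     "Opened 443/tcp from Internet to DMZ-Web zone", "N. Kumar"],
]

# Write the tracking sheet; one new row is appended per implemented change.
with open("change_tracking.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(COLUMNS)
    writer.writerows(rows)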

This spreadsheet is not a required document for the audit; it can be prepared purely for your own reference. It is good practice to mention the change request number as a comment in the router/firewall configuration file for tracking purposes; however, this may not be possible for all changes, and in those cases the spreadsheet can be used.

8.3 Audit Requirements

Please note: the above is only to give you an idea of how to provide evidence to the auditor; the required evidence can change based on the organization's infrastructure.

Chapter 9 Vulnerability and Risk Management

9.1 Vulnerability and Risk Management

In Chapter 2 we learned that a vulnerability is a flaw or defect that can be exploited by a threat, leading to damage to the organization; it is basically a bug or a missing control that threats can exploit. Depending on the organization, vulnerability assessment can be a daily, monthly or annual process. Organizations may have an internal security team to run vulnerability scans, or they can contract third parties to do it regularly, or use a combination of both.

Vulnerability management process includes Identification of Vulnerabilities, their analysis and Risk Rating Assignment and finally remediation of vulnerability. This is a continuous process as new vulnerabilities are identified every now and then on popular

products and protocols. Also changes made on devices can also open up new vulnerabilities.

Most of the big vendors like Google, Oracle and Microsoft release vulnerabilities and their respective patches to the public on a regular basis. Microsoft does this on a monthly cycle (Patch Tuesday), while Oracle comes up with cumulative patches at least four times a year. Apart from that, vulnerabilities are identified by individuals and posted online.

Risk management is more comprehensive than vulnerability management, and we can safely say that vulnerability management is a part of risk management. Vulnerability management deals with technical vulnerabilities, while risk management takes into account all types of vulnerabilities and threats, including technical ones.

An example of a vulnerability is: a critical flaw in a device's firmware that allows remote code execution.

An example of a risk is: an attacker exploiting that firmware flaw on an Internet-facing router, causing loss of availability of business services and exposure of internal traffic.

As we can see, vulnerability is only a small part of computing the overall risk to the organization. We will learn risk calculation later in this chapter.

Every organization handles vulnerability and risk management differently. Some handle them separately, as two different things with no connection to each other, while some integrate vulnerability management into business risk seamlessly. I believe the two should be integrated, to achieve consistent risk management throughout the whole organization. Handling these two operations separately creates quite a few problems:

Vulnerability management in itself will not give a complete picture of the risk the business faces. A critical vulnerability in Cisco firmware that allows remote execution means nothing to the business unless they understand the risk it poses to the organization.

Vulnerability management tools provide a risk rating along with the vulnerability severity rating, but that risk rating is calculated by the tool's own algorithm, not according to the organizational risk framework. Handling two types of risk rating can get extremely confusing for all stakeholders.

Third-party security assessments (audits/pentests/application assessments) also provide risk ratings based on their own methodology. This only adds to the confusion. However, if vulnerability and risk management are integrated, then it can provide a better picture to the business, especially if they need to invest in new security controls. Although the integrated approach is better, it will require a lot of manual assessment. There are Governance, Risk & Compliance (GRC) tools available which allow importing vulnerabilities into the GRC tool; however, it may not be feasible for every organization to use a GRC tool to handle risk management. In this chapter, we will not delve into how it should be handled, but we will try to understand how to manage vulnerabilities and risks irrespective of the organizational approach. Vulnerability assessment is mostly performed by the security team, but asset owners and custodians are responsible for:

Keeping up to date on vulnerabilities identified on assets under their purview

Treating vulnerabilities in a timely manner, as required by policy

9.2 Common Vulnerabilities Found in Network Environment

1. Network Protocol weakness

Networking is essentially about disassembling information into packets, transmitting them over a specific route through switched connections and reassembling the packets to reconstruct the data. This whole process is carried out with the help of network protocols. The two most common transport protocols used are TCP and UDP.

TCP follows a specific order to establish a connection.

A SYN packet is sent from the client to the server to initiate a connection. The server acknowledges the SYN packet with a SYN-ACK packet sent back to the client. The client acknowledges the SYN-ACK packet and sends an ACK packet to establish the connection. Over time, attackers have been able to misuse this protocol design to their advantage.

Scanning and enumeration tools exploit this design by sending only SYN packets or only ACK packets to the server. The server waits for the corresponding packets and keeps the session and port open. This creates the half-open socket problem: as the server is waiting for the connection to complete, the port remains open for the attacker to conduct the attack.
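To make the half-open idea concrete, the sketch below sends a single SYN probe using the third-party scapy library, the same kind of packet a SYN scanner sends; it never completes the handshake with a final ACK. This is only an illustration with assumed target values, it needs root/administrator privileges, and it should only ever be run against hosts you are authorized to test.

# Illustrative SYN probe (half-open): target address and port are placeholders.
from scapy.all import IP, TCP, sr1

target, port = "10.10.10.14", 8101
resp = sr1(IP(dst=target) / TCP(dport=port, flags="S"), timeout=2, verbose=0)

if resp is None:
    print("No response: port filtered or host down")
elif resp.haslayer(TCP) and int(resp[TCP].flags) == 0x12:   # SYN-ACK received
    print("Port open; probe never sends the final ACK, leaving a half-open socket")
elif resp.haslayer(TCP) and int(resp[TCP].flags) == 0x14:   # RST-ACK received
    print("Port closed")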

Protocol misuse can also be used to conduct a DOS attack. A SYN attack (explained earlier in this book) sends SYN packets to the server with spoofed IP addresses. The server opens a new session for each request and waits for the ACK packet to complete the 3-way handshake. A large number of open sessions overwhelms the server's resources, causing it to crash. Smurf and ping flooding (also explained earlier) are other types of DOS attacks caused by protocol misuse.

2. Device misconfiguration

Device misconfiguration or non-secure configuration can lead to several kinds of attack. If policies and procedures to introduce a device into the production network are not stringent enough, misconfiguration is possible. If a standard secure configuration is not applied and instead each device is configured manually without following written procedures, misconfiguration is possible due to the human element.

Misconfiguration can also be introduced during change phase or while troubleshooting for any issue if post change testing is not done.

Misconfiguration can be accidental, or deliberately introduced by an insider to carry out malicious activity.

3. Default/weak Passwords

Default or weak passwords can impact the network a great deal, as the first thing an attacker tries is the default password, or guessing passwords. Password sharing is another common vulnerability, where common accounts (e.g. admin) are created on each device and the passwords are shared among network admins as well as contractors. Sometimes strong passwords are used, but the encryption or hashing mechanisms protecting the passwords are not secure enough and are easily reversed or cracked, e.g. Cisco type 7 encoding or MD5 hashing.

The same principle applies to SNMP strings. Just like passwords, default or easy-to-guess community strings are used for SNMP communication.

4. Access Control list misconfiguration

Misconfiguration of access control lists can also lead to unwanted inbound/outbound traffic. Access control lists are configured to control traffic between different security trust zones of the network. If they are not configured properly, an attacker can gain access to secured zones, i.e. from the Guest network an attacker is able to connect to the internal network. Access control lists can also induce a DOS condition if conflicting rules are created which restrict legitimate user traffic.

5. Obsolete firmware or protocols

Vendors regularly update firmware versions to combat vulnerabilities inherent in the firmware. Newer versions of protocols are also released to resolve existing vulnerabilities as well as add new security features which were not available earlier. Sometimes organizations do not deploy the newer firmware version, which exposes devices to multiple vulnerabilities. SNMP v3, for example, introduced multiple security features which were not present in v2. It is in the organization's best interest to use the most secure protocol or algorithm available at any point in time, for example using WPA2 instead of WEP or SHA-2 instead of MD5.

9.3 Vulnerability and Risk Management Process

Regardless of the frequency of vulnerability assessment in the organization, asset owners or custodians should be aware of any vulnerabilities/threats on the assets under their purview. It is better to create a procedure for handling vulnerabilities, for process consistency and preservation of all required evidences. The procedure can be a simple five-step document; a workflow is provided below for reference:

Let's understand each step in detail:

9.3.1 Keeping Abreast of New Vulnerabilities Related to Departmental Assets

Once the asset inventory is created, including corresponding operating systems and applications, asset owners/custodians should remain in contact with the vendors for any new security advisory, e.g. Microsoft publishes security advisories on its website on a regular schedule.

Some vendors might expect clients to subscribe to their mailing list or visit their site often. Third party vulnerability advisory services can also be used to track new vulnerabilities. The organization can invest in these services and all the IT teams (Network/System/Application) can keep track of vulnerabilities affecting their assets. Some popular vulnerability database websites are:

Open Source: https://web.nvd.nist.gov/
Commercial: https://secunia.com

Security department may also collate and release security advisory for a week/month for all infrastructural assets and share with relevant asset owners.

Evidences Required by Audit:

To prove you are aware of the vulnerabilities affecting your products, keep a record of the security advisories received from vendors or advisory services, and of your Vulnerability Management spreadsheet and the corresponding actions taken on it (explained in detail in further sections).

9.3.2 Receipt of Vulnerability and Verification

Once a vulnerability is received, either from an internal vulnerability scan, a vendor or any other source, it should be verified for accuracy. Verification is extremely important because:

Vulnerability scanners are prone to reporting false positives. False positives mean reporting a vulnerability which does not exist.

Vendors/websites generalize vulnerabilities based on the version number of the application or the device's operating system; for example, check the vulnerability below:

“Cisco IOS 12.4 and 15.0 through 15.5 and IOS XE 3.13 through 3.17 allow remote authenticated users to cause a DOS Attack via BGP.”

The vulnerability details and the dependencies for exploiting it should also be analyzed, i.e. what other factors are required to exploit the vulnerability; does it require authentication, SNMP community string information, a specific protocol or service running, etc.? If the dependencies are not present, the device is not vulnerable to that specific vulnerability. Even if the network has Cisco devices running version 15.0, if the vulnerable component (BGP) is not being used, then the devices may not be vulnerable.

9.3.2.1 If Vulnerability does not exist

Once it is verified that the vulnerability does not exist, record the conclusion in the assigned ticket/spreadsheet. Justify the conclusion based on evidence. Evidence can be:

White paper from Vendor or Vendor’s response email on your query regarding the vulnerability. Evidence of Closed Port/Service:

For example, there’s SSL vulnerability on port 8101 on a router however port 8101 is closed according to your verification.

You can download Port query tool or any similar application.

Use ping command on command prompt to ensure your connectivity to the router.

Use port query to query the port 8101 and capture response. Take a screenshot of ping and port query command together.

You can also telnet to the port from the command prompt: telnet 10.10.10.14 8101. A connection failure message will confirm that the port is not listening (a quick scripted check is sketched below).

Screenshot showing the affected component is disabled/inactive.

Screenshot showing that the firewall handling the network segment is blocking the port, effectively limiting access to the vulnerable service. (Note: firewalls generally block everything they don't specifically allow, so the screenshot may only show allowed ports rather than blocked ones. The absence of the port/service is the evidence.)

Risk acceptance form, if the vulnerability is known and accepted (explained later in this chapter under the Risk Acceptance heading).
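A minimal sketch of such a check, assuming plain Python sockets and the placeholder address and port used in the example above; its output, captured together with a ping, can serve as evidence that the port is not listening.

# Evidence helper: report whether a TCP port accepts connections.
import socket

def port_is_listening(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:          # connection refused, timed out, unreachable, etc.
        return False

host, port = "10.10.10.14", 8101
status = "LISTENING" if port_is_listening(host, port) else "NOT listening"
print(f"{host}:{port} is {status}")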

9.3.2.2 If Vulnerability Exists

A vulnerability can be positively identified if:

Software version is same as affected version and component affected is actively used.

Vulnerable service is running on device and using the affected port (if SSL is using port 8101 in above example).

All dependencies required to exploit the vulnerability are present along with vulnerability source.

Vulnerabilities and corresponding evidences can be managed through an automated workflow ticketing system used in the organization or it can be managed through a spreadsheet.

Sample Spreadsheet is given below:

Comprehensive Vulnerability Management spreadsheet is provided in Appendix 1 for remediation tracking of accepted vulnerabilities.
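If no ticketing system is available, even a small structured record per vulnerability keeps the workflow honest. The field names and status values in the sketch below are assumptions for illustration, not a prescribed format.

# Illustrative vulnerability record with a simple status workflow and history trail.
from dataclasses import dataclass, field
from datetime import date

STATUSES = ["Reported", "Verified", "Risk Rated", "Remediated", "Risk Accepted", "Closed"]

@dataclass
class VulnerabilityRecord:
    reference: str                      # e.g. CVE id or scanner finding id
    asset: str                          # affected asset from the asset inventory
    description: str
    status: str = "Reported"
    history: list = field(default_factory=list)

    def move_to(self, new_status, note=""):
        if new_status not in STATUSES:
            raise ValueError(f"Unknown status: {new_status}")
        self.history.append((date.today().isoformat(), self.status, new_status, note))
        self.status = new_status

# Hypothetical usage
rec = VulnerabilityRecord("CVE-XXXX-XXXX", "edge-router-01", "SSL weakness on port 8101")
rec.move_to("Verified", "Port 8101 confirmed open and SSL service in use")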

9.3.3 Assign Risk Rating

This step may not be required if vulnerability assessment is not part of risk management; however, even if risk management is handled separately, it is good to understand the process and terminologies, as the business unit may be expected to do risk management on its own. Normally a vulnerability severity rating is provided by the vulnerability assessment tools, but this severity may not depict the real risk to the organization. Ideally, risk should be calculated for each vulnerability before planning to take action on it.

It can be done in collaboration with the security team, or independently by following the risk management guidelines/framework to calculate the risk rating. If the risk is calculated independently, concurrence from the security team should be taken for consistency. Risk is a product of threat impact (vulnerability severity and threat severity), probability of attack and asset criticality. For simplicity, let's keep Risk = Threat Impact x Probability of Attack x Asset Criticality. Threat Impact: the product of vulnerability severity and the corresponding threat severity. It is basically the impact of the vulnerability in case it is exploited, without considering the controls in place.

Probability: Probability defines the likelihood of the attack taking place in terms of controls in place, expertise of hacker, exposure of resource etc.

Asset Criticality: the same as the one calculated in Chapter 6, based on the Confidentiality, Integrity and Availability values of the asset. Every organization follows a different definition and risk calculation methodology, but ultimately the goal is the same, and the terms used here are more or less similar across organizations. In your Risk Management methodology or framework, look for the tables below; they will help in calculating the risk. Threat Impact can be derived from a sample table such as the one below:


The asset inventory will provide the asset relevance/criticality rating; the same table was used in Chapter 6.


Probability Table will provide the probability ratings:


For risk calculation, consider High as 3, Medium as 2 and Low as 1. The Risk Assessment matrix may look like the one below:

Let's take an example of Asset XYZoffice101

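Since the sample tables and the worked example are not reproduced here, the short sketch below walks through the arithmetic of the Risk = Threat Impact x Probability x Asset Criticality formula on the 3/2/1 scale; the individual ratings are hypothetical values chosen only to illustrate the calculation.

# Worked risk calculation on the High=3 / Medium=2 / Low=1 scale.
SCALE = {"High": 3, "Medium": 2, "Low": 1}

def risk_score(threat_impact, probability, asset_criticality):
    return SCALE[threat_impact] * SCALE[probability] * SCALE[asset_criticality]

# Hypothetical ratings: High threat impact, Medium probability, High criticality
score = risk_score("High", "Medium", "High")
print(score)   # 3 x 2 x 3 = 18; map the score back to a High/Medium/Low band
               # using your organization's risk assessment matrix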

Once the risk rating is calculated, another table in the document should explain the risk rating in terms of business impact:


Risk register can also record non-technical vulnerabilities e.g. insufficient access controls (physical or logical), lack of backup copies, single point of failure etc. Recording such vulnerabilities in terms of business risks helps management understand the business impact and support the mitigation. All these calculations are typically done using formulas in a spreadsheet. A sample Risk Assessment sheet also called risk register is provided in Appendix 2 for reference.

9.3.4 Formulate and Implement Treatment Plan

There are typically four ways a risk can be treated:

Risk Remediation
Risk Avoidance
Risk Transference
Risk Acceptance

Risk Remediation: Once the vulnerability has been verified and a risk rating assigned, it is time to create a remediation plan. Recommendations are mostly provided along with the vulnerability description; however, more information can be obtained from the vendor or online. Before starting on the remediation plan, consult the Risk Management guidelines/framework once more and look for the Risk Treatment section. A sample Risk Treatment Action table is provided below.


This table can help in prioritizing the risk remediation plan. High risks should be given more priority and should be fixed within the time given in this table; the time will vary from organization to organization. Also, a risk may be identified on only one or two devices in scope even though it exists on all similar devices/servers. Care should be taken to remediate the risk in a holistic manner rather than patching only the identified devices/servers; otherwise the other devices remain unpatched and pose a greater risk to the organization. Risk can also be reduced by strengthening the controls around the affected asset. For example, if there is a vulnerable service running on a server which is required by an application to function, access controls can be implemented to limit access to that service from other network segments by blocking the port on the firewall. The service may still be vulnerable and running, but the risk may be reduced to acceptable levels.

Implementing Treatment Plan and Verifying Closure:

Once the Risk Treatment plan is created, it should be implemented within the stipulated time period. Appropriate change management procedures should be followed while implementing the changes. Once the risk is treated, it should be verified by the assessment team and marked as closed. If the risk is still open, then the treatment measures should be discussed with the security teams/vendors and appropriate treatment executed until the risk is completely fixed. The ideal course of action is to remediate all the risks the organization is exposed to; however, there are sometimes exceptional situations where risks cannot be remediated. Whatever the reason, the risk still has to be treated in some way or the other. A risk can be treated in these three ways if remediation is not possible:

Risk Avoidance: A risk can be avoided by disabling the vulnerable component. For example, if a legacy application is vulnerable and vendor support is already over, it may be more cost effective to buy a new application rather than trying to fix the old one. In that case the legacy application can be shut down, effectively avoiding all the risks related to it.

Risk Transference: A risk can be transferred to a third party or an insurance firm. The risk of any solution can be transferred to third parties, ensuring they cover the risk specifically. It may mean that they use some other technology which is not vulnerable, or that they employ advanced technologies to reduce the risk.

E.g. if backup tapes are not kept properly (i.e. free from moisture/dust) as required, then there is a risk of corruption of all backup data. This risk can be transferred by outsourcing backup tape storage to companies who specialize in secure tape storage.

Risk Acceptance: A risk can be accepted based on the organization's risk appetite. Risk treatment means an investment of time, money and effort, and an organization can accept some risks to save on that investment. E.g. in our example above, the organization has decided to accept all low risks rather than investing time and money in them. The risk appetite of the organization means how much risk the organization can absorb without facing any major damage. In our example, all Low risks are absorbed and no effort is required from the risk treatment teams. Risk appetite will differ from organization to organization; military organizations may have 'zero' appetite for risk.

If no other risk treatment option is possible, the organization can decide to accept the risk. Remember that a risk can only be accepted by the asset owner; however, involvement of the security team is important to assess whether risk acceptance on one asset impacts the overall security posture of the organization.

Risk acceptance can also be done if there is a conflict between a business process function and the vulnerability fix. Sometimes fixing a vulnerability may impact a business process negatively. In that case it is better to record the incident as justification for accepting the risk, as this risk will appear again and again in all future audits until it is fixed.

Your organization should have a form/workflow for Risk Acceptance. A risk acceptance should always have an expiry date, to re-evaluate the risk and see whether it is still at the same criticality level or whether any new fixes have been released.

A sample Risk acceptance/Risk exception form is available in Appendix 4. This form is also explained in detail in Chapter 4. When Risk Acceptance form is approved by Authorized personnel, it is a good practice to record the approvals in a spreadsheet. This helps you to track the expiry of the risk and to determine whether to reevaluate the risk or renew risk acceptance for some more time.

Inventory of risk acceptance gives you an easy reference in consecutive audits as vulnerability will be detected by scanners until fixed. In my experience this small step is highly recommended to save a lot of time and resources during audits.

A Sample sheet may look like below:

9.3.5 Root cause analysis (Post Mortem and Corrective Actions)

On a bi-annual or annual basis, business units should come together with the security teams and try to analyze the root cause of the vulnerabilities identified in the last year or half-year. Mostly, when vulnerabilities are identified, all resources are focused on remediation. However, once they are fixed, or at least the planning is completed, there is a need to reflect upon them to understand the process gaps that are contributing to these vulnerabilities.

This introspection is important to minimize the number of vulnerabilities and avoid their repetition in future.

A risk register can help to classify the vulnerabilities, to understand process weaknesses better. Comprehensive Risk Register/Risk Management Sheet sample is provided in Appendix 2.

Network vulnerabilities can mostly be classified under the labels below:

Network Protocol weakness
Obsolete or old version of firmware/application
Device misconfiguration
Default/weak Passwords
Access control list misconfiguration

1. Network Protocol Weakness

If vulnerabilities fall under network protocol weakness:

Check if they are newly identified

Check when the vulnerability was released online and what the vendor recommendations were

If the vulnerabilities were indeed recently released on the internet, then there is no reason to worry. This means that the vulnerability/risk management processes are working as planned, and all the steps discussed above, including vulnerability verification, risk rating assignment and finally risk treatment, are working effectively. If the vulnerabilities are old, then follow the same guideline as 'Obsolete or Older Version of firmware/application'.

2. Obsolete or Older Version of firmware/application

If most of the vulnerabilities are old flaws for which the vendor has already released patches, then there is a need to revisit the patch update schedule and patch update coverage. If required, the patch update frequency should be increased and the patch update coverage enhanced. Coverage basically means that if the organization is patching Windows systems and other operating systems only, but vulnerabilities are being identified in applications like Adobe/Java/web servers/WordPress, then patch coverage should increase to cover all applications installed on production systems. Also, for those affected applications/firmware, the organization should keep in touch with the vendors for all new security advisories.

3. Device misconfiguration, Default/weak Passwords and Access control list misconfiguration

If most of the vulnerabilities fall under any of these categories, then it is a serious issue. All these categories

should have been fixed in pre-production phase itself.

There is a need to revisit the pre-production checklists/procedures. Either they are not being followed properly, or the documentation itself is not comprehensive. Both scenarios warrant a change in the documentation or in the process itself. Please check Chapter 7 for more details. However, if all the guidelines were followed during the pre-production phase but these vulnerabilities were introduced over time through changes, then the change management process has to be revisited. Please check Chapter 8 to follow a secure change management process.

9.4 Handling Zero Day

Sometimes vulnerabilities are released online without corresponding remediation steps. This happens when an individual or a third party identifies a vulnerability and releases it online. It also happens in cases where a malware infection is spreading through a system vulnerability and anti-malware systems are unable to block it. As the vendor was not aware of the vulnerability, they may take some time to issue a fix. Such vulnerabilities are known as zero-day vulnerabilities. A zero-day vulnerability can be extremely dangerous because, until the fix is made available, an attacker can exploit it wherever the affected application/system is in use.

The infamous RSA attack, in which SecurID token seed data was stolen and RSA had to replace millions of tokens worldwide, started with the exploitation of a zero-day vulnerability in Adobe Flash.

The Heartbleed and Shellshock zero-day vulnerabilities were discovered in the OpenSSL library and the Bash shell respectively in 2014. Before the fixes were released, many incidents of exploitation were reported. The Canada Revenue Agency reported that 900 social insurance numbers were stolen using the Heartbleed vulnerability, and the agency had to shut down its website for a while after detecting the attack.

The UK website Mumsnet reported the hijacking of several user accounts due to the Heartbleed vulnerability.

After Shellshock was disclosed publicly, attackers started probing, scanning and exploiting the bug within hours. Bash is widely used worldwide, which created a vast attack surface. The Shadow Brokers hacker group came into the picture when they released a number of zero-day exploits allegedly stolen from the NSA. These exploits were quickly used to attack various organizations; the WannaCry ransomware also used one of them, 'EternalBlue', to spread its infection. Microsoft releases patches on the second Tuesday of every month; however, we all know that upgrade schedules differ from organization to organization, and for some it can even be once a year. Attackers make use of Microsoft Patch Tuesdays, create exploits for specific vulnerabilities and launch attacks on susceptible organizations.

It is recommended to at least update the signatures on Network Intrusion Prevention Systems, which are normally updated frequently, and to patch your systems as soon as possible. If upgrades are not possible on all systems, the DMZ network should at least be prioritized, as it is more exposed to attacks. Zero-day vulnerabilities leave organizations exposed to risk until a fix is available. They also put a lot of pressure on security and asset owners to fix the issue 'somehow'. Although dealing with a zero-day vulnerability depends on the specific case, there are some generic guidelines that can be followed:

Preventive Controls: Keep systems patched as per schedule, keep intrusion prevention signatures current, prioritize DMZ and other Internet-facing systems, and subscribe to vendor security advisories so that fixes can be applied as soon as they are released.

Mitigating Controls: If a zero-day vulnerability is identified:

1. Identify the affected assets, applications, ports and services.

2. Risk Avoidance: Check if the affected component can be deactivated until a fix is available.

3. Risk Remediation:

a. Increase the strength of security controls (authentication/authorization/availability etc.) around the vulnerable services.
b. Segregate affected assets into a logical network section and restrict access to the affected services to authorized users only.
c. Check with the Network Intrusion Prevention System vendor whether a signature for that vulnerability is available. If yes, enable the signature to monitor and block any activity related to that vulnerability.
d. Enable host firewalls/host IPS on the affected assets, if possible.
e. When a permanent fix is not available, vendors or third parties sometimes release compensatory controls to contain the issue. Keep in touch with the vendor for a workaround.

4. If nothing else is possible, increase monitoring of the service/application or port so that any unusual activity can be detected by trained personnel.

9.5 Audit Requirements

Please Note: This conversation is only to give you an idea of providing evidences to the Auditor. Evidences can change based on the organization's infrastructure.

Chapter 10 Access Control

10.1 Introduction

Access control is not a new phenomenon. We have been using access control to protect our assets for ages. We have heard stories of queens and their personal palaces which only the king and their staff could enter, and of treasures stored in safe locations behind voice-activated passwords like 'Open Sesame'. The simplest forms of access control are a locked diary in which you write your personal feelings, or the eldest member of the family keeping the keys of the safe in the house. These are all examples of access control, where access is granted only to authorized individuals.

Access control is managed at two levels:

Physical Access: Managing physical interaction between user and hardware

Allowing or denying a user to physically enter specific location where hardware is kept i.e. accessing a server rack in a datacenter

Logical Access: Managing interaction between user and information/ application within the hardware

Allowing or denying a user access to data/application contained within a hardware device like server or router

Access control is a preventive security control which controls how users can communicate or access the assets in a secure fashion. It is a control, not just for data confidentiality but also for integrity and availability. It basically controls the following:

Who has control of viewing the data? Who has control of modifying the data?

Who has control of destroying the data? In short, any type of access to assets is governed through access control. It is a combination of identification, authentication, authorization and accounting techniques. To control access to a particular resource, it is essential to:

Determine who is requesting access, is he a valid user?

Ensure user is who he claims to be

Determine the level of access granted to him, and finally record all activities in his account with his username and timestamp. Let's understand these concepts in detail:

10.2 Identification

Identification is presenting an identity or claiming to be a specific user. It is basically informing the computer that you are this entity. Some common examples of identity are user ids, ATM cards etc. Everyone in the organization should be identified with a unique id. Common or generic ids like admin and root should be prohibited; use of common ids among various users can negatively impact accounting, as system activities cannot be pinned to a specific user. User ids should also not reveal any role-specific information, e.g. Maxine_NetworkOps.

10.3 Authentication

Presenting an identity is not enough to prove that the user is who he claims to be. The user could very well be a malicious hacker who happens to know a valid user id; therefore, it is important to prove to the system that the user is indeed who he claims to be.

There are several ways to authenticate, and generally they can be categorized into three factors:

Something you know: A numeric or alphanumeric combination which only the user knows. The cheapest and simplest form of authentication used globally, e.g. ATM PIN, login password.

Something you are: A physical trait which is unique to that user only, e.g. fingerprint, facial scan, retina scan. Commonly used in corporates, personal devices, airports etc.

Something you have: A physical object which is personal to the user or has been assigned to that specific user, e.g. smart card, token, digital certificate, personal mobile. Commonly used for financial transactions, government dealings, email authentication etc.

The user can be asked for any of the above factors (password/fingerprint/smart card) to prove his identity to the system. In our day-to-day lives we all use passwords/patterns/fingerprints on our personal mobiles and PDAs, which is simply a form of authentication.

Multifactor Authentication

Multifactor authentication or strong authentication involves two or more of above stated factors in authentication process for e.g.

Something you know + something you have = Password + Token

Something you have + something you are = Smart card + Finger scan

10.4 Authorization

Once a user is identified and authenticated successfully, access to resources can be granted. The extent of this access has to be predetermined based on the role of the user and the least privilege principle, which states that users should be granted just enough privileges to perform their job responsibilities.

The most commonly used approach for centralized access control is the role-based approach, where a user is assigned to a specific role and the role is given privileges to access the resources. Customized roles should be created for different job levels to satisfy the least privilege principle. A role should also define the level of privilege, i.e. view only, modification or full control.
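A toy sketch of this role-based idea is shown below: users map to roles, roles map to privilege levels on resources, and an access check walks that mapping. The role, device and user names are purely illustrative.

# Minimal role-based access check; names and privilege levels are illustrative.
ROLE_PRIVILEGES = {
    "Network Admin":   {"router-01": "full", "router-02": "full", "firewall-01": "view"},
    "Net Ops Level 1": {"router-01": "view"},
}
USER_ROLES = {"maxine": ["Network Admin"], "arun": ["Net Ops Level 1"]}
ORDER = {"view": 1, "modify": 2, "full": 3}   # privilege hierarchy

def allowed(user, device, needed):
    """True if any of the user's roles grants at least the needed privilege."""
    for role in USER_ROLES.get(user, []):
        granted = ROLE_PRIVILEGES.get(role, {}).get(device)
        if granted and ORDER[granted] >= ORDER[needed]:
            return True
    return False

print(allowed("arun", "router-01", "modify"))    # False: view-only role
print(allowed("maxine", "firewall-01", "view"))  # True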

For example, a user can be assigned to the Network Admin role, and the Network Admin role can be given specific privileges to access specific resources, e.g. Cisco level 4 privileges on routers 1, 2 and 3.

10.5 Accounting

Once the user is identified, authenticated and authorized, the user's activities and usage statistics should be recorded to create an audit trail. This audit trail is useful to see what the user is doing once authorized. When analyzed and monitored properly, the audit trail can detect security breaches. It is also useful in forensics, to analyze an incident and understand the modus operandi in order to avoid similar incidents in future.

Accounting will only be effective if everyone in the organization can be identified with a unique user id and all devices are time synchronized. If the organization uses common or shared ids, then malicious activities cannot be tracked back to a specific user. An audit trail is also considered valid evidence in a court of law.

10.6 Access Control Policies And Procedures

Access Control Policy provides an overall view of managing access control in the whole organization; however, each business unit may have to translate it in procedures to provide specific instructions to carry out policy directives.

To draft access control procedures, a business unit needs to understand its assets, their classification, and who requires what level of access to them, and then define how each of these access requirements will be handled, covering the areas described in the following sections.

10.7 Access Control Implementation

Access control is not just about granting and removing access. From an operational perspective, there are a lot of things that should be considered for access control implementation. An adequate access control implementation can be achieved by covering the following areas:

User registration and de-registration
Password Management
Asset Classification
Access Provisioning
Network Admission Control (NAC)
Privilege Management
Remote Access and Connectivity
Third Party Access and Connectivity
User rights review Procedure

Access control procedures should cover how the business unit handles its access control requirements, ideally covering all the above areas. Each area can be a procedure in itself, or a single detailed access control procedure can cover all of the above topics. Let's understand each of these areas in detail:

10.8 User Registration And De-Registration

User registration and de-registration should be properly documented. Procedure should define the process of creating a new user in the system with unique identification based on a specific naming convention, and providing him rights to company assets that will be required by him to perform his duties effectively. This will also entail basic physical access to his work place. Physical and Logical access should be provided based on 'Least Privilege principle'. Asset allocation should also be included in user registration process e.g. company's laptop, company's mobile etc.

In most of the companies, User registration is either a Human resource function or Active directory team's function after HR intimation. However, a lot of assets are department specific and user creation and deletion on those assets, is handled by that department itself.

De-registration process should include removing/disabling user ids from the systems, retrieval of allocated assets, removal of physical access and return of any identification cards. If user was granted access to various stand-alone devices/applications manually, then the user id should be deleted manually at the same time.

Most organizations disable user ids rather than deleting them for a certain period of time, to ensure smooth transitioning. Ensure that similar practices are followed for stand-alone devices/applications as well.

Departmental user creation and deletion process should be properly aligned with HR Process. Removing user from active directory is easy however if all applications are not integrated with Active Directory then deleting user from all different assets can become a nightmare. I have observed that even when the user was terminated, his physical access to datacenter and devices was not removed immediately, which provided a window of opportunity to the disgruntled employee to cause damage to the organization. To avoid this, try to integrate as much as possible with central Access provisioning and de-provisioning systems like Active Directory. If Cisco ACS is used for network devices provisioning, then ACS can be integrated with Active Directory. De-provisioning on stand-alone devices should be taken care of as part of de-provisioning process.

10.9 Password Management

It is important to be consistent with password policies throughout the organization. Password policies can be centrally enforced through a directory service like Active Directory; however, the same password policy should be enforced on standalone applications, servers and network devices as well. Good password components typically include: a minimum length, complexity (a mix of upper case, lower case, numbers and special characters), a maximum password age (expiry), password history to prevent reuse, and account lockout after repeated failed attempts.

Passwords should be stored in hashed/encrypted format using a strong hashing/encryption algorithm.
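As a sketch of what "hashed with a strong algorithm" can mean in practice, the example below uses salted PBKDF2 from the Python standard library; purpose-built schemes such as bcrypt or Argon2 are equally valid choices, and the iteration count shown is simply an assumed value.

# Salted PBKDF2 password hashing and verification (standard library only).
import hashlib
import hmac
import os

def hash_password(password, iterations=200_000):
    salt = os.urandom(16)                                   # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest                         # store all three, never the plaintext

def verify_password(password, salt, iterations, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)           # constant-time comparison

salt, iters, stored = hash_password("CorrectHorseBatteryStaple!9")
print(verify_password("CorrectHorseBatteryStaple!9", salt, iters, stored))  # True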

Users should also be educated on creating a good password. A good password follows all the complexity criteria and is still easy to remember; if a password is extremely random, users will end up writing it down somewhere.

If the access control policy requires strong authentication for all or specific systems, then arrangements should be made to include biometrics, tokens, encryption certificates etc. along with the password. Apart from password strength, it is important to have documented password issue/reset procedures for each individual system which is not controlled through the central directory, e.g. standalone servers, network devices, VPN devices and applications which are not integrated with the centralized directory. Password issue and reset procedures for non-integrated devices/applications should be the same as or similar to the central directory procedures. E.g. if the Active Directory password issue procedure requires the password to be sent half on email and half on phone, then a similar process should be incorporated in the procedure for non-integrated systems. Important considerations for password reset and issue procedures are:

For a password reset, the user should raise a request on the organization's service desk portal or sign a physical form.

A request can be raised on behalf of the user from some other id; however, the password should always be sent to the user. An email from the manager's email id can also be accepted as request and approval. If password reset is allowed over the phone, then the helpdesk user should confirm the identity of the user by asking questions that only the user can answer.

For such a system, the user initially enrolls in the identity management system by creating 5 to 10 personal questions and providing their answers. The help desk can randomly choose any 3 to 4 questions to confirm the identity of the user before resetting the password (a small sketch of this challenge-question step is given after this list). If the user's identity cannot be confirmed using the above methods, the reset password should be sent to the manager. Helpdesk support should only have the rights required to perform their work, and there should be security controls implemented for the people providing password support. This task is extremely sensitive and ideally should not be outsourced to frequently changing third party personnel. Password communication to the user should be out of band; half of the password can be sent to the manager's email id while the other half is verbally told to the user over the phone.

The password should be a one-time password and the user should be required to change it at logon. The auditor may ask for evidences that an appropriate password reset procedure has been created and is followed. Apart from the password reset procedure document, he/she may ask to see the evidence first hand and request you to do a password reset for your account in front of him/her.
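Returning to the challenge-question step mentioned in the list above, the sketch below shows the basic mechanics: pick a random subset of the user's enrolled questions and require all of them to be answered correctly. The questions, answers and matching rule are assumptions; in a real system the stored answers should themselves be hashed.

# Illustrative helpdesk identity check using enrolled challenge questions.
import random

enrolled = {
    "Name of your first school?": "st marys",
    "City where you were born?": "pune",
    "Your first manager's first name?": "ravi",
    "Model of your first car?": "alto",
}

def pick_challenge(questions, how_many=3):
    """Randomly choose which enrolled questions the helpdesk will ask."""
    return random.sample(list(questions), k=min(how_many, len(questions)))

def verify(questions, asked, answers):
    """All asked questions must be answered correctly (case-insensitive)."""
    return all(answers.get(q, "").strip().lower() == questions[q] for q in asked)

asked = pick_challenge(enrolled)
caller_answers = {q: input(q + " ") for q in asked}   # helpdesk keys in the caller's answers
print("Identity confirmed" if verify(enrolled, asked, caller_answers) else "Identity NOT confirmed")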

It is often observed that passwords are configured on the Virtual Teletype (VTY) lines, while passwords for the console and auxiliary lines are skipped. Authentication should be configured on every access line with the same criteria, or else unused lines should be disabled.

It is also observed that VPN passwords and admin passwords are sometimes kept in a password-protected spreadsheet which is shared among the team. This practice is not secure at all and can be raised as a very highly rated finding in an audit.

10.10 Asset Classification

We learned about asset classification matrix in Chapter 6. Continuing the same matrix example, let's understand the role of asset classification in access control.

To define appropriate access controls, each business unit should classify its information assets based on the Classification Matrix. Once classified, appropriate access controls should be defined in the access control procedures and implemented on the unit's assets.

10.11 Access Provisioning

At the time of user registration and de-registration, basic rights are provided for the user to work and perform general tasks like accessing the intranet and email. However, the IT/Network Operations team will require different levels of rights on network devices like routers, firewalls etc. to perform their job.

It is easier to have a centralized user management tool to manage those rights, like an AAA server (TACACS+/RADIUS) to manage user rights on network devices. Cisco ACS can also be used for access provisioning on Cisco devices, and it can be integrated with Active Directory. Cisco ACS is an AAA server which can handle:

Authentication: Cisco ACS can provide centralized secure logon to devices. It can also support strong authentication methods like tokens.

Authorization: Once user is authenticated Cisco ACS can determine which devices a user can access and what kind of operation is allowed. These authorization lists/roles are created in advance and users are attached to them. Commands allowed can also be specified to a specific role. These roles can also be time bound like 9am to 6pm. Accountability: Cisco ACS also logs all users' activities in network devices once they are authenticated and authorized.

However, for a small to medium organization, it may not be feasible to buy a tool for user provisioning. It can be handled with a simple spreadsheet; a tool definitely makes it easier, but the work can still be done without one.

To manually maintain access provisioning following things should be considered:

Have a standard configuration across all devices. Have standard role profiles based on Job requirement for example, Net Ops Level 1, Net Ops Level 2. Assign these roles to individuals rather than assigning them privileges directly.

Decide the commands for each level and keep overlapping of commands to absolute minimum.

The highest level of administration should only be provided to a few named individuals. At least two people should be granted the highest level of rights on each device, to ensure business continuity in the absence of one person. Create unique user ids for each person on each device and do not use common ids like admin. Distribute devices within the team so that only the required user ids are created on each device. Use the strongest available encryption and hashing for user passwords. Create a user provisioning matrix and use this matrix for access provisioning and de-provisioning. This step is extremely important

in manual access management as deletion of user from one device will not ensure removal from other. User will have to be added and removed from each device manually. A simple example of User Access Matrix is provided below:

* Levels 5, 10 and 15 are custom levels created for administrative privileges, with 5 being the lowest and 15 the highest. A second spreadsheet will be required to record the names of users against role names.
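A tiny sketch of the manual matrix idea follows: devices map to roles, roles map to user ids, and the same structure doubles as the de-provisioning checklist. The device, role and user names are illustrative only.

# Illustrative user access matrix and de-provisioning helper.
ACCESS_MATRIX = {
    "core-switch-01": {"Net Ops Level 2": ["arun", "maxine"]},
    "edge-router-01": {"Net Ops Level 1": ["arun"], "Net Ops Level 2": ["maxine"]},
    "firewall-01":    {"Firewall Admin": ["maxine", "jdoe"]},
}

def devices_for_user(user):
    """Every device/role pair a user holds - the checklist for removal."""
    return [(device, role)
            for device, roles in ACCESS_MATRIX.items()
            for role, users in roles.items()
            if user in users]

def deprovision(user):
    """Remove the user from every role on every device in the matrix."""
    removed = devices_for_user(user)
    for device, role in removed:
        ACCESS_MATRIX[device][role].remove(user)
    return removed   # verify manual removal on each listed device

print(devices_for_user("maxine"))
print(deprovision("maxine"))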

Draft all required processes and procedures; for example, Network Device Password reset procedures, Network Device access control procedure, and Network Device User registration and deregistration procedures etc.

These procedures are different from user registration and de-registration procedures used at organizational level as they are specific for network devices. Alternatively, you can also include a section for network devices and server administration in original user registration/deregistration process for ease of use.

Manual user management may be required even when you have a user management tool which supports only specific brands, like Cisco ACS. Cisco ACS will only support Cisco devices; if you have a mixed environment with other brands like Juniper or Huawei, you may have to manage them manually.

10.11.1 Access Provisioning Process

Let's look at the user access provisioning process here. We can continue with the user registration example shown earlier.

After user id is created and basic rights are provided, network management team assigns roles related to user's job profile. In this case Network Operations, Network Planning, Firewall Administration roles have been granted to the user. Access provisioning procedures should:

Mention Forms to be raised at each level; Location/URLs. Define the Approval authority and workflow for each process.

Define Implementation steps for each process: Steps for new user creation and role profile assignment.

Steps for initiating physical access. Steps for updating user access records along with physical access.

Define how user ids, passwords, training manuals etc. will be communicated to new users.

10.11.2 Segregation Of Duties

When assigning user role profiles, segregation of duties should be considered. One person should not be given conflicting privileges where he can both perform a task and review it. Ideally, the following role profiles should be allotted to different individuals: Network Planning Engineer and Network Operations Engineer.

Network Planning Engineer and Network Manager. Network Operations/Planning Engineer and Log Monitoring Team. Segregation of duties can also be implemented at a more granular level, e.g. two people can be assigned to deploy a change: one person deploys and the other verifies the deployment through a pre-defined checklist.

It can be a challenge for a small/medium organization where a limited number of people handle network management; however, dividing each task into two parts and assigning those parts to two different individuals can help achieve segregation of duties. E.g. the Network Planning engineer can review the deployment of a change once the Network Operations engineer has completed the job.

10.11.3 Access De-Provisioning

On the user's last day in the organization, the user id should be deactivated in the user provisioning tools (Cisco ACS/TACACS+ server etc.). If tools are not used, then the User Access Matrix should be referred to, to check all the roles the user has been assigned on various devices, and access should be removed from all those devices manually. It is imperative to do this task without delay, as auditors may ask for user id deletion logs. Also, if the user had any special physical access as part of his job, that should be removed on the same day. Ideally, after HR intimation of the user's last day, all logical and physical access removal should be scheduled for that day itself. All required preparations should be done before the last day, i.e. raising removal requests with the concerned departments/change requests to deactivate user ids etc.

As per Human Resource process, user's id should be deactivated from central directory and assigned assets and cards should be collected.

Once everything is completed, all records should be updated, especially if manual access provisioning is maintained. The User ID status should change from 'Active' to 'De-Activated', and after 30 days (or the time period mentioned in the Access Control Standard) the user id should be deleted permanently from the devices and the status changed to 'Deleted' (a small sketch of tracking this retention window is given after this list). Access de-provisioning procedures should:

Mention the forms to be raised at each level, and their location/URLs.
Define the approval authority and workflow for each process.
Define the implementation steps for each process: steps for removing the user from a role, steps for removal of physical access, and steps for updating user access records along with physical access.
Define the communication process with Human Resources.
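For the record-keeping step above, a short script can flag which de-activated ids have passed the retention window and are due for permanent deletion. The 30-day window and the record fields are assumptions to be aligned with your own Access Control Standard.

# Flag de-activated user ids whose retention window has expired.
from datetime import date, timedelta

RETENTION_DAYS = 30

user_records = [
    {"user_id": "jdoe",   "status": "De-Activated", "deactivated_on": date(2018, 1, 2)},
    {"user_id": "asmith", "status": "Active",        "deactivated_on": None},
]

def due_for_deletion(records, today=None):
    today = today or date.today()
    cutoff = today - timedelta(days=RETENTION_DAYS)
    return [r["user_id"] for r in records
            if r["status"] == "De-Activated"
            and r["deactivated_on"] is not None
            and r["deactivated_on"] <= cutoff]

print(due_for_deletion(user_records))   # ids to delete permanently and mark 'Deleted'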

10.12 Network Admission Control (NAC)

NAC was covered in Chapter 3; however, it is worth mentioning here in access control, as NAC can provide access control at the network layer. NAC can be utilized in multiple ways to secure the user segment appropriately.

Integration with Centralized Directory:

NAC can be integrated with Active Directory to ensure only valid users can gain access to the corporate network. Based on the user accounts, network profiles should also be allocated to the user, i.e. VLAN/wireless SSID.

Limiting Network Access: Based on the user's type and location, NAC can be programmed to limit the user to a specific VLAN/wireless SSID. E.g. guest/third party users can be restricted to connect only to the Guest wireless SSID.

Posture Assessment and Correction:

NAC can be integrated with existing patching servers/antivirus to assess a laptop's patch level and antivirus signature level before allowing connection to the network. In case the patch level or antivirus signatures are not updated as per the organization's policy, the laptop can be quarantined to a specific VLAN, and patch and antivirus updates can be done in that isolated network. Network access can be provided once the laptop's posture is as required.

10.13 Privilege User Access Management

Privilege user management is access management of administrative/ privileged users. Privilege users are responsible for critical tasks and any critical task going awry may result in loss of availability, integrity or confidentiality. Administrative access is required for operations however accidental/deliberate misuse of admin privilege can cause great damage to organization. Moreover, attackers always try to gain access to privilege user accounts to perform the breach. Therefore, Privilege user access should get more importance in terms of security. Goal of privilege user access management is to add multiple layers of defense to increase the complexity of privilege access in order to secure it as much as possible. Different layers of privilege access can be:

Authentication:

For privilege users, password should be stronger than normal user. Two factor or multifactor authentication can be included for administrative users. For Password reset and issue, approval from manager should be required. All the best practices mentioned in "10.9 Password Management" should be followed more stringently even if they are not followed for normal users. Password should be changed more frequently than the normal user. More sensitive passwords can be kept in safe and changed after each use. For highly sensitive encryption keys or passwords, two or three parties can be provided parts of password and to access, they should all come together with their distinct parts to form one full password.

Privilege management tools can be used for better handling of privilege password. Tools can create a temporary password for specific task and password expires once allocated time is over. These tools also allow recording of privilege user activity during the session.

Authorization: Devices should be distributed among privileged users so that no single administrator has administrative access on all devices. A minimum of two administrative-level users should be created on each device to ensure continuity of business. Ideally, all privileged users should have a normal account for day-to-day work and a separate privileged account to perform administrative tasks.

Administrative account should be used only for administrative activities. Privilege activities can be limited to specific IP addresses available locally in Data Center-command center or office premises to ensure any privilege activity cannot be done remotely.

Accounting: Logs of privilege level activity should be recorded and monitored more closely than regular logs. Any abnormal activity should be detected and reported through incident management. Auditor may ask for privilege management logs to compare it with corresponding change request. If activity logs show anything which does not correspond to any change request, then it can be raised as unauthorized change and a finding.

10.14 Remote Access Management If users are allowed to connect remotely, then authentication should be handled appropriately, with proper procedures and processes in place to create, delete and reset passwords. Special care should be taken to verify user identity when resetting an administrative-level user's password. Remote access procedures can be drafted separately or in a dedicated section in the existing access control procedures.

IPSec/SSL VPNs are commonly used methods of remote access. VPNs are explained in Chapter 3. Authentication:

Authentication should be handled with TACACS+/Radius or similar technologies which support strong authentication. Password management should be same as stated in Password standard. Distinct user ids should be created for each user and common ids should be avoided. Authorization:

Users should be assigned to clearly defined roles based on their required access. Their required access should ideally be approved by asset owners and verified by security team. Role should specify assets and type of authorization allowed. Accounting: User activities should be appropriately recorded and monitored as remote access is commonly used way for hackers to access internal system.

Remote Administration: The risk factor increases considerably if remote administration is allowed in the organization; therefore, ensure that maximum controls are used for remote administration. Strong authentication methods should be required for user authentication. Avoid providing the highest level of privilege in remote access profiles. Logging and monitoring should be at the highest level. 10.15 Third Party Access Management Third parties that organizations deal with can introduce a lot of threats to the organization, even if the organization is taking all due care measures for security.

Security should be included as selection criteria to select a vendor. Vendor should be properly audited for secure processes, technology and security aware personnel. When outsourcing services to vendor's location, a proper audit should be conducted to ensure vendor policies are at par with organizational policies. Technologically vendor should be equipped with security technologies like Firewall, Antivirus, NIPS, Data Loss Prevention and other relevant controls.

A Third Party Access Management procedure should be drafted to standardize access control for all third parties the organization is dealing with.

The procedure should consider the following points at a minimum:

Management of physical and logical access for the vendors.
Third party user registration and deregistration process.
Requirement of strong authentication.
Requirement of access provisioning based on Least Privilege.
Requirement of logging and monitoring.
Approval processes and procedures (by the vendor's account manager as well as the organization's account manager).
Third party user access review.
Third party secure connectivity requirements, i.e. encryption requirements, secure site-to-site VPN etc. Third party secure connection is covered in detail in Chapter 5.

10.16 User Access Review User access review is a very important part of the Access Control Process. Depending on the organization's policy, a monthly, quarterly or annual review might be required. It should be done at least annually.

There are two types of user access review that should be done. They can be combined or done separately. 10.16.1 Idle account review

This is a review of the last login date of each user ID to find out which user accounts have not logged in for the last 3 or 6 months (depending on the idle account criteria specified in the Access Control Standard). This can be an automated report generated from Active Directory/TACACS+ server or user activity logs. If users have not logged in in the last 6 months, analysis is required to check whether they still require this access, whether they are still in the organization, whether they have changed roles etc. These idle accounts can be disabled for a while and, if there is no activation request, they should be permanently deleted. Their corresponding managers can be asked to review whether the access is still required. The auditor may request evidences of idle account review.

Keep the below documents/reports as evidences:
■ Data collected for idle account review, i.e. user last login reports
■ Analysis of reports and discussion with relevant stakeholders on whether those idle accounts are still required
■ Action taken based on the analysis report, i.e. disabling of idle accounts
■ Report of deleted accounts based on the review
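For instance, a last-logon report can be pulled from Active Directory automatically. The sketch below is a minimal illustration using the Python ldap3 library; the domain controller, base DN, service account and 180-day idle threshold are all assumptions and would need to match the organization's Access Control Standard.

```python
from datetime import datetime, timedelta, timezone
from ldap3 import Server, Connection, ALL

IDLE_DAYS = 180  # idle-account criteria (assumed; take the value from the standard)

def filetime_to_datetime(filetime: int) -> datetime:
    # AD stores lastLogonTimestamp as 100-ns intervals since 1601-01-01 (UTC).
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=filetime / 10)

server = Server("ldap://dc01.example.local", get_info=ALL)                 # hypothetical DC
conn = Connection(server, user="EXAMPLE\\svc_audit", password="***", auto_bind=True)

conn.search("dc=example,dc=local",
            "(&(objectClass=user)(objectCategory=person))",
            attributes=["sAMAccountName", "lastLogonTimestamp"])

cutoff = datetime.now(timezone.utc) - timedelta(days=IDLE_DAYS)
for entry in conn.entries:
    raw = entry.lastLogonTimestamp.value
    last_logon = raw if isinstance(raw, datetime) else filetime_to_datetime(int(raw or 0))
    if last_logon < cutoff:
        print(f"IDLE: {entry.sAMAccountName} last logged on {last_logon:%Y-%m-%d}")
```

The resulting report, together with the managers' responses, can be filed as the evidence described above.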

10.16.2 Access Review

Over a period of time, user privileges keep increasing with changes in roles and responsibilities. Often privileges are added to a user account as and when required; however, they are not removed as promptly when no longer required. Access review is the review of user privileges against the 'Need to Know' and 'Least Privilege' principles. The user access list can be generated from AAA servers/Active Directory and compared against user role, job responsibilities and 'Need to Know' (a minimal sketch of such a comparison follows the evidence list below). Managers should especially focus on users who have recently moved out of or into their department, as they may have extra privileges. Any privileges which were added for specific tasks should be removed once the task is over. Also, any extra privileges given to a user in the absence of the primary user should be removed once the primary user is back. The auditor may request evidences of user access review. Keep the below documents/reports as evidences:

Data collected for user access review i.e.user access reports from Active directory/AAA server, Access control matrix

Analysis of reports and discussion with relevant stakeholders on whether those user privileges are required. Analysis criteria and Findings

Action taken based on analysis report i.e. removal of extra privileges/roles

Any user's change of role and change of privilege done accordingly i.e. Network Operations to Network Planning.
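As mentioned above, granted privileges can be compared against the approved access control matrix. The sketch below is a small, hypothetical Python illustration; the role definitions and the exported CSV file name are assumptions.

```python
import csv

# Approved privileges per role, as per the access control matrix (illustrative values only)
APPROVED = {
    "network_ops":      {"router_read", "switch_read"},
    "network_planning": {"router_read", "switch_read", "ipam_write"},
}

# granted_access.csv is assumed to be an export from the AAA server / Active Directory
# with columns: username, role, privilege
granted = {}
with open("granted_access.csv", newline="") as f:
    for row in csv.DictReader(f):
        granted.setdefault((row["username"], row["role"]), set()).add(row["privilege"])

for (user, role), privileges in granted.items():
    extra = privileges - APPROVED.get(role, set())
    if extra:
        print(f"Review finding: {user} ({role}) has unapproved privileges: {sorted(extra)}")
```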

10.16.3 Remote User Review

Along with internal users, remote users or VPN users should also be reviewed in similar manner. Idle account review and access review both should be conducted and evidences should be kept.

For remote users' access, the review should be more frequent because the potential for misuse is higher than for internal users. Failed logon attempts should be logged and reviewed on a daily basis, or at least on a weekly basis. An unusually high number of failed logon attempts points to password-guessing attempts, and the user should be consulted immediately to confirm whether it was them or someone else.

Such consultations with user over telephone/email should be kept as evidences and recorded in weekly review as action items.

10.16.4 Exceptions Handling Ensure that all review findings have been acted upon. However, sometimes there are practical/business reasons for not implementing the recommendations of user access review findings. In that case, exceptions should be handled in the same way as a risk exception. Risk exception handling is explained in detail in Chapter 4.

For example, if your review finding recommends removal of rights of a user who has recently transferred to another project, but he may still require the access as he is still helping on prior projects, then file a risk exception and collect appropriate approvals from the asset owner along with an expiry date for that access. Keep these emails as your evidences for auditors.

All loose ends should be tied. Either finding should be closed by acting upon them or risk exception should be filed appropriately. 10.17 Audit Requirements Please Note: This conversation is only to give you an idea on providing evidences to Auditor. Evidences can change based on organization's infrastructure.

Chapter 11 Capacity Management

11.1 Capacity Management

Capacity management is about predicting and analyzing future growth trends and managing infrastructure resources accordingly. Good capacity management ensures optimum utilization of assets without impacting the performance of business-critical processes. Capacity management is an established ITIL process; however, our focus here is to understand the security aspects relevant to Network Teams.

Capacity Management is a management process to understand existing infrastructure performance, predict the impact of proposed and forecasted changes and finally plan to incorporate those changes in the most cost-effective manner with minimum or zero impact on business.

Capacity Management has to be looked from 3 different aspects:

Resource/Component capacity management

Resource/Component capacity management looks at each asset, like routers, servers, switches, firewalls etc., separately to measure their performance and future capacity requirement. It's the lowest level of capacity management, which is quite effective in managing individual assets. Service Capacity Management

Service capacity management is the next level of capacity management, which looks at a service as a whole by grouping individual assets servicing one common goal, e.g. mail, intranet and wireless services. A service may have various assets like servers, routers, printers etc. Capacity management is done for the whole service so that an upgrade of one asset does not overload other assets in the service.

Business Capacity Management

Business capacity management is the highest level of the capacity management approach. Instead of focusing on individual assets or groups of assets, it focuses on a complete business process like account reconciliation, procurement, customer relationship management etc. Capacity management is planned and implemented for the whole business process, including all assets across the infrastructure.

Indian e-commerce giants Flipkart and Myntra have both faced backlash from consumers because, during peak sale times, their websites crashed or were too slow to handle customer transactions. This is an example of not being able to predict the load on servers at peak time and not managing capacity accordingly.

11.2 Documented Policies and Procedures Capacity management is an important process for the organization, therefore appropriate policies and procedures should be crafted for standardizing and streamlining.

Policy should define the organization's requirement to ensure technological assets are maintained to support organization's daily activity and forecasted demands in the most cost-effective manner.

Policy should clearly define: Requirement of regular monitoring of IT assets

Establishing thresholds of various parameters to identify capacity red flags Requirement of annual/bi annual assessment of future capacity planning requirement vis a vis budgeting

Roles and Responsibilities of responsible teams to monitor usage, raising capacity assessment request, performing capacity assessment & planning, approving authorities and implementing capacity change requests

11.3 Capacity Management Process Capacity management includes assessing capacity requirements, monitoring and measuring performance, analyzing performance data and making decisions on capacity expansion based on that data. Below flowchart shows a sample capacity management process. Let's understand various phases of process below:

11.3.1 Assess Capacity Requirements When selecting new IT systems for the infrastructure, capacity management should be considered, e.g. the following statement included in the Business Justification for procuring a Cisco router: "The Cisco router was selected because it can support a 1 Gbps WAN link (based on the vendor's claim and the organization's previous experience), which is required for optimum performance of the organization's web server."

The auditor would be interested to see whether capacity management was taken into consideration when new systems were introduced, and what tests were conducted related to the same. The Business Justification provided in any document before procurement, or tender documents requesting device quotations where load management is one of the criteria, can be submitted for audit purposes.

Once a device is procured, it should be tested in testing labs for load management, to ensure device does not fail in production environment. Load Testing can be performed with traffic generators in lab. Results of such testing should be kept as evidence for audit.

Capacity management requirements are generally driven by Service Level Agreements (SLAs), expected performance of assets for efficient business productivity, potential/upcoming changes to the system, compliance with regulatory standards, business forecasts etc.

For example, a Service Level Agreement may require emails to travel from point A to point B in a maximum of 10 seconds. Capacity management has to test and incorporate enough processing, storage and bandwidth for the mail service to ensure that there are no holdups in relaying the email to fulfil the SLA condition.

All applicable requirements from SLAs, regulatory authorities and others should be included in the Policy and taken into consideration for all future infrastructure upgrades and expansions.

11.3.2 Monitor and Measure Performance

Capacity Management Tools Most network monitoring tools have the capability to monitor various parameters to ensure that an asset is performing optimally and utilization is in a normal range. The following parameters are normally monitored; however, companies can add parameters based on vendor recommendations/capacity policies:

CPU
Memory
Disk Space
Bandwidth utilization

Additional Parameters:

Transmission Delay
Packet statistics

Threshold for each parameter should be defined in Capacity Management Policy/Standard. Below is a sample of thresholds defined in Capacity Management Standard:

Monitoring teams can define these thresholds in monitoring systems and create alerts and reports based on these thresholds.
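As a simple illustration of how such thresholds might be evaluated, the Python sketch below classifies collected utilization samples against warning and critical levels; the parameter names and percentage values are assumptions, not figures from any particular standard.

```python
# Hypothetical thresholds (percent utilization): (warning, critical) per parameter
THRESHOLDS = {
    "cpu":       (70, 90),
    "memory":    (75, 90),
    "disk":      (60, 85),
    "bandwidth": (65, 80),
}

def classify(parameter: str, value: float) -> str:
    warning, critical = THRESHOLDS[parameter]
    if value >= critical:
        return "CRITICAL"
    if value >= warning:
        return "WARNING"
    return "OK"

# Sample collected from a monitored device (values are illustrative only)
sample = {"device": "core-rtr-01", "cpu": 72.5, "memory": 55.0, "disk": 88.0, "bandwidth": 40.0}

for parameter in THRESHOLDS:
    status = classify(parameter, sample[parameter])
    if status != "OK":
        print(f"{status}: {sample['device']} {parameter} at {sample[parameter]}%")
```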

All devices should be synchronized with a central time server. If time sync is not done for all devices, monitoring data may not make any sense at all. Data collection from all devices should be done at equal intervals. If data from some devices is collected every 10 minutes and from others every 5 minutes, it will not present the real picture of the devices. Ensure that the interval value is specified in the policy/standard as well as configured on the network monitors.

A weekly report should be presented to capacity management team to analyze the statistics. Report should present an overall view of healthy systems, systems reaching warning and critical levels.

More on monitoring is covered in Chapter 13. 11.3.3 Trend Analysis

Capacity Management team should review weekly reports, monthly and quarterly trends to take decisions on management of systems reaching critical/warning levels.

Further investigation can be done for sudden spikes or abnormal traffic bottlenecks. Sometimes a little performance tuning based on the problem can solve the issue without any investment. For example, if a router handling video transmission is performing inefficiently because of file transfers or other bandwidth-intensive transmissions, then QoS can be set up on the router to prioritize video transmission.

There are two types of response to analysis - Reactive and Proactive

Reactive is to take steps once issues have arisen and customers are impacted. Proactive is to take steps even before issues have arisen and nip them in the bud before business is impacted.

Ideally the proactive approach is best, but if issues do crop up then the reactive approach has to be taken. Capacity Assessment reports should be created for any investigation/performance tuning undertaken by the Capacity Management Team/Network Team. A Capacity Assessment report should clearly mention:

Triggers for the capacity assessment request (e.g. regular monitoring).
Investigation conducted and findings of the investigation.
Recommended way forward:
  Capacity is sufficient.
  Capacity is sufficient with some fine tuning.
  Capacity is sufficient for now; however, the situation is to be reviewed in the next 3 months.
  Capacity is not sufficient - options for remediation identified.
Options for mitigation and corresponding cost.

The capacity assessment report should be presented to management and stakeholders to approve the appropriate action. Action taken on the assessment report, such as a change request for expansion or performance tuning, should be recorded in meeting minutes for audit purposes.

11.3.4 Capacity Expansion Expansion requests can be submitted in a Capacity Plan document on a bi-annual/annual basis, depending on the frequency specified in the Policy. The capacity plan can include potential expansion projects based on regular monitoring reports, quarterly/annual trend analysis of capacity, future growth projections and forecasts of infrastructure requirements based on upcoming business needs, e.g.

Adding processing power to a slow application server.
Based on yearly trends, backups and logs take up to 10 TB of hard disk space per year, so budgeting for 10 TB of hard disk space for next year.
Budgeting for more backbone bandwidth as new bandwidth-intensive programs (vulnerability assessment, remote OS and application updates) will run continually going forward, based on the business plan for next year.

The capacity plan should justify the business need for investing in the expansion (maintaining SLAs, optimum performance, better productivity of employees, supporting the business's future plans etc.), the different options available for each expansion, the budget requirement of each option etc.

For example, if monitoring shows that a particular application is taking a long time to perform database queries and updates, which is adversely impacting the productivity of employees, an expansion can be proposed to increase the memory or processing power of the database server (whichever is required), along with the respective costing. A Capacity Plan may contain several different potential expansion projects; however, not all of them may be approved by stakeholders. It's a good idea to rate them from most important to least important on the basis of urgency, e.g. maintaining an SLA is more important than employees facing a delay in response. Stakeholders can decide based on budget constraints and each expansion project's importance to the business. Capacity expansion should be treated as a project, with thorough testing not just of the assets but also of the related components in the service, to check for any potential adverse impact. Even after implementation, assets should be monitored carefully to ensure performance objectives are met.

11.4 Audit Requirements Please Note: This conversation is only to give you an idea on providing evidences to Auditor. Evidences can change based on organization's infrastructure.

Chapter 12 Log Management

12.1 Logging

Log is a detailed record of performance, errors, events and activities of any application, system, device, server or user.

Almost all systems today are capable of creating logs. Examples of logs are Windows Event logs, router syslog etc. In any organization, logs are required for the following purposes:

Troubleshooting

Logs are foremost used in troubleshooting any problem in the network, application, service or device itself. Troubleshooting involves viewing and analyzing logs to isolate an issue and then resolve it.

Compliance

Logs are the fundamental requirements of many security and regulatory standards like ISO27001, PCI-DSS etc. Each industry domain has to comply with some or other regulatory standard which may require keeping logs of activities and events.

Audit

For any Audit against a standard or otherwise, logs and related policies should be maintained efficiently. Legal

For litigations, logs are admissible in court as evidences, if maintained correctly.

Forensics After a breach or any issue, incident can be recreated with the help of logs to avoid any future instances. Logs are also used to investigate any event or crime. 12.2 Log Management Process and Documentation

Organizations may have dedicated teams for log management or smaller organizations may delegate this operation to Network and System Administration.

Log management is done to ensure:

Detailed and valid logs are generated as per policy/regulation requirements.

Logs are securely transmitted from device to storage servers, over the network.

Logs are securely stored on storage servers for required time period. Effective analysis is done to detect any malicious behavior or error.

Logs are disposed of securely when they are no longer required.

Organizations should have Log management standards to define logging requirements of the organization, for example:

Log retention period (Regulations may require to retain logs for up to 7 years).

Level of Log collection (Critical only or debug level). Log transmission (secure). Log Protection requirements etc.

Logging procedures can be created by concerned department separately to translate policy and standards in practical application. Log management procedures should have following components: Log Generation.

Log Transmission. Log Storage. Log Analysis. Log Disposal. 12.3 Log Generation

12.3.1 Clock Synchronization

To ensure the validity of logs, all devices in the network should be synced with a central network time server. If clocks are not synced centrally, devices may have different times, and the logs generated will refer to local host time. When logs are correlated on the central server, they will not show the real chronology of events, because the time of an event will differ from device to device.
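The drift of a host's clock against the central time server can also be checked programmatically. The sketch below is a minimal illustration using the third-party Python ntplib package; the NTP server name and the 5-second tolerance are assumptions for illustration.

```python
import ntplib

TOLERANCE_SECONDS = 5  # assumed acceptable drift before log correlation becomes unreliable

client = ntplib.NTPClient()
response = client.request("ntp.example.local", version=3)   # hypothetical central time server

# response.offset is the estimated difference (seconds) between the local clock and NTP time
if abs(response.offset) > TOLERANCE_SECONDS:
    print(f"Clock drift of {response.offset:.2f}s exceeds tolerance - fix NTP synchronization")
else:
    print(f"Clock within tolerance (offset {response.offset:.3f}s)")
```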

12.3.2 Timestamping Logs Ensure all logs are timestamped before they are sent to a central logging server. Without timestamps, log analysis across different devices will prove extremely difficult. As far as possible, use a standard date and time format across all devices and systems in the network, e.g. dd/MM/yy HH:mm:ss. 12.3.3 Logging Level Enable logging on all devices by selecting a suitable logging level as specified in the Logging Standard. For example, Cisco uses the following log severity levels:

Level 0 - Emergencies (system unusable)
Level 1 - Alerts (immediate action needed)
Level 2 - Critical (critical conditions)
Level 3 - Errors (error conditions)
Level 4 - Warnings (warning conditions)
Level 5 - Notifications (normal but significant conditions)
Level 6 - Informational (informational messages)
Level 7 - Debugging (debugging messages)

NOTE: Selecting level 5 will log all events from level 0 to level 5

All devices handle logs differently and their severity levels also differ; however, a standard logging level can be established which maps across all the different devices. For example, most devices have an 'Informational' level which logs all types of events. If you decide to log only emergency-level events, that will not give a clear picture of what is happening in the environment or help in troubleshooting. The auditor can point out that the purpose of logging is not met with only emergency-level log information.

However, if you choose to log debugging-level information, the volume of information will increase drastically, and it will be operationally difficult to manage storage and appropriately analyze the logs. Choose the log level depending on logging requirements and the resources available. From a security point of view, the higher the log level the better, but that may not be feasible from an operational point of view. Whatever level you choose, ensure that at least the below are logged: Date and Time.

User identifiers. User Logon and Logoff time.

User activities (commands executed).

Source Identifier (IP Address/Mac Address).

Authentication attempts (Successful and failed).

Authorization attempts (Successful and Failed) [Lower level user trying to execute higher level command].

Changes to system configurations/Access control lists. Exceptions/Fault logging.

Service Name/Protocol /Port Number.

Network traffic violations. Discuss the log level with the security team and agree on a feasible level. If the decided level is not in conformance with the Policy, it's better to get an exception approved by the asset owners and Security Management to avoid an audit finding. A sample exception form is provided in Appendix 4. 12.4 Log Transmission

Logs are typically stored locally on the device, but that storage may not be enough to manage and retain logs as per the policy and standard. To retain, analyze and store logs in a secure manner, they should ideally be sent to a central logging server, typically known as a syslog server.

The purpose of the syslog server is to centrally store logs in an appropriate format for audit, compliance, regulatory and troubleshooting purposes. Logs from devices can be sent to the syslog server in real time, near real time or in batches, depending on network bandwidth. Syslog supports TCP and UDP transmission. UDP is quicker than TCP; however, there is a chance of losing some information in transit. Organizations have to understand their logging requirements to decide between speedy transmission with possibly missing logs (UDP) and slower transmission that ensures all logs are delivered (TCP).
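As a minimal illustration of forwarding application logs to a central syslog server, Python's standard logging module can send over UDP or TCP; the server address and hostname below are assumptions, and in practice network devices would use their native syslog configuration rather than a script.

```python
import logging
import logging.handlers
import socket

SYSLOG_SERVER = ("syslog.example.local", 514)   # hypothetical central syslog server

# SOCK_STREAM = TCP (reliable delivery), SOCK_DGRAM = UDP (faster, but messages may be lost)
handler = logging.handlers.SysLogHandler(
    address=SYSLOG_SERVER,
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0,
    socktype=socket.SOCK_STREAM,
)
handler.setFormatter(logging.Formatter(
    "%(asctime)s app01 %(name)s: %(levelname)s %(message)s", datefmt="%d/%m/%y %H:%M:%S"))

logger = logging.getLogger("access-control")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.warning("5 failed logon attempts for user 'jdoe' within 1 minute")
```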

By default, syslog traffic is clear text; therefore controls should be implemented to ensure that logs are not tampered with during transmission. Below controls can be implemented to secure logs during transmission: Syslog traffic should be isolated to a separate management VLAN

Transmission can be encrypted using secure tunnels etc. 12.5 Log Storage

Normally Logs are stored in device storage and then transferred to central logging server for processing.

Device storage has limited capacity, and logs can easily fill up the device's RAM and bring it down. For that reason, logs are allocated a fixed buffer space which, when full, starts to overwrite existing logs. The log storage buffer should therefore have enough space to collect logs until they are transferred, without overwriting the existing logs. The timing of log transfer from the buffer should be managed to avoid filling up the buffer completely. Access controls should be managed for users so that no one is able to tamper with logs. Once logs are transmitted to the syslog server, access should be tightly managed with privileges based on need to know. Device administrators should not have direct access to the log database to delete or tamper with logs, especially those related to their own activity. Encryption and hashing controls should be implemented to protect the confidentiality and integrity of logs. Controls should be decided based on the logs' sensitivity label according to the Information Classification criteria. (Information classification is described in Chapter 6.)

For Forensic analysis, maintaining integrity of data is extremely important or else logs are not admissible in court as evidence.

Logs should be backed up regularly to protect and retain them for stipulated period of time as per organization's policy/regulations.

12.6 Log Analysis Manual analysis of logs is possible, but it is an extremely tedious task for any team to accomplish. The team would need to be highly trained to look for malicious activities, abnormal behavior etc.; however, even then there is a chance that malicious events may go undetected due to the sheer volume of logs. That is where log analyzing tools can be utilized, which automate log analysis and generate alerts for any suspicious activity. The purpose of log analyzers is to analyze logs against specific criteria and generate an alert if the criteria are met.

Some example of criteria can be: Logging into router after office hours.

Change in access control list.

More than 10 firewall Fail/Drop/Reject events from a particular host. More than 5 failed logon attempts in 1 minute.

Internal workstation scanning all router ports.

Internet usage from servers (critical ones which do not require internet access). Logs hash mismatch.

No logs received from a monitored device in last 1/2 hours.

Configuration changes (especially without any corresponding change request).

Logs should be regularly analyzed, and the frequency should be included in standards/procedures. If it is not feasible to analyze logs every day, it is recommended to prioritize certain devices/criteria to be analyzed every day, like administrative activity, failed logon attempts etc.

The grep utility can be used to look for specific logs in large log files. Scripts can also be created to separate specific logs, like failed logon attempts, into one file and other events into other files. It will be easier to look at these individual files every day rather than the whole log file.
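A minimal sketch of such a script is shown below: it copies failed logon entries from a consolidated log file into their own file and prints a per-user count; the file names and match strings are assumptions and will differ per device type.

```python
from collections import Counter

SOURCE = "network-devices.log"   # assumed consolidated log file
FAILED = "failed_logons.log"     # output file holding failed attempts only

failed_per_user = Counter()

with open(SOURCE) as src, open(FAILED, "w") as out:
    for line in src:
        # Match strings are illustrative; real patterns vary by device and OS.
        if "Failed password" in line or "authentication failure" in line:
            out.write(line)
            # Crude username extraction for the daily summary (assumes "user <name>" appears)
            if " user " in line:
                failed_per_user[line.split(" user ")[1].split()[0]] += 1

for user, count in failed_per_user.most_common(10):
    print(f"{user}: {count} failed logon attempts")
```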

All other low priority logs can be included in weekly task list. Automated reports can be generated from the log analyzer tool to highlight prioritized devices' events.

The procedure should also define the process of reporting an event if alert criteria are met. An event can be raised automatically by log analyzers or manually by the log analysis team after verifying the alert.

Auditors, however, prefer automatic incident generation over manual generation. It is the preferable method as it keeps track of all incidents taking place in the organization without any human intervention. If the log analyzer raises an incident for what was actually a valid/authorized action, the incident can be closed with relevant evidences and comments.

For example, Log analyzers can generate an alert that access control list was modified, however if it was an authorized change by network team then alert can be closed by providing change management request reference.

It is important to keep evidences of log review activity. For example, if administrative logs are to be reviewed every day, a daily report can be generated for administrative activity and saved on the intranet/file server, and there should be sign-off from the responsible parties that they have indeed reviewed the logs, even if no events were raised. A simple spreadsheet with the following fields can be maintained for tracking log review.

Result of daily/weekly log reviews can be uploaded to intranet or sent to managers via email. Email/Uploaded logs with timestamp can serve as evidence of log review. If just the spreadsheet is maintained in a shared location, it may not be considered as evidence because of lack of timestamp to the activity.

Auditor can ask randomly for log review of any week/ day. Such evidences if maintained properly can be handed over to auditor without any problem. Monthly summary reports of Log review and observations can be submitted to management as well as to auditor as evidence.

Summary report can contain: Number of incidents raised/Malicious activities detected Top Attack sources

Top 10 authentication failures Top 10 authorization failures etc.

12.7 Log Disposal

Once logs are transferred to the syslog server, device logs can be cleared from the buffer to make space for more logs. Logs should be regularly backed up as required by the standard. Old/archived logs can only be disposed of after their retention period is over, unless an ongoing legal proceeding requires logs from that period.

Logs should be assigned information classification label according to Information classification standard. Disposal can thus be handled according to the procedures described in Information classification criteria.

Depending on logs sensitivity, they can be simply deleted or overwritten (tape media) or they may be required to be destroyed and incinerated. 12.8 Audit Requirements Please Note: This conversation is only to give you an idea on providing evidences to Auditor. Evidences can change based on organization's infrastructure.

Chapter 13 Network Monitoring 13.1 Monitoring

As we learned in Chapter 3, along with preventive controls we need a set of detective controls to be able to detect any potential issue. Even if we have the best preventive controls enabled in our environment, without proper detection attacks may go completely unnoticed. Monitoring various aspects (physical, technical and environmental) of an organization helps in detecting any issue and taking corrective measures, preferably before the issue does any real damage. There are different kinds of monitoring that organizations use depending on their risk appetite. Normally banks, governments and the military use the maximum controls, as their risk appetite is smaller than that of private organizations.

13.2 Physical, Environmental And Content Monitoring

Physical Monitoring of the environment is performed by deploying CCTV cameras, motion sensors etc. to monitor the environment 24X7.

Physical security team/guards have to monitor the feed to take any action. If monitoring is not in place, anyone can simply walk into the premises.

Environmental monitoring is done to detect issues beforehand with temperature, ventilation and humidity within critical environments like the datacenter. If environmental controls are not monitored properly, they can cause deterioration or total failure of hardware components. The following factors should be closely monitored:

Temperature: High or low temperature can cause system crashes. The optimum temperature maintained in a datacenter is between 18 and 22 degrees Celsius.
Humidity: High humidity can cause condensation and low humidity can result in electrostatic discharge.
Airflow: Airflow in air conditioning vents as well as in server racks.
Electric voltage: Voltage is monitored for any brownouts.
Power consumption: Power consumption is monitored for heavy usage, which can result in overheating.

Content Monitoring is performed to:
Detect malicious content (worms, viruses, etc.) embedded in documents.

Detect unauthorized transmission of classified/sensitive information via USB/emails/uploads to online servers and other means. Detect malicious/prohibited websites visited by employees (the organization is also liable for this offence if it does not perform due care).

Data Loss Prevention (DLP) is a monitoring system which tags content and allows data transmission operations based on permitted rules created in the system. For example, a rule may state, "confidential data cannot be copied to removable disk or sent via email". If a user tries to copy confidential data to a pen drive, DLP can block the action and alert the concerned people. As the above monitoring tasks may not fall under the networking department, we will focus on device, log and network traffic monitoring in this chapter. Real-time log, device and traffic monitoring can detect malware infections, zero-day attacks, hardware component failures/faults, network performance issues, hacking attempts etc. Monitoring data is used for troubleshooting as well as for generating statistics for capacity/expansion planning, maintenance contracts etc.

13.3 System Monitoring Standards and Procedures

System monitoring standards and procedures (also known as event management standards and procedures) should define the organization's requirements for monitoring assets. Procedures should explain the whole process in detail, with screenshots if possible. Different procedure documents can be created for the various monitoring aspects, because different teams may be handling those aspects. The following components should be included in policies and standards:


2. Monitoring Variables As discussed above, monitoring variables are the interesting events that may impact the CIA of the organization. Which variables or events have to be monitored across the network in real time should be clearly defined in monitoring standards, e.g.

Health Parameters:
CPU performance
Memory utilization
Storage utilization
Component faults

Network traffic parameters:
Peak and average utilization
Availability of ports, services offered and devices

Log Monitoring:
Unsuccessful access attempts (authentication/authorization)
Privilege account successful access attempts
Privilege account activity
Malicious network traffic etc.

Along with the variables, the polling frequency should also be defined in standards and procedures. All devices should be polled at an equal interval, e.g. every 10 minutes. If devices are probed for data at varied frequencies, the monitoring data will not provide a clear picture of the infrastructure.

3. Alert Criteria/Threshold

For each variable, a threshold should be defined at which an alert will be generated and communicated to the concerned personnel. Different levels of thresholds should be specified (if possible), like warning, major, critical etc. Warning events should be closely monitored to ensure they don't turn into major or critical events. Some sample events could be:

Warning: 40% hard disk space remaining; Major: 30% hard disk space remaining; Critical: 15% hard disk space remaining
More than 5 unsuccessful authentication attempts in one minute
HTTP service not reachable for the last 10 minutes

Each severity level should be appropriately defined with its respective communication requirements and a deadline to act upon it. A sample Event Severity Response Matrix is provided below:

An important point to cover in event management procedures is when an event will be graduated to an incident - basically the criteria for an event becoming an incident and the respective process to follow.

Events are basically potential incidents. Event management is considered pre-incident management. An event, if checked in its initial stages, may never become an incident. For example, in a clustered environment, one of the servers is facing high CPU utilization; if checked at the appropriate time, it may not result in an incident.

Event management and incident management are fairly mature ITIL processes; however, in this chapter we will only look at the aspects concerning security/network teams. Auditors prefer an automated process to handle events as well as incidents rather than a manual one. An automated process includes automatic event/incident generation in a workflow manager, where an alert is generated by the monitoring tools and later assigned to the respective teams. 4. Roles and Responsibilities

Roles and Responsibilities should be clearly defined for all stakeholders.

Event Management teams handle any exceptions recorded against specified criteria, based on their severity. Events are then sent to concerned department for their action. Incident Management Team handles any incident which has impacted CIA of the organization. Their roles and responsibilities should be clearly defined in the procedures including workflow, contact points, communication structure and approval authorities.

Standby technical teams like Network, Server, Database teams who should be notified when issue arises, should be mentioned in the procedures as well.

Complete workflow of event/incident initiation, assigning severity, contact details of standby teams, follow-up procedures, escalating procedures, approval authorities, corrective action, closure etc. should be incorporated in event/incident management procedures.

5. Incident Database More or less similar Incidents are generated across the network on different devices. Organizations can overtime build a database of common incidents and their respective corrective actions. This can serve as a knowledge base for concerned teams and provide guidance to resolve the issue. A spreadsheet with list of common incidents, their reference, severity level, resolution steps, and approximate recovery time can be used as incident database. This database should be available only to concerned teams.

Similarly, each team (Network/Database/Server) can create an Event database of events which are frequently assigned to them by monitoring team. This will help ease the daily work.

6. Advanced Network Tools allowed for investigation

Authorized tools should be listed in procedures along with authority of teams who can use it. Network sniffing tools can view all the information generated in the network including confidential information which is not accessible otherwise. Access control and usage restrictions should be defined for network tools.

For example, Monitoring Standard should define:

"Wireshark (packet sniffing) tool should be used only with Network Manager's approval and for purpose of troubleshooting only."

An email approval should be kept as audit evidence. 7. Monitoring Data Protection and Retention

As part of monitoring the whole network, a lot of sensitive/confidential information can be captured and transmitted over the network therefore monitoring traffic should be aptly protected. Ideally Monitoring traffic should be isolated to management VLAN, accessible only to authorized individuals. Monitoring traffic should also be secured with appropriate encryption at rest and in transit.

Monitoring Data should be retained as long as required for any pending investigation/incident/alert. It should be appropriately destroyed when no longer required. Disposal criteria should be defined in standard and procedures. Evidences for Audit Evidence of documented monitoring policies, standards and procedure

Evidence of monitoring variables as specified in monitoring policies/ procedures (a physical walkthrough of Auditors or Screenshot from the monitoring tool can be provided as evidence)

Sample alerts sent to Standby teams like SMS/Emails

Database of common alerts and their remediation steps Evidence of Segregation of Duties between Monitoring team and Troubleshooting/Network Operations Team

Approval for network tools (such as Wireshark) usage for the purpose of investigation

13.4 Traffic Monitoring

Network traffic monitoring tools can monitor several parameters of the network, such as protocols, services and ports, IP addresses, traffic bottlenecks/stops, and bandwidth utilization.

Traffic monitoring tools monitor and log traffic on different network connections and create beautiful graphs to depict the traffic performance. Monitoring team can easily create a benchmark traffic graph of the organization's daily traffic. Any anomalies can easily be identified if graph is swaying away from normal traffic. Sample graphs are shown below:

As can be seen from above example, if Monday's traffic is the regular pattern of traffic then Friday's traffic shows clear changes from that pattern. These changes are worth investigating to identify the root cause.

Common free traffic monitoring tools are available from SolarWinds, GFI etc. Network monitoring procedures should include a detailed traffic monitoring procedure, criteria/thresholds for malicious traffic, and the next steps if malicious traffic is encountered, such as alerting the administrator, raising an incident, or troubleshooting using packet inspection tools. 13.4.1 Intrusion Detection and Prevention System (IDPS) An IDPS performs in-depth analysis of network traffic; it not only identifies source and destination addresses but also analyzes the content of the packet (payload) to verify whether the content falls under the safe category. If an IDS detects any malicious activity, it can alert the concerned teams; an IPS goes one step further by quickly responding to the attack based on rules created by network/security administrators, e.g. blocking the source IP address of malicious traffic. IDS/IPS technology can handle malicious payloads effectively even if the payload is specifically crafted by advanced hackers to hide its traces. IDS/IPS devices also generate comprehensive logs of traffic analysis. Even if an attack is not detected at the IDPS end, it can be detected when logs are correlated at a central location. These logs can help in detection of the attack or in post-attack investigation. We will learn about correlation in "13.6 Log Monitoring".

Snort is a free open source Network Intrusion detection system, which can easily be deployed by organizations.

13.4.2 Packet Inspection Tools To conduct investigation on a problem or potential problems, sniffers can be used. Sniffers capture all the packets from the network and helps in categorizing based on different filters like host wise, protocol wise, port wise etc.

Sniffers can detect unusual activities on the network and provide required details about protocol, host and destination addresses etc. for any captured packet. Some common sniffers are Ethereal, Wireshark etc.

Use of sniffer or such utilities should be restricted by need to know and access control and on designated workstations only. Only few authorized users should be allowed to run such utilities as these tools can provide significant information to a malicious user.

These tools should only be used for troubleshooting and should not be run continuously. Organizations should have a clearly documented procedure for usage of power utilities like sniffer. At least a basic approval process should be included for usage of power tools in the environment.

Packet capture files can also capture confidential information passing through the network. In any case, they will definitely contain network information which would be useful to a malicious user. Therefore, these files should be kept in a centrally restricted location. Ideally, the files should be encrypted and only the required people should have access to them. 13.5 Device Monitoring A network monitor watches the components of network devices/servers for device availability, service availability and performance, and can create alerts of different severity levels for administrators/event management teams. Most network monitors have the capability to send critical alerts to administrators or designated personnel via SMS/email. Network devices/servers are made of hardware components; knowing when those components have failed, or are in the process of failing, gives administrators some time to take proactive action to repair the component or switch to a backup device without causing any interference in the network.

Device and service availability can be monitored by pinging device interfaces/ports at regular intervals. Performance is monitored by measuring the time taken by a connection request, such as a ping or an HTTP page request. Thresholds should be set for each factor, and an alert should be generated and communicated once a threshold is crossed.
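A very small availability and response-time check might look like the sketch below, which times a TCP connection to a service port and flags breaches of an assumed threshold; real deployments would rely on a dedicated network monitor rather than such a script, and the host names and threshold here are illustrative.

```python
import socket
import time

CHECKS = [("web01.example.local", 80), ("mail01.example.local", 25)]  # hypothetical hosts
RESPONSE_THRESHOLD = 2.0   # seconds; assumed value from the monitoring standard

for host, port in CHECKS:
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=5):
            elapsed = time.monotonic() - start
        if elapsed > RESPONSE_THRESHOLD:
            print(f"WARNING: {host}:{port} responded in {elapsed:.2f}s "
                  f"(threshold {RESPONSE_THRESHOLD}s)")
        else:
            print(f"OK: {host}:{port} responded in {elapsed:.2f}s")
    except OSError as exc:
        print(f"CRITICAL: {host}:{port} unreachable ({exc})")
```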

13.5.1 Simple Network Management Protocol (SNMP)

Network monitors manage all network devices centrally via SNMP agents. SNMP is supported by most network devices irrespective of their brand. With the use of SNMP agents, the following parameters (objects) can be monitored: Port association with IP address, MAC address, DNS name etc., and changes in such associations.

Hardware components failure. Free Memory. CPU utilization.

Interface Utilization.

Interface/Port availability. Disk Space utilization.

Custom parameters provided by vendors, depending on the device. SNMP works with two basic components: agents and the monitoring station. The SNMP agent is built into networking devices and has to be enabled and configured. Once agents are configured, the monitoring station polls all the networking devices for SNMP messages. Agents can also push messages in case of error conditions.

The network monitor polls the network every few minutes to monitor the above-mentioned parameters (objects) and alerts the administrator/event management team if the threshold of any parameter (object) exceeds the configured value, for example "Critical Alert: 10% disk space remaining". SNMP uses community strings for authentication. Community strings are like passwords which administrators use to access SNMP agents. Community strings are of two types: 'Read Only' and 'Read Write'. 'Read Only' strings allow viewing of SNMP information, whereas 'Read Write' strings can be used to configure the device. By default, the value of the 'Read Only' community string is "public" and of the 'Read Write' community string is "private". These values should be changed before using devices in the production network. Several versions of SNMP are supported by devices; however, the rule of thumb is to always use the newest version, as it will have more security features.

SNMP V3 is the newest as of now, and if an organization is using an older version, the auditor will always point that out. SNMP V3 has the ability to encrypt data transmitted between agent and monitor, which wasn't available in earlier versions. All data sent via SNMP V2 and V1 was clear text, and anyone sniffing the network could view it. SNMP V3 provides integrity protection to ensure there are no replays of messages as well as no unauthorized changes in transmission. SNMP V3 also provides remote configuration of devices and view-based access control. Views can be created for authorized users to access objects based on need to know.

SNMP V3 supports user and group based access control. SNMP users are created by selecting one of the three modes available - noAuthNoPriv, authNoPriv or authPriv. 'Auth' means Authentication, which provides data integrity and data origin authentication for SNMP exchanges between agent and monitor. 'Priv' means Privacy, provided by encryption of SNMP exchanges between agent and monitor. The below table explains the different modes available for SNMP access control.

Administrators can assign specific objects to different users; for example, server objects can be assigned to System Administrators while network objects can be assigned to Network Administrators.

It is recommended to limit the number of hosts that can poll SNMP information so that malicious users cannot extract useful information from SNMP messages even if they know community string.
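For illustration, the sketch below polls one object from a device over SNMP V3 in authPriv mode (SHA authentication plus AES-128 privacy) using the third-party Python pysnmp library; the device name, credentials and protocol choices are assumptions and should follow the organization's monitoring standard.

```python
from pysnmp.hlapi import (SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
                          ObjectType, ObjectIdentity, getCmd,
                          usmHMACSHAAuthProtocol, usmAesCfb128Protocol)

# SNMP V3 user in authPriv mode: SHA for authentication, AES-128 for privacy (encryption)
user = UsmUserData("monitor-user", "auth-pass-example", "priv-pass-example",
                   authProtocol=usmHMACSHAAuthProtocol,
                   privProtocol=usmAesCfb128Protocol)

error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           user,
           UdpTransportTarget(("core-sw-01.example.local", 161)),   # hypothetical device
           ContextData(),
           ObjectType(ObjectIdentity("SNMPv2-MIB", "sysUpTime", 0)))
)

if error_indication:
    print(f"Polling failed: {error_indication}")
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```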

Network monitors available on the market have basic to advanced feature sets. Some popular network monitors are offered by SolarWinds, GFI LanGuard and ManageEngine, which provide free trial versions. 13.6 Log Monitoring - Security Information and Event Management Log analysis was previously discussed as part of log management; however, for active log monitoring, organizations are deploying Security Information and Event Management (SIEM) tools. SIEM provides organizations with real-time analysis of potential/ongoing security breaches and attacks, enabling them to proactively respond to an attack before damage is done. SIEM tools gather data from different sources and analyze it contextually to detect an attack or security breach.

Traditionally, log analysis is done separately by different teams (network/server/application/security etc.) without any correlation. Such log analysis can detect virus infections, spam, phishing and DDoS attacks; however, to detect advanced attacks like advanced persistent threats, advanced tools like SIEM are required.

SIEM can analyze and provide actionable information to administrators/designated personnel to prevent the attack in minimum time. SIEM tools have the following basic features: Accumulation of Data SIEM tools handle the complete log management process, from collecting data from different sources to storing it until it runs out of usefulness. SIEM tools can collect data from: Network devices. Servers. Vulnerability management and compliance management tools.

Applications. Identity management server. Intrusion Detection and Prevention system.

Antivirus/Antispyware/Antispam servers.

Correlation of Data

Once data is collected, the SIEM tool looks for common attributes and bundles events together to look for any malicious activity. Correlation techniques/rules are a combination of out-of-the-box rules and custom rules created specifically for the organization. As all environments are different, SIEM tools have to be fine-tuned according to the environment. The effectiveness of a SIEM tool's correlation will depend on the amount of fine-tuning done.
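As a toy illustration of correlation, the sketch below bundles failed logon events from different sources by username within a time window and flags a possible password-guessing attempt that ends in a success; the event fields, sample data and thresholds are assumptions, and real SIEM correlation rules are far richer.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Normalized events as a SIEM might receive them from different sources (illustrative data)
events = [
    {"time": datetime(2018, 3, 5, 22, 1), "source": "vpn-gw",   "user": "jdoe", "outcome": "fail"},
    {"time": datetime(2018, 3, 5, 22, 2), "source": "vpn-gw",   "user": "jdoe", "outcome": "fail"},
    {"time": datetime(2018, 3, 5, 22, 3), "source": "ad-dc01",  "user": "jdoe", "outcome": "fail"},
    {"time": datetime(2018, 3, 5, 22, 4), "source": "srv-db01", "user": "jdoe", "outcome": "success"},
]

WINDOW = timedelta(minutes=10)
FAIL_THRESHOLD = 3   # assumed rule: >= 3 failures followed by a success within the window

by_user = defaultdict(list)
for event in sorted(events, key=lambda e: e["time"]):
    by_user[event["user"]].append(event)

for user, user_events in by_user.items():
    for i, event in enumerate(user_events):
        if event["outcome"] != "success":
            continue
        recent_fails = [e for e in user_events[:i]
                        if e["outcome"] == "fail" and event["time"] - e["time"] <= WINDOW]
        if len(recent_fails) >= FAIL_THRESHOLD:
            sources = {e["source"] for e in recent_fails} | {event["source"]}
            print(f"ALERT: possible brute force for '{user}' across {sorted(sources)}")
```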

Alerting and Responding

If any malicious activity is detected by the SIEM tool, it can alert the concerned teams or individuals via SMS/email, based on the configuration of the tool.

SIEM Tools can also create an incident automatically with relevant details for concerned teams to investigate.

SIEM tools can also be configured to automatically respond to certain threats by running automated scripts. The response is also recorded in the corresponding incident. Dashboard and Reporting SIEM tools provide elaborate dashboards and reports to give a detailed description of any incident or to show trends over time. Reporting comes in handy for investigations, legal cases, compliance, standards and audits.

SIEM tools, although extremely useful, are usually expensive for smaller organizations; however, with security standards like PCI-DSS, ISO27001 etc. requiring proactive real-time monitoring of logs, more and more organizations are deploying some form of SIEM tool. Security Incident Handling

Although security incident handling may not be the responsibility of the Network team, the below points apply to any kind of incident that the Network team may have to handle or manage. If it's an automated process, details of a few incidents can be provided to the auditor. Details should include: incident reference, incident initiation (date and time), incident details, incident severity, investigation steps, remediation of the issue, recovery time, lessons learnt and closure of the incident (date and time).

■ If it’s not an automated process, ensure all records are maintained manually in chronological order.

■ Time taken to fix the issue with respect to time allotted to that severity level with justifications of delay if any.

■ The auditor may further ask about action taken on lessons learnt; he/she can then be provided with the meeting minutes of the weekly meeting where this incident was discussed and any action proposed to avoid this incident in the future. The document can be attached to the incident ticket itself. 13.7 Audit Requirements

Please Note: This conversation is only to give you an idea on providing evidences to Auditor. Evidences can change based on organization's infrastructure.

Section 4 - Managing Audits

This section is the 'Real Deal' that all the previous chapters have been leading to.

Following Chapters are covered in this section:

Chapter 14: Information Security Audit

Chapter 15: Technical Compliance Audit

Chapter 16: Penetration Testing

This section covers the different audits that organizations may be subjected to. It uses a two-pronged approach to explain the Auditor's planning and expectations as well as the Auditee's rights and preparations. This section is the core of the whole book; all the previous chapters - data collection, evidence creation, risk management - come together at this juncture.

Chapter 14

Information Security Audit

14.1 Information Security Audit

Information Security audit is a methodical and evidence-based approach to assess an organization's compliance against established policies, regulatory requirements or security frameworks such as ISO27001. An information security audit is also known as a process security audit or non-technical compliance audit. It aims to identify gaps between documented organizational policies/industry standards and their implementation in the organization's processes.

Successful completion of an audit is in the organization's best interest as it increases trust in the customers' view.

Auditors conduct audits by interviewing internal teams, mapping observations to policies/standards and reviewing documents and processes followed in the organization. Upon completion, the auditor provides a formal opinion on the whole process and the organization's state of non-conformance. The auditor is comparable to a judge in a courtroom. A judge's judgment is based on evidences submitted by the lawyers during the case hearing; similarly, the auditor identifies gaps based on the evidences submitted by internal teams. The auditor can ask precise questions to understand the environment better; however, he will always make decisions based on appropriate evidence. Internal teams can present any story or hypothesis, but if they don't have valid evidences to substantiate the theory, it will not be accepted by the auditor.

An audit involves two parties:

Auditors can be an internal team of security auditors or a third party hired to perform the audit. Whatever the case, the team of auditors should act independently, without any prejudice. It is their responsibility to collect appropriate data, analyze it and point out any issues for the organization to decide on further action. Auditors are in a unique position as they possess a lot of confidential data related to different departments. It is important for them to maintain ethics and confidentiality of data during and after the project.

Third party auditors should sign confidentiality agreement with the organization before starting the project.

The auditee is the department/team responsible for the processes covered within the scope of the audit. They are responsible for answering the auditor's queries related to processes and providing relevant documentation and evidences to auditors as requested. Auditees are also responsible for verifying and confirming the findings provided by auditors and, finally, driving the remediation project to fix those findings. Normally teams select a security champion to respond to the auditor's queries on the team's behalf. Teams should ensure that the selected person has complete knowledge of team-specific business processes, and also has a basic understanding of security concepts and audit requirements. Normally this person is the manager or a senior member of the team with enough knowledge and authority to drive projects. Note that inefficiency of this person will mean more audit findings for the team.

14.2 Audit Management From Auditor's Side

A typical audit process is shown below:

Let's understand the different phases of an audit in detail. Here we are assuming the auditor is a third party, contracted to perform an independent audit.

14.2.1 Define Scope Of Audit

The scope of audit is decided by the organization and auditors together, based on the organization's obligations (such as an SLA stating yearly audits are required for all processes), regulatory/ISO-27001/PCI-DSS compliance and budget constraints. Based on the above factors, the organization may decide to include only critical business processes, some specific services or all the business processes.

Scope of audit also determines what kind of audit to perform. Audits can be performed against organization's policies and standards or industry security frameworks, such as ISO-27001, PCI-DSS etc. This means that auditor will check if security is implemented in organizational processes as per organizational policies/industry frameworks.

14.2.2 Create Project Plan

The project plan is created in collaboration between auditors and the organization. It decides the audit methodology, estimated duration of the audit, data collection methods (interviews/documents) and the schedule of various phases of the audit.

Auditors may require some information before the audit starts, such as organization's policies to start planning audit activities. Audit methodology should be shared with the organization including risk assessment criteria. Auditors may use industry standard criteria however organization may require them to use organization specific risk assessment criteria to understand risk severity from organization's risk perspective. For example, a risk rated high from auditor's perspective might be of medium severity based on organization's criteria.

Process audits are conducted for a particular past period; they are not point-in-time audits. Process audits check processes over a period of time, e.g. Apr 2017 - Mar 2018. The time period is specified by the organization based on the required audit frequency.

An appropriate sample set should be decided by the auditor to do a random check on data rather than covering all possible information. For a quarterly activity, an annual audit may cover two samples, and for a weekly activity, an annual audit may consider 12 samples (one from each month).

14.2.2.1 Opening meeting/audit initiation meeting

The opening meeting is conducted by the auditor to share information about the upcoming audit with auditees.

It mostly covers:

Audit purpose.

Audit plan.

Audit overall schedule.

Type of audit (against policies/framework).

Risk methodology followed.

Coverage of audit (in-scope business processes).

Time period of audit (e.g. Apr 2017 - Mar 2018).

Type of documents that auditors will be requesting.

Contact information of project manager and other stakeholders.

Detailed schedule of interviews with different teams.

Tentative release of final report.

Time allotted to auditees to get the findings fixed.

Slight changes can be made to the audit plan based on auditee's feedback.

14.2.3 Perform Fieldwork

Fieldwork includes collecting information through interviews, observations, written documents, and evidences provided by Auditees.

Auditors normally submit a preliminary request for information based on type of audit. If assessment is based on organization's policies and standards, auditors may submit high level questionnaire based on policies and standards, for organization to respond. If audit is based on some security framework like ISO-27001/PCI-DSS then auditors may request for mandatory documents (as required by framework) and submit a high level questionnaire based on framework requirements, for organization to respond.

Once information is provided by the auditees, auditors conduct interviews to understand if the information provided is indeed followed practically in day-to-day processes. During the interview phase, auditors may ask for evidences in support of auditees' statements.

For example:

The purpose of the auditor here is to ensure that:

Access review was done quarterly as required by the standard

Activity was included in the task list, so it can be tracked and completed

Results were shared with appropriate teams to take action based on the review

14.2.4 Analyze Data

Once auditors have the required information, they analyze the data to identify any non-conformance to the policy/security framework. They collect evidences to prove the non-conformance and rate its risk based on the agreed risk rating criteria. Auditors also provide recommendations to remediate the findings.

Continuing the above example, if the auditor finds that the access review is not done quarterly (the date of the last access review was Mar 2018 and the previous one was Sep 2017), it is a clear non-conformance to the standard.
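As a small illustration of the check the auditor is performing here, the gap between consecutive review dates can be compared against a quarterly requirement. This is a minimal sketch; the ~92-day threshold is an assumption of what "quarterly" means:

from datetime import date

QUARTER_DAYS = 92   # assumed upper bound for a "quarterly" interval

reviews = [date(2017, 9, 30), date(2018, 3, 31)]   # last two access reviews
gap = (reviews[1] - reviews[0]).days
if gap > QUARTER_DAYS:
    print(f"Non-conformance: {gap} days between access reviews (> {QUARTER_DAYS})")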

Auditor will highlight this finding as:

Auditors review all the findings to identify whether they should be reported as non-conformances or mere observations. It is always better to report all the findings so the organization can take an appropriate decision to deal with them. Observations are those findings which do not seem significant enough to be rated even as low findings.

14.2.5 Report Findings

When all the findings are identified, they are submitted to auditees as preliminary report. This report is not final as auditees have not acknowledged the findings yet. Auditors and auditees come together to have a clarification meeting, where they discuss the findings and their respective recommendations.

Once all the findings are accepted by auditees, findings can be finalized and presented in final report. Final report should include:

Scope of audit, sample size and any limitations.

An executive summary to provide an overview to management about the posture of security conformance and severity of overall risk faced by organization due to non-conformances.

All findings classified and presented appropriately with their respective details, severity ratings and evidences.

Relevant clauses in the policy/security framework against which non-conformance is identified.

Clearly articulated recommendations, preferably customized to the organization's environment, and alternative recommendations or workarounds wherever necessary.

Now it is the auditee's turn to remediate the findings as per the auditor's recommendations. The maximum time allotted to auditees to remediate a finding can be mutually agreed between auditors and auditees at the initial stage itself.

Depending on the mutually agreed time period, auditees have to fix the findings as soon as possible or before the due date.

14.2.6 Verify Remediation Of Findings

Once auditees confirm to auditors that remediation of the findings presented in the final report is completed, auditors have to verify the findings to ensure they are appropriately fixed. Sometimes some aspects of a finding are fixed while a few aspects still remain open. The auditor should collect fresh evidences from the auditees and update the current status of the finding as Closed/Open/Partially Remediated.

If findings are not closed, auditees should discuss with auditors and perform remediation once again to close the finding. Again these findings have to be verified by auditors before closure.

Once all findings are closed, the audit can be successfully closed. The auditor can provide a certification and a final report with the current status of all the findings as closed.

14.3 Audit Management From Auditee's Side

Till now we have seen the various phases of the auditing process from the auditor's perspective to understand the auditor's responsibilities, methodologies and audit activities.

Below we will look at the same phases from the auditee's point of view and understand how to prepare for each phase to pass the audit successfully with minimum findings. A successful audit from the auditee's point of view is to anticipate and be prepared for the auditor's questions and to finalize the report with minimum findings.

Below sections will also empower auditees to understand:

Significance of the finding.

Rating of the finding.

Auditor's justification behind the finding.

This knowledge will help auditees to discuss the finding with the auditor.

Sometimes the auditor just gets the small picture based on documents and interviews. However, people dealing with the environment day in and day out understand the big picture.

We can't blame the auditor as he is new to the environment and has understanding of business processes based on what's documented and explained in interviews, which may not be enough to understand the whole environment in few days. Therefore, it is imperative for auditees to understand auditor's thought process and explain the big picture to them.

In my experience, I have seen people normally accepting the findings related to policies and standards, without any questions, as probably they feel they don't have the expertise to question the auditor. My aim here is to explain auditor's thought process for auditees to really understand the finding before accepting it.

14.3.1 Audit Preparation

Normally audits are conducted at regular intervals, around the same time every year. If you are aware of the tentative audit schedule, you can start arranging information and data accordingly. For most things you need to prepare throughout the year, but close to the audit you can start taking stock of everything to ensure it's in order.

14.3.1.1 Compliance task list

It's a good practice to have a compliance task list for the whole year and include daily, monthly, quarterly and yearly tasks in different worksheets. Delegate these tasks to team members and ensure all tasks are completed as required. Keep track of the date and time of task completion and also collect evidences of task completion.

A sample compliance task list is provided in Appendix 3. The purpose of this spreadsheet is to keep track of tasks, completion dates and evidences.
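If the task list is kept as a CSV export of the spreadsheet, a small script can flag overdue or evidence-less tasks before the audit. This is a minimal sketch; the file name and column names are illustrative assumptions, not the Appendix 3 layout:

import csv
from datetime import date, datetime

# Assumed columns: Task, Frequency, Due Date (YYYY-MM-DD), Completed On, Evidence
def review_task_list(path):
    issues = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            due = datetime.strptime(row["Due Date"], "%Y-%m-%d").date()
            if not row["Completed On"].strip() and due < date.today():
                issues.append(f"Overdue: {row['Task']} (due {row['Due Date']})")
            elif row["Completed On"].strip() and not row["Evidence"].strip():
                issues.append(f"No evidence recorded: {row['Task']}")
    return issues

if __name__ == "__main__":
    for issue in review_task_list("compliance_tasks.csv"):
        print(issue)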

14.3.1.2 Updated documents

Updating documents should be included as part of the annual task list or as specified in policy. Ensure all the documents are updated, including procedures, networking standards and network diagrams. More often than not, there's nothing much to update in policies and standards. If that's the case, ensure that documents are reviewed as part of the annual review process and the modification history is updated to reflect that the document was reviewed but no changes were made.

If changes are made, ensure that they are reflected in the document modification history at the beginning of the document and that the document version has been updated. More on this is covered in Chapter 4.

Now let's look at all the audit phases again from auditee's perspective:

14.3.2 Scope Of Audit

Scope of Audit is normally decided by management and auditors so there's not much in auditee's control.

14.3.2 Project Plan

14.3.2.1 Opening meeting/audit initiation meeting

Auditors share their audit plan, schedule and methodology with Auditees during opening meeting.

This meeting is important as it offers information essential for the preparation of audit. Important points to note during opening meeting are as follows:

1. Scope of Audit

Take note of the business processes included and not included in the audit. Also take note of audit time period. Information security audits are conducted for a specific past period e.g. April 2017 to Mar 2018. Covered business processes and time period defines the context and boundary of the audit; all evidences submitted must be in the same context.

2. Type of Audit

Audits are conducted against organization's policies and standards or against international frameworks like ISO-27001, PCI-DSS etc. Take note of this.

3. Risk assessment methodology

Take note of risk assessment methodology that auditors are following. They can use organization's risk assessment methodology or any industry standard. This information will help you in discussing risk rating later in the audit, during clarification meeting.

For example, we calculated risk in Chapter 9 based on below risk assessment criteria:

Auditors may use an industry standard or their organization specific risk assessment criteria e.g.

In the above example, note that the auditor's risk assessment methodology does not incorporate asset relevance. It only takes probability/likelihood and impact/vulnerability severity into consideration.
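As a generic illustration only (not the book's criteria or any particular auditor's), a likelihood x impact scheme with no asset-relevance factor could be expressed like this sketch; the levels and score bands are assumptions:

# Illustrative 3x3 risk matrix: the rating depends only on likelihood and
# impact, with no asset relevance factor, as described above.
LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def risk_rating(likelihood, impact):
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

print(risk_rating("Medium", "High"))   # -> "High"

Knowing exactly how the auditor combines these factors lets you recalculate any rating yourself during the clarification meeting.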

4. Schedule

Take note of interview dates, documents that auditors are expecting (policies, procedures, checklists etc.), duration of audit, estimated dates of preliminary findings etc. Schedule will help you to manage your time and resources.

Also make note of maximum remediation time allocated for each finding. Is it the organization's standard or based on auditor's contract with organization? Normally the clock starts after report is finalized but being aware of schedule will help you manage time and resources effectively.

Auditor may request for organization's information security policies in advance to draft initial questionnaire for fieldwork phase. Auditor may also ask for previous findings of similar audit conducted earlier to ensure those findings are appropriately addressed.

14.3.3 Perform Fieldwork

Fieldwork phase focuses on information collection. Go through the network security policy and network security standard if audit is based on organization's policies. Plan what kind of documents/evidences can be provided for each clause stated in policy and standard.

Also focus on findings identified previously in similar audits. The auditor is definitely going to scrutinize those areas specifically, as they are known weak points of the organization. Make sure that all the findings identified earlier were not just fixed at that point, but also that corrective actions were taken to ensure those findings are not repeated.

Your ability to understand auditor's questions and respond to them appropriately will win half the battle. Being prepared beforehand will help you provide evidences on time to ensure that minimum findings are identified in the audit.

The trick, however, is to anticipate the auditor's questions in advance and be ready with the responses. For each policy/standard clause, the auditor may have some basic questions along these lines.

14.3.3.1 Evidences

An audit is an evidence-based exercise, so any evidence you provide should be precise and to the point for the auditor's reference. It's a good practice to look at evidences from a third-party point of view and ask yourself whether the evidence proves what you want to prove. A lot of times the evidences provided are not enough.

For example, if you want to prove that port 8081 is closed on a particular Windows server, you can log on to the server, run a netstat command, take a screenshot and provide it to the auditor. It's a perfectly valid way of providing evidence; however, the auditor may reject this evidence.

From a third-party view, you can see a netstat command output, but you can't see which server it was run on. For this situation, run the hostname command in another shell, which shows the server name, and take a screenshot of both shells together.
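If you prefer to capture this evidence as text rather than screenshots, a small script can record the host name, a timestamp and the netstat output together in one file. This is a minimal sketch of the idea, assuming a server where netstat -an is available; agree the format with the auditor first:

import subprocess
from datetime import datetime

# Capture hostname, timestamp and listening ports together so the evidence
# clearly shows which server it was taken from and when.
hostname = subprocess.run(["hostname"], capture_output=True, text=True).stdout.strip()
netstat = subprocess.run(["netstat", "-an"], capture_output=True, text=True).stdout
stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")

with open(f"evidence_port8081_{hostname}.txt", "w") as f:
    f.write(f"Host: {hostname}\nCollected: {stamp}\n\n{netstat}")

# Quick check that port 8081 does not appear in a listening state.
listening = [l for l in netstat.splitlines() if ":8081" in l and "LISTEN" in l.upper()]
print("Port 8081 open!" if listening else "Port 8081 not listening on this host.")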

Evidence Format: Evidence should be provided in non-editable format to prove its integrity to auditor e.g. Network configuration files should not be provided in text files as they can be modified before passing to the auditor.

Policy documents should have proper versioning information and document properties should correspond to that. For example, if policy was approved 6 months back based on version information, document properties should not show "Modified 3 days ago". Ideally documents should be provided in pdf format.

Log files, Risk exception approval forms, sign off documents etc. should be kept on Intranet server as it maintains the time stamp when documents were uploaded.

Exceptions to the non-editable format rule are live documents which are updated on a daily/weekly basis, like the "Log review activity spreadsheet" (created in Chapter 12). These evidences can be provided on an as-of-today basis.
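Another simple way to demonstrate that collected evidence files have not been altered afterwards is to record a cryptographic hash of each file at collection time and keep the hash list with the evidence. This is a minimal sketch of the idea; the folder and output file names are assumptions:

import hashlib
from pathlib import Path

# Record a SHA-256 hash for every file in the evidence folder. If a file is
# later modified, its hash will no longer match the recorded value.
evidence_dir = Path("audit_evidence")
with open("evidence_hashes.txt", "w") as out:
    for path in sorted(evidence_dir.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            out.write(f"{digest}  {path}\n")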

Evidence Validity: Evidence should only be provided from the in-scope business processes/devices/servers, and from within the time period of the audit. The opening meeting should mention the time period covered by the audit, and all the evidences have to be provided from within that time period, e.g. April 2017 to March 2018.

14.3.3.2 Sample auditor queries and their tentative responses

Let's take a few examples of Network Security Policy clauses and identify corresponding evidences which can be provided to auditors.

14.3.3.2.1 XYZ network system documentation must be kept up-to-date.

Suggested Response:

Provide Network Security Standard highlighting the clause stating all network system documentation like network diagram, network procedures, network operations handbook etc. will be reviewed once a year or after any major upgrade.

Suggested Response:

Provide document modification history page of all the above documents. If any document does not have document history page attached, e.g. a network diagram, manage version control of such documents in a separate spreadsheet to record date and time of changes, change triggers (such as change request number) and approval authority of change.

Suggested Response:

Provide definition of major upgrade ideally stated in some documentation for example "a major upgrade is any change affecting at least 25% of the whole infrastructure or any platform related changes on business critical processes".

Provide the list of major changes with their change request details.

Suggested Response:

Keep a list of major upgrades and their effect on various systems. Ensure that all network documents were reviewed and updated after such upgrades. Ensure that document version and modification history was also updated to reflect the changes. Even if no changes were done in the document, modification history should be updated to reflect that review was indeed done.

Evidence will be same as provided for question.

Suggested Response:

Provide compliance task list with updating documents as part of annual tasks. In the same list you can incorporate document reviews in ad-hoc task list. Sample compliance task list is provided in Appendix 3.

14.3.3.2.2 Logging and monitoring of network traffic of XYZ network will be performed for network status, breaches, performance and network administration activities.

Suggested Response:

Provide Logging and Monitoring standards and procedures

Suggested Response:

It should be ideally stated inside Logging and Monitoring standard and procedures

Suggested Response: It should be ideally stated inside Logging and Monitoring standard and procedures

Suggested Response:

As this will entail a lot of screenshots of traffic monitoring and network device monitor, a better approach will be to arrange for a walkthrough of monitoring room. Auditor can sit with monitoring personnel to understand the parameters that are being monitored and look at configuration of monitoring applications. Auditors may ask for screenshots from monitoring personnel.

Provide a screenshot of central log repository. Ensure that you filter logs for networking devices only. Provide screenshots of log level from router/switch configuration files e.g. screenshot of a Cisco router's configuration file showing log level 5/6/7 as required by standard.

Suggested Response: Again for this question, I want to emphasize that before you answer this question, filter your security incidents based on in-scope business processes and time period of audit. If there was a major security incident but that was not part of in-scope business processes or it happened before audit time period, it should not be considered in this audit. You don't have to discuss it.

If your infrastructure has SIEM, you can respond that logs are reviewed in real time and SIEM raises an alert through an automated process. Provide an example where an incident was raised on any network device. Auditors generally prefer an automated process.

However, if you don't have SIEM in place ensure that logs are reviewed regularly as stated in the standard. Share the "Log review activity spreadsheet" (as explained in Chapter 12) and share any incidents raised as part of log review.

Suggested Response: A walkthrough of the monitoring facility will answer the first part of the question; however, for the second part share any incident where a link was down and corrective actions, such as a router reboot, were taken to rectify the situation as soon as possible.

Suggested Response:

Walkthrough of monitoring facility will answer the first part of the question, however for second part share any incident that was raised as part of monitoring activities.

Suggested Response: Share capacity assessment reports, weekly monitoring reports and capacity management meeting minutes where a change was implemented based on a performance monitoring report. Ensure that all the evidences are from within the audit time period and related to in-scope business processes.

Hopefully you now understand what kind of questions the auditor may ask for every single policy statement. Similarly, you can anticipate questions for each policy statement relevant to your work area and be prepared with the answers beforehand.

14.3.3.3 Some good practices to handle auditor's questions

1. Relax; it's not a job interview.

2. You should not try to hide information or lie to auditors as it defeats the purpose of conducting an audit.

3. Delegate responsibility of liaising with the auditor to one of the senior members of the team. This person should be knowledgeable enough to understand network processes as well as security aspects of the processes.

4. It's OK to say you don't know and will get back later; however, you should not do this for the majority of questions.


14.3.4 Analyze Data

This phase is mostly performed by the auditors, but auditees may also be occasionally involved to verify the analysis. Auditors collate, read, observe and take notes of all the data collected and compare it with the policies/standards.

14.3.5 Report Findings

Once auditors have finished analyzing the findings, they will report them in preliminary report and discuss the findings to seek your agreement.

Ideally auditor arranges for a clarification meeting where they explain the findings and auditees need to provide their agreement. This is the next most important phase from auditee's point of view. Ensure that this meeting is attended by network middle management team along with people who attended opening meeting and handled fieldwork phase.

From auditee’s point of view, there are three points to consider for each finding:

1. Verify if the finding is correct

The auditor can raise something as a finding if he has not received relevant evidences. Verify if the finding is correct: look at all the evidences the auditor is pointing out and then analyze them from a broader perspective.

Ensure that relevant documentation and evidences are provided immediately to the auditor, if the finding is not right. Think if there are any workarounds/controls implemented which can negate the risk and inform auditors about it during clarification meeting.

2. Verify if the severity rating is correct

Severity rating is very important from management's perspective. Before agreeing to the finding, you should ensure the severity rating is absolutely correct. Using the mutually agreed risk assessment criteria, you should calculate the risk yourself and compare it with the auditor's risk rating. If there's any discrepancy, explain the logic to the auditor and also explain any additional controls implemented which can reduce the probability of the risk materializing. Here, the control matrix spreadsheet (created in Chapter 3) will come in handy to explain to the auditor any preventive/detective measures in place that reduce the probability. If the auditor accepts the justifications, he can reduce the risk rating.

3. Accept the finding and evaluate recommendation

If you feel that finding is acceptable and auditor has valid justification and evidence for raising it, then you should accept the finding. Also check the recommendation provided by auditor and evaluate if it can be implemented in the environment within allotted time. If there are major process changes which may not be implementable within the stipulated time, then inform auditor of your limitations and request for a workaround until process is implemented. Auditor may be able to close the finding based on the workaround.

Once the findings are accepted and evidences are provided to nullify the non-accepted ones, the auditor will finalize the report with the mutually agreed findings.

14.3.6 Remediation Phase

Once you have accepted the finding and understood the remediation, you should start planning to fix it. It is important to not just fix the findings but also include corrective measures in day to day processes so that the finding is never repeated in future audits. When remediating, change the document first (if required), then go down to actual implementation phase.

Let's take an example of finding to remediate:

Documentation:

Check whether the document management standard has a proper definition of a major upgrade. If not, include a definition of a major upgrade, e.g. "a major upgrade is any change affecting at least 25% of the whole infrastructure or any platform related changes on business critical processes". Once the definition is included in the standard, ensure that the document management procedure includes the process of reviewing documents. The procedure may explain the review process, change approval process, communication process to concerned parties etc.

Remediation:

Once documentation is taken care of, create a list of network standards and procedural documents. You can either decide to update all the documents annually in a particular month, or you can spread document review throughout the year. Whatever is decided, include it in the compliance task list (annual/monthly tasks) and delegate the tasks to team members. Team members have to review the documents for accuracy and completeness. Track completion of tasks in regular meetings. Create a year-wise list of all the major upgrades that impact networking infrastructure. You don't have to go backwards; start from this year onwards. Ensure that the corresponding change request includes a task to review/update the documents.

After the major upgrade, start the process of reviewing and updating relevant documents. Add document review task to Ad-Hoc section of compliance task list and track completion in regular meetings. Ensure all the network standards and procedural documents are updated based on major upgrades.

Evidence of this remediation task is the document modification history page of each document. For documents which do not have a document history page attached, e.g. a network diagram, you can manage version control and track modifications in the compliance task list as required. For example, please check the sample compliance task list provided in Appendix 3. You can also manage version control through intranet applications such as Microsoft SharePoint or other similar applications.

14.3.7 Verification Phase

You cannot close the finding yourself, even if you have followed the auditor's recommendation and fixed it. Only the auditor is authorized to close the finding after reviewing the relevant evidences. Provide evidences to the auditor for verifying remediation. He will check whether the finding is fixed and update the status accordingly.

Possible statuses that auditor can provide are:

Closed

Open

Partially Remediated

It is a good practice to remediate the findings as soon as possible. If you wait till the last moment and the auditor deems the finding Open/Partially Remediated, your finding may remain open. Normally auditors allot a specific time period during which they review the findings. If findings are not closed within the agreed time period, then the organization has to shell out extra money to get the findings reviewed again.

Once all the findings in the final report are closed by auditor, audit is considered closed.

Chapter 15

Technical Compliance Audit

15.1 Technical Compliance Audit

A technical compliance audit evaluates architecture and technical configuration against the organization's standards or industry standards like NIST/CIS etc. While process audits focus on the process aspects of the organization, a technical compliance audit looks at the technical aspects of compliance. A compliance audit is not a measure of the security posture of an organization; rather, it checks whether all the controls required by standards and regulations are indeed configured. A compliance audit needs a detailed evaluation of the organization's standards and controls.

Compliance audit also has similar stages as process audits discussed in Chapter 14.

15.2 Technical Compliance Audit from Auditor's Point of view

15.2.1 Define Scope Of Audit

Auditors define the scope of audit in collaboration with auditees. They may conduct a pre-meeting to go through network diagrams and architecture documents to set the scope of audit. Management will decide the type of audit to be conducted, i.e. compliance against the organization's policy and standards or compliance against industry standards. The scope of audit should include the number of devices/servers/applications to be included in the audit along with their details.

For large organizations where the number of devices runs into thousands, it will not be viable to contract auditors to go through each and every device. Instead, auditors look at the bigger picture and sample devices. Remember that auditors charge based on the scope of work, so management can limit the number of assets based on budget and time limitations.

15.2.2 Draft Project Plan

Project plan is submitted by auditors after understanding scope, organizational structure and type of audit to be performed. Generally, it includes a couple of meetings with organization's security team/ management.

In pre-audit meetings, auditors select the critical areas they want to focus on, like DMZ, remote access, perimeter, management VLAN, business critical process VLAN etc. Then they request configuration files of critical firewalls/routers/switches from those areas. They can also request configuration files of one device from each VLAN segment, depending on the number of assets they can include in scope.

They may also schedule interviews with relevant teams to understand architecture and design in detail. The scope of audit, schedule of interviews, risk methodology and type of information required from auditees are shared with auditees in the opening meeting.

15.2.3 Perform Fieldwork

As information is requested from auditees, these audits are offline audits. There's no impact on the organization's daily activities.

Fieldwork activities for auditors are to select the devices, communicate with the administrators on what information is required and in which format, and verify the information for adequacy once received.

If the audit is against the organization's policies and standards, auditors also request the relevant documents.

Apart from collecting technical information, the auditor may conduct interviews with the network team to understand network design security requirements and whether they are indeed implemented in the network architecture. The following areas may come under scrutiny:

15.2.3.1 Architecture and design

Sample interview questions:

Is network segmentation and zoning done appropriately, and are trust levels assigned?

Are the IP addressing scheme and naming conventions followed consistently?

Are encryption requirements met as defined in the information classification and network standards?

Are all single points of failure eliminated?

Are appropriate BCP and disaster recovery managed?

Is segregation of duties implemented?

Are clear text protocols in use in the network?

Is NAC implemented appropriately, i.e. handling authentication, posture assessment and correction?

Are IDPS covering all entry and exit points?

Do firewalls cover all entry and exit points?

Are all public facing devices kept in DMZ?

Are all third party and remote access connections secured?

Are all devices time synchronized?

Are all access control good practices being followed?

Are all devices logged and monitored?

15.2.3.2 VPN security

Sample interview questions:

Is the VPN device physically and logically protected?

Are all third party and remote connections routed through the VPN?

Is the VPN using strong encryption and hashing algorithms?

Is client authentication and authorization managed through appropriate profiling of user groups?

Is the VPN entry point located in the DMZ, and is traffic subjected to firewall inspection before entering the internal network?

Is split tunneling allowed? Is there a justifiable reason for it?

Is an idle timeout set?

15.2.3.3 Wireless security

Sample interview questions:

Are all access points physically protected?

Is proper site survey performed before implementing WLAN? Are documented evidences available?

Is wireless LAN separate from local LAN?

Is wireless placement done according to network standards i.e. only for guests/only for internal user network/only for non-sensitive area as required by Network standard?

Are WLAN devices locked down as required?

Does access point placement take 'signals leaking outside the organization's boundary' into account?

Is the SSID generic, so that it does not give out any information about the organization/department?

Is WLAN using strongest Encryption and hashing available?

Is NAC implemented on Wireless Network?

Is monitoring and logging enabled for all Access Points?

Is wireless IDPS implemented?

Are rogue access points regularly scanned and dealt with?

15.2.3.4 Firewall security

Sample interview questions:

Are firewalls deployed on all entry and exit points and also between critical VLANS?

Do all firewall rule changes, especially opening of new ports go through stringent approvals and scrutiny?

Are firewall rules kept simple and regularly optimized?

Are egress/ingress filter rules created appropriately, and are all other rule creation guidelines followed?

Is NAT implemented appropriately?

Firewall Rules Analysis

Firewall rules should be optimized on a regular basis. Rules are created for specific purposes; if a rule no longer serves its purpose, it should be removed. Rules should also be optimized for clarity and reduced complexity.

It's a good practice to record the change request number against every rule (in the comments section), along with an expiry date if the rule is created for a specific time. These expiry dates should be tracked, and expired rules should be removed during review.

Auditors would like to see whether such an exercise is conducted regularly, along with the results of the exercise. A change request raised from a rule optimization exercise can serve as evidence.
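If rule comments follow a convention such as "CR-1234 expires 2018-06-30", a small script can flag expired rules from a rule-base export. This is a minimal sketch, assuming a CSV export with "Name" and "Comment" columns; real exports differ per vendor:

import csv
import re
from datetime import date, datetime

EXPIRY_RE = re.compile(r"expires\s+(\d{4}-\d{2}-\d{2})", re.IGNORECASE)

def expired_rules(path):
    """Flag rules whose comment carries an expiry date in the past."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            match = EXPIRY_RE.search(row.get("Comment", ""))
            if match:
                expiry = datetime.strptime(match.group(1), "%Y-%m-%d").date()
                if expiry < date.today():
                    flagged.append(f"{row['Name']}: expired on {expiry}")
    return flagged

if __name__ == "__main__":
    for rule in expired_rules("firewall_rules_export.csv"):
        print(rule)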

15.2.4 Analyze Data

Once information is collected, auditors analyze the data to see if controls are implemented as required by the standards.

Auditors analyze infrastructure architecture and design against requirements specified in architecture document.

Auditors either manually check configuration files or use tools like Nipper/CIS RAT to identify any deviations from standards. Auditors also analyze firewall rules to ensure that there are no insecure or conflicting rules in place.

Once analyzed, auditors organize the findings in a preliminary report. Ideally each finding should have the following details:

15.2.5 Reporting

Once organized, the auditor presents the non-conformance report to auditees in the clarification meeting. Findings must be discussed, verified and accepted by auditees before finalization. Once auditees have accepted the findings, they are collated in the final report.

The final report is similar to a process audit report, with the below components:

Executive summary.

Scope and limitations.

Risk methodology used.

List of findings with description, rating and remediation steps.

Remediation time allotted.

Check the Reporting section in Chapter 14 for more details.

15.3 Technical Compliance Audit from Auditee's Point of View

15.3.1 Audit Preparation

Unlike other audits, this is a straightforward audit which goes exactly by a checklist, which means preparation can be done in advance to pass this audit with flying colors. It's like a maths exam in high school: if the steps of solving the problem are right and the conclusion is right, one can be assured of full marks.

Auditors will be using networking standards, hardening guides and network architecture documents as their checklist. If, in the pre-production phase itself, due care has been taken that network architecture requirements are met, network standards are complied with and hardening guides are followed to harden the devices, the first battle is won. The pre-production phase is covered in detail in Chapter 7.

The second battle is not allowing any unauthorized changes, or changes that impact the initial hardening and compliance, in the operations phase. If the secure change management process described in Chapter 8 is followed consistently, then the second battle is also won. Regular compliance reviews of devices also keep the team updated on conformance status. Automated tools like Cisco compliance management and configuration solutions, SolarWinds and many more can be used to get an updated report on device conformance. These tools should be run regularly and any findings identified should be mitigated.

Compliance can also be checked manually by making sure the device configuration meets all the standard and hardening guide requirements (a simple scripted version of such a check is sketched below). If you don't have the resources to check compliance of all the devices in the network, use similar sampling criteria to the auditors: focus more on critical areas and sample one or two devices from less critical areas.
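A manual check can be partially scripted by listing the statements your hardening guide requires (or forbids) and searching each configuration file for them. This is a minimal sketch using naive string matching; the example Cisco IOS statements and file name are assumptions standing in for your own guide, and a real compliance tool parses the configuration properly:

from pathlib import Path

# Illustrative checklist only; replace with the statements from your own
# hardening guide and networking standard.
REQUIRED = ["service password-encryption", "no ip http server", "logging host"]
FORBIDDEN = ["ip http server", "service finger"]

def check_config(path):
    text = Path(path).read_text()
    findings = []
    for line in REQUIRED:
        if line not in text:
            findings.append(f"Missing required statement: {line}")
    for line in FORBIDDEN:
        # Skip flagging when the statement only appears in its negated "no ..." form.
        if line in text and f"no {line}" not in text:
            findings.append(f"Forbidden statement present: {line}")
    return findings

if __name__ == "__main__":
    for issue in check_config("router01.cfg"):
        print(issue)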

Include the compliance review task in your monthly task list so that you can divide the task across the year. (More about the compliance task list is discussed in Chapter 14.) Wherever you find that there's a valid justification for not complying with a hardening guide or standard, raise it as an exception and keep a proper record. Review the exception list monthly, as exceptions are not raised forever. They always have an expiry, at which point they either have to be renewed or fixed. Expired exceptions will not help your case if the auditor identifies the same finding. (More about managing exceptions is discussed in Chapter 9.)

15.3.2 Project Planning

1. Scope of Audit

Understand which devices are selected for review and how auditors require the information. They may ask for configuration files or screenshots of GUI.

2. Type of Audit

Understand if this audit is conducted against the organization's standards and hardening guides or against industry standards. Keep a note of which industry standard they are auditing against. If you have created your organization's standards and hardening guides based on industry standards like NIST or CIS, then you don't have to worry much.

3. Schedule of Audit

Keep a note of the schedule to make time for interviews and to extract information for auditors. As this is an offline audit, you will have to plan your time and resources to provide information to auditors in the required format.

4. Risk Assessment Methodology

Risk assessment methodology is extremely important in this audit. Risk rating of a non-conformance finding is tricky even for auditors. Also in this audit, auditors do not map the whole network and may not be aware of all the security controls present in network. They will only be looking at one asset at a time.

Make sure that you understand the risk assessment methodology and the definitions of high, medium and low risk findings. If you are not clear, you may ask the auditor to explain the risk rating in detail with examples. Also confirm with auditors that they will consider the controls present in the environment in the clarification meeting. Basically, it's just to ensure that auditors will be willing to consider controls when you explain them in the clarification meeting later. (More on Risk Assessment Methodology is discussed in Chapter 14.)

15.3.3 Fieldwork And Analysis

The auditor may conduct interviews to understand the design and architecture of the environment. Ensure that you have a senior member of the team attending this meeting, preferably someone who was involved in the initial phases or who regularly attends Joint Architecture Board meetings (or anything similar your organization conducts). You have to explain to the auditor how you have implemented architecture and design requirements. You also need to explain to auditors that all new major projects need approval from Architecture Board meetings (or any high-level architecture review that your organization follows).

Provide device information to auditors in the required format. Ensure that you remove password strings from configuration files before passing them on. Don't remove the hashing/encryption method used, like MD5/Cisco 7, but do remove the password string.

For example:
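A minimal sanitization sketch is shown below: the secret string is replaced but the line and the encryption type digit stay visible, so the auditor can still assess the method used. The regular expressions and file names are illustrative assumptions and would need adjusting to your own configuration format:

import re

# Replace the secret/password string but keep the encryption type
# (e.g. 5 for MD5, 7 for Cisco type 7) visible for the auditor.
PATTERNS = [
    (re.compile(r"^(enable secret \d) \S+", re.MULTILINE), r"\1 <removed>"),
    (re.compile(r"^(username \S+ (?:secret|password) \d) \S+", re.MULTILINE), r"\1 <removed>"),
    (re.compile(r"(snmp-server community) \S+", re.IGNORECASE), r"\1 <removed>"),
]

def sanitize(text):
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

with open("router01.cfg") as src, open("router01_sanitized.cfg", "w") as dst:
    dst.write(sanitize(src.read()))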

15.3.4 Reporting

Keep your "Controls/Segment spreadsheet" (explained in Chapter 3) ready for this phase. You will have to use it extensively during the clarification meeting.

1. Verify the finding

Verify the finding for its accuracy. You may question why you should verify the finding when auditors have already reviewed the configuration files and you are going to do the same; however, this step is extremely important. During one of the audits that I was managing, the auditor flagged that the MOP protocol was not disabled on all the interfaces. Basically, auditors use automated tools to analyze configuration files. These tools tend to look for certain configuration statements, and if they don't find them anywhere in the configuration files they flag it. In this case they were looking for "no mop enabled".

This finding was accepted but when network team started to fix the finding, they found out that newer versions of Cisco IOS have mop disabled by default on most of the interfaces.

This information nullified the finding, but it was too late because the finding had already been accepted and presented to the management. Technologies are constantly changing and improving. It is difficult for auditors or network professionals to keep a tab on each and every change; however, a little research or a conversation with the vendor during the clarification phase can minimize the findings. Take some time to verify the findings and communicate with the vendor before accepting them. Auditors do not expect you to accept the finding without verification and allot some time for you to get back to them on acceptance.

2. Review Exceptions List

Always review your exceptions list because exceptions are bound to be flagged as findings. Show auditors that you are aware of these findings and they have already been accepted by Asset owners and management. You may have to show auditors the Risk acceptance/exception sign off form for each finding. Auditor will close the finding based on this information.

3. Controls evaluation

Once you have verified all the findings and reviewed your exception list, you will have to accept the remaining findings.

Before you accept the findings for remediation, you should present your controls/segment spreadsheet to auditor. Highlight the segments where affected assets are located and explain to auditors what preventive, detective and recovery controls are present in that segment/VLAN. E.g. Monitoring and Log analysis are detective controls which should definitely be considered even if there are no preventive controls for specific assets.

After analyzing the controls spreadsheet, auditor may ask for specific evidences. Once you have provided evidences, auditor can reduce the probability rating thus reducing the overall risk rating of finding.

During one of the audits that I was managing, the auditor flagged a 'High' finding (Impact = High, Probability = High): "Cisco 7 encryption is used to encrypt the local password." We all know that Cisco 7 encryption is extremely weak and such passwords can be decrypted with a simple Google search.

However, the network team debated the rating, as Cisco ACS was used to manage user access controls on the network devices. Their argument was that local passwords, even if weak, cannot be used to log in to routers as long as ACS is available. It is only in case of emergencies, where Cisco ACS is not online, that one can use the local password. Also, this local password is not the 'enable' password which would allow users to change the configuration, so again the risk is not "High". The auditors accepted the justifications and lowered the risk rating to 'Low' (Impact = Medium, Probability = Low).

You may wonder why you should do this extra work when ultimately you have to fix the finding, but ratings are extremely important to management and customers. A lot of low findings may not mean a weak security posture; however, a couple of high findings can depict the security posture in a very bad light in front of customers and management.

Once all the findings are reviewed by your team and the auditors, you can accept the findings to be finalized in the final report.

15.3.5 Risk Treatment

Once all findings are agreed in the clarification meeting and evidence is provided for the non-accepted ones, risk treatment planning should start. There are four common ways to treat a risk:

15.3.5.1 Risk remediation

Treat the risk on the asset where it was identified and on similar assets in the infrastructure (wherever viable). Plan a full-fledged strategy to address the risk across the network, not just on the asset where it was identified. This is done to ensure that the same risk is not identified later in any audits.

15.3.5.2 Risk acceptance

Based on the organization's risk methodology, low risk findings may be accepted. Risk acceptance can also be done based on budget restrictions, resource limitations or any other reason, but it should be signed off by management/asset owner, as they are the only ones able to accept the risk. During the auditor's re-verification phase, the signed-off risk acceptance form can be provided as evidence. Keep a record of exceptions/risk acceptances raised and approved. Risk acceptance is done for a specific time period. Review records regularly to ensure risk acceptance is still valid. Before a risk acceptance/exception expires, it should be either renewed or fixed.

(More on Risk Acceptance is discussed in Chapters 4 & 9)

15.3.5.3 Risk transference

Risk can be transferred to third parties, and the contract should include risk management and treatment. In that case they have to take care of treating the said risk.

15.3.5.4 Risk avoidance

Organizations can avoid the risk by terminating its source. For example, if a risk is identified on one of the legacy systems, which is in the process of being replaced, the organization can choose to shut down the system to avoid the risk.

15.4 Good Practices To Avoid Compliance Findings

1. Pre Implementation Phase

Follow architectural and design requirements (high availability, segregation of duties, defense in depth, network segregation, appropriate naming convention etc.).

Ensure that project plan goes through architectural board review. Keep evidences of approval.

Follow networking standards requirements on access control, encryption of data, VPN standards, firewall controls etc. Follow hardening guides to lock down the device.

Ensure that all controls for specific segment are recorded in controls/segment spreadsheet.

Note: Follow best practices as defined in Chapter 7.

2. Operations Phase

Conduct regular compliance reviews and remediate any findings. Maintain the exceptions/risk acceptance spreadsheet and review it regularly for expired exceptions.

After an audit, conduct a post-mortem of why the findings were identified and where exactly the problem lies. Is there a problem in the pre-production phase, or are there changes on the devices which were not verified against security compliance? Once the problem is identified, fix the root of the problem. For example, if you identify that you are following all the steps in the pre-production phase but security compliance is not checked during change management, you can remediate this by delegating someone to review the security compliance of all changes going forward. Even if the security team is managing the review of all changes during change management, it is recommended to have one of the network team members review the changes for security compliance. Network team members understand exactly what commands will be used and their impact on security.

Conduct regular scans for rogue wireless devices.

Conduct regular firewall rule analysis and fix the findings.

Over a period of time, firewall rules keep growing, making the rule base more and more complex and adding overhead to firewall performance. A firewall checks every packet against each rule until a match is found. If there are a lot of rules, more time and resources will be required for each packet, affecting the firewall's efficiency. The organization should undertake regular firewall rule optimization to keep the firewall's efficiency from degrading. Rules can be optimized by:

Consolidating different rules together into one rule.

Monitoring rules for maximum hits and keeping commonly used rules higher in the rule order. This can be done by checking logs for rules or by using tools like AlgoSec, which can display rule utilization with the help of charts and graphs.

Removing rules that are not being used.

Including explicit deny and permit rules and removing 'any any' rules.

Keeping generic rules below specific rules so that specific rules are enforced. If generic rules are defined above specific rules, then the generic rules will be matched first.

3. Backup and Disaster Recovery

Perform regular backups of server data and device configuration with the help of backup teams.

The network team should get involved in regular restoration drills conducted by business continuity and disaster recovery teams. Complete responsibility for backup, business continuity and disaster recovery normally does not fall under the network team; therefore, these topics are not covered in this book.

Chapter 16

Penetration Testing

16.1 Penetration Testing (Pen Test)

A penetration test, or ethical hacking, is a planned security assessment exercise, conducted in a controlled environment to evaluate an organization's security posture. This exercise simulates a hacker's steps to attack the network. Normally third party professionals are contracted to conduct this exercise. It's a real-life test for organizations to identify how strong their system defenses are and whether security breaches are detected and responded to in time.

The purpose of a penetration test is to identify weaknesses in the system before they are exploited by attackers. Management can opt for an announced or unannounced penetration test depending on their goals for conducting the assessment. A penetration test is considered successful if pen testers are able to defeat the security controls and perform a security breach. A security breach is basically gaining access to the network and escalating privileges to extract sensitive business information.

Pen testers can simulate an external attacker, a disgruntled employee, a remote user and/or a social engineer to cover all possible attack scenarios. Attack methods range from active/passive information gathering to complex targeted attacks on the internal environment.

There are three broad categories of penetration test, as listed below. The categories are based on how much organizational information is provided to pen testers for the technical assessment. Industries sometimes use a hybrid approach where the pen test starts with black box testing and, once the black box testing phase is completed, gray box or white box testing commences.

Black box Testing

External Attacker Simulation
Access: None
Location: Any

This test simulates an external attacker with no or little prior knowledge of target environment. Pen Testers are not provided any information about the target or at the most public domain addresses are provided.

Purpose: Identify any weaknesses in the perimeter infrastructure which can compromise the CIA of the internal network. The focus is to evade external firewall/intrusion detection systems to gain access to the internal network.

White box Testing

Malicious User Simulation
Access: Full access
Location: Internal Network

This test simulates a malicious insider/disgruntled employee with complete knowledge and access of internal environment. Pen testers are provided complete information of environment including IP addresses, source code, network diagram, design and architecture documents etc.

The purpose of this test is to analyze the network design, architectural components, source code etc. to identify any core vulnerability that can be exploited by users who understand the system completely. For example, a thorough source code review can reveal backdoors that may have been planted during the development stage but were never removed.
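As a small illustration of what part of such a source code review can look like, the sketch below searches a source tree for a few suspicious patterns. The directory name and patterns are purely hypothetical; a real review relies on dedicated static analysis tools and manual inspection rather than simple pattern matching.

import re
from pathlib import Path

# Hypothetical source tree provided to the white box testers.
source_root = Path("./app-src")

# A few illustrative red flags; real reviews use much richer rule sets.
suspicious_patterns = {
    "hard-coded credential": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    "possible backdoor flag": re.compile(r"(backdoor|bypass_auth|skip_login)", re.IGNORECASE),
}

for path in source_root.rglob("*.py"):
    text = path.read_text(errors="ignore")
    for label, pattern in suspicious_patterns.items():
        for match in pattern.finditer(text):
            # Report the file and approximate line number of each hit for manual review.
            line_no = text.count("\n", 0, match.start()) + 1
            print(f"{path}:{line_no}: {label}: {match.group(0)!r}")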

Gray Box Testing

Malicious User Simulation
Access: Basic employee access
Location: Internal Network

This test simulates a malicious user with basic access to the internal network. The assessment lies somewhere between the black box and white box methodologies, as only some information is shared with the pen testers rather than full disclosure. Pen testers are provided basic information such as IP addresses, normal user credentials etc. for the assessment. The purpose of this assessment is to identify whether an employee with limited access can compromise the CIA of the organization's network. The focus is to identify weaknesses using normal employee access and escalate privileges to penetrate the system.

Remote User Simulation
Access: Basic VPN access
Location: Any

The purpose of this test is to identify whether a remote user, sitting anywhere in the world, can compromise the CIA of the organization's network. The focus is to identify vulnerabilities using basic remote user access to breach the system.

16.2 Stages of Penetration Testing

Penetration testing is a systematic exercise with various stages:

16.2.1 Reconnaissance

The reconnaissance phase focuses on collecting information about the organization. It is especially useful when pen testers are simulating external attackers or social engineers. At this stage the organization is not directly engaged and information is collected passively from different sources. All the data collected is publicly available, but pen testers look for information that can be useful for the penetration testing exercise.

Two types of information are collected: technical and non-technical.

Technical information includes websites, IP addresses, network blocks, mail servers, ports, applications, devices, DNS server addresses etc. This information forms the basis of the next phases of the pen test. Main sources of this information are whois.net, job postings, technical forums, search engines etc.

Non-technical information includes email IDs, the organization's line of work, products and services offered, third parties and vendors, news, location etc. This information, when used along with the technical information, can be very useful. For example, email IDs can reveal the naming convention used for email addresses, so the email IDs of higher management can be guessed and used for social engineering. Main sources of this information are search engines, social networking sites, physical trash bins, the organization's website etc.
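As a small illustration of the technical side of this phase, the sketch below resolves a domain's public DNS entries using only the Python standard library. The domain is a placeholder; in a real exercise this would be combined with whois lookups, search engine results and the other sources mentioned above.

import socket

# Placeholder target domain, used purely for illustration.
domain = "example.org"

# Resolve the public A records; this uses only publicly available information.
hostname, aliases, addresses = socket.gethostbyname_ex(domain)
print(f"{domain} resolves to: {addresses}")

# Reverse-resolve each address; hosting/provider naming conventions sometimes
# hint at the underlying platform or third parties in use.
for addr in addresses:
    try:
        rev_name, _, _ = socket.gethostbyaddr(addr)
        print(f"{addr} -> {rev_name}")
    except socket.herror:
        print(f"{addr} -> no reverse DNS entry")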

16.2.2 Scanning

Scanning focuses on identifying active machines, discovering open ports and access points, fingerprinting the operating system and uncovering services on open ports. It also focuses on evading the firewall and IDPS to penetrate the internal network. Ports, protocols, services and their respective version information help to identify what kind of vulnerabilities a system may have. For example, if SSHv1 is discovered on port 22, the vulnerability assessment phase can focus on known vulnerabilities of SSHv1.
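A bare-bones sketch of what a scanner does at this stage is shown below: a simple TCP connect scan against a handful of common ports. The target address is a placeholder, and in practice dedicated tools such as Nmap are used for speed, stealth and far wider coverage.

import socket

# Placeholder target (TEST-NET address) and a small set of common ports.
target = "192.0.2.10"
common_ports = [21, 22, 25, 80, 110, 143, 443, 3389]

open_ports = []
for port in common_ports:
    # Plain TCP connect scan: if the handshake completes, the port is open.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        if s.connect_ex((target, port)) == 0:
            open_ports.append(port)

print("Open ports:", open_ports)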

16.2.3 Enumeration

The scanning phase collects basic information about the network; the enumeration phase analyzes all the information collected in its entirety. Enumeration focuses on mapping the whole network and identifying poorly protected file shares, valid user accounts, groups, network resources etc. Pen testers try various enumerations like the following (a small sketch follows the list):

SNMP enumeration to capture SNMP messages

LDAP enumeration to identify user IDs and groups

DNS queries to get names and IP addresses

NetBIOS enumeration to identify users, open shares, system information etc.
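The sketch below illustrates one of the simpler techniques from the list, a reverse DNS sweep over a subnet. The subnet is a placeholder; hostnames such as 'dc01' or 'backup-srv' often reveal a machine's role without touching the machine itself.

import ipaddress
import socket

# Placeholder internal subnet; in practice this comes from the scanning phase.
subnet = ipaddress.ip_network("192.0.2.0/28")

for ip in subnet.hosts():
    try:
        # PTR lookups map addresses back to names registered in internal DNS.
        name, _, _ = socket.gethostbyaddr(str(ip))
        print(f"{ip} -> {name}")
    except socket.herror:
        pass  # no PTR record for this address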

16.2.4 Vulnerability Assessment

Vulnerability assessment is the main testing phase, where systems are actively tested for weaknesses. The assessment categorizes and rates all vulnerabilities based on their potential impact.

Tools like Nessus or Nexpose are utilized for this phase. The tools scan for the different versions of services and applications running on the system, their patch levels, misconfigurations etc. to identify vulnerable services/applications.
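Internally, much of this version-based detection comes down to matching service banners against a vulnerability database. The sketch below shows the idea on a very small scale with a hypothetical host and a made-up two-entry "database"; it is not a substitute for a real scanner.

import socket

# Placeholder host/port and a tiny illustrative mapping of banners to issues.
host, port = "192.0.2.10", 22
known_issues = {
    "SSH-1.99": "server still negotiates SSHv1 - known protocol weaknesses",
    "SSH-2.0-OpenSSH_5.3": "outdated OpenSSH release - check vendor advisories",
}

# Grab the service banner sent on connection.
with socket.create_connection((host, port), timeout=2) as s:
    banner = s.recv(1024).decode(errors="ignore").strip()

print("Banner:", banner)
for prefix, issue in known_issues.items():
    if banner.startswith(prefix):
        print("Potential finding:", issue)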

Tools can conduct intrusive or non-intrusive scans depending on the organization's requirements. Intrusive scans deploy denial-of-service attacks (buffer overflow etc.). If the organization does not want these tests to be performed, it can opt for non-intrusive scans, where denial-of-service conditions are excluded from the test.

16.2.5 Exploitation/Penetration

This phase focuses on exploiting the discovered vulnerabilities to breach the system. The purpose is to gain unauthorized access to business-critical information and to determine the extent of harm an attacker could cause if he managed to penetrate the system. Some quick wins can be exploiting:

Misconfigured servers.

Clear text protocols.

Badly configured ACLs.

Weak Passwords.

Unprotected network shares.

Other exploits will depend on the discovered vulnerabilities. Pen testers create their own scripts and exploits, or they use open source/commercial exploit frameworks like Metasploit, Core Impact etc. Pen testers can try different attack scenarios like ARP poisoning, VLAN hopping, man-in-the-middle, spoofing, wireless attacks, sniffing etc.
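One example of the "quick win" category, checking whether a server accepts anonymous FTP logins, can be demonstrated with the standard library alone, as in the sketch below. The target address is a placeholder, and such checks are only run with explicit authorization and within the agreed scope.

from ftplib import FTP, error_perm

# Placeholder target; anonymous FTP is a classic misconfiguration quick win.
target = "192.0.2.20"

try:
    ftp = FTP(target, timeout=5)
    ftp.login()              # no credentials supplied = anonymous login attempt
    print("Anonymous FTP login accepted; listing root directory:")
    ftp.retrlines("LIST")    # what an attacker would see immediately
    ftp.quit()
except error_perm:
    print("Anonymous login rejected - not exploitable this way.")
except OSError as exc:
    print(f"Could not connect: {exc}")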

Kali and BackTrack are open source operating systems which provide a collection of tools to be used across all the above phases.

Most of the time, organizations restrict pen testers from going for a full-blown exploit, as it may result in a production system crashing or disclosure of critical information. In that case pen testers just verify the finding with harmless exploits, causing no interruption to business functions.

16.2.6 Reporting

A penetration testing report is more comprehensive than a vulnerability assessment report. It not only includes the identified vulnerabilities but also explains the testing methodology in enough detail for the findings to be recreated.

Common components of a pen test report are:

Executive summary intended for management.

Scope and limitations.

Step-by-step exploitation process of vulnerabilities and the extent of access gained.

Description of all the findings identified, their severity rating and recommendation.

The report is then submitted to the relevant stakeholders for remediation. Once the findings are remediated, the pen testers are informed so they can re-test the findings and close the issues.

16.3 Pen-Testing vs. Vulnerability Assessment

Vulnerability assessment focuses on identifying as many vulnerabilities as possible, while a pen test focuses on breaching the system and stealing some critical information.

Vulnerability assessment identifies the vulnerable areas of the infrastructure, while a pen test simulates a real-world attacker and finds out what level of damage he can cause.

Vulnerability assessment findings are based on responses to specific probes by scanners, which may or may not be right, while pen test findings are verified findings.

Vulnerability assessment can be conducted using automated tools, but pen testers use human intelligence to collate data and get in. A pen test can even include social engineering attacks to exploit human weaknesses, which can show organizations that their security training may not be adequate. Typically, periodic (weekly/monthly) vulnerability assessments should be performed by the internal security team, and all findings should be remediated as soon as possible. Professional pen testers should be contracted annually or bi-annually to check the security posture. The security posture determined by pen testers is a point-in-time assessment; as new vulnerabilities and exploits keep appearing, regular vulnerability assessment is a must.

16.4 Pen Testing from the Auditee's Point of View

If the pen test is conducted unannounced, internal network teams may not have prior information; therefore, there isn't much that can be done in the initial stages.

Monitoring should go on as usual, and if the monitoring team detects unusual activity in the traffic, the normal procedure of raising an incident, responding to the attack and investigating should follow. Network monitoring is discussed in Chapter 13.

However, if teams have been notified of an upcoming penetration test, monitoring teams can use this opportunity to tag unusual traffic to the pen testing stages. Traffic during the scanning stage of the penetration test can be analyzed to create specific filters and alerts so that such traffic can be detected in a real attack scenario. For example, if it was discovered during the scanning phase that a port scan of more than 5 systems was conducted in a minute, an alert for that pattern can be created for future use.
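As a rough illustration of the port scan example above, the sketch below counts how many distinct internal hosts each source address touches per minute and flags anything above the threshold. The log records are made up; in practice this logic would be implemented as a correlation rule in the SIEM or IDPS rather than a standalone script.

from collections import defaultdict

# Illustrative, simplified connection records: (minute, source IP, destination IP).
events = [
    ("10:01", "203.0.113.5", "10.0.1.1"),
    ("10:01", "203.0.113.5", "10.0.1.2"),
    ("10:01", "203.0.113.5", "10.0.1.3"),
    ("10:01", "203.0.113.5", "10.0.1.4"),
    ("10:01", "203.0.113.5", "10.0.1.5"),
    ("10:01", "203.0.113.5", "10.0.1.6"),
    ("10:01", "10.0.2.15",   "10.0.1.1"),
]

THRESHOLD = 5  # more than 5 distinct targets per source per minute

# Group the distinct destinations seen for each (minute, source) pair.
targets = defaultdict(set)
for minute, src, dst in events:
    targets[(minute, src)].add(dst)

for (minute, src), dsts in targets.items():
    if len(dsts) > THRESHOLD:
        print(f"ALERT: {src} touched {len(dsts)} hosts at {minute} - possible scan")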

Teams managing security systems like IDPS/SIEM should also keep a tab on the logs generated and define alerts using such logs. This exercise will greatly help in real attack scenarios.

16.4.1 Initial Phase

For gray box testing, auditees are supposed to provide some information to the pen testers. Ensure that, wherever possible, the device/server IPs provided are not those of the primary servers. Secondary servers/devices which have a similar configuration can be provided for the pen testing purpose. The reason is to minimize the impact on business processes during pen testing. When findings are identified on secondary servers/devices, ensure that remediation is done on the primary devices as well.

16.4.2 Reporting Phase

There isn't much to discuss if the pen testers were able to infiltrate the system using specific vulnerabilities; however, other vulnerabilities which were not exploited should be handled exactly as you would handle vulnerability assessment findings.

First, check whether a vulnerability is a false positive by probing the specified ports and analyzing their responses. Recreating the finding helps in understanding the issue properly. Connect to the same network segment that the pen testers connected to and then try to probe the ports. Check which firewall is managing that specific VLAN and whether the same port is open on the firewall as well. If not, then the vulnerability is contained within the network segment, which reduces the probability of exploitation.

Verifying the finding is the first step. The second step is to identify all the controls available for that segment. You can use the controls spreadsheet created in Chapter 3 and explain the controls implemented in that network segment to the pen testers. That may be the reason why they were not able to exploit those vulnerabilities. This reasoning can help your case when discussing the risk rating of the finding in the clarification meeting. Pen testers may be able to reduce the risk rating if they understand that even though the vulnerability exists, it is still not easily exploitable and is contained within the network segment.

More on handling vulnerabilities is discussed in Chapter 9.

16.4.3 Risk Treatment Phase

Once you agree to the findings in the clarification meeting, you have to start treating the risk. There are four common ways to treat a risk:

16.4.3.1 Risk Remediation

Plan a full-fledged strategy to address the risk across the network, not just on the asset on which it was identified. This way, you make sure that the same risk is not identified later in any audits.

16.4.3.2 Risk Acceptance

Based on the organization's risk methodology, you may be able to accept low-risk findings. Risk acceptance can also be done because of budget restrictions, resource limitations or any other reason, but it should be signed off by management/the asset owner, as they are the only ones who can accept the risk.

During the pen testing re-verification phase, you can provide the signed-off risk acceptance form as evidence.

Keep a record of approved exceptions/risk acceptances. Risk acceptance is done for a specific time period. Review the records regularly to ensure that each risk acceptance is still valid. Before a risk acceptance/exception expires, it should be either renewed or fixed.
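The expiry review can be as simple as the sketch below, which walks through hypothetical risk acceptance records and flags anything expired or expiring soon. The records and dates are illustrative; in practice they would come from the risk register or the signed acceptance forms.

from datetime import date

# Illustrative risk acceptance records with their agreed expiry dates.
acceptances = [
    {"finding": "SSLv3 enabled on legacy app server", "owner": "Application owner", "expires": date(2018, 6, 30)},
    {"finding": "SNMP v2c on branch switch",          "owner": "Network head",      "expires": date(2019, 1, 15)},
]

today = date.today()
for record in acceptances:
    days_left = (record["expires"] - today).days
    if days_left < 0:
        status = "EXPIRED - renew with sign-off or remediate"
    elif days_left <= 30:
        status = f"expires in {days_left} days - schedule review"
    else:
        status = "valid"
    print(f"{record['finding']} (accepted by {record['owner']}): {status}")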

(More on risk acceptance is explained in Chapter 9.)

16.4.3.3 Risk Transference

Risk can be transferred to third parties, and their contract should include risk management. In that case they have to take care of treating the said risk.

16.4.3.4 Risk Avoidance

Organizations can avoid a risk by terminating its source. For example, if a risk is identified on a legacy system which is in the process of being replaced, the organization can choose to shut down the system to avoid the risk.

16.5 Good Practices to Avoid Vulnerabilities

Appendix

Appendix 1: Vulnerability Management Sheet

Appendix 2: Risk Management Sheet

The Risk Management Sheet is a combination of Tables 1, 2 and 3. They have been listed here separately for ease of understanding. The final Risk Management Sheet will look like the tables below:


Table - 1

Table - 2

Table - 3

Appendix 3: Sample Compliance Task List

Appendix 4: Risk Acceptance Form