Proceedings of the International Conference on Cybersecurity, Situational Awareness and Social Media: Cyber Science 2022; 20–21 June; Wales. ISBN 9811964130, 9789811964138


Table of contents :
Cyber Science 2022 Committee
Preface
Sponsors and Partners
Keynote and Industry Panel Speakers
Contents
Contributors
Cyber Threat Intelligence, Ransomware Datasets and Attack Detection
Practical Cyber Threat Intelligence in the UK Energy Sector
1 Introduction
2 Background
2.1 Barriers to CTI Sharing
2.2 Current CTI Sharing in the UK Energy Sector
2.3 Evaluation of Sharing Methods and Platforms
3 Methodology
3.1 Experimental Configuration
3.2 CTI Sharing Models
3.3 Tags and Taxonomies
3.4 Sharing in MISP
4 Verifying Sharing Models
4.1 Source and Subscriber
4.2 Hub and Spoke
4.3 Peer to Peer
4.4 Hybrid
5 Verifying Taxonomies—Events and Tags
6 Discussion
7 Conclusions
7.1 Future Work
References
A Free and Community-Driven Critical Infrastructure Ransomware Dataset
1 Introduction
2 Ransomware Datasets
2.1 Strain-Based Ransomware Datasets
2.2 Detection-Based Ransomware Datasets
2.3 Payment-Based Ransomware Datasets
2.4 The Need for a Critical Infrastructure Ransomware Dataset
3 Critical Infrastructure Ransomware Dataset
3.1 Obtain
3.2 Scrub
3.3 Explore
3.4 Limitations
3.5 Dataset Format, Hosting, and Usage Tracking
4 Requesters and Dataset Use
4.1 Industry
4.2 Government
4.3 Educators
4.4 Students
4.5 Journalists/reporters
5 Incorporating Recommendations to Make the Dataset Community-Driven
5.1 Recommendation 1: Document Modifications (V10.1, August 2020)
5.2 Recommendation 2: MITRE ATT&CK Mapping (V10.1, August 2020)
5.3 Reporting Missing Incidents and the Contributors Tab (V10.4, Oct 2020)
6 Conclusion
6.1 Recommendations that Could not Be Accommodated
References
Criteria for Realistic and Expedient Scenarios for Tabletop Exercises on Cyber Attacks Against Industrial Control Systems in the Petroleum Industry
1 Introduction
2 Background
2.1 Scenario Development
2.2 Characteristics of a Scenario
2.3 Characteristics of a Tabletop Exercise
3 Method
4 Results
4.1 Interview Findings
4.2 List of Criteria
4.3 Example Scenarios
5 Discussion
6 Conclusion
References
CERTs and Maritime Cybersecurity
Exploring the Need for a CERT for the Norwegian Construction Sector
1 Introduction
2 Background
2.1 Challenges Specific to the Construction Sector
2.2 Working Method and Analytical Framework
2.3 Limitations
3 National Frameworks for ICT Security
3.1 Framework for Handling ICT Security Incidents
3.2 Sectoral Response Units
3.3 ICT Security Units (CERT)
3.4 International Collaboration Forums
4 Results from Interviews
4.1 Vulnerabilities
4.2 Incident Management
4.3 Challenges Facing the Industry
4.4 Sector CERT
5 Summary and Conclusions
5.1 The Needs of the Industry
5.2 Organization of an ISAC
References
Holistic Approach of Integrated Navigation Equipment for Cybersecurity at Sea
1 Introduction
2 Architecture of the System
3 Methodology for Data Collection
4 Conclusion
References
Building Maritime Cybersecurity Capacity Against Ransomware Attacks
1 Introduction
2 Related Work
3 Profiling Ransomware Attacks in the Maritime Domain
3.1 Analysis of Recent Ransomware Cases
3.2 Attack Methodology
4 Situation-Aware Training Curriculum: Design Directions
4.1 Audience and Scope
4.2 Learning Objectives
4.3 Learning Content and Pedagogy
5 Training Platform
5.1 High-Level Architecture
5.2 Example of a Training Scenario
6 Conclusions
References
Cyber Situational Awareness Applications
A Decade of Development of Mental Models in Cybersecurity and Lessons for the Future
1 Introduction
2 Folk Metaphors and Formal Models
2.1 Folk Models
2.2 Formal Models
3 Interface Design and Perceptions of Security
3.1 Interface Design
3.2 Perceptions of Security
4 Mental Models in Specific Cybersecurity Domains
4.1 Platform
4.2 Technology
4.3 Type of User
4.4 Social Factors
4.5 Tools
4.6 Applications
5 Discussion
6 Conclusions
References
Exploring the MITRE ATT&CK® Matrix in SE Education
1 Introduction
2 Adversarial Frameworks
2.1 MITRE ATT&CK
3 ATT&CK Mapping Project
3.1 Cybercrime Class Overview
3.2 Project Description
4 Results
4.1 Part 1 (Sub)Techniques Identified and Excerpt Evidence
4.2 Mapping Ratios, Indications, Redesign
5 Discussion
5.1 Lessons Learned
5.2 ATT&CK Matrix Revisions and Extensions
5.3 Biases
5.4 Limitations
6 Conclusion
References
Municipal Cybersecurity—A Neglected Research Area? A Survey of Current Research
1 Introduction
1.1 Research Motivation
2 Methodology
2.1 Selection of Studies
2.2 Inclusion and Exclusion Criteria
2.3 Search Results and Reduction Process
3 Literature Review
3.1 Smart Cities
3.2 Operational Technology
3.3 Elections
3.4 Human Issues and Cybersecurity Awareness
3.5 Crisis Management
3.6 Management and Governance
3.7 Municipal Technology
3.8 Research Methods
4 Discussion
4.1 Thematic Contributions
4.2 Identified Gaps and Need for Research
5 Conclusion
References
Novel and Emerging Cyber Techniques
Near-Ultrasonic Covert Channels Using Software-Defined Radio Techniques
1 Introduction
2 Background
2.1 Related Work
3 Methodology
3.1 Implementation
3.2 Channel Bandwidth
3.3 Testing
4 Results
4.1 Results at 18 kHz
4.2 Results at 20 kHz
4.3 Results at 22 kHz
5 Discussion
5.1 Input Devices
5.2 Output Devices
5.3 Observations
5.4 Countermeasures
6 Conclusion
7 Future Work
References
A Preventative Moving Target Defense Solution for Web Servers Using Iptables
1 Introduction
2 Prior Work
3 Threat Model and Assumptions
4 Methodology
4.1 Design Overview
4.2 Implementation
4.3 Restrictions
4.4 Testing
4.5 Hardware Platforms Used
5 Understanding the Performance
5.1 Iptables Live Modification
5.2 Website Performance Is Degraded
5.3 DIM Temporarily Impacts Performance When Switching Between Web Servers
5.4 Fingerprinting Accuracy Is Decreased as the Frequency of DIM Rotation Increases
6 Discussion
6.1 Modifying Iptables Rules
6.2 Performance Observation
6.3 Security Observation
6.4 Challenges and Future Work
7 Conclusions
References
Hardening Containers with Static and Dynamic Analysis
1 Introduction
2 Review of Literature
2.1 Background on Container Technology
2.2 Container Security
2.3 Hardening of Containers
2.4 Threat Models
2.5 Rule-Based Security Monitoring of Containers
2.6 Software Protection Mechanisms
2.7 Static Analysis
2.8 Dynamic Analysis
3 Approach
3.1 Static Analysis
3.2 Dynamic Analysis
4 Findings and Analysis
4.1 Static Analysis
4.2 Dynamic Analysis
4.3 Hardening Policy
5 Limitations
6 Conclusion
7 Future Scope
References
Efficient Cyber-Evidence Sharing Using Zero-Knowledge Proofs
1 Introduction
2 Previous Work
2.1 Traditional Cyber-Evidence Sharing Approaches
2.2 The Fiat-Shamir ZKP Scheme
3 Design
3.1 Merkle-Trees
3.2 ZKP
3.3 Protocol
4 System Implementation and Evaluation
4.1 Implementation
4.2 Fiat-Shamir Identity Scheme Performance
4.3 Merkle Tree Performance
4.4 Subset Data Verification
5 Related Work
6 Conclusion
References
Artificial Intelligence Applications
Uncertainty and Risk: Investigating Line Graph Aesthetic for Enhanced Cybersecurity Awareness
1 Introduction
2 Cybersecurity Awareness and Data Visualisation
2.1 How to Improve Cybersecurity Awareness?
3 Visualising Uncertainty in Data
3.1 Risk and Uncertainty in Cybersecurity
4 Study
5 Findings and Discussion
6 Conclusions
7 Future Work
References
An Explainable AI Solution: Exploring Extended Reality as a Way to Make Artificial Intelligence More Transparent and Trustworthy
1 Introduction
2 Explainable AI and Methods
3 Extended Reality for Collaboration
4 Augmented and Virtual Reality: Affordances for Transparency and Trustworthiness
5 Development of the XAI XR Solution
5.1 Solution Functionality
5.2 Development of the Web Application
5.3 Development of the VR Environment
6 Solution Evaluation
6.1 Focus Group Study
6.2 Video Presentation Study
6.3 Discussion
7 Recommendations
8 Conclusion
References
A Study on the Development of Crisis Management Compliance by Personal Information Infringement Factors in Artificial Intelligence Service
1 Introduction
2 Overview of Personal Information Infringement Factors and Risk Management Compliance
2.1 Cases of Personal Information Infringement Incidents
2.2 Crisis Management System to Response to Personal Information Infringement
3 Framework and Method of Research
4 Factors of Personal Information Infringement in AI Service and Method of Calculating Infringement Response
4.1 Composition of Personal Information Infringement Factors
4.2 Calculation Method of Personal Information Infringement Risk in AI Service
4.3 Simulation of Estimating and Responding to the Risk of Infringement of Personal Information
4.4 Personal Information Protection Matters of Response to Personal Information Infringement
5 Conclusion
References
Multidisciplinary Cybersecurity—Journalism and Legal Perspectives
Threats to Journalists from the Consumer Internet of Things
1 Introduction
2 Related Work
2.1 IoT and Journalism Threat Modelling
2.2 IoT Privacy Threats
2.3 IoT Material Threats
3 Methods
3.1 Research Questions
3.2 Literature Synthesis
3.3 Analysis
3.4 Category Creation
3.5 Threat Modelling
4 Categorisation
4.1 Regulatory Gaps
4.2 Legal Threats
4.3 Profiling Threats
4.4 Tracking Threats
4.5 Data and Device Modification Threats
4.6 Networked Devices Threats
5 Discussion
5.1 What Are the Distinctive and Novel Threats to Journalism from the Consumer IoT?
5.2 How Can We Categorise These Threats in a Way That Is Easily Comprehensible by Journalists?
5.3 Challenging Areas and Known Unknowns
5.4 Future Work
6 Conclusion
References
The Transnational Dimension of Cybersecurity: The NIS Directive and Its Jurisdictional Challenges
1 Introduction
2 The Jurisdictional Regime of the NIS Directive
2.1 Regulatory Jurisdiction Over Operators of Essential Services (OES)
2.2 Regulatory Jurisdiction Over Digital Service Providers (DSPs)
3 The Jurisdictional Regime of the NIS 2 Proposal
3.1 Negotiations Between Co-legislators
3.2 Stakeholder Consultations
4 A Comparative Review of Other One-Stop-Shop Mechanisms Based on ‘Main Establishment’
4.1 The GDPR One-Stop-Shop Mechanism
4.2 The DSA Proposal One-Stop-Shop Mechanism
5 Recapitulation and Concluding Remarks
References
Refining the Mandatory Cybersecurity Incident Reporting Under the NIS Directive 2.0: Event Types and Reporting Processes
1 Introduction
2 Reporting Obligations from NIS Directive to NIS 2.0 Proposal
2.1 Reporting Framework Under the NIS Directive
2.2 Reporting Framework Under the NISD 2.0 Proposal
2.3 The European Parliament’s Position on the Reporting Framework Envisaged Under the NISD 2.0 Proposal
2.4 The European Council’s Compromise on the Reporting Framework Envisaged Under the NISD 2.0 Proposal
2.5 How the NISD 2.0 Proposal Responds to the Flaws of the NISD
3 Conclusion
References
Multidisciplinary Cybersecurity—Healthcare, IoT and National Perspectives
Assessing Cyber-Security Readiness of Nigeria to Industry 4.0
1 Introduction
2 Review of Related Works
3 Industry 4.0 and Cyber-Security
3.1 Threat Sophistication, Landscapes and Industry 4.0
3.2 Cyber-Attack Sophistication: AI-Based Attacks
3.3 Cyber-Attack Sophistication: Bio-inspired Attacks
4 Intangible Assets and Knowledge-Based Economy of Industry 4.0
5 Education Curriculum and Industry 4.0
6 Method and Survey Materials
7 Analysis, Results and Discussions
8 Conclusion and Recommendations
Appendix
References
Case Studies in the Socio-technical Analysis of Cybersecurity Incidents: Comparing Attacks on the UK NHS and Irish Healthcare Systems
1 Introduction
2 The STAMP Approach on the NHS WannaCry Cyberincident
3 The STAMP Approach on the Irish Healthcare Cyberincident
4 Common Areas of Concerns and Difference Between Case Studies
5 STAMP Control Structures for Flaws in Case Studies
6 Integrating NIST Control Taxonomies to STAMP Approach
7 Conclusions
References
A Study on the Impact of Gender, Employment Status, and Academic Discipline on Cyber-Hygiene: A Case Study of University of Nigeria, Nsukka
1 Introduction
2 Literature Review
3 Materials and Methods
3.1 Survey Participants
3.2 Research Goal and Procedure
3.3 Research Hypothesis
3.4 Data Analysis
4 Result
4.1 Descriptive Statistics of Demographics
4.2 Descriptive Statistics of Cyber-Hygiene Knowledge Concept and Threats
4.3 Descriptive Statistics of Cyber-Hygiene Culture
4.4 Demographics and Cyber-Hygiene Culture
5 Discussion
6 Conclusion and Future Work
References
Coexisting Blockchain and Machine Learning
Blockchain-Based Cardinal E-Voting System Using Biometrics, Watermarked QR Code and Partial Homomorphic Encryption
1 Introduction
1.1 Related Works
1.2 Objectives and Contributions
2 Mathematical Background
2.1 Discrete Wavelet Transform (DWT)
2.2 Blockchain Technology
2.3 ElGamal Homomorphic Cryptosystem
2.4 Cardinal Voting
2.5 Range Proof of Knowledge
3 Scheme Methodology
3.1 Voter Authentication
3.2 Authorized Cardinal E-Voting
4 Security Analysis
5 Performance Analysis
5.1 Theoretical Analysis
5.2 Experimental Analysis
5.3 Usability Analysis
6 Future Work and Conclusion
References
Neutralizing Adversarial Machine Learning in Industrial Control Systems Using Blockchain
1 Introduction
2 System Architecture
2.1 Layer 1: VNWTS Testbed
2.2 Layer 2: Data Management
2.3 Layer 3: ML Training Engine
2.4 Layer 4: Blockchain Virtual Machine
2.5 Layer 5: ML Testing Engine
2.6 Layer 6: Interface
3 Architecture Overview
4 Experimental Results
4.1 Adversarial Machine Learning
4.2 Blockchain-Based Evaluation
5 Conclusion and Future Work
References
A User-Centric Evaluation of Smart Contract Analysis Tools in Decentralised Finance (DeFi)
1 Introduction
2 Related Work
2.1 Smart Contract Vulnerabilities
2.2 Decentralised Finance (DeFi) and Related Smart Contract Attacks
2.3 Smart Contract Analysis Tools
3 Research Approach
3.1 Methodology Used for Selecting Tools
3.2 Survey
3.3 Evaluated Tools
3.4 Vulnerabilities Dataset
4 Evaluation of Results
4.1 Evaluation Criteria
4.2 Effectiveness—Test Results
4.3 Accuracy—False Positives
4.4 Test Results and Tools’ Rating
5 Conclusion
Appendix—Rating Criteria
References

Springer Proceedings in Complexity

Cyril Onwubiko · Pierangelo Rosati · Aunshul Rege · Arnau Erola · Xavier Bellekens · Hanan Hindy · Martin Gilje Jaatun   Editors

Proceedings of the International Conference on Cybersecurity, Situational Awareness and Social Media Cyber Science 2022; 20–21 June; Wales

Springer Proceedings in Complexity

Springer Proceedings in Complexity publishes proceedings from scholarly meetings on all topics relating to the interdisciplinary studies of complex systems science. Springer welcomes book ideas from authors. The series is indexed in Scopus. Proposals must include the following:

• name, place and date of the scientific meeting
• a link to the committees (local organization, international advisors etc.)
• scientific description of the meeting
• list of invited/plenary speakers
• an estimate of the planned proceedings book parameters (number of pages/articles, requested number of bulk copies, submission deadline).

Submit your proposals to: [email protected].

Cyril Onwubiko · Pierangelo Rosati · Aunshul Rege · Arnau Erola · Xavier Bellekens · Hanan Hindy · Martin Gilje Jaatun Editors

Proceedings of the International Conference on Cybersecurity, Situational Awareness and Social Media Cyber Science 2022; 20–21 June; Wales

Editors Cyril Onwubiko Research Series Ltd London, UK

Pierangelo Rosati Dublin City University Dublin, Ireland

Aunshul Rege Department of Criminal Justice Temple University Philadelphia, PA, USA

Arnau Erola Department of Computer Science University of Oxford Oxford, UK

Xavier Bellekens Lupovis Glasgow, UK

Hanan Hindy Department of Computer Science Ain Shams University Cairo, Egypt

Martin Gilje Jaatun University of Stavanger Stavanger, Norway

ISSN 2213-8684 ISSN 2213-8692 (electronic)
Springer Proceedings in Complexity
ISBN 978-981-19-6413-8 ISBN 978-981-19-6414-5 (eBook)
https://doi.org/10.1007/978-981-19-6414-5

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023

Chapters “Municipal Cybersecurity—A Neglected Research Area? A Survey of Current Research”, “The Transnational Dimension of Cybersecurity: The NIS Directive and Its Jurisdictional Challenges” and “Refining the Mandatory Cybersecurity Incident Reporting under the NIS Directive 2.0: Event Types and Reporting Processes” are licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). For further details see license information in the chapters.

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Cyber Science 2022 Theme
Ethical and Responsible use of Artificial Intelligence

Cyber Science 2022 Committee

Committee Chairs
Arnau Erola—Department of Computer Science, University of Oxford, Oxford, UK
Aunshul Rege—Temple University, Pennsylvania, USA
Cyril Onwubiko—Centre for Multidisciplinary Research, Innovation and Collaboration, UK
Hanan Yousry Hindy—Computer Science Department, Faculty of Computer and Information Sciences, Ain Shams University
Martin Gilje Jaatun—University of Stavanger, Norway
Pierangelo Rosati—Dublin City University, Dublin, Ireland
Xavier Bellekens—Lupovis

Publicity Chairs
Eckhard Pfluegel—Engineering and Computing, Kingston University, UK
Joe Burton—The University of Waikato, New Zealand
Kevin Curran—Ulster University, Northern Ireland, UK
Phil Legg—The University of the West of England, UK
Uri Blumenthal—MIT Lincoln Laboratory, MIT, USA

Organising Committee
Abdelrahman Abuarqoub—Cardiff School of Technologies (CST), Cardiff Metropolitan University, Wales, UK
Arslan Ahmad—Cardiff School of Technologies (CST), Cardiff Metropolitan University, Wales, UK


Chaminda Hewage—Cardiff School of Technologies (CST), Cardiff Metropolitan University, Wales, UK
Elochukwu Ukwandu—Cardiff School of Technologies (CST), Cardiff Metropolitan University, Wales, UK
Fiona Carroll—Cardiff School of Technologies (CST), Cardiff Metropolitan University, Wales, UK
Jasim Uddin—Cardiff School of Technologies (CST), Cardiff Metropolitan University, Wales, UK
Liqaa Nawaf—Cardiff School of Technologies (CST), Cardiff Metropolitan University, Wales, UK
Mohammad Safar—Cardiff School of Technologies (CST), Cardiff Metropolitan University, Wales, UK
Rajkumar Singh Rathore—Cardiff School of Technologies (CST), Cardiff Metropolitan University, Wales, UK
Tahir Sheikh Bakhsh—Cardiff School of Technologies (CST), Cardiff Metropolitan University, Wales, UK
Thanuja Mallikarachchi—Cardiff School of Technologies (CST), Cardiff Metropolitan University, Wales, UK

Programme Committee
Amin Hosseinian-Far—Business Systems and Operations, University of Northampton, UK
Antonis Mouhtaropoulos—Department of Computer Science, University of Warwick, Coventry, UK
Arghir-Nicolae Moldovan—National College of Ireland (NCIRL), Ireland
Arnau Erola—Cyber Security Centre, University of Oxford, Oxford, UK
Aunshul Rege—Temple University, Pennsylvania, USA
Avishek Nag—University College Dublin, Ireland
Bertrand Venard—Audencia, OII, University of Oxford, UK
Boniface K. Alese—Department of Computer Science, Federal University of Technology, Akure, Nigeria
Carlos A. Perez Delgado—School of Computing, University of Kent, UK
Charles Clarke—Kingston University, London, UK
Ciza Thomas—College of Engineering, India
David Brosset—Naval Academy Research Institute, France
Dimitris Kavallieros—Center for Security Studies (KEMEA), Greece
Domhnall Carlin—Queen’s University (QUB), Belfast, Northern Ireland, UK
Edwin K. Kairu—Carnegie Mellon University, CMU Africa
Eliana Stavrou—Computing Department, UCLan Cyprus, Larnaca, Cyprus
Elisavet Konstantinou—University of the Aegean, Greece
Fatih Kurugollu—Cyber Security, University of Derby, Derby, UK
Felix Heine—Hannover University of Applied Sciences, Germany


Filippo Sanfilippo—University of Agder, UIA, Norway
Florian Skopik—Cyber Security Research, AIT Austrian Institute of Technology, Austria
Francisco J. Aparicio Navarro—Cyber Technology Institute, De Montfort University, UK
Frank Wang—Kent University, UK
Georgios Kambourakis—University of the Aegean, Greece
Gerardo I. Simari—Universidad Nacional del Sur in Bahia Blanca and CONICET, Argentina
Harin Sellahewa—School of Computing, The University of Buckingham, UK
Hasan Yasar—Division of the Software Engineering Institute, Carnegie Mellon University, USA
Hayretdin Bahsi—Center for Digital Forensics and Cyber Security, Tallinn University of Technology, Estonia
He (Mary) Hongmei—School of Computer Science and Informatics at De Montfort University, UK
Huiyu Zhou—Queen’s University Belfast, Belfast, UK
Ivan Silva—Instituto Metrópole Digital (IMD), Federal University of Rio Grande do Norte (UFRN), Brazil
Jason Nurse—University of Kent, UK
Jens Myrup Pedersen—University of Aalborg, Denmark
Jingyue Li—Department of Computer Science, Faculty of Information Technology and Electrical Engineering, NTNU, Norway
Joe Burton—Political Science and Public Policy Programme, The University of Waikato, New Zealand
Kevin Curran—Faculty of Computing and Engineering, Ulster University, Northern Ireland, UK
Kim Tam—Plymouth University, UK
Kostas Kyriakopoulos—Digital Communications, Loughborough University, Leicestershire, UK
Kumar Bandeli—Data Science, Walmart Inc., USA
Lakshmi Prayaga—Department of Applied Science, Technology and Administration, University of West Florida, USA
Lynsay Shepherd—Abertay University, Dundee, Scotland, UK
Maria Bada—Queen Mary University of London, UK
Marios Anagnostopoulos—University of the Aegean, Greece
Martin Gilje Jaatun—University of Stavanger, Norway
Michał Choraś—Telecommunications and Computer Science, University of Science and Technology (UTP), Bydgoszcz, Poland
Michalis Diamantaris—Institute of Computer Science, Foundation for Research and Technology (FORTH), Greece
Nicholas Savage—Portsmouth University, UK
Palvi Aggarwal—Carnegie Mellon University (CMU), USA
Panagiotis Trimintzios—Cyber Crisis Cooperation & Exercises Team Operational Security, ENISA, Europe


Petra Leimich—Edinburgh Napier University, Edinburgh, Scotland, UK
Phil Legg—The University of the West of England, UK
Philipp Reinecke—Cardiff University, Wales, UK
Pierre Parrend—ECAM Strasbourg-Europe, France
Ruth Ikwu—Cardiff University, Wales, UK
Sean McKeown—Edinburgh Napier University, Scotland, UK
Shamal Faily—Cyber Security Research group (BUCSR), Bournemouth University, UK
Stefanos Gritzalis—University of the Aegean, Greece
Suleiman Yerima—Cyber Security, De Montfort University, UK
Susan Rea—Nimbus Centre at Cork Institute of Technology, Ireland
Thaddeus Eze—Computer Science Department, University of Chester, UK
Thomas Pasquier—Department of Computer Science, University of Bristol, UK
Thomas T. Y. Win—School of Business & Technology, University of Gloucestershire, UK
Tim D. Williams—University of Reading, Reading, UK
Ulrik Franke—Software and Systems Engineering Laboratory (SSE), RI.SE, Sweden
Uri Blumenthal—MIT Lincoln Laboratory, MIT, USA
Uwe Glässer—School of Computing Science, Simon Fraser University, Canada
Vanderson de Souza Sampaio—Fundação de Medicina Tropical Dr. Heitor Vieira Dourado, Brazil
Varun Dutt—Indian Institute of Technology (IIT) Mandi, India
Vasil Vassilev—School of Computing, London Metropolitan University, UK
Zisis Tsiatsikas—University of the Aegean, Karlovassi, Greece

Preface

Cyber Science is the flagship conference of the Centre for Multidisciplinary Research, Innovation and Collaboration (C-MRiC), a multidisciplinary conference focusing on pioneering research and innovation in Cyber Situational Awareness, Social Media, Cyber Security and Cyber Incident Response. Cyber Science aims to encourage participation and promotion of collaborative scientific, industrial and academic interworking among individual researchers, practitioners, members of existing associations, academia, standardisation bodies, and government departments and agencies. The purpose is to build bridges between academia and industry and to encourage the interplay of different cultures. Cyber Science invites researchers and industry practitioners to submit papers that encompass principles, analysis, design, methods and applications. It is an annual conference, with the aim of being held in different cities and countries in future years.

Cyber Science, as a multidisciplinary event, is maturing into a mainstream and notable conference: first, for its quality; second, for its uniqueness; and finally, for its structure, contribution and originality, something existing mainstream conferences do not normally possess. This is a testament to the significant interest Cyber Science has gained so far.

In 2015, the first Cyber Science conference was held on June 8–9, 2015, at the Hotel Russell in Central London, UK. The conference was opened by the IEEE UK and Ireland Computer Society Chair, Professor Frank Wang, and featured four keynote speakers from academia, government and industry.

In 2016, the second episode of the Cyber Science conference was held on June 13–14, 2016, at the Holiday Inn in Mayfair, London, UK. The conference was opened by the Chair, IEEE UK and Ireland, Prof. Ali Hessami, and featured six keynote speakers from academia, government and industry.

In 2017, the third episode of the Cyber Science conference, in partnership with Abertay University, was held on June 19–20, 2017, at the Grand Connaught Rooms in Central London, UK. The conference was opened by the Secretary, IEEE UK and Ireland, Dr. Cyril Onwubiko, and featured eight keynote speakers from academia, government and industry.


In 2018, the fourth episode of the Cyber Science conference, in partnership with Abertay University and the University of Oxford, was held on June 11–12, 2018, at the Grand Central Hotel, Glasgow, Scotland. The conference was opened by the Minister for Public Health, Cabinet Secretary for Justice and Member of the Scottish Parliament, Mr. Michael Matheson, MSP, and featured six keynote speakers from academia, government and industry. At the conference, a workshop organised by the Computer Science Department, University of Oxford, Oxford, UK, on Cyber Insurance and Risk Controls (CIRC 2018) was co-located with the CyberSA 2018 conference.

In 2019, the fifth episode of the Cyber Science conference, in partnership with the University of Oxford, University of Derby and SINTEF Digital, Norway, was held on June 3–4, 2019, at the Department of Computer Science, University of Oxford, Wolfson Building, Parks Road, Oxford OX1 3QD, United Kingdom. The conference was opened by the Chair of the IEEE UK and Ireland Section, Professor Mike Hinchey. A workshop organised by the Computer Science Department, University of Oxford, Oxford, UK, on Cyber Insurance and Risk Controls (CIRC 2019) was co-located with the CyberSA 2019 conference, while a workshop organised by SINTEF Digital on Secure Software Engineering in DevOps and Agile Development (SecSE 2019) was co-located with the Cyber Security 2019 conference.

In 2020, the sixth episode of the Cyber Science conference, in partnership with Dublin City University, was held online (virtual) on June 15–19, 2020, due to the COVID-19 pandemic, as the conference was initially planned to be held at Dublin City University, Dublin, Ireland. The conference was opened by Dr. Cyril Onwubiko, IEEE Computer Society Distinguished Speaker. At the conference, a workshop organised by the Computer Science Department, University of Oxford, Oxford, UK, on Cyber Insurance and Risk Controls (CIRC 2020) was co-located with the CyberSA 2020 conference.

In 2021, the seventh episode of the Cyber Science conference, in partnership with Dublin City University, was held online (virtual) on June 14–18, 2021, again due to the COVID-19 pandemic. The conference was officially opened through a welcome address delivered by the President of Dublin City University, Prof. Daire Keogh.

In 2022, the eighth episode of the Cyber Science conference, in partnership with Cardiff Metropolitan University, Wales, was held as a hybrid event (in-person and online) on June 20–21, 2022. The conference was officially opened by the Dean of the School of Technologies, Cardiff Metropolitan University, Wales, Prof. Jon Platts.

This Cyber Science 2022 conference proceedings volume is a compilation of peer-reviewed and accepted papers submitted to the conference. The conference invited four keynote speakers who spoke at the event, namely:

1. Professor William (Bill) J. Buchanan OBE—Professor, School of Computing, Edinburgh Napier University, Scotland, UK;
2. Dr. Maria Bada—Lecturer in Psychology, Queen Mary University of London, UK;
3. Navrina Singh—Founder & CEO, Credo AI, USA;
4. John Davies MBE—Co-founder & Chair of Cyber Wales, UK.


I would like to thank the organisers, Cardiff Metropolitan University, Wales, with special thanks to their academics, Dr. Chaminda Hewage and Dr. Elochukwu Ukwandu, who helped with organising the event and performed several tasks, including moderating and chairing sessions. I am grateful to the Programme Committee members and the other conference reviewers for graciously contributing their time and assuring a scholarly, fair and rigorous review process. I would also like to thank Springer for agreeing to publish Cyber Science 2022 in their Springer Proceedings in Complexity book series. I would like to thank Dr. Hanan Hindy, Prof. Aunshul Rege, Dr. Pierangelo Rosati, Dr. Arnau Erola, Dr. Xavier Bellekens and Prof. Martin Jaatun for helping in many ways with the conference. Their continued support over the years has allowed me time to focus on the strategic vision of the conference. Finally, I would like to thank the authors of the papers and the delegates present at the event; there would be no conference without you!

London, UK

Cyril Onwubiko, B.Sc., M.Sc., Ph.D.
Cyber Science 2022 Conference Chair

Sponsors and Partners


Keynote and Industry Panel Speakers

Professor William (Bill) J. Buchanan OBE—Professor, School of Computing, Edinburgh Napier University, Scotland, UK

William (Bill) J. Buchanan OBE is a Professor in the School of Computing at Edinburgh Napier University, a Fellow of the BCS and a Principal Fellow of the HEA. He was appointed as an Officer of the Order of the British Empire (OBE) in the 2017 Birthday Honours for services to cybersecurity. Bill currently leads the Blockpass ID Lab and the Centre for Cybersecurity and Cryptography. He works in the areas of blockchain, cryptography, trust and digital identity. He has one of the most extensive cryptography sites in the world (asecuritysite.com) and is involved in many areas of novel research and teaching. He has published over 30 academic books and over 350 academic research papers. Along with this, Bill’s work has led to many areas of impact, including three highly successful spin-out companies (Zonefox, Symphonic Software and Cyan Forensics), along with awards for excellence in knowledge transfer and for teaching. Bill recently received an “Outstanding Contribution to Knowledge Exchange” award and was included in the FutureScot “Top 50 Scottish Tech People Who Are Changing The World”.

Dr. Maria Bada—Lecturer in Psychology at Queen Mary University of London, UK

Dr. Maria Bada is a Lecturer in Psychology at Queen Mary University in London, a visiting lecturer at Royal Holloway, University of London, and a RISCS Fellow in cybercrime. Her research focuses on the human aspects of cybercrime and cybersecurity, such as profiling online offenders, studying their psychologies and pathways towards online deviance, as well as the ways to combat cybercrime through tools and capacity building.


She is a member of the National Risk Assessment (NRA) Behavioural Science Expert Group in the UK, working on the social and psychological impact of cyberattacks on members of the public. She has a background in cyberpsychology, and she is a member of the British Psychological Society and the National Counselling Society.

Navrina Singh—Founder & CEO at Credo AI, USA

Navrina Singh is the Founder & CEO at Credo AI, USA. Credo AI is an end-to-end governance platform for managing compliance and measuring risk from AI deployments at scale. Navrina is also a board member at Mozilla, a Young Global Leader at the World Economic Forum, and formerly of Microsoft and Qualcomm. While her remit is vast, she is laser focused on ethical and responsible AI. You can reach her at www.credo.ai, @navrinasingh and @CredoAI.

John Davies MBE—Co-founder & Chair of Cyber Wales, UK

John Davies MBE is the Co-founder & Chair of Cyber Wales, UK, the largest cyber security ecosystem in the UK. John has also chaired the Wales Cyber Resilience Board, a Welsh Government Steering Committee working with the National Cyber Security Centre to enhance cyber resilience across Welsh Public Sector organisations and providing policy and best practice advice for the Private Sector (www.cyberwatching.eu).

Contents

Cyber Threat Intelligence, Ransomware Datasets and Attack Detection

Practical Cyber Threat Intelligence in the UK Energy Sector . . . 3
Alan Paice and Sean McKeown

A Free and Community-Driven Critical Infrastructure Ransomware Dataset . . . 25
Aunshul Rege and Rachel Bleiman

Criteria for Realistic and Expedient Scenarios for Tabletop Exercises on Cyber Attacks Against Industrial Control Systems in the Petroleum Industry . . . 39
Andrea Skytterholm and Guro Hotvedt

CERTs and Maritime Cybersecurity

Exploring the Need for a CERT for the Norwegian Construction Sector . . . 57
Andrea Neverdal Skytterholm and Martin Gilje Jaatun

Holistic Approach of Integrated Navigation Equipment for Cybersecurity at Sea . . . 75
Clet Boudehenn, Jean-Christophe Cexus, Ramla Abdelkader, Maxence Lannuzel, Olivier Jacq, David Brosset, and Abdel Boudraa

Building Maritime Cybersecurity Capacity Against Ransomware Attacks . . . 87
Georgios Potamos, Savvas Theodoulou, Eliana Stavrou, and Stavros Stavrou


Cyber Situational Awareness Applications

A Decade of Development of Mental Models in Cybersecurity and Lessons for the Future . . . 105
Robert Murimi, Sandra Blanke, and Renita Murimi

Exploring the MITRE ATT&CK® Matrix in SE Education . . . 133
Rachel Bleiman, Jamie Williams, Aunshul Rege, and Katorah Williams

Municipal Cybersecurity—A Neglected Research Area? A Survey of Current Research . . . 151
Arnstein Vestad and Bian Yang

Novel and Emerging Cyber Techniques

Near-Ultrasonic Covert Channels Using Software-Defined Radio Techniques . . . 169
R. Sherry, E. Bayne, and D. McLuskie

A Preventative Moving Target Defense Solution for Web Servers Using Iptables . . . 191
Cimone Wright-Hamor, Steffanie Bisinger, Jeffrey Neel, Benjamin Blakely, and Nathaniel Evans

Hardening Containers with Static and Dynamic Analysis . . . 207
Sachet Rajat Kumar Patil, Neil John, Poorvi Sameep Kunja, Anushka Dwivedi, S. Suganthi, and Prasad B. Honnnavali

Efficient Cyber-Evidence Sharing Using Zero-Knowledge Proofs . . . 229
Arman Zand and Eckhard Pfluegel

Artificial Intelligence Applications

Uncertainty and Risk: Investigating Line Graph Aesthetic for Enhanced Cybersecurity Awareness . . . 245
Joel Pinney and Fiona Carroll

An Explainable AI Solution: Exploring Extended Reality as a Way to Make Artificial Intelligence More Transparent and Trustworthy . . . 255
Richard Wheeler and Fiona Carroll

A Study on the Development of Crisis Management Compliance by Personal Information Infringement Factors in Artificial Intelligence Service . . . 277
Young Jin Shin


Multidisciplinary Cybersecurity—Journalism and Legal Perspectives

Threats to Journalists from the Consumer Internet of Things . . . 303
Anjuli R. K. Shere, Jason R. C. Nurse, and Andrew Martin

The Transnational Dimension of Cybersecurity: The NIS Directive and Its Jurisdictional Challenges . . . 327
Paula Contreras

Refining the Mandatory Cybersecurity Incident Reporting Under the NIS Directive 2.0: Event Types and Reporting Processes . . . 343
Sandra Schmitz-Berndt

Multidisciplinary Cybersecurity—Healthcare, IoT and National Perspectives

Assessing Cyber-Security Readiness of Nigeria to Industry 4.0 . . . 355
Elochukwu Ukwandu, Ephraim N. C. Okafor, Charles Ikerionwu, Comfort Olebara, and Celestine Ugwu

Case Studies in the Socio-technical Analysis of Cybersecurity Incidents: Comparing Attacks on the UK NHS and Irish Healthcare Systems . . . 375
Joseph Kaberuka and Christopher Johnson

A Study on the Impact of Gender, Employment Status, and Academic Discipline on Cyber-Hygiene: A Case Study of University of Nigeria, Nsukka . . . 389
Celestine Ugwu, Modesta Ezema, Uchenna Ome, Lizzy Ofusori, Comfort Olebera, and Elochukwu Ukwandu

Coexisting Blockchain and Machine Learning

Blockchain-Based Cardinal E-Voting System Using Biometrics, Watermarked QR Code and Partial Homomorphic Encryption . . . 411
Aniket Agrawal, Kamalakanta Sethi, and Padmalochan Bera

Neutralizing Adversarial Machine Learning in Industrial Control Systems Using Blockchain . . . 437
Naghmeh Moradpoor, Masoud Barati, Andres Robles-Durazno, Ezra Abah, and James McWhinnie

A User-Centric Evaluation of Smart Contract Analysis Tools in Decentralised Finance (DeFi) . . . 453
Gonzalo Faura, Cezary Siewiersky, and Irina Tal

Contributors

Ezra Abah Edinburgh Napier University, Edinburgh, UK Ramla Abdelkader LAB-STICC, UMRCNRS6285, Brest Cedex 9, France Aniket Agrawal Indian Institute of Technology, Bhubaneswar, India Masoud Barati Newcastle University, Newcastle Upon Tyne, UK E. Bayne Division of Cybersecurity, Abertay University, Dundee, Scotland Padmalochan Bera Indian Institute of Technology, Bhubaneswar, India Steffanie Bisinger Office of the Inspector General, New York, USA Benjamin Blakely Argonne National Laboratory, Lemont, USA Sandra Blanke University of Dallas, Irving, TX, USA Rachel Bleiman Temple University, Philadelphia, PA, USA Clet Boudehenn Naval Academy Research Institute (lRENav), Arts Et Metiers Institute of Technology, Brest, France Abdel Boudraa Naval Academy Research Institute (lRENav), Arts Et Metiers Institute of Technology, Brest, France David Brosset Naval Academy Research Institute (lRENav), Arts Et Metiers Institute of Technology, Brest, France Fiona Carroll School of Technologies, Cardiff Metropolitan University, Cardiff, UK Jean-Christophe Cexus LAB-STICC, UMRCNRS6285, Brest Cedex 9, France Paula Contreras SnT, University of Luxembourg, Esch-Sur-Alzette, Luxembourg Anushka Dwivedi PES University, Bengaluru, KA, India Nathaniel Evans Argonne National Laboratory, Lemont, USA


Modesta Ezema Department of Computer Science, University of Nigeria, Nsukka, Nigeria Gonzalo Faura School of Computing, Dublin City University, Dublin, Ireland Prasad B. Honnnavali PES University, Bengaluru, KA, India Guro Hotvedt Watchcom, Oslo, Norway Charles Ikerionwu Department of Software Engineering, School of Information and Communication Technology, Federal University of Technology, Owerri, Imo State, Nigeria Martin Gilje Jaatun SINTEF Digital, Trondheim, Norway; University of Stavanger, Stavanger, Norway Olivier Jacq France Cyber Maritime Association, Brest, France Neil John PES University, Bengaluru, KA, India Christopher Johnson School of Computing Science, University of Glasgow, Glasgow, UK Joseph Kaberuka School of Computing Science, University of Glasgow, Glasgow, UK Poorvi Sameep Kunja PES University, Bengaluru, KA, India Maxence Lannuzel Naval Academy Research Institute (lRENav), Arts Et Metiers Institute of Technology, Brest, France Andrew Martin Department of Computer Science, University of Oxford, Oxford, UK Sean McKeown Edinburgh Napier University, Edinburgh, UK D. McLuskie Division of Cybersecurity, Abertay University, Dundee, Scotland James McWhinnie Edinburgh Napier University, Edinburgh, UK Naghmeh Moradpoor Edinburgh Napier University, Edinburgh, UK Renita Murimi University of Dallas, Irving, TX, USA Robert Murimi University of Dallas, Irving, TX, USA Jeffrey Neel Argonne National Laboratory, Lemont, USA Jason R. C. Nurse School of Computing, University of Kent, Canterbury, UK Lizzy Ofusori School of Management, Information Technology and Governance, University of KwaZulu-Natal, Durban, South Africa Ephraim N. C. Okafor Department of Electrical and Electronic Engineering, School of Engineering and Engineering Technology, Federal University of Technology, Owerri, Imo State, Nigeria


Comfort Olebara Department of Computer Science, Faculty of Physical Science, Imo State University, Owerri, Imo State, Nigeria Uchenna Ome Department of Computer Science, University of Nigeria, Nsukka, Nigeria Alan Paice EDF, London, UK Sachet Rajat Kumar Patil PES University, Bengaluru, KA, India Eckhard Pfluegel Kingston University, Kingston upon Thames, UK Joel Pinney Cardiff Metropolitan University, Cardiff, Wales Georgios Potamos Open University of Cyprus, Nicosia, Cyprus Aunshul Rege Temple University, Philadelphia, PA, USA Andres Robles-Durazno Edinburgh Napier University, Edinburgh, UK Sandra Schmitz-Berndt SnT, Université du Luxembourg, Esch-Sur-Alzette, Luxembourg Kamalakanta Sethi Indian Institute of Information Technology, Sri City, India Anjuli R. K. Shere Department of Computer Science, University of Oxford, Oxford, UK R. Sherry Division of Cybersecurity, Abertay University, Dundee, Scotland Young Jin Shin Pai Chai University, Daejeon, Korea Cezary Siewiersky School of Computing, Dublin City University, Dublin, Ireland Andrea Skytterholm SINTEF Digital, Trondheim, Norway Andrea Neverdal Skytterholm SINTEF Digital, Trondheim, Norway Eliana Stavrou Applied Cyber Security Research Lab, University of Central Lancashire Cyprus, Larnaca, Cyprus Stavros Stavrou Open University of Cyprus, Nicosia, Cyprus S. Suganthi PES University, Bengaluru, KA, India Irina Tal Lero, School of Computing, Dublin City University, Dublin, Ireland Savvas Theodoulou Open University of Cyprus, Nicosia, Cyprus Celestine Ugwu Department of Computer Science, Faculty of Physical Science, University of Nigeria, Nsukka, Enugu State, Nigeria Elochukwu Ukwandu Department of Applied Computing and Engineering, Cardiff School of Technologies, Cardiff Metropolitan University, Cardiff Wales, UK Arnstein Vestad NTNU, Norwegian University of Science and Technology, Trondheim, Norway


Richard Wheeler School of Technologies, Cardiff Metropolitan University, Cardiff, UK Jamie Williams The MITRE Corporation, Bedford, USA Katorah Williams Temple University, Philadelphia, PA, USA Cimone Wright-Hamor Pacific Northwest National Laboratory, Richland, USA Bian Yang NTNU, Norwegian University of Science and Technology, Trondheim, Norway Arman Zand Kingston University, Kingston upon Thames, UK

Cyber Threat Intelligence, Ransomware Datasets and Attack Detection

Practical Cyber Threat Intelligence in the UK Energy Sector

Alan Paice and Sean McKeown

Abstract The UK energy sector is a prime target for cyber-attacks by foreign states, criminals, ‘hacktivist’ groups, and terrorists. As Critical National Infrastructure (CNI), the industry needs to understand the threats it faces to mitigate risks and make efficient use of limited resources. Cyber Threat Intelligence (CTI) sharing is one means of achieving this, by leveraging sector-wide knowledge to combat ongoing mutual threats. However, being unable to segregate intelligence or to control what is disseminated to which parties, and by which means, has impeded industry cooperation thus far. The purpose of this study is to investigate the barriers to sharing and to add to the body of knowledge of CTI in the UK energy sector, while providing some level of assurance that existing tooling is fit-for-purpose. We achieve these aims by conducting a multivocal literature review and by experimentation using a simulated Malware Information Sharing Platform (MISP) community in a virtual environment. This work demonstrates that trust can be placed in the open-source MISP platform, with the caveat that the sharing models and tooling limitations are understood, while also taking care to create appropriate deployment taxonomies and sharing rules. It is hoped that some of the identified barriers are partially alleviated, helping to lay the foundations for a UK Energy sector CTI sharing community.

Keywords Cyber Threat Intelligence · CTI · Information Sharing · Cybersecurity · Situational awareness

A. Paice
EDF, London, UK
e-mail: [email protected]

S. McKeown (B)
Edinburgh Napier University, Edinburgh, UK
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
C. Onwubiko et al. (eds.), Proceedings of the International Conference on Cybersecurity, Situational Awareness and Social Media, Springer Proceedings in Complexity, https://doi.org/10.1007/978-981-19-6414-5_1

1 Introduction

The UK energy sector is classed as Critical National Infrastructure (CNI) [39] and is therefore vital to UK national security, making it a high-profile target for cyber adversaries. Over the last decade, a persistent threat of cyber espionage from hostile state actors (HSAs) towards the UK energy sector has been observed. Links between hostile state actors and cyber criminals are also growing, with HSAs reportedly tasking hacking groups with malicious cyber activity, such as data theft, on their behalf.

The growing sophistication of the threat against the UK energy sector has led to a rethink of the siloed defence of critical assets. With an inside view of the sector, it was noted that there is currently a duplication of effort and time when disseminating new threats, which could allow an adversary to attack several organisations in turn, probing their defences and potentially exploiting a shared weakness. These issues indicate that there may be potential to improve the collective defence of the industry as a whole. By engaging in Cyber Threat Intelligence (CTI) sharing activities, organisations can inform each other of cyber incidents in near real time, allowing for the timely deployment of countermeasures, and reporting organisations could then expect the same service in return. Although sharing threats discovered in one organisation with its peers in the same sector appears to be a logical step, the practice has not yet seen widespread adoption.

Information and intelligence on threats to the UK energy sector are obtained from open sources, from proprietary vendors, via alerts sent by the National Cyber Security Centre (NCSC) and other agencies, and through internal monitoring performed by each organisation. While this reporting goes some way towards keeping the sector informed, each separate part of the UK energy sector may not know which threats its competitors and peers are facing in real time.

There are many reasons why CTI is not shared, and these have been examined in surveys, literature reviews, and assorted studies. This paper examines some of the key reasons the UK energy sector has been slow to adopt CTI sharing and, through an experimental case study comprising several sharing models and taxonomies, aims to provide assurance that these potential barriers can be overcome. The main contributions of this paper are as follows:

– An exploration of the barriers to entry for CTI sharing in the energy sector, and what has stopped its adoption in the past.
– Empirical testing of a trust model methodology to provide assurance for CTI deployment in the UK energy sector.

We perform a multivocal literature review to identify barriers while making use of the open-source Malware Information Sharing Platform (MISP)¹ to simulate and empirically verify a variety of sharing models and intelligence tagging taxonomies identified in the literature.

¹ https://www.misp-project.org/
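For readers unfamiliar with MISP, the following minimal sketch shows how an event could be created and pushed to a MISP instance using the PyMISP client library. The instance URL, API key, tag names, and indicator values are placeholders chosen for illustration; they are not details taken from the study's experimental configuration.

```python
# Minimal sketch: publishing a CTI event to a local MISP test instance with PyMISP.
# The URL, API key, tags, and indicator values below are illustrative placeholders only.
from pymisp import PyMISP, MISPEvent

MISP_URL = "https://misp.local"      # hypothetical local test instance
MISP_KEY = "YOUR_AUTOMATION_KEY"     # per-user automation key generated by MISP

misp = PyMISP(MISP_URL, MISP_KEY, ssl=False)  # ssl=False only for lab environments

event = MISPEvent()
event.info = "Suspected phishing infrastructure targeting energy sector"
event.distribution = 1      # 0=org only, 1=this community, 2=connected, 3=all communities
event.threat_level_id = 2   # medium
event.analysis = 1          # ongoing

# Attributes hold the individual indicators of compromise (IoCs).
event.add_attribute("ip-dst", "203.0.113.10", comment="C2 callback address (example)")
event.add_attribute("domain", "invoice-update.example", comment="Phishing domain (example)")

# Tags drive filtering and sharing rules, e.g. TLP plus a sector taxonomy tag.
event.add_tag("tlp:amber")
event.add_tag("sector:energy")      # assumes a deployment taxonomy containing this tag

created = misp.add_event(event, pythonify=True)
print(f"Created event {created.id} with distribution level {created.distribution}")
```

The distribution level and tags set on each event are the levers that the sharing models and taxonomies examined later in this paper manipulate, which is why getting them right is central to controlling what is disseminated to which parties.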


2 Background

Sharing CTI, knowledge, and information between peers and experts is seen as a critical countermeasure to the growing cyber threat [38]. Sharing can allow for greatly improved situational awareness and faster responses to emerging threats [44]. There are, however, potential barriers to sharing: standardisation, competition, and trust have been identified among the reasons why organisations struggle to share [44].

The objective of CTI sharing is the exchange of information and intelligence across traditional boundaries [7]. Chandel et al. [9] claim that CTI sharing is underdeveloped and limited by many technical barriers. They also argue that poorly defined CTI community standards lead to opportunistic and non-sharing behaviours, such as free-riding, where participation is limited to consuming the CTI and not contributing to the community [4]. There appear to be reasons deeper than organisations simply wanting to consume CTI, as some entities are happy to pay for CTI from a central source [20]. Tounsi et al. [38] describe the benefits of sharing as ‘undeniable’, though other work claims that we currently lack empirical evidence to support such positive claims [43].

2.1 Barriers to CTI Sharing

We will first explore barriers to adopting mutual CTI sharing, before discussing the existing sharing ecosystem in the UK energy sector in Sect. 2.2.

Trust: Many of the themes in this section fundamentally reduce to some facet of trust in sharing, such as confidence in the sharing mechanisms, legal oversight, and quality of intelligence. If there were total trust in a system of sharing, then many of the issues explored in the literature would be solved, and sharing would be seen as a normal and worthwhile endeavour. Trust in CTI sharing is a significant and ongoing area of research [1, 41]. A key theme is that one of the challenges to solving CTI sharing is the establishment of a trust relationship between those entering a sharing arrangement. Trusted relationships can take a long time to build and constant effort to sustain, i.e., they are hard to gain and easy to lose [25]. Additionally, such relationships are difficult to build for untrusted participants [38]. Wagner et al. [41] examine CTI sharing platforms, noting that many of them establish trust manually, arguing that trust in platforms such as the Malware Information Sharing Platform (MISP) needs to be built through traditional trust establishment techniques, such as face-to-face meetings within a closed circle of trusted members. The authors also suggest that this limits the usefulness of this small trust circle, as much of the sharing participation is in private. Indeed, this behaviour may be a reflection of the human condition, with Tounsi et al. [38] suggesting that this may be an instinctual response to the unknown. Face-to-face trust is emphasised in the European FI-ISAC (Financial Institutes—Information Sharing and Analysis Centre), where stakeholders are required to attend meetings, being excluded if they fail to attend three successive meetings [15], building a degree of trust and investment in the relationships which are formed. In regulated sectors which hold sensitive information, such as the UK energy sector, it could be argued, for the purposes of safeguarding sensitive and classified information, that not trusting without due diligence is a sensible approach. Such considerations are compounded by some CTI sharing platforms conducting insufficient peer vetting when sharing information about vulnerabilities [41], potentially causing some organisations to have difficulty placing trust in them. Wagner et al. [41] attempt to address the issue of trust in sharing approaches with a trust taxonomy, which attempts to associate a trust level with a source of CTI for its full life cycle. Peers rate the quality, timeliness, and other criteria to generate a score, and sharing activities would reveal the presence of those peers who just want to consume (in which case another sharing model would be more appropriate for them). This approach is considered to be most appropriate for industries that widely share similar problems and experiences across peers, fostering a sense of mutual defence in a closed ‘trusted community’ [28, 46].

Reputation: Reputational damage is one of the reasons a business may be reticent to make contributions, at least without some degree of anonymity. Bad press from a breach could damage the organisation’s reputation [38] and make them and others more cautious about sharing openly. For organisations to participate in effective CTI sharing, they must build up reputational capital and earn credibility, which may limit the participation of newer members. In a similar manner to trustworthiness, the reputation of stakeholders is gained over time and is damaged easily [42], but may be even more difficult to re-accumulate. Some form of anonymisation and unattributed information could help address this worry [17]; however, there is currently no complete solution to the problem [10, 20, 32, 34]. An additional concern is that the sharing of raw data could expose details of the victim’s infrastructure and encourage other threat actors to attack based on the information presented [41].

Legal and Privacy Issues: The framework of obligations and information exchange required for CTI sharing invokes a complex regulatory landscape. Organisations regard data protection and privacy laws as one of the biggest concerns for CTI sharing [25]. Of all the challenges examined, legal and privacy issues could stop organisations from sharing at all. This is understandable, as the law is continuously changing, and the risk of legal action could be too much for some companies to manage. One example is the EU’s adoption of the General Data Protection Regulation (GDPR), prompting large-scale international alignment for the sake of harmony, the complexities of which are discussed by Sullivan and Burger [37]. One point of note is the introduction of new categories of personal data, such as IP addresses, which could cause issues for sharing CTI, though in many cases this data may be processed for the purposes of legitimate public interest [25]. However, there are potential legal challenges that could be mounted [8], with issues pertaining to privacy being thematically similar to those of reputation and trust [44], such as with pseudo-anonymisation of data.
Competition and Conflicts of Interest: Cooperation between competing firms that are simultaneously seeking a competitive edge to protect commercial interests and intellectual property has been dubbed ‘coopetition’ [46].


While organisations work together in mutual interest to lower costs, a blame culture and a reluctance to admit fault can make it difficult to participate in information sharing [29]. There is also some concern that collusion attacks, such as peers collectively scoring an organisation’s contributions poorly, could force organisations out of trusted communities and damage reputations [42].
Technical and Financial: Some organisations may be ready to share their threat intelligence but feel there are insufficient CTI sharing models (discussed in Sect. 3.2) and collaboration platforms that cater to their particular needs, creating a barrier to entry [41, 42]. This is particularly the case in sensitive or critical industries, such as the UK energy sector. ‘Sharing security artefacts between industry peers is a technically complicated, slow, untrusted, and an overly bureaucratic task’ [1]. Interoperability and automation have been highlighted as issues for CTI sharing, especially in peer-to-peer models [25]. Economic considerations are also at play: CTI sharing can be seen as expensive [31], a drain on resources [44], or a means of eroding competitive economic advantages [1].
Quality of Intelligence and Sources: Incorrect or poor-quality CTI can cause resources to be expended unnecessarily. One issue is that there are limited tools for formally validating report structures, such as the commonly used STIX format [26]. As a result, shared CTI data often include incomplete or incorrect information [27]. Additionally, there is concern about the quality and validation of indicators of compromise [9], particularly regarding relevance, timeliness, accuracy, comparability, coherence, and clarity [38]. It is also argued that many platforms are good at providing a quantity of data, rather than quality intelligence that is actionable [33]. Abu et al. [3] state that there can be a problem with data overload and that 70% of feeds are ‘sketchy and not dependable in terms of quality’; they, however, offer no quantifiable scale for measuring ‘sketchiness’. A Quality of Indicators (QoI) model is proposed by Al-Ibrahim et al. [4] for assessing the level of contribution by participants in CTI sharing, measuring the quality rather than the quantity of participation. In this model, the indicators and intelligence are sent to an assessor and given a score. The QoI model uses machine learning, which contrasts with the trust taxonomy of Wagner et al. [41], where industry peers decide on the quality. To date, no research has compared which of these approaches better confirms the quality of the CTI being shared.

2.2 Current CTI Sharing in the UK Energy Sector

Currently, the UK Energy Sector looks to the European Energy Information Sharing and Analysis Centre (EE-ISAC) [11] for guidance on CTI, as well as to UK government sources [40]. Academic studies on CTI sharing in the energy sector are not numerous, though there has been some work by governments and international organisations such as the EU. The Cyber Security in the Energy Sector recommendations [13] give a broad outline of best practices and guidance on sharing CTI via an energy Information Sharing and Analysis Centre (ISAC).


The study also recommends a unified interface for sharing with international allies, and finds that there is no EU-supported pan-European trusted platform (e.g., an ISAC) for exchanging CTI in the energy sector. The UK’s Cyber Security Information Sharing Partnership (CiSP) and the European Network and Information Security Agency (ENISA) both help with the sharing of cyber threat information, allowing organisations to better detect campaigns that target particular industry sectors [17]. CiSP is a joint industry and government initiative set up to exchange CTI as quickly as possible, in response to the excessive time it was taking to disseminate intelligence products. The goal of the project was to increase situational awareness and to reduce the impact of the increasing cyber threat on UK businesses, especially critical national infrastructure such as the energy sector [36, 44]. Launched in March 2013, CiSP now sits under the management of the NCSC, a part of the Government Communications Headquarters (GCHQ) [36, 40]. Membership in CiSP provides the ability to securely engage with other government departments and with industry peers and partners to seek advice and learn from each other. Discussion of CTI matters is encouraged at all levels, and the collaborative environment helps to provide earlier warning of threats that have been seen elsewhere. CiSP also helps to improve members’ ability to protect their assets and provides free access to network monitoring reports [30]. However, these approaches rely on top-down sharing, rather than industry peer collaboration, potentially limiting their effectiveness [38].

2.3 Evaluation of Sharing Methods and Platforms

Compounding the problems noted above, particularly in relation to issues of trust, the robustness and effectiveness of the platforms which implement CTI sharing are unclear. Sauerwein et al. [32] note that there is sparse scientific analysis of state-of-the-art threat intelligence sharing platforms, with very little empirical research exploring this space [45]. Few overviews and comparisons of sharing platforms are available, and many of them are incomplete, insufficiently transparent, or outdated [7], with attempts to study tools being hindered by the lack of detailed information on proprietary platforms [7], or by a clear bias towards the authors’ own commercial products (e.g., [5]). Modern studies which evaluate tools do so either via a literature-based evaluation [7] or by surveying organisations [10], rather than by direct empirical testing. The work presented in this paper aims to address some of the barriers and issues identified in this section by demonstrating, empirically, that existing open-source tooling can remove, or mitigate, some of these barriers and provide a level of assurance that CTI can be shared effectively, and securely, in the energy sector and other sensitive industries.


3 Methodology

In order to explore the barriers preventing CTI sharing, and to develop an empirical understanding of whether the MISP platform is suitable for the energy sector, two main evaluations were performed in a simulated environment:
1. Sharing Model Evaluation: Exploring four main abstract sharing models: Source/Subscriber; Hub and Spoke; Peer to Peer; and Hybrid (described in Sect. 3.2 and Table 1). Models were evaluated in order to determine their suitability for the energy sector, and whether they address barriers identified in the literature.
2. Taxonomy and Tag Evaluation: Determining whether the use of tagging and the application of taxonomies (described in Sect. 3.3) can explicitly enhance privacy and trust, both of which have been identified as critical barriers to entry. The same experimental approach is taken as for the sharing model evaluation; however, there are scenarios where CTI is classified only for use by UK organisations, and such document-level restrictions should take precedence over the deployed sharing model, limiting the propagation of specific documents.

3.1 Experimental Configuration

The CTI sharing tool used in this work, MISP, was chosen because it is freely available open-source software, mitigating some financial barriers, and because it is seeing increased industry usage as the emphasis shifts to attacker Tactics, Techniques, and Procedures (TTPs) rather than simply the aggregation of Indicators of Compromise (IoCs) [19]. The aim is not to test MISP as a tool per se, but rather to demonstrate that the identified barriers can be overcome, with empirical experimentation providing a degree of assurance. Ten intelligence reports were obtained from open-source threat intelligence providers, all of which are marked for free distribution. A large sample was not required, as scaling is not tested, only the sharing behaviour of the platform. Reports were ingested manually, with the associated JSON files for MISP being made available on GitHub (https://github.com/smck1/Energy_CTI_Experimental_Files), presented in the STIX format [26]. The structure of a cyber event in MISP (e.g., an attack or a malware identification) can be split into three phases: event creation; population of attributes and attachments; and publishing/sharing. Each event can contain multiple reports pertaining to the same attack vector, malware, etc. Manually populating these events allowed for repeated, controlled experimentation. Repeat experimentation was facilitated via the use of Virtual Machines (VMware Fusion Pro 11.1.0) and snapshots, such that each run could be repeated without introducing ordering effects, re-importing data each time. This was necessary as MISP creates a 128-bit Universally Unique Identifier (UUID) for each event [18], which prevents deleted items from being re-imported.

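As an illustration of the event creation, attribute population, and publishing phases just described, the following minimal sketch uses the PyMISP client library. It is not the authors' ingestion code; the server URL, API key, and attribute values are placeholders.

```python
# Minimal sketch (not the authors' ingestion code): creating and publishing a
# MISP event with the PyMISP client library. The server URL, API key, and
# attribute values below are placeholders.
from pymisp import PyMISP, MISPEvent

misp = PyMISP("https://misp.example.local", "YOUR_API_KEY", ssl=False)

event = MISPEvent()
event.info = "Open-source CTI report: example energy-sector phishing campaign"
event.distribution = 3          # 3 = All Communities (see Sect. 3.4)
event.threat_level_id = 2       # medium
event.analysis = 2              # completed

# Attributes extracted from the report (illustrative values only)
event.add_attribute("ip-dst", "203.0.113.45", comment="C2 address from report")
event.add_tag("tlp:green")

created = misp.add_event(event, pythonify=True)
misp.publish(created)           # publishing triggers push to configured peers
```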


Table 1 CTI sharing model overview

Source and Subscriber. Description: A single central source shares with subscribers; subscribers do not share back with the central source, and peers do not share with each other directly. Use case: A subscription to a government RSS feed or email list that gives subscribers regular updates on threats.

Hub and Spoke. Description: Peers share with a central hub, and the central hub shares with peers; the peers do not share with each other directly. Use case: A central intelligence repository, such as a government agency, which wishes to produce intelligence for consumption while having feedback and intelligence fed back to the central source in a 1:1 relationship.

Peer to Peer. Description: Each peer shares with every other peer; there is no need for any intermediary hubs or central sources. Use case: Peer to Peer is post-to-all; a detected threat could be shared rapidly with all members in the mesh.

Hybrid. Description: Any combination of Peer to Peer and Hub and Spoke. Use case: A sector body providing feeds to peers in that industry, such as energy; the peers could share their own CTI with each other without having to report back to the central source if they so choose.
Multiple MISP instances, based on the master MISP OVA files (version 2.4.130), were run (one for each entity in the sharing models in Sect. 3.2) to allow the propagation of events to be confirmed. The MISP Hardware Sizer (https://www.misp-project.org/MISP-sizer/) was used to determine that 1 virtual CPU and 2 GiB of RAM would suffice for each instance.

3.2 CTI Sharing Models

The four sharing models discussed in the literature are outlined here. For convenience, a summary of the models is given in Table 1, with a brief discussion of each presented afterwards.
Source/Subscriber: Also known as the centralised model [20], this is the simplest of the four models: CTI is consumed by the subscribers but not produced by them. Subscribers require a great deal of trust in the source to participate in this model [38]. In many cases, the CTI will come from a national source [2].
Hub and Spoke: This model places a central clearing house at the centre of CTI sharing, which shares intelligence or information with the spokes. The spokes consume CTI from the hub and can share CTI back with the central hub in a 1:1 relationship. The model allows the group of intelligence producers and consumers to share information [34].


Where private companies operate the central source, this can be seen as controversial, as they may be more interested in profit or in competition with other providers [35].
Peer to Peer: Also known as the decentralised model [20], this approach may be the most effective for sharing CTI between peers in similar sectors, and it removes the problem of needing to trust a central repository [1]. A robust system of trust and regulatory conformity needs to be established between each of the peers [42]. If implemented successfully, this model could be compelling for sharing thematically similar problems and solutions among peers that are qualified in that field of interest [34]. The model could help produce timely, actionable products much faster than waiting for a central entity to decide what the end peer receives.
Hybrid: A combination of the Peer to Peer and Hub and Spoke models [42]. Noor et al. [24] state that the hybrid model enables the best elements of both, with Peer to Peer elements effectively collecting strategic CTI, while Hub and Spoke behaviour adds value to the raw CTI. The Hybrid model allows CTI from a central source to be shared onwards with peers [35]; however, this could cause issues for some proprietary platforms whose licensing only allows exclusive use by the subscriber, as well as concerns around classification and source protection. This model could be useful for the energy sector, where feeds come from central government sources, with members being vetted before joining a community of sharers [6], while also facilitating timely peer sharing.
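For illustration, the one-hop sharing links implied by each model can be written down as directed adjacency matrices, mirroring the notation used for the results in Sect. 4. The following Python sketch is hypothetical (five parties, with v1 as the source or hub where one exists) and is not part of the original experiments.

```python
# Sketch: the four abstract sharing models expressed as directed one-hop
# sharing links between five parties (v1 acts as the source/hub where one
# exists). This mirrors the adjacency-matrix notation used in Sect. 4 and is
# illustrative only.
PARTIES = ["v1", "v2", "v3", "v4", "v5"]

MODELS = {
    # v1 pushes to every subscriber; nothing flows back.
    "source_subscriber": {("v1", p) for p in PARTIES[1:]},
    # spokes and hub exchange in both directions; spokes never link directly.
    "hub_and_spoke": {("v1", p) for p in PARTIES[1:]} | {(p, "v1") for p in PARTIES[1:]},
    # every peer links directly to every other peer.
    "peer_to_peer": {(a, b) for a in PARTIES for b in PARTIES if a != b},
}
# Hybrid: any union of the above, e.g. hub links plus selected direct peer links.
MODELS["hybrid"] = MODELS["hub_and_spoke"] | {("v2", "v3"), ("v3", "v2")}

def adjacency(edges):
    """Return the directed adjacency matrix (row shares with column)."""
    return [[1 if (a, b) in edges and a != b else 0 for b in PARTIES] for a in PARTIES]

for name, edges in MODELS.items():
    print(name)
    for row in adjacency(edges):
        print(" ", row)
```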

3.3 Tags and Taxonomies

Many terms are used in the literature to define the ordered classification systems used in threat intelligence (e.g., ‘information exchange standard’, ‘ontology’, ‘taxonomy’, ‘data type’, ‘data/field format standard’, ‘data/field representation format’, ‘classification’, ‘semantic vocabulary’, ‘field’, ‘knowledge map’, and ‘machine tag’ [14]), which facilitate understanding by both technical and non-technical consumers. While there is currently no consensus on concepts and definitions related to CTI taxonomies [12, 22], all that is required for our definition is that a taxonomy groups objects and describes their relationships, has a set vocabulary, and maps this knowledge in a readable format [14]. A comparison of every such taxonomy would not be feasible in this study, as the research is ongoing and taxonomies are continually evolving with the ever-changing nature of the threats [14, 22, 41]. As such, we focus on the terminology used in the UK energy sector, which uses the Traffic Light Protocol (TLP), UK government protective marking, and custom company protective marking. An example from EDF is depicted in Fig. 1.
The Traffic Light Protocol was created by the UK’s Centre for the Protection of National Infrastructure (CPNI), a UK government agency. The TLP design encourages the sharing of sensitive information and helps establish trust within the sharing of information and intelligence.


Fig. 1 EDF UK protective marking taxonomies

Its purpose is to ensure that sensitive information is shared with the appropriate audience. It is not, however, a classification marking scheme in its own right; rather, it is used as an indicator of how sensitive the information or intelligence is, in order to aid collaboration. The current standard is defined by the Forum of Incident Response and Security Teams (FIRST) Standards Definitions and Usage Guidance [16]. Restrictions are colour coded: TLP:RED for non-disclosure; TLP:AMBER for limited disclosure within participants’ organisations; TLP:GREEN for limited disclosure to a restricted community; and TLP:WHITE for unrestricted disclosure [17, 23]. In order to evaluate sharing taxonomies and determine an order of primacy (sharing model versus document sharing restrictions), the EDF taxonomy depicted in Fig. 1 was converted to JSON format (Fig. 2) and imported into MISP (Fig. 3).
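For illustration, a custom protective-marking taxonomy of the kind shown in Fig. 2 might look roughly as follows when expressed in the JSON layout that MISP imports (namespace, predicates, values). The namespace and entries here are illustrative approximations, not the exact file used in the experiments.

```python
# Sketch of a custom protective-marking taxonomy in the JSON layout that MISP
# expects (machinetag namespace/predicate/value). The namespace and entries
# below approximate the idea of Fig. 2 and are illustrative only.
import json

taxonomy = {
    "namespace": "edf-uk-marking",          # hypothetical namespace
    "description": "Example protective marking taxonomy for CTI sharing",
    "version": 1,
    "predicates": [
        {"value": "tlp", "expanded": "Traffic Light Protocol"},
        {"value": "gov", "expanded": "UK government protective marking"},
    ],
    "values": [
        {"predicate": "tlp",
         "entry": [
             {"value": "red", "expanded": "TLP:RED - non-disclosure"},
             {"value": "amber", "expanded": "TLP:AMBER - limited disclosure"},
             {"value": "green", "expanded": "TLP:GREEN - community disclosure"},
             {"value": "white", "expanded": "TLP:WHITE - unrestricted"},
         ]},
        {"predicate": "gov",
         "entry": [
             {"value": "official", "expanded": "OFFICIAL"},
             {"value": "official-sensitive-sni", "expanded": "OFFICIAL-SENSITIVE: SNI"},
         ]},
    ],
}

# MISP validates imported taxonomies strictly, so writing and re-parsing the
# JSON is a cheap sanity check before import.
with open("machinetag.json", "w") as fh:
    json.dump(taxonomy, fh, indent=2)
json.loads(open("machinetag.json").read())
```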

3.4 Sharing in MISP

Events in MISP are assigned one of four distribution settings, which determine how far they may propagate:
Your Organisation Only: Intended for internal dissemination only. Events with this setting will not be shared outside of the instance. Upon Push: do not push. Upon Pull: pull only for internal users.
This Community Only: Users that are part of the local MISP community will be able to see the event. This includes the internal organisation, organisations on this MISP server, and organisations running MISP servers that synchronise with this server. Upon Push: do not push. Upon Pull: pull and downgrade to ‘Your Organisation Only’.
Connected Communities: Extends ‘This Community Only’ to servers connected to synchronising servers (i.e., extending to two hops away from the originating instance). Organisations on instances more than two hops from the originating instance are restricted from seeing the event. Upon Push: downgrade to ‘This Community Only’ and push. Upon Pull: pull and downgrade to ‘This Community Only’.


Fig. 2 Snippet of the custom taxonomy created for this experiment, in raw JSON format

Fig. 3 Snippet of the custom taxonomy created for this experiment, as represented in MISP


All Communities: This will share the event with all MISP communities, allowing the event to be freely propagated from one instance to the next. Upon Push: push. Upon Pull: pull.
As this work focuses on dissemination to external organisations, the experiments use both the ‘Connected Communities’ and ‘All Communities’ settings. Experiments were repeated 25 times to ensure consistent behaviour and were carried out by a trained intelligence analyst. Tested configurations use Push unless otherwise specified. Five peers were simulated: the local EDF UK organisation and four external peers. Names were chosen to represent plausible example peers, but the instances are otherwise referred to in the results as v1 (EDF UK) and v2–v5 (external peers). Discussion pertaining to the verification of sharing models is presented in Sect. 4, while Sect. 5 discusses the validation of tags and taxonomies.
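To make the verification procedure of Sect. 4 concrete, a sketch of how propagation could be checked programmatically is shown below. It assumes the PyMISP client library, and the instance URLs and API keys are placeholders rather than the actual experimental configuration.

```python
# Sketch: confirming event propagation across the simulated MISP instances.
# After an event is published on one instance, each peer is queried for the
# event's UUID and the result recorded as a row of the directed adjacency
# matrix used in Sect. 4. Instance URLs and keys are placeholders.
from pymisp import PyMISP

INSTANCES = {
    "v1": ("https://misp-v1.example.local", "KEY_V1"),
    "v2": ("https://misp-v2.example.local", "KEY_V2"),
    "v3": ("https://misp-v3.example.local", "KEY_V3"),
    "v4": ("https://misp-v4.example.local", "KEY_V4"),
    "v5": ("https://misp-v5.example.local", "KEY_V5"),
}

def propagation_row(event_uuid: str, origin: str) -> dict:
    """Return {peer: 0/1} indicating whether the event reached each peer."""
    row = {}
    for name, (url, key) in INSTANCES.items():
        if name == origin:
            continue  # by convention, a party 'sharing' with itself is ignored
        misp = PyMISP(url, key, ssl=False)
        hits = misp.search(controller="events", uuid=event_uuid, pythonify=True)
        row[name] = 1 if hits else 0
    return row

# Example: after publishing on v1, check which external peers received it.
# print(propagation_row("<event-uuid>", origin="v1"))
```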

4 Verifying Sharing Models

4.1 Source and Subscriber

Beginning with the simplest case, a single source disseminates information to consumers (Fig. 4a), allowing for verification that this works as intended and that sharing is uni-directional. The directed adjacency matrix, depicting when data is propagated to each party, is provided in Fig. 4b. A ‘1’ indicates event propagation from the party vy (left) to vx (top), while a ‘0’ indicates that the event was not shared. By convention, we ignore instances of a party sharing with itself, such that (v1, v1) (EDF UK, the source, sharing with itself) is 0. In this case, both Connected and All Communities produced the same results, which were in line with expectations: all subscribers received the events once they were published. The model does not allow flow back to the source, and this was confirmed in testing. However, it should be noted that server misconfiguration can cause this model to fail, which could cause serious data breaches.

4.2 Hub and Spoke

In Hub and Spoke, there is again a central source, but it can receive events from the consumers, though the consumers do not share with each other directly (Fig. 5a). One of the behaviours discovered here was that an event was not automatically shared beyond the first hop, as it was downgraded to share with ‘This Community Only’. This was predicted for the Connected Communities setting (Fig. 5b), though sharing had to be triggered manually by the administrator after the first hop, but the same behaviour was also present for the All Communities setting, which was unexpected.


(a) Source and Subscriber model.

(b) ‘Connected’ and All Communities results (identical). Fig. 4 Source and Subscriber sharing model results

Note the red/underlined items in Fig. 5c, which denote sharing that defied expected behaviour. In this case, propagation required re-publishing in order to reach the second hop, which was not expected behaviour for All Communities. Fortunately, the issue results in under-sharing rather than over-sharing, meaning that no critical information can be accidentally disclosed.

4.3 Peer to Peer

Peer to Peer connects peers directly to each other in a distributed fashion, with no central source (Fig. 6a). As expected, regardless of whether the All Communities or Connected Communities setting was used, peers propagated events to each other without fail (Fig. 6b). Each of the four external nodes was updated immediately upon publication by the sharing node. Due to the downgrading (to ‘This Community Only’) that happens at each hop, events did not propagate more than a single hop, preventing unintentional disclosure.


(a) Hub and Spoke model.

(b) (left) ‘Connected Communities’ results. (c) (right) All Communities results. Fig. 5 Hub and Spoke sharing model results

(a) Peer to Peer model.

(b) ‘Connected’ and All Communities results (identical). Fig. 6 Peer to Peer sharing model results


4.4 Hybrid

The Hybrid model is the most flexible, facilitating peer sharing while optionally making use of a centralised source, with optional upstream sharing. Once again, the Connected Communities behaviour (Fig. 7b) was as expected, with downgrading meaning that events are only propagated a single hop. However, in the default Push configuration, the All Communities setting once again resulted in some unexpected behaviour (Fig. 7c, with unexpected results in red/underlined). While unexpected a priori, these results are consistent with the findings for the Hub and Spoke model. In order to explore the propagation issues, this setup was re-tested using a Pull-based configuration from the Third Party Organisation (v5) to EDF UK (v1).

(a) Hybrid model - Firstly in all PUSH configuration, with a second run where PULL is enabled from v5 to v1.

(b) (left) ‘Connected Communities’ results. PUSH and PULL configurations. (c) (right) All Communities PUSH results. Unexpected results are marked in red.

(d) All Communities PULL results. Differences to PUSH are highlighted in bold. Fig. 7 Hybrid sharing model results


Once the EDF UK instance pulled the events from the third party, the other instances shared with each other, demonstrating the predicted behaviour (Fig. 7d). MISP’s behaviour in PUSH mode was at this point considered to be a bug, and the results were reported to the project (https://github.com/MISP/misp-book/issues/202). The MISP project responded by clarifying that the documentation’s description of delegation behaviour did not align with the actual behaviour: the observed behaviour is intended, but had been misrepresented in the documentation.

5 Verifying Taxonomies—Events and Tags

The testing in Sect. 4 demonstrated that many models in MISP require manual intervention in order for events to pass beyond the neighbouring nodes. As such, the Peer to Peer model was used to test tagging behaviours, as we only need to focus on whether an event is initially shared or not, as opposed to its subsequent onward propagation. Tags correspond to the levels of the EDF energy sector taxonomy in Fig. 1. Rules for propagation are set by the node, and individual CTI documents are tagged with these properties at creation time by analysts. Additional rules are set to Allow or Block specific organisations, with example rules, in a Boolean combination, depicted in Fig. 8. It should be noted that MISP was found to be very sensitive to the formatting of imported custom taxonomies, and care should be taken when creating and validating the JSON input (Fig. 2). Several mixes of rules were explored in order to determine the order of precedence, and whether unintentional sharing could be triggered, resulting in information breaches.
Allow All/Block All: Allowing all tags and blocking all tags, in order to ascertain whether basic rules work as intended.
Combinations of Allow and Block: A mix of Allow and Block, again to verify basic functionality.
Contradicting Tags: Opposing tags—verifying, for example, that an event tagged with TLP:GREEN (marked as Allow) and OFFICIAL—SENSITIVE: SNI (marked as Block) would ideally be blocked from sharing. In practice, similar situations would likely involve rules specific to organisations (or groups thereof). This testing also determines whether multiple Allow tags override a single Block tag.
Across all combinations of tags, the ideal behaviour was observed. Events were Allowed or Blocked as intended, and in cases where multiple tags were assigned, Block always takes precedence over Allow: no number of Allow tags overrides a single Block tag. For CTI sharing, particularly in critical infrastructure, this is optimal behaviour, as it avoids potentially dangerous over-sharing of sensitive information. This behaviour holds true when setting rules for specific organisations, where Blocked organisations override event tags.
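The precedence behaviour observed above can be summarised in a few lines of Python. This is a simplified model of the observed rules, not MISP's internal implementation, and the tag and organisation names are illustrative.

```python
# Sketch reproducing the precedence behaviour observed in this section: an
# event is forwarded only if neither its tags nor its source organisation
# match a Block rule; no number of Allow tags overrides a single Block.
# Rule values are illustrative. Events matching no rule are withheld here,
# which is an assumption of this sketch rather than a documented behaviour.
ALLOW_TAGS = {"tlp:green", "tlp:white"}
BLOCK_TAGS = {"official-sensitive:sni", "tlp:red"}
BLOCKED_ORGS = {"untrusted-org"}

def may_forward(event_tags: set[str], source_org: str) -> bool:
    """Return True only if no Block rule matches (Block > Allow)."""
    if source_org in BLOCKED_ORGS:
        return False                      # blocked organisations override tags
    if event_tags & BLOCK_TAGS:
        return False                      # any Block tag wins
    return bool(event_tags & ALLOW_TAGS)  # otherwise require an Allow match

assert may_forward({"tlp:green"}, "edf-uk") is True
# A single Block tag overrides any number of Allow tags.
assert may_forward({"tlp:green", "tlp:white", "official-sensitive:sni"}, "edf-uk") is False
```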



Fig. 8 Example MISP tag configuration—allow/block propagation rules using Boolean combinations

6 Discussion

The various abstract models presented in the literature can be replicated in free, open platforms such as MISP. The experiments have verified that such tools, when configured appropriately, are fit for purpose, with the caveat that some behaviour should be verified prior to deployment (as with any piece of software). In the case of the experiments in this paper, the unexpected behaviours were simply miscommunications in the documentation; however, misconfiguration was also verified as an avenue leading to unwanted disclosure. This is particularly critical in the energy sector, where improper sharing of sensitive information can amount to criminal negligence, not to mention reputational damage and financial consequences. Just as it is essential for a human analyst to be involved in creating the intelligence [42], similar care must be taken when maintaining CTI sharing systems. In particular, the creation of sharing taxonomies must be done carefully, with MISP being sensitive to formatting errors. Applying organisation-based filters appears to be a good way to mitigate some of these problems. While relatively minor in practice, the issues discovered here could have serious consequences, meaning that the authors recommend more public testing of platforms going forward, in addition to testing carried out before deploying a new tool, or version, in-house. The exposition of the various sharing models in this work should be useful for future implementations, with the Hybrid model being flexible enough to allow peer participation and nuanced dissemination controls. Assuming an understanding of the sharing implementation, and correct deployment, this testing should provide some level of assurance that tools such as MISP are suitable for engaging in inter-organisation CTI sharing and for mitigating the risks of trust issues and of over-sharing resulting in reputational damage.


It should also be noted that there is a degree of anonymity built in when sharing events, as no evidence of the source of an event remains after the first hop, meaning that only direct trusted peers can identify the source, unless it is specified in the documents themselves. This may reduce the perception of appearing vulnerable and improve the rate of sharing and cooperation, which is usually tempered by competitive barriers. Ultimately, as the NCSC put it in their annual report in 2019, ‘improving the cybersecurity of the UK is far from a solo effort’ [21], and the fewer the barriers to wider, meaningful participation in CTI sharing, the better.

7 Conclusions

A number of barriers to Cyber Threat Intelligence (CTI) sharing in the UK energy sector have been identified, with a focus on assurance to alleviate issues caused by the barriers of trust, reputation, competition, and technical/financial constraints. A simulated threat intelligence sharing infrastructure was created in a virtual environment using the open-source Malware Information Sharing Platform (MISP). Four CTI sharing models from the literature were tested using real CTI data, processed by a trained CTI analyst to emulate a live system as closely as possible. While the testing discovered some deployment issues with the platform, the results provide a level of assurance that the segregation and dissemination of CTI data, for a given set of criteria, should be possible when carefully crafted taxonomies, tags, and forwarding rules are maintained by trained professionals. In particular, safeguards on the platform would prevent critical events, such as the accidental dissemination of classified or sensitive information, which can be demonstrated to regulators. We do, however, recommend an increased rate of publicly disseminated testing and assurance information in the CTI domain in the future, as it will help avoid potential issues and bugs, and therefore dissemination catastrophes. The COVID-19 global pandemic has demonstrated that UK energy sector organisations cannot stand alone against increasingly complex threats to critical national infrastructure. MISP lowers the barrier to entry in terms of cost and the ease of getting CTI shared quickly. With adequate testing, fears (such as those of over-sharing) can be assuaged, and an appropriate sharing model for the industry can be adopted to allow for high levels of participation. The use of MISP in the UK energy sector would enable threats to the industry to be shared quickly and securely, and this study can be used as a foundation to encourage wider CTI sharing in the sector, ideally in the near future.


7.1 Future Work

Future work could increase the interaction between academics, intelligence practitioners, and energy sector companies, in order to create a deeper understanding of the field and add to the body of knowledge. The energy sector is highly regulated, and higher levels of assurance can be achieved by working together to test and formalise safe and secure processes. Automating the testing of CTI processing would allow for frictionless deployment while increasing assurance. The use of directed adjacency graphs could help model large sharing networks and predict their behaviour, both for a system working as it should and for one that is not, as was observed during experimentation. Other pragmatic concerns could revolve around working with live feeds in order to better understand the characteristics of large-scale CTI networks, and around including multiple levels of internal dissemination in the modelling.
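As a rough illustration of the adjacency-graph idea, the sketch below models a hypothetical sharing network with the networkx library and predicts which parties an event should reach. The node names and topology are invented for the example and are not drawn from the experiments.

```python
# Sketch of the future-work idea of modelling larger sharing networks as
# directed graphs: nodes are MISP instances, edges are one-hop sync links,
# and reachability predicts where a published event should end up. The
# topology below is hypothetical.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("ncsc-hub", "edf-uk"), ("ncsc-hub", "operator-b"),   # hub feeds
    ("edf-uk", "operator-b"), ("operator-b", "edf-uk"),   # peer links
    ("operator-b", "supplier-c"),
])

def predicted_recipients(origin: str, max_hops: int = 1) -> set[str]:
    """Parties expected to receive an event, given the observed single-hop
    propagation behaviour (increase max_hops to model manual re-publishing)."""
    lengths = nx.single_source_shortest_path_length(g, origin, cutoff=max_hops)
    return {node for node, dist in lengths.items() if 0 < dist <= max_hops}

print(predicted_recipients("edf-uk"))                 # one hop, as observed
print(predicted_recipients("ncsc-hub", max_hops=2))   # if events are re-published
```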

References 1. Sharing is Caring: Collaborative analysis and real-time enquiry for security analytics. In: 2018 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData) (2018). https://doi.org/10.1109/Cybermatics2018.2018.00240 2. CiSP—NCSC.GOV.UK (2020). https://www.ncsc.gov.uk/section/keep-up-todate/cisp 3. Abu, M.S., Selamat, S.R., Ariffin, A., Yusof, R.: Cyber threat intelligence—issue and challenges. Indones. J. Electr. Eng. Comput. Sci. 10(1), 371–379 (2018). https://doi.org/10.11591/ ijeecs.v10.i1.pp371-379 4. Al-Ibrahim, O., Mohaisen, A., Kamhoua, C., Njilla, L.: Beyond free riding: quality of indicators for assessing participation in information sharing for threat intelligence (2017). https://doi.org/ 10.1145/1235 5. ANOMALI: The Definitive Guide to Sharing Threat Intelligence. Tech. rep. (2019). https:// www.anomali.com/resources/whitepapers/the-definitive-guide-tosharing-threat-intelligence 6. Bakis, B.J., Wang, E.D.: Building a National Cyber Information-Sharing Ecosystem. Tech. rep., MITRE (2017). https://www.mitre.org/publications/technicalpapers/building-a-nationalcyber-information-sharing-ecosystem 7. Bauer, S., Fischer, D., Sauerwein, C., Latzel, S., Stelzer, D., Breu, R.: Towards an evaluation framework for threat intelligence sharing platforms. In: Proceedings of the 53rd Hawaii International Conference on System Sciences. Hawaii International Conference on System Sciences (2020). https://doi.org/10.24251/hicss.2020.239 8. Borden, R.M., Mooney, J.A., Taylor, M., Sharkey, M.: Threat Information Sharing and GDPR: A Lawful Activity that Protects Personal Data. Tech. rep., FS-ISAC (2018) 9. Chandel, S., Yan, M., Chen, S., Jiang, H., Ni, T.Y.: Threat intelligence sharing community: a countermeasure against advanced persistent threat. In: Proceedings—2nd International Conference on Multimedia Information Processing and Retrieval, MIPR 2019, pp. 353–359. Institute of Electrical and Electronics Engineers Inc. (2019). https://doi.org/10.1109/MIPR.2019.00070 10. Chantzios, T., Koloveas, P., Skiadopoulos, S., Kolokotronis, N., Tryfonopoulos, C., Bilali, V.G., Kavallieros, D.: The quest for the appropriate cyber-threat intelligence sharing platform. In: Proceedings of the 8th International Conference on Data Science, Technology and Applications, pp. 369–376. SCITEPRESS—Science and Technology Publications (2019). https://doi.org/10. 5220/0007978103690376.


11. EE-ISAC—European Energy—Information Sharing & Analysis Centre: EEISAC—European Energy—Information Sharing & Analysis Centre (2020). https://www.ee-isac.eu/ 12. ENISA: Cybersecurity Incident Taxonomy. Tech. rep. (2018). https://ec.europa.eu/informati onsociety/newsroom/image/document/2018-30/cybersecurityincidenttaxonomy00CD828CF851-AFC4-0B1B416696B5F71053646.pdf 13. EU Commission: Cyber Security in the Energy Sector Recommendations for the European Commission on a European Strategic Framework and Potential Future Legislative Acts for the Energy Sector. Tech. rep. (2017) 14. European Union Agency for Network and Information Security (ENISA): A good practice guide of using taxonomies in incident prevention and detection—ENISA (2017). https://www. enisa.europa.eu/publications/using-taxonomiesin-incident-prevention-detection 15. Financial Institutes—Information Sharing and Analysis Centre: European Financial Institutes—Information Sharing and Analysis Centre, A Public-Private Partnership— ENISA (2020). https://www.enisa.europa.eu/topics/cross-cooperationfor-csirts/finance/eur opean-fi-isac-a-public-private-partnership 16. FIRST: Traffic Light Protocol (TLP) (2020). https://www.first.org/tlp/ 17. Johnson, C., Badger, L., Waltermire, D.: Guide to Cyber Threat Information Sharing. Special Publication—Council for Agricultural Science and Technology (2016). https://doi.org/10. 6028/nist.sp.800-150 18. Leach, P., Mealling, M., Salz, R.: RFC 4122—A Universally Unique IDentifier (UUID) URN Namespace (2005). https://tools.ietf.org/html/rfc4122#section4.1.1 19. Lee, R.M.: 2020 SANS Cyber Threat Intelligence (CTI) Survey (2020). https://www.sans.org/ reading-room/whitepapers/analyst/2020-cyber-threatintelligence-cti-survey-39395 20. Leszczyna, R., Wro´bel, M.R.: Threat intelligence platform for the energy sector. Softw. Pract. Exp. 49(8), 1225–1254 (2019). https://doi.org/10.1002/spe.2705 21. Levy, I., Maddy, S.: Active Cyber Defence (ACD)—The Second Year—NCSC.GOV.UK (2019). https://www.ncsc.gov.uk/report/active-cyber-defencereport-2019 22. Mavroeidis, V., Bromander, S.: Cyber threat intelligence model: an evaluation of taxonomies, sharing standards, and ontologies within cyber threat intelligence. In: Proceedings—2017 European Intelligence and Security Informatics Conference, EISIC 2017, vol. 2017-Janua, pp. 91– 98. Institute of Electrical and Electronics Engineers Inc. (2017). https://doi.org/10.1109/EISIC. 2017.20 23. Mutemwa, M., Mtsweni, J., Mkhonto, N.: Developing a cyber threat intelligence sharing platform for South African organisations. In: 2017 Conference on Information Communication Technology and Society, ICTAS 2017—Proceedings. Institute of Electrical and Electronics Engineers Inc. (2017). https://doi.org/10.1109/ICTAS.2017.7920657 24. Noor, U., Anwar, Z., Altmann, J., Rashid, Z.: Customer-oriented ranking of cyber threat intelligence service providers. Electron. Commer. Res. Appl. 41, 100976 (2020). https://doi.org/10. 1016/j.elerap.2020.100976 25. Nweke, L.O., Wolthusen, S.: Legal issues related to cyber threat information sharing among private entities for critical infrastructure protection. In: 2020 12th International Conference on Cyber Conflict (CyCon), pp. 63–78. IEEE (2020). https://doi.org/10.23919/CyCon49761. 2020.9131721 26. OASIS-OPEN: STIX 2.1 Standard (2020). https://docs.oasisopen.org/cti/stix/v2.1/cs01/stixv2.1-cs01.pdf 27. Qamar, S., Anwar, Z., Rahman, M.A., Al-Shaer, E., Chu, B.T.: Data-driven analytics for cyberthreat intelligence and information sharing. Comput. 
Secur. 67, 35–58 (2017). https://doi.org/ 10.1016/j.cose.2017.02.005 28. Rashid, Z., Noor, U., Altmann, J.: Network externalities in cybersecurity information sharing ecosystems. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 11113 LNCS, pp. 116–125. Springer (2019). https://doi.org/10.1007/978-3-030-13342-9 29. Ring, T.: Threat intelligence: why people don’t share. Comput. Fraud Secur. 2014(3), 5–9 (2014). https://doi.org/10.1016/S1361-3723(14)70469-5


30. Rosemont, H.: Public-Private Security Cooperation from Cyber to Financial Crime. Tech. rep. (2016). www.rusi.org 31. Rowley, L.: The value of threat intelligence. Comput. Fraud Secur. 2019(10), 20 (2019). https:// doi.org/10.1016/s1361-3723(19)30109-5 32. Sauerwein, C., Sillaber, C., Mussmann, A., Breu, R.: Threat Intelligence Sharing Platforms: An Exploratory Study of Software Vendors and Research Perspectives. Tech. rep. (2017). https:// aisel.aisnet.org/wi2017/track08/paper/3/ 33. Shin, B., Lowry, P.B.: A review and theoretical explanation of the ‘CyberthreatIntelligence (CTI) capability’ that needs to be fostered in information security practitioners and how this can be accomplished (2020). https://doi.org/10.1016/j.cose.2020.101761 34. Skopik, F., Settanni, G., Fiedler, R.: A problem shared is a problem halved: a survey on the dimensions of collective cyber defense through security information sharing. Comput. Secur. 60, 154–176 (2016). https://doi.org/10.1016/j.cose.2016.04.003 35. Skopik, F., Settanni, G., Fiedler, R.: Cyber threat intelligence sharing through national and sector-oriented communities. In: Collaborative Cyber Threat Intelligence: Detecting and Responding to Advanced Cyber Attacks at the National Level, pp. 129–186. CRC Press (2018). https://doi.org/10.4324/9781315397900 36. Stoddart, K.: UK Cyber Security and Critical National Infrastructure Protection; UK Cyber Security and Critical National Infrastructure Protection. Tech. rep. (2016). https://doi.org/10. 1111/1468-2346.12706. http://www.bbc.co.uk/news/world-uscanada-34641382. 37. Sullivan, C., Burger, E.: “In the public interest”: the privacy implications of international business-to-business sharing of cyber-threat intelligence. Comput. Law Secur. Rev. 33(1), 14– 29 (2017). https://doi.org/10.1016/j.clsr.2016.11.015 38. Tounsi, W., Rais, H.: A survey on technical threat intelligence in the age of sophisticated cyber attacks (2018). https://doi.org/10.1016/j.cose.2017.09.001 39. UK Government: Cyber Threat Intelligence in Government: A Guide for Decision Makers & Analysts. Tech. rep. (2019). https://hodigital.blog.gov.uk/wpcontent/uploads/sites/161/2020/ 03/Cyber-Threat-Intelligence-A-Guide-ForDecision-Makers-and-Analysts-v2.0.pdf 40. UK Government: Detecting the Unknown: A Guide to Threat Hunting. Tech. rep. (2019). https://hodigital.blog.gov.uk/wp-content/uploads/sites/161/2020/03/Detectingthe-Unknown-A-Guide-to-ThreatHunting-v2.0.pdf 41. Wagner, T., Palomar, E., Mahbub, K., Abdallah, A.: A Novel Trust Taxonomy for Shared Cyber Threat Intelligence (2018). https://www.hindawi.com/journals/scn/2018/9634507/ 42. Wagner, T.D., Mahbub, K., Palomar, E., Abdallah, A.E.: Cyber threat intelligence sharing: survey and research directions. Comput. Secur. 87, 101589 (2019). https://doi.org/10.1016/j. cose.2019.101589 43. Zibak, A., Simpson, A.: Can we evaluate the impact of cyber security information sharing? In: 2018 International Conference on Cyber Situational Awareness, Data Analytics and Assessment, CyberSA 2018. Institute of Electrical and Electronics Engineers Inc. (2018). https://doi. org/10.1109/CyberSA.2018.8551462 44. Zibak, A., Simpson, A.: Cyber Threat Information Sharing: Perceived Benefits and Barriers (2019). https://doi.org/10.1145/3339252.3340528 45. Zibak, A., Simpson, A.: Cyber threat information sharing: perceived benefits and barriers. In: Proceedings of the 14th International Conference on Availability, Reliability and Security, pp. 1–9 (2019) 46. 
Zrahia, A.: Threat intelligence sharing between cybersecurity vendors: network, dyadic, and agent views. J. Cybersecur. 4(1) (2018). https://doi.org/10.1093/cybsec/tyy008

A Free and Community-Driven Critical Infrastructure Ransomware Dataset

Aunshul Rege and Rachel Bleiman

Abstract Recent ransomware attacks against critical infrastructure have stressed the need for a deeper understanding of the threat landscape and trends. Yet, this is hard to do due to the limited availability and sharing of data in open source. This paper provides the justification for, and an overview of, the creation and dissemination of a Critical Infrastructure RansomWare (CIRW) dataset. It provides an overview of the CIRW dataset requesters and how they intend to use it. The paper also describes how the dataset has changed over time based on self-assessments and community recommendations. The paper concludes by sharing the many benefits of maintaining an open dialog with the community and its recommendations, in the hope that it will inspire other researchers to take on new and innovative research agendas.

Keywords Ransomware · Critical infrastructure · Open-source dataset · Community-driven dataset

1 Introduction

In May 2021, the Georgia-based Colonial Pipeline was subjected to a ransomware attack, which forced the company to shut down its operations for over two weeks and pay approximately USD 4.4 million [1]. Later that month, JBS, a multi-national meat manufacturer operating in the US, was also hit with a ransomware attack, which shut down its beef plants. The company paid approximately USD 11 million in ransom [1]. These two incidents were just a sample of the increasing ransomware attacks against critical infrastructures. Ransomware attacks in 2020 were up more than 150% compared to the previous year, while ransomware payments were up over 300% [2].


One security firm noted that ransomware attacks occurred once every eight minutes [3]. Ransomware is malicious software that allows cybercriminals to lock and encrypt a target’s computer or data and then demand a ransom payment to restore access [4]. Critical infrastructures are those entities (energy, communications, finance, etc.) so vital to the everyday functioning of a country that their incapacitation or destruction would have devastating consequences for the country’s security and daily operations [5]. Needless to say, ransomware attacks against critical infrastructure can be severely detrimental financially and operationally, as evidenced by the above-mentioned Colonial Pipeline and JBS incidents. Countering ransomware is problematic for several reasons, such as the ever-changing and innovative tactics used by ransomware groups, the low entry barriers created by the availability of Ransomware-as-a-Service (RaaS), jurisdictional issues, anonymous and untraceable ransom payments, and the limited availability of threat intelligence data [6, 7]. Specifically, 65% of security teams stated that they lacked access to cohesive data, which prevented them from understanding threat landscapes, current trends, and how their own organizations fared in relation to other documented ransomware attacks [6]. Researchers and academics face similar hurdles in their own studies on ransomware; datasets are difficult to obtain, are expensive, or require non-disclosure agreements that may prevent them from sharing their findings. For researchers and practitioners to develop new and effective solutions and policies that tackle ransomware, open ransomware datasets are a must. These datasets will help in better understanding existing ransomware samples, their features, and their working mechanisms [8]. This paper focuses on one aspect of the ransomware challenge, namely the lack of freely available, community-driven ransomware data, via the creation and dissemination of a Critical Infrastructure RansomWare (CIRW) dataset. The next section provides an overview of various ransomware datasets that are strain-based, detection-based, and payment-based, to situate the need for the CIRW dataset. The third section discusses the OSEMN framework, which was used as a guide to create the CIRW dataset. The fourth section shares an overview of the CIRW dataset requesters and their various planned uses for the dataset. The fifth section offers insight into dataset modifications made over time based on self-assessments, and the sixth section shares the recommendations received from the user community and how these were incorporated. The paper concludes by sharing additional recommendations from the community, which the dataset could not accommodate, in the hope that they will inspire other researchers to take on new projects that reflect the needs of the community.

2 Ransomware Datasets

There are several ransomware datasets, which can be categorized into three main types based on their focus: strains, detection, and payments.


2.1 Strain-Based Ransomware Datasets

Several strain-based ransomware datasets are focused on various ransomware families, such as Locky and CryptoWall, which helps in understanding strain anatomy [9]. Some of these datasets are available at github.com/liato/androidmarket-api-py, maccdc.org, malwareblacklist.com, shieldfs.necst.it, virustotal.com, virusshare.com, and vxvault.siri-urz.net [9]. A more complete list of ransomware datasets can be found in [9]. The ISOT Research Lab ransomware dataset has more than 650 ransomware samples representing the most popular strains in the wild [10]. Another well-known resource is darktracer.com, which offers case study analyses of specific ransomware strains and groups [11].

2.2 Detection-Based Ransomware Datasets

The UGRansome dataset was created from network traffic and represents “cyclostationary patterns of normal and abnormal classes of threatening” behaviors, with the aim of detecting previously unexplored zero-day attacks [12]. The RanSAP dataset houses ransomware storage access patterns, which can help with the development of machine learning-based ransomware detection systems [13]. This dataset can be found at github.com/manabu-hirano/RanSAP/.

2.3 Payment-Based Ransomware Datasets

Some datasets focus on the payments involved in ransomware attacks. One of these can be found at github.com/behas/ransomware-dataset, which looks at ransomware payments related to 35 strains from 2013 to 2017. Another dataset, which captures bitcoin transactions related to ransomware attacks from 2009 to 2018, is available at kaggle.com/sapere0/bitcoinheist-ransomwaredataset. Ransomwhere (https://ransomwhe.re/) is an open, crowdsourced ransomware payment tracker, which connects ransom amounts with various strains [14].

2.4 The Need for a Critical Infrastructure Ransomware Dataset

Each of these datasets offers a unique ransomware focus and provides the research community with open-access data for analysis. While the emphasis on strains, detection, and payments is undoubtedly important, another area where more work is needed is a broader view of overall ransomware incidents, specifically against critical infrastructures, given the recent increase in attack frequency and impacts on functionality.


Furthermore, a dataset that is regularly maintained and driven by insights from the community is another feature that is often missing from existing datasets. The next section shares the design logic of the CIRW dataset, its data collection procedures, and its codebooks, which are connected to other well-established taxonomies.

3 Critical Infrastructure Ransomware Dataset

The authors were inspired by the data science project lifecycle identified in the OSEMN framework [15]. For this dataset, the authors followed the first three steps and modified them to cater to their needs. The first step was to Obtain the data; the second step was to Scrub, or clean, the data; and the third step was to Explore—here the authors sifted through their data to find themes and the relationships between them, and to connect the themes they found to other taxonomies.

3.1 Obtain

Obtaining the necessary data was the first step in the process. Much of the relevant data was hidden behind paywalls or simply did not exist in an aggregate form. To overcome this challenge, the authors used open-source data collection methods. The dataset was meant to include both current and past incidents, and the methods of obtaining the data for the two types of incidents differed. For past cases, the primary data collection method was following hyperlinks within relevant articles that discussed other potential incidents; additionally, the authors used the Google search engine. For current or recent cases, the primary method entailed setting up a Google alert for the keyword ‘ransomware’, which provided the authors with a list of articles involving ransomware in various contexts. This included data such as local and national news articles, public statements from critical infrastructure companies, updates from technology news websites, and even public social media posts from accounts that were traced through the dark web.
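A sketch of how this alert-driven collection step could be automated is shown below. It assumes Google Alerts delivering matches to an RSS feed and uses the feedparser library; the feed URL and keyword list are placeholders, and the authors describe a largely manual review workflow.

```python
# Sketch of how the Google-alert-driven collection step could be automated:
# Google Alerts can deliver matches to an RSS feed, which is then filtered
# for articles that look like critical-infrastructure incidents. The feed URL
# and keyword list are placeholders.
import feedparser

ALERT_FEED = "https://www.google.com/alerts/feeds/EXAMPLE_FEED_ID"
CI_KEYWORDS = ("hospital", "pipeline", "utility", "school district",
               "water", "transit", "municipality", "energy")

def candidate_articles(feed_url: str) -> list[dict]:
    """Return candidate ransomware-on-CI articles for manual review."""
    feed = feedparser.parse(feed_url)
    candidates = []
    for entry in feed.entries:
        text = f"{entry.get('title', '')} {entry.get('summary', '')}".lower()
        if "ransomware" in text and any(k in text for k in CI_KEYWORDS):
            candidates.append({"title": entry.get("title"),
                               "link": entry.get("link"),
                               "published": entry.get("published", "")})
    return candidates

# for item in candidate_articles(ALERT_FEED):
#     print(item["title"], item["link"])
```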

3.2 Scrub

The second step of the process was to scrub the data. Each article that the authors received from the Google alert was combed through in search of any mention of a ransomware attack on critical infrastructure. Once these sources were identified, the authors read each article in greater detail to ensure there was relevant information on the specific incident. On occasion, the articles would not contain enough information, and in such cases, the authors sought additional public sources for the missing information.


When the authors found several data points describing the same incident, they compared them to see whether they complemented or supplemented each other; the former allowed for validation, and the latter permitted a richer description of the incident. On rare occasions, the authors came across incidents where it was unclear whether the target organization qualified as critical infrastructure; in those cases, the authors consulted with each other and referred to existing infrastructure taxonomies to make a decision.
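The comparison of complementary and supplementary sources could be supported by a simple merging routine such as the sketch below. The field names and example records are illustrative and do not reproduce the authors' actual workflow.

```python
# Sketch of reconciling several articles that describe the same incident:
# sources are grouped by (organization, month) and their fields compared, so
# matching values validate one another, new fields enrich the record, and
# conflicting values are flagged for manual review. Data is illustrative.
from collections import defaultdict

articles = [
    {"org": "Example City Utility", "date": "2021-05", "ransom_usd": 50000, "strain": "Ryuk"},
    {"org": "Example City Utility", "date": "2021-05", "ransom_usd": 50000, "paid": "No"},
]

def merge_incident(sources: list[dict]) -> dict:
    record, conflicts = {}, []
    for src in sources:
        for field, value in src.items():
            if field in record and record[field] != value:
                conflicts.append(field)        # disagreement -> manual review
            record.setdefault(field, value)    # new field -> richer description
    record["needs_review"] = conflicts
    return record

groups = defaultdict(list)
for art in articles:
    groups[(art["org"], art["date"])].append(art)

incidents = [merge_incident(srcs) for srcs in groups.values()]
print(incidents[0])
```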

3.3 Explore

Exploring was the third step in the OSEMN framework and the last step of the framework that the authors followed. During this step, the authors sifted through the various incidents to identify common themes. If a theme consistently emerged across the articles, it was designated as a variable in the dataset; these variables and their definitions are listed in Table 1. The authors also connected their data to other well-established taxonomies. For instance, the Cybersecurity and Infrastructure Security Agency (CISA) has an excellent taxonomy that clearly identifies and defines 16 critical infrastructure sectors in the US [16]. The authors leveraged this pre-existing taxonomy to help organize the incidents in the CIRW dataset (see Sect. 5.4 for more details). The authors also utilized the MITRE ATT&CK framework, which allowed them to map ransomware strains identified in the open-source cases to the ATT&CK ‘software IDs’ (see Sect. 6.2 for more details) [17].
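A minimal sketch of the strain-to-software-ID mapping step follows. The lookup table is illustrative, covers only a few well-known strains, and the IDs should be verified against the ATT&CK knowledge base before use.

```python
# Sketch of the Explore-step mapping of ransomware strains named in articles
# to MITRE ATT&CK software IDs. The lookup shows the idea with a few
# well-known strains; IDs should be verified against the ATT&CK knowledge
# base, and unmapped strains are flagged for manual review.
ATTACK_SOFTWARE_IDS = {
    "wannacry": "S0366",   # verify against attack.mitre.org
    "notpetya": "S0368",
    "ryuk": "S0446",
}

def map_strain(strain: str | None) -> str:
    """Return the ATT&CK software ID for a strain name, if known."""
    if not strain:
        return "unknown"
    return ATTACK_SOFTWARE_IDS.get(strain.strip().lower(), "unmapped - review manually")

print(map_strain("Ryuk"))      # S0446
print(map_strain("SamSam"))    # unmapped - review manually
```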

3.4 Limitations

A limitation of this dataset is the reliance on open-source data, which may not provide the ‘complete picture’ of ransomware incidents across CI sectors. Relatedly, some sectors may be overrepresented, while others are underrepresented. For instance, our dataset suggests that the government facilities and healthcare and public health sectors are targeted more than the nuclear reactors, materials, and waste and defense industrial base sectors. While this may be the case, there are several reasons why some organizations and sectors may choose not to publicly disclose that they have been victims of a ransomware attack. First, there is damage to their reputation; customers or clients may lose trust or be wary of engaging with an organization if they fear there is a risk of their personal data being compromised or of the intended services being inadequate. Second, there is the risk of being held liable for any ensuing fallout or public panic due to the repercussions of the attack. Third, there is the fear that announcing an attack reveals internal vulnerability and may make the organization a more appealing target in the future.


Table 1 CIRW dataset variables and definitions

ID number: The unique identifier of each incident, assigned sequentially with incidents sorted from oldest to newest.
Date began: The date that the ransomware attack was first identified by the organization.
Year: The year that the ransomware attack was first identified by the organization.
General date: If the Date Began is unknown, the first day of the month or year that is known (e.g., “In March of 2016” -> 03/01/2016; “In 2016” -> 01/01/2016).
Organization name: The name of the organization that was attacked with ransomware.
Location: The US state (or country, if outside the US) of the organization that was attacked.
CIS targeted: The critical infrastructure sector(s) to which the organization belonged, categorized using the United States Department of Homeland Security’s 16 identified critical infrastructures, with the additional sector of Education Facilities.
Strain: The specific strain of ransomware used in the attack and its corresponding MITRE ATT&CK ID, if known.
MITRE ATT&CK Software ID [if exists]: Software is a generic term for custom or commercial code, operating system utilities, open-source software, or other tools used to conduct behavior modeled in ATT&CK.
Duration: If the ransom was paid, the time it took for the organization to pay the ransom and recover. If the ransom was not paid, the time it took for the organization to recover.
Duration rank: The nearest (without exceeding) duration rank: 1 day or less, 1 week or less, 1 month or less, more than 1 month.
Ransom amount: The ransom demand in the original currency of the demand (bitcoin, USD, Euro, etc.).
Local currency: The monetary amount demanded by the ransom, converted to USD.
Ransom amount rank: The nearest (without exceeding) ransom amount rank in USD: $1,000 or less, $50,000 or less, $100,000 or less, $1 million or less, $5 million or less, more than $5 million.
Paid status: Whether or not the organization paid the ransom to the attackers (Yes/No).
Pay method: If Paid Status is Yes, the method the organization used to pay the ransom to the attackers (e.g., Bitcoin, cash, Western Union, etc.). If Paid Status is No, then ‘N/A’.
Amount paid: If Paid Status is Yes, the ransom amount that the organization paid to the attackers, in the Pay Method unit.
Source: The URL of the article from which the incident was identified.
Related incidents: The list of incidents, by ID Number, thought to be part of the same attack.
Comments: Any significant details not covered by other variables (e.g., declaration of a state of emergency, total cost to rebuild, etc.).
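As a usage illustration, the rank variables defined above can be recomputed from the released spreadsheet with pandas. The file name and column labels below are assumptions and may differ from the distributed Excel file.

```python
# Sketch: loading the CIRW Excel file with pandas and recomputing the two
# rank variables defined in Table 1 (nearest bin without exceeding). The
# file name and column labels are assumptions and may differ in the
# distributed spreadsheet.
import pandas as pd

df = pd.read_excel("CIRW_dataset.xlsx")

ransom_bins = [0, 1_000, 50_000, 100_000, 1_000_000, 5_000_000, float("inf")]
ransom_labels = ["$1,000 or less", "$50,000 or less", "$100,000 or less",
                 "$1 million or less", "$5 million or less", "more than $5 million"]
df["Ransom amount rank (recomputed)"] = pd.cut(
    df["Ransom amount (USD)"], bins=ransom_bins, labels=ransom_labels)

duration_bins = [0, 1, 7, 31, float("inf")]  # days
duration_labels = ["1 day or less", "1 week or less", "1 month or less", "more than 1 month"]
df["Duration rank (recomputed)"] = pd.cut(
    df["Duration (days)"], bins=duration_bins, labels=duration_labels)

# Example summary: incidents per critical infrastructure sector and year.
print(df.groupby(["CIS targeted", "Year"]).size().head())
```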

3.5 Dataset Format, Hosting, and Usage Tracking

One of the major hurdles the authors had experienced in their own research work was the hefty cost of datasets. When considering platforms to host the dataset, the authors wanted to ensure there would be no issues with access fees or usage rights. With simplicity and accessibility in mind, the authors decided to structure the dataset as an Excel file that users could request via their research website (https://sites.temple.edu/care/ci-rw-attacks/). The authors also wished to track who was requesting the dataset and why, as this helped them understand their community base.

4 Requesters and Dataset Use The first iteration of the CIRW dataset was ready for public use in September 2019. To date, the CIRW dataset has been requested 925 times by various individuals and organizations around the world. This section provides a breakdown of users and their need for the dataset, without disclosing their identities or sensitive information regarding requests. While the CIRW dataset was initially created to cater to the academic community, it has extensively been requested by users outside of the education sector.


4.1 Industry Industry representatives formed the largest category of CIRW dataset requesters, at approximately 29% (268). Some of the intended uses for the dataset included research and training of internal team members; client, stakeholder, and senior management education and awareness; incident response planning; threat assessment and modeling; current and historical incident examples; trend analysis; sector-specific attack analysis; risk analysis and insurance; basic learning and knowledge development; comparison with internal datasets; understanding payment ecosystems; and follow-up on MITRE ATT&CK mappings.

4.2 Government Government entities (10%; 96 requests) were also interested in the CIRW dataset for an assortment of reasons, such as: developing training classes and exercise scenarios for staff and operators; identifying trends, patterns, and evolution overall as well as for specific infrastructures; research; assessing internal incident response efforts; formulating detection and defense strategies; threat intelligence and modeling; program, policy, and strategy development; leveraging the data to obtain internal defense-related funding for staff, equipment, and tools; driving the risk home to owners, operators, and manufacturers of critical infrastructure to support effective decisions and budgets; understanding MITRE ATT&CK mappings to understand TTPs (tactics, techniques, and procedures) and corresponding mitigations; comparing against internal datasets; research on specific ransomware strains; developing indicators of compromise; developing internal resource lists and reports; and developing risk assessments and corresponding insurance policies.

4.3 Educators Approximately 11% (98) of dataset requesters identified as educators. Some educators wanted to use the dataset to design course projects for their undergraduate classes. The majority of educators planned to use the dataset to conduct research as part of their academic endeavors. Some of these research projects included analyzing threats, vulnerabilities, and attacks; developing potential business policies and procedures for incident response; examining effects on company performance; developing ransomware attack models; historical analysis and recent trends; and industry-specific analysis.


4.4 Students Of all the dataset requesters, roughly 16% (146) identified themselves as students. Undergraduate students requested the dataset to help with their class projects or theses. Some of these projects included information visualization of ransomware attacks and determining whether certain infrastructure sectors are safer (less vulnerable to ransomware attacks) than others. Graduate students used the dataset to help with literature reviews, the creation of dataset repositories, theses and dissertations, and analyses of global trends and frequencies of incidents. The majority of graduate and undergraduate students learned about the dataset from their educators.

4.5 Journalists/reporters A small proportion (1%; 10 requests) of the dataset requesters were journalists. In light of the 2021 Colonial Pipeline attack, journalists wanted to write articles on ransomware. Specifically, journalists stated they would use the dataset in the following ways: as a reference for research; to study trends, frequency, and impact; to develop a general understanding of the threat to critical infrastructures; and to compare incidents in their respective hometowns to those listed in the dataset.

5 Incorporating Recommendations to Make the Dataset Community-Driven In addition to making the CIRW dataset publicly available, the authors wanted to engage with various community members (identified in Sect. 4), welcome their feedback, and incorporate as many recommendations as possible. Given that the dataset was based on open-source information, the authors were able to accommodate some of these recommendations.

5.1 Recommendation 1: Document Modifications (V10.1, August 2020) The community made the excellent recommendation of including a modifications section in the CIRW dataset with each iteration to inform users about incident additions, deletions, and modifications. The modifications section would also include major revisions, such as introductions of or changes to codebooks, the introduction of ATT&CK mapping (see below), and the listing of any errors or contributions from the community (see below). This recommendation improved the dataset’s clarity and transparency, and helped users keep track of major changes between iterations.

5.2 Recommendation 2: MITRE ATT&CK Mapping (V10.1, August 2020) One of the most frequent requests the authors received from the community was to map the dataset to the MITRE ATT&CK framework. The ATT&CK tactics and techniques could not be easily mapped, as these were not always clearly listed in the open-source information. Therefore, the most logical solution to address this request was to map the ransomware strain variable to the corresponding ATT&CK ‘software’, which was provided as a hyperlink in the dataset. Not all strains can be mapped to a MITRE software ID; however, those that can be mapped include some of the more common strains, such as REvil, Conti, and Ryuk. This mapping and linking allow users to pursue the ATT&CK mapping on their own.
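As an illustration of what such a strain-to-software link might look like in practice, the short Python sketch below builds a lookup table of hyperlinks of the kind described above. The specific software IDs shown are illustrative assumptions and should be verified against attack.mitre.org before use.

```python
# Hypothetical strain-to-ATT&CK-software lookup mirroring the hyperlink
# column described above. The IDs are illustrative; verify each entry
# against https://attack.mitre.org/software/ before relying on it.
STRAIN_TO_ATTACK_SOFTWARE = {
    "REvil": "https://attack.mitre.org/software/S0496/",
    "Ryuk": "https://attack.mitre.org/software/S0446/",
    "Conti": "https://attack.mitre.org/software/S0575/",
}

def attack_software_link(strain: str):
    """Return the ATT&CK software page for a strain, or None if unmapped."""
    return STRAIN_TO_ATTACK_SOFTWARE.get(strain)

print(attack_software_link("Ryuk"))
```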

5.3 Reporting Missing Incidents and the Contributors Tab (V10.4, Oct 2020) The community sometimes provided feedback via email on incidents that were missing in the CIRW dataset. To streamline this process and encourage continued engagement with the community, the authors created an online submission form. Users could provide the type of infrastructure targeted, the reference link for the open-source documentation of the incident, and whether their organization wanted to be recognized for their contribution in the next dataset release. If users noted they would like to be acknowledged, a dedicated Contributors tab was created in the dataset which credited the organization for its contribution.

6 Conclusion The CIRW dataset has been used extensively by students, educators, journalists, government, and industry. It has received positive reviews and has been referenced in many articles in outlets such as Security Week, Gartner, Security Magazine, SentinelOne, Bleeping Computer, Dark Reading, the Washington Post, Bloomberg, USA Today, and the Institute for New Economic Thinking, to name a few (for a complete list with corresponding links, please visit the dedicated CIRW dataset page at sites.temple.edu/care/ci-rw-attacks).


The wider community has greatly engaged with the CIRW dataset and has improved its quality, usefulness, transparency, and clarity. Users found the chronologically ordered incidents from different sectors, the rigorous codebooks that offer transparency and clarity, and the modified records listed in the CIRW dataset to be quite useful. Welcoming feedback from the community has several advantages: the dataset is regularly evaluated by experts and offers insights into community needs, which were accommodated to the best of the authors’ abilities. However, there were some recommendations that the authors could not fulfill, and these are shared next.

6.1 Recommendations that Could not Be Accommodated The authors received some recommendations that could not be addressed because the required information was not consistently provided in open sources and therefore did not warrant the creation of new variables in the CIRW dataset. Instead, the authors created a new tab in the dataset (v 10.4, October 2020) titled “Wish List” that listed the various items that some of the requesters had identified as being interesting or useful. The objective in creating this tab was that every time the dataset was disseminated, individuals across multiple sectors (as identified in Sect. 4) could see the wish list and potentially develop these items into research projects. Below are some recommendations from the community:
1. Identifying the points of entry, such as phishing; users wished to understand how the ransomware was introduced into targeted environments, as this would help inform better prevention strategies.
2. Users also wished to learn about the threat agents behind the ransomware attacks. Unfortunately, this information was not consistently provided in the open-source information and thus could not be justified as a standalone variable. Furthermore, the authors had already mapped the strain to the ATT&CK software IDs, so users could pursue the ATT&CK framework to learn more about the attack groups.
3. A third wish from the users was to understand the impact (in addition to the financial loss via ransom payment) on the targeted critical infrastructure, such as downtime; resource usage (person hours, technical) to address the attack; the type, intensity, and duration of disruption of services and operations; and interconnected attacks. Unfortunately, open-source articles did not provide this information and the authors could not incorporate it into the CIRW dataset.
4. A few users also wanted to see technical analysis, which would help researchers understand more about certain malware. This could be accomplished by cross-referencing with other datasets, such as those identified in Sect. 2.1, which was outside the scope of the CIRW dataset and is something that users could pursue on their own.
5. Some members of the community requested graphical representations of the data beyond what was provided on the CIRW website (a bar graph documenting incidents over the years). Specifically, they would have liked the numbers and trends broken down by attack origin country. This request was not accommodated because users could pursue this on their own.
6. A more recent request was distinguishing between the OT (operational technology) and IT (information technology) aspects of the attack, but like the other wish list items, this distinction was often not made in publicly disclosed attacks and, as such, could not be accommodated.
The above wish list items have been included as a dedicated tab in the dataset with each release; the authors hope that others in the community can pick these up and develop new and innovative research trajectories. Finally, the authors acknowledge that the CIRW dataset is a dynamic entity and that continuous engagement with students, educators, government, and industry will help it develop, grow, and serve the needs of the community.


Criteria for Realistic and Expedient Scenarios for Tabletop Exercises on Cyber Attacks Against Industrial Control Systems in the Petroleum Industry Andrea Skytterholm and Guro Hotvedt

Abstract Digitalization of the petroleum industry entails a greater interconnection between Information Technology (IT) and Industrial Automation and Control Systems (IACS), and has led to an increased attack surface. To mitigate the consequences of incidents and to ensure safe operation, the industry uses preparedness exercises. Previously, these exercises have concerned safety-related incidents. Today, digitalization requires the industry to also exercise security incidents, especially incidents directed towards IACS. While the industry has explicitly called for more detailed guidelines in the area of cyber security and IACS, few guidelines are currently available. We aimed to lessen this shortcoming by investigating descriptions of events to use in exercises, known as scenarios. This project investigated what characterizes a realistic and expedient scenario for preparedness exercises on cyber attacks against IACS in the petroleum industry, with a focus on tabletop exercises. Based on data collected through interviews, a list of criteria that characterize such scenarios was created. The list was validated and approved by respondents from two different operator companies. The results highlight the importance of basing the scenario on today’s threat landscape, making the scenario plausible, and designing the scenario such that it leads to a challenging tabletop exercise that also gives the participants a sense of empowerment. Keywords Cyber security · Incident response · Incident handling · CERT · CSIRT · ISAC

A. Skytterholm (B) SINTEF Digital, Trondheim, Norway e-mail: [email protected] G. Hotvedt Watchcom, Oslo, Norway © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 C. Onwubiko et al. (eds.), Proceedings of the International Conference on Cybersecurity, Situational Awareness and Social Media, Springer Proceedings in Complexity, https://doi.org/10.1007/978-981-19-6414-5_3


1 Introduction The potential for severe damage in oil and gas operations [1] has led the petroleum industry to maintain a constant focus on training and exercises [2]. Consequences of accidents may be loss of human lives, damage to equipment, environmental damage, and economic losses [1]. It is imperative that the industry does everything possible to avoid such situations. Systems responsible for the operation of industrial processes in the petroleum industry are called Industrial Automation and Control Systems (IACS). IACS refers to a collection of hardware, software, and personnel that can influence or affect the reliable operation of an industrial process, as well as the safety and security of the process [3]. Originally, these systems were designed to work in a closed environment [4]. With the ongoing digitalization, IACS are now connected to Information Technology (IT), i.e., systems that control digital information and are connected to the Internet [5]. This interconnection reduces operating costs, increases efficiency, and opens new possibilities, such as remote access to offshore platforms [4]. Despite these positive outcomes, an increased attack surface with new risks and threats arises when IACS are exposed to the Internet [6]. Among these new threats are cyber attacks [6], attempts to gain unauthorized access to a computer, computing system, or computer network in order to cause damage [7]. Today, an attacker can perform a cyber attack against a platform that may lead to physical consequences [6].
Threats and risks can be categorized as safety- or security-related. Safety focuses on securing against unintentional events, such as faults in the systems, while security focuses on securing against intentional events. Training of personnel and preparedness exercises are used to increase safety and security, and also to uncover defects or omissions. Previously, the focus of training and exercises in the petroleum sector has been safety. However, the digitalization of the sector has increased the industry’s need to address security-related incidents in its training and exercise programs as well. Threats compromising security and IACS components are relatively new in the industry. Hence, the industry needs guidelines on how best to develop and conduct exercises in this area. A report published in 2020 by DNV GL for the Petroleum Safety Authority Norway (PSA) states that the industry lacks clear and concise guidelines for this, which was the motivation for our project. Furthermore, the report states that existing guidelines are not comprehensive and that there is a desire for new guidelines in the area of cyber attacks against IACS [6].
Our research has focused on descriptions of events to use in exercises, known as scenarios. This paper attempts to answer which scenarios are expedient and realistic for tabletop exercises related to cyber attacks against IACS in the petroleum industry; in concrete terms, this means identifying which criteria must be evaluated in order to categorize a scenario as expedient and realistic. In this paper, we define expedient scenarios as scenarios that give a valuable learning outcome for the participants. Realistic scenarios are scenarios that could actually happen and hence are important to prepare for. The criteria and scenarios resulting from this research can be used by the industry to prepare for potential cyber attacks against IACS. This paper is based on the work conducted in our master’s thesis [8].
The rest of the paper is structured as follows. Section 2 presents related work. The method used for this study is described in Sect. 3. Section 4 presents the findings from the interviews, the list of criteria, and a brief description of the scenarios that were developed based on findings from the interviews. Section 5 discusses the results, and Sect. 6 concludes the paper and provides directions for future work.

2 Background When working with cyber security and incident management, companies need to follow a holistic approach. This approach includes conducting several activities to prepare for, handle, and learn from different types of attacks. ISO 27035 is one of several standards describing this approach and is designed for incident management of information security events. Most of these standards recommend that companies perform training and exercises in the preparedness phase. Preparedness exercises are therefore a vital part of mitigating the risk and consequences of incidents. Today, companies in the petroleum industry often use functional exercises, including penetration testing and red-team exercises, as preparedness exercises for cyber incidents [6]. These exercises focus on testing one part of a system or one process in the organization. The report from DNV GL specifies that, today, these exercises mostly focus on the IT network, whereas IACS is not included to a desirable extent [6]. In our research, we chose to focus on another type of exercise, tabletop exercises. These require fewer resources than other exercises, such as functional and full-scale exercises, and can also include a larger number of participants. When conducting tabletop exercises, the participants are presented with a given scenario that describes an unwanted situation. They then discuss how they would handle the situation. Well-designed scenarios are a necessity for realistic and expedient tabletop exercises. To understand how to best develop and use scenarios in a tabletop exercise, we studied the existing literature for recommended characteristics, criteria, and recommendations for scenarios. As the findings for scenarios to be used in tabletop exercises regarding cyber attacks against IACS were limited, characteristics for scenarios to use in other types of exercises were also included, as were characteristics for tabletop exercises in general and for scenarios not targeting cyber attacks against IACS. Several articles, reports, guides, and a master thesis have been studied and are used as references. In the following subsections, we present the results separated into how to develop a scenario, characteristics of individual scenarios, and characteristics of a tabletop exercise.


2.1 Scenario Development Kukk writes in her master thesis that the scenario should be developed to reflect reality [9], and The Norwegian Water Resources and Energy Directorate (NVE) emphasizes in their guide on training and exercises the importance of making the scenario realistic [10]. To do so, NVE, The Norwegian Directorate for Civil Protection (DSB), and The Norwegian Digitalization Agency (DigDir) provide in their guides some recommendations on what to base the scenarios on [10–12]. NVE and DSB suggest basing the scenario on previous situations [10, 12]. DSB elaborates that previous situations can be used as valuable sources of inspiration [12]. According to NVE, scenarios may be based on risk assessments, the threat landscape, and experiences (both internal and external) [10]. The experiences can be from previous exercises, accidents, incidents, or other unwanted situations. This may help describe a scenario the company considers a real threat [10, 13]. DigDir also highlights risk assessments [11]. In addition, other companies’ exercises or actual incidents are suggested as bases for scenarios, and these can be used to adapt required training ahead of the exercise. However, DigDir specifies that the content does not need to be based on something that has previously happened, as this challenges the participants’ ability to improvise. The authors of a report concerning a scenario method for Crisis Intervention and Operability analysis (CRIOP) mention that basing a scenario on previous situations that have occurred on installations in the North Sea will make the scenarios realistic, as the participants know it has happened before [14].
In 2013, the Norwegian Defence Research Establishment (FFI) published a method for the development of scenarios for games and exercises. The focus of the report is to describe a method for structured scenario development. In this report, FFI notes that it is often expedient to split a larger scenario into smaller, time-limited phases or episodes, called vignettes. In this way, the participants of an exercise can focus on particular tasks or threats [15]. NVE also states that it is expedient to present more extensive exercises in several parts and to have inputs from the playbook, which in turn requires the scenario to be more complex [10]. They further elaborate that the scenario may start with a backdrop before proceeding to the first part of the exercise with low intensity. The intensity could then build and increase through the scenario and exercise such that it reflects a realistic situation [10].

2.2 Characteristics of a Scenario In the studied literature, we found several characteristics that could be used to categorize a scenario as realistic and expedient. The first characteristic, relevant, is highlighted by NSM, NVE, DigDir, FFI, and Louis van der Merwe [10, 11, 15–17].


NSM addresses that the scenario should contain sufficient information to be valuable and relevant [16]. The next characteristic found in the literature is plausible. According to NSM in their fundamental principles and a report from FFI, plausible means that the scenario is realistic and that what the scenario describes can become a reality [15, 16]. FFI also specifies that the scenario need not be the most probable event to be plausible. The future is unpredictable, and history shows that major unforeseen changes may occur [15]. According to the literature, the scenario should also be short and concise to be well-designed. This characteristic is supported by both DigDir and a publication on training and exercises for IT plans and capabilities by the National Institute of Standards and Technology (NIST) [11, 18].
In the CRIOP report, the authors present a list of criteria that should be taken into consideration when selecting scenarios for their scenario analysis. This list addresses, among others, five characteristics we want to highlight as relevant for our research: feasibility, acceptance, hazard potential, specificity, and complexity [14]. By feasibility, the authors mean that the scenario must be physically possible to conduct. Acceptance involves the scenario being accepted as possible among the participants, while the hazard potential characteristic specifies that the scenario has the potential to cause major accidents or installation damage. The characteristic of specificity is crucial for scenarios [14]. Specificity means that the scenarios must be specific to the installation that will play out the scenario, and thus must be adapted to the operator’s systems [14]. Complexity, the last characteristic CRIOP presents, means that the scenario should be complex enough to stress the participants. Some keywords they further present are simultaneous operations/incidents, extensive communication, and the fallacy of multiple safety barriers [14].
The literature also highlights the importance of the scenario being credible. This is supported by both DigDir and FFI [11, 15]. FFI further specifies that a credible scenario is achieved through the involvement of and anchoring with stakeholders, among other things [15]. In addition, they specify that the scenario can achieve credibility through a transparent process. The transparent process connects the goals and guidelines of working with the scenario to the actual content in a concise, coherent, and traceable manner.

2.3 Characteristics of a Tabletop Exercise The literature presents two characteristics of how to design an expedient tabletop exercise that gives a valuable learning outcome. The exercise and the scenario should both create a sense of empowerment and challenge the participants. Gleason mentions in his article that tabletop exercises should motivate the exercise participants [19]. This aspect is supported by Patrick and Barber, and NVE. Patrick and Barber address that the exercise should be a positive experience and provide a solid learning and training value to the individuals and organizations involved in the exercise [20]. Creating a sense of empowerment for the participants will ensure that it is a positive experience and provides a solid learning outcome and training value for both the participants and the exercising company [20]. NVE also addresses that the participants need to experience mastery during the exercise [10].
Further, NVE emphasizes that the scenario should lead to an exercise that is challenging for the participants [10]. DSB and van der Merwe support this recommendation [12, 17]. NVE elaborates that making the exercise challenging contributes to giving intensity to the exercise. It may be expedient to follow an intensity curve during the scenario. The scenario can start with a backdrop telling a short background for the scenario before proceeding to the first phase with low intensity. The intensity can then increase during the scenario and exercise. The intensity should reach its top at the end of the exercise before it decreases and the exercise is declared finished [10].
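To make the vignette and intensity-curve structure described above more concrete, the following is a minimal sketch, in Python, of how a multi-phase scenario outline could be represented. The phases, injects, and intensity scale are purely illustrative assumptions, not content taken from any of the cited guides.

```python
from dataclasses import dataclass

@dataclass
class Vignette:
    title: str
    intensity: int  # illustrative scale: 1 (low) to 5 (high)
    inject: str     # information presented to the participants in this phase

# Hypothetical outline following the backdrop-then-escalation structure
# recommended above; all content is illustrative only.
scenario = [
    Vignette("Backdrop", 1, "Routine operations; a vendor update was installed last night."),
    Vignette("Phase 1", 2, "The SOC flags unusual outbound traffic from an engineering workstation."),
    Vignette("Phase 2", 4, "Several gas detectors stop reporting; operators suspect a technical fault."),
    Vignette("Peak", 5, "A ransom note appears on operator screens; access to the IACS is intermittent."),
]

# The intensity should not decrease until the exercise is declared finished.
assert all(a.intensity <= b.intensity for a, b in zip(scenario, scenario[1:]))
```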

3 Method To answer which scenarios are expedient and realistic for tabletop exercises related to cyber attacks against IACS in the petroleum industry, and which criteria must be evaluated in order to categorize a scenario as expedient and realistic, we conducted interviews with respondents from eight companies and authority organizations. The respondents are described in Table 1. In addition, we searched the databases NTNU Oria and Google Scholar for existing research and criteria regarding how to develop and design scenarios for cyber attacks against IACS to be used in tabletop exercises. We were not able to retrieve any specific results from these searches and have therefore focused our research on data collected from interviews.
The goal of this study was to help operator companies in the oil and gas industry conduct more exercises on the topic of cyber attacks against control systems. When selecting interviewees for the study, it was therefore important to include respondents from several operator companies. However, we also wanted information regarding which scenarios need to be exercised, and as operating companies might not be willing to share information about potential threats and challenges, we also chose to gather information from other parties that have a relationship with, and insight into, the industry. Table 1 presents a description of the interviewees and the companies they work for. In total, ten people from eight different companies and authority organizations were interviewed.
Data saturation was used to determine how many interviewees we needed for our study. Data saturation is the concept of collecting data until the researcher reaches a point where no new information is retrieved [21]. Throughout our study, we used this concept as a measure of when enough interviews had been conducted. Once the interviews no longer gave us new information, we carried out a small number of additional interviews before ending the interview process to ensure we had reached data saturation.

Table 1 Interviewee and company profile

Role | Company description | Interviewees
Supplier | Large global supplier that delivers control systems to several industries all over the world | Two interviewees: one IT expert and one expert on control systems
Operator | Large international company; among the largest operators on the Norwegian continental shelf | IT-security expert
Operator | Another large international company; also among the largest operators on the Norwegian continental shelf | Expert in engineering and control systems, works with barrier management
External party | International company with expertise in risk assessment and quality assurance | Expert in risk assessment for safety
External party | National IT-security company | Expert on control systems and cyber security
Authority | State-owned; responsible for requirements and follow-up of safety, work environment, and readiness in the petroleum sector | Chief engineer responsible for cyber security
Computer emergency response team for the industry | Organization that works as support for the entire power industry, both in preventive work and in handling incidents | Two interviewees: the head of the organization and an expert on prognosis and analysis, with exercise experience from other settings
Supplier/operator | The interviewee previously worked for a global supplier that delivers control systems to several industries all over the world; the operator company the interviewee now works for is a large international company, among the largest operators on the Norwegian continental shelf | Expert in system integration, HMI, and PLCs; the interviewee also has experience with maintenance of IACS systems and is now responsible for cyber security barriers

The interviews were semi-structured, and we used an interview guide to ensure that we covered the main topics of concern, and to keep track of time. During the interview, we let the interviewees speak freely, and asked follow-up questions where appropriate. The interview guide used in this project can be found in Appendix A.


We were two people conducting the interviews, and both took notes during each session. Directly after each interview, we held a brief debrief to talk through the interview and the results. This way, we kept information loss to a minimum, despite not recording the interview sessions. Within a couple of days after each interview, we fine-tuned the notes to get a better overview of the results.
In Real World Research, Robson stresses the need for a systematic analysis of qualitative data [21, p. 465]. In our research, we used the thematic coding approach. Thematic coding is flexible and can be used for all types of qualitative data [21, p. 476]. When all of the interviews had been conducted, we started by familiarizing ourselves with the data through reading and re-reading the summaries from the interviews and the literature review. This way, we could see patterns. We then started to label similar data extracts with the same code to categorize the data. Further, the codes were grouped into larger groups, called themes. The whole process of thematic coding was iterative, so the phases were visited and revisited several times. In the last phase, we developed a thematic network to clarify connections between the identified themes. This phase was the actual analysis of the data, and here we sought to understand what the structured data told us and how it related to the research questions. We did not use any software tool to code the data; instead, we used a regular document and highlighted passages in different colors.
Based on the analysis of the collected data and information from the existing literature, we created a list of criteria that could be used to characterize a scenario as realistic and expedient for a tabletop exercise. In a second round of semi-structured interviews with respondents from two of the operators, we asked them to validate the criteria as realistic and expedient for tabletop exercises on cyber attacks against IACS.

4 Results This section presents the results of the study conducted in [8]. Findings from the interviews and the background material were used to create a list of criteria that characterize a realistic and expedient scenario for tabletop exercises concerning cyber attacks against IACS in the petroleum industry. The criteria were used in a previous study to create eight different example scenarios [8]. The scenario themes, along with brief descriptions, are presented in this section. The following subsections present an overview of the most prominent results from the interviews as well as the criteria and the example scenarios.

4.1 Interview Findings At the time this study was conducted, there had been few known attacks against IACS in the petroleum industry and other related industries. According to the interviewees, the lack of known incidents means that the industry does not consider attacks against IACS likely, and hence not realistic enough. Scenarios describing attacks against IACS are thus usually considered to have a low probability. Still, they need to be properly exercised because of the significant consequences. Developing scenarios that feel realistic for the participants seems to be a challenge, but basing them on previous incidents increases the degree of realism, according to findings from the interviews.
Findings from the interviews also show that scoping down the scenario may result in better exercises that are realistic and expedient. This view was shared among several of the interviewees. The interviewees highlighted the importance of having correct technical details in the scenario. Incorrect technical details could make the scenario less realistic; in that case, the participants might argue about the incorrect details rather than discuss the intended areas, which could lead to a weaker learning outcome for the exercise. In addition, the scenario could be seen as an incident that is not possible or relevant for their systems because of the wrong details.
Other feedback from the interviews was that the description of the scenario must be unambiguous, simple, and precise. There should be no room for different interpretations among the participants. The description should also be well explained and justified to make the scenario more realistic. Having a realistic and well-justified scenario will make it easier for the participants to adapt the scenario and perform a solid exercise.
Some interviewees mentioned that the scenario alone is not enough to accomplish a successful exercise. They pointed to the difficulty of adapting the scenario into an exercise and of using the depth of a scenario expediently. Adding a note to the developed scenarios with input to an exercise plan was suggested by one interviewee as a way to lessen these difficulties. The note may include the purpose of the exercise, recommended participants, examples of important areas to ask questions about, and some examples of relevant questions to ask during the exercise. In other words, the attached note is intended as an input to the playbook used during the exercise.

4.2 List of Criteria Based on the interviews and findings from the literature, we created a list of criteria for realistic and expedient scenarios for tabletop exercises. The list was created for tabletop exercises regarding IACS and the petroleum industry, but most elements are general and will fit scenarios in other industries with other focuses as well. The criteria for the scenarios are presented in Table 2. Out of the 21 criteria, we would like to highlight six: basing the scenario on today’s threat landscape, adapting the scenario to the operator’s systems and having correct technical details, having a plausible scenario, presenting the scenario in multiple parts where appropriate, and making the scenario such that it both challenges the participants and creates a sense of empowerment for them. The importance of these highlighted criteria is elaborated in Sect. 5.


Table 2 Criteria for a realistic and expedient scenario. Explanations of the criteria are given in the second column

Criterion | Explanation
Plausible | Should be realizable, such that the described incident could become a reality
Credible | The participants believe in the scenario
Based on today’s threat landscape | This could be done by basing the scenario on threat assessments, previous incidents, risk analyses, or the operator’s experiences. The experiences can be from earlier exercises, accidents, or other unwanted incidents
Adapted to the operator’s systems and have correct technical details | Should only contain details that are correct and relevant to the company
No potential to shut down the platform | Shutting down an entire platform is a highly complex and challenging task and should be avoided
Fit the participants’ knowledge level | E.g., control room operators are typically skilled workers who do not have expertise in cyber security. Hence, the scenarios for cyber attacks against IACS should not require such competence
Unambiguous | All participants should have the same understanding and interpretation of the scenario
Concise | Should be precise and not give too much information. However, it should provide enough information for the participants to understand the input given during the exercise
Consistent | Something happening in one place in the scenario must not exclude something happening in another place in the scenario
Hazard potential | Should have the potential to cause larger accidents or installation damage
Define targeted assets | One should determine which assets the attack affects, both directly and indirectly. For instance, in a ransomware attack, the IT systems may be directly affected assets, while the company’s reputation may be indirectly affected
Presented in multiple parts where appropriate | At the beginning of an incident, things may be chaotic and confusing, but more information becomes available as time passes. The scenario should reflect this chaotic start and the growing availability of information
Includes the source of the attack and how it was detected | E.g., the malware got into the systems by phishing and was discovered by the SOC
No defined end | When discussing the scenario, the decisions made during the exercise define the outcome of the incident. Hence, the end should not be predefined
All participants can contribute | The scope and theme of the scenario should enable all present participants to contribute
Trigger discussion and cooperation | Should be complex enough that the scenario needs to be discussed among different participants, which leads to cooperation
Challenging | Should challenge the participants in the same way that an actual incident would
Creates a sense of empowerment | All participants should feel a sense of empowerment during an exercise using the scenario
Not known to the participants in advance | The scenario description should not be known to the participants in advance. However, the participants should be given the theme of the scenario in advance so they can make the necessary preparations
Fulfills the exercise’s purpose, goals, form, and scope | The scenario should be adjusted to fit the purpose, goal, and form of the exercise without expanding the scope
Relevant plans are available | Relevant plans, such as preparedness plans and response plans, should be made available ahead of the exercise. By having these plans available, the participants may use them as a reference or guide during the exercise

Additionally, we would like to discuss the criterion “unambiguous”, as this was a bit unexpected to us before we started the interviews.


4.3 Example Scenarios Using these criteria, we developed eight example scenarios for the industry regarding cyber attacks against IACS. We present the themes and a brief description of each in this paper to show how the criteria could be used in practice. The scenarios are presented in Table 3, while the full descriptions and information about the scenarios can be found in our project preceding this paper [8].

Table 3 Themes and brief descriptions of the developed scenarios from [8], which are based on the developed criteria

Theme of scenario | Brief description
Ransomware | The industrial control systems are encrypted and locked. Furthermore, the attackers threaten to shut down the core generator if the ransom is not paid
Attack with USB stick enabling 4G | An inside attack where a technician connects a 4G dongle that enables a 4G connection. This allows the attacker to connect to internal networks via this 4G connection
Supply chain attack with information gathering | The SOC is notified of an attempt to send data to an IP address that is not defined in the firewall. It turns out that spyware has entered through a backdoor in a new component from a vendor
Disconnection of detectors | Three control room operators observe that the gas detectors stop responding. They investigate further and assume it is a technical error. The SOC then informs them that they are under a cyber attack
IACS insider attack | The SOC detects that data is being sent through a port that is not normally in use. The port has been opened by an employee who comes from a high-risk country. The employee has been threatened by actors in the home country to open the port
Industrial Internet of Things | Attackers have changed the data the land organization receives from the IIoT units. The land organization thus makes decisions based on incorrect data
Access to IACS via remote support | A compromised supplier logs on to the control systems with remote access through 2FA. The attackers have installed malware on the supplier’s computer so that the attackers also get a connection when the supplier logs on to the operator’s systems
Disruption of safety systems | An employee notices a gas odor, but no gas detectors in the area have raised an alarm about such an event. A compromised update of the fire and gas system has led to an increase in the limit for acceptable gas levels

5 Discussion In this section, we discuss and justify which scenarios are expedient and realistic for tabletop exercises related to cyber attacks against IACS, including which criteria to evaluate in order to categorize a scenario as expedient and realistic. The justification is based on findings from the literature as well as the interviews. We justify the six highlighted criteria from the results and explain why these criteria should be fulfilled to develop a well-designed scenario. We chose to highlight these six criteria as we believe they are essential for making a scenario realistic and expedient. Not all of the criteria from the list are relevant for all types of scenarios to be used in tabletop exercises. However, we believe the six highlighted criteria should be present for any scenario, as they improve the exercise value.
The first criterion we highlighted in our results was to base the scenario on today’s threat landscape. Doing so contributes to a realistic scenario that is grounded in the real world. The literature suggests several ways to fulfill this criterion [10, 11, 13, 14]. We believe that basing a scenario on threat assessments could be a wise starting point for companies. Even if the threat introduced in the threat assessment is considered small, it is still present, and one cannot ignore it or argue that it is unrealistic. Basing the scenario on previous incidents increases the realism, as it is proven that such an incident has happened before. Findings from the interviews also show that basing the scenario on previous incidents increases the degree of realism of the scenario. As the industry may struggle to see the realism in cyber attacks against IACS due to the few attacks against these systems, we were advised to use this approach. Besides, it can also be used to determine areas that should be included in future exercises. Experiences from previous accidents and unwanted incidents may indicate how the company wants a given scenario to be handled and what kinds of events they are exposed to.
The second criterion, to adapt the scenario to the operator’s systems and have correct technical details, helps to make the scenario realistic and expedient. The scenario could lose its value if it is not adapted to the operator’s systems or does not have correct technical details. Both information from the interviews and the literature highlight this criterion.

According to the interviewees, participants might argue about incorrect technical details instead of discussing the intended topic areas, which may weaken the outcome of the exercise. Besides, the scenario may be seen as something that is not possible or relevant for their systems because of the wrong details. The exercise may not be expedient if the scenario is not adapted to the exercising company’s systems, as the participants do not get to exercise on their specific systems. This criterion is also supported in the CRIOP report [14].
We have added the plausible criterion to increase a scenario’s realism. A scenario describing an incident that could not become a reality, and hence is not plausible, would be hard for the participants to find realistic and to accept. The importance of having a plausible scenario is confirmed by both findings from the literature [15–17] and the interviews. In the interviews, the interviewees focused on having a scenario that is grounded in the real world in order to make the scenario realistic for the participants.
To make the exercise feel more realistic, the scenario description can be divided into multiple parts [10, 15]. We have included this criterion as we find it expedient to follow when a larger or more complex scenario is to be exercised. This criterion may be expedient if the participants are to discuss several tasks or threats during a tabletop exercise. A scenario presented this way may increase the realism and expediency of the exercise, and the participants may get a feeling of how an actual incident would have progressed. Because of this, we have included it as a criterion in our list and highlighted it in this paper.
For the scenario to give a valuable learning outcome, and hence be expedient, it needs to challenge the participants [10, 12, 17]. Therefore, we have included and highlighted a criterion on this aspect. For the exercise to challenge the participants, the scenario needs to do so as well. Using a scenario that challenges the participants may make the exercise more expedient. Actual incidents would likely challenge the participants and the exercising organization, and the scenario should do the same. In addition to challenging the participants, we have added a criterion on creating a sense of empowerment among the participants. This is included as we believe that the scenario may become more expedient by fulfilling it. If the participants do not feel a sense of empowerment, they might lose interest in the exercise and not see the value of conducting it [10, 19, 20].
Operator companies could use the list of criteria to ease the planning and creation of new exercises. We believe that tabletop exercises, which demand fewer resources than other commonly used exercise types, could help increase the number of exercises conducted by the industry. Respondents from two operator companies have validated the criteria as contributing to realistic and expedient scenarios for tabletop exercises. One can therefore presume that the criteria will help when creating scenarios that are realistic and expedient for tabletop exercises on cyber attacks against IACS.
Lastly, we would like to discuss the criterion “unambiguous”. During the interviews, one of the interviewees mentioned a situation where some gas detectors had been unavailable for 17 h. The operators had suspected that this was due to the weather conditions; however, it turned out to be the result of a technical error. When creating the criteria, we therefore initially thought it would be good to have scenarios that could be interpreted in different ways. However, the interviewees stated that this may lead the exercise in an undesired direction and may be confusing for the participants. Real-life situations may be ambiguous at first glance, but in an exercise situation, time is limited, and the desired learning outcome and goal might be affected by spending too much time on confusing elements.

6 Conclusion In this paper, we have studied what characterizes an expedient and realistic scenario for a tabletop exercise, with a focus on scenarios related to cyber attacks targeting IACS. The research was carried out in a project preceding this paper [8]. The authors of the 2020 DNV GL report Training and Exercise, commissioned by the PSA [6], identified the current lack of guidelines for exercises focusing on cyber attacks against IACS, which served as the motivation for our project.
To answer what characterizes an expedient and realistic scenario, we have developed the list of criteria presented in Sect. 4.2. Along with this list, we presented eight example scenario topics, each with a brief description, based on these criteria to show how the criteria can be put to use. Six highlighted criteria were then justified in Sect. 5. Through interviews, the industry confirmed that following these criteria when developing scenarios for a given exercise could lead to realistic and expedient scenarios. Hence, this list answers which criteria to evaluate in order to categorize a scenario as expedient and realistic. We believe the list of criteria can be used as guidelines for the industry on how best to develop and make use of scenarios for tabletop exercises regarding cyber attacks against IACS.
The criteria have been validated as valuable by the industry. This validation indicates that the criteria may contribute to preparedness exercises being conducted more efficiently while providing a valuable learning outcome. Respondents from two different operator companies have contributed to this study. To increase the value of the criteria, it would be valuable to conduct interviews with more operators. This area would therefore be of interest to study further.
Acknowledgements We would like to thank our supervisors, Maria Bartnes, Lars Bodsberg, and Roy Selbæk Myhre, for their help and support during the work with our thesis and this article. We would also like to thank the interviewees from the participating organizations.

Appendix A Interview guide Introduction Brief introduction to our research questions and main goal of the research


Main topics to be discussed (First round of interviews)
• Digital threats against IT-systems in the petroleum industry
• Digital threats directed towards the industrial ICT-systems
• Your experience from preparedness exercises tailored towards digital threats and in general
• Your experience from working with exercise scenarios (if relevant)

Validation of criteria and scenarios (Second round of interviews) Go through each of the criteria and ask if they are considered expedient and realistic. Ask why/why not. Do the same for the scenarios.

Ending We want to thank you for contributing to our research. We have gained valuable input from the interview. If you would like to receive our final result, we can send it to you when the report is delivered.

References 1. Norwegian Ministry of Labour and Social Affairs: Health, safety and environment in the petroleum industry. Technical Report, Norwegian Ministry of Labour and Social Affairs (2018). Accessed 21 May 2021 2. Topdahl, R.C.: -oljå tenker alltid “worst case”. https://www.aftenbladet.no/aenergi/i/1yOjK/ oljaa-tenker-alltid-worst-case (2012). Accessed 21 May 2021 3. Security for Industrial Automation and Control Systems: Standard. International Electrotechnical Commision, Geneva, CH (2010) 4. Stouffer, K., Falco, J., Scarfone, K.: Guide to industrial control systems (ICS) security. NIST Spec. Publ. 800(82), 16–16 (2011) 5. i-SCOOP: Operational technology (ot)—definitions and differences with it. https://www.iscoop.eu/industry-4-0/operational-technology-ot/ (2020). Accessed 04 November 20 6. Håland, E.: Trening og øvelse. Technical Report, DNV GL (2020) 7. Pratt, M.K.: Definition: cyber attack. https://searchsecurity.techtarget.com/definition/cyberattack (2021). Accessed 25 February 2021 8. Andrea, S., Hotvedt, G.: Preparedness exercises for cyber attacks against industrial control systems in the petroleum industry. Master’s thesis, Norwegian University of Science and Technology (2021) 9. Ottis, R., Luht, L.: Mapping the best practices for designing multi-level cyber security exercises in Estonia (2017)


10. Larsen, A.K.: Øvelser - en veiledning i hvordan planlegge og gjennomføre øvelser innen energiforsyningen. Technical Report, The Norwegian Water Resources and Energy Directorate (2015)
11. The Norwegian Digitalization Agency: Veileder i planlegging og gjennomføring av ikt-øvelser. Technical Report, The Norwegian Digitalization Agency (2015)
12. The Norwegian Directorate for Civil Protection: Veileder i planlegging, gjennomføring og evaluering av øvelser - grunnbok: Introduksjon og prinsipper. Technical Report, The Norwegian Directorate for Civil Protection (2016)
13. Norwegian National Security Authority: Risiko 2021—Helhetlig sikring mot sammensatte trusler. Technical Report, Norwegian National Security Authority (2021). Accessed 23 April 2021
14. Johnsen, S.O., Bjørkli, C., Steiro, T., Fartum, H., Haukenes, H., Ramber, J., Skriver, J.: CRIOP: A Scenario Method for Crisis Intervention and Operability Analysis. Technical Report, SINTEF Technology and Society (2011)
15. Malerud, S., Fridheim, H.: Metode for utvikling av scenarioer til spill og øvelser. Technical Report, Norwegian Defence Research Establishment (2013). Accessed 30 April 2021
16. Norwegian National Security Authority: Grunnprinsipper for sikkerhetsstyring - utarbeid scenario. https://nsm.no/regelverk-og-hjelp/rad-og-anbefalinger/grunnprinsipper-for-sikkerhetsstyring/identifisere-og-kartlegge/utarbeid-scenario/ (2021). Accessed 12 March 2021
17. Van der Merwe, L.: Scenario-based strategy in practice: a framework. Adv. Dev. Hum. Resour. 10(2), 216–239 (2008)
18. Grance, T., Nolan, T., Burke, K., Dudley, R., White, G., Good, T.: Guide to test, training, and exercise programs for IT plans and capabilities (2006)
19. Gleason, J.J.: Getting big results by going small: the importance of tabletop exercises. In: International Oil Spill Conference Proceedings, vol. 2014, pp. 114–123. American Petroleum Institute (2014)
20. Patrick, L., Barber, C.: Tabletop exercises: preparing through play. In: International Oil Spill Conference, vol. 2001, pp. 363–367. American Petroleum Institute (2001)
21. Robson, C.: Real World Research. John Wiley & Sons, Chichester, West Sussex, United Kingdom (2011)

CERTs and Maritime Cybersecurity

Exploring the Need for a CERT for the Norwegian Construction Sector Andrea Neverdal Skytterholm and Martin Gilje Jaatun

Abstract This paper presents an empirical study on the need for sector-specific CERT capacity in the Norwegian construction sector. Findings from the interviews demonstrate a need for developing competence in ICT security in this sector. The actors express a desire for a forum for sharing information and learning from other actors within the industry. In our estimation, there is insufficient support in the industry to create a “full-blown” CERT/CSIRT. However, it seems that all the interviewees are positive about the idea of creating an ISAC-like forum. Keywords Cybersecurity · Incident response · Incident handling · CERT · CSIRT · ISAC

1 Introduction Digitalization offers enormous potential for efficiency and industrialization, and introduces new ways of working. The construction sector is an industry with many actors and a complicated value chain, making the industry vulnerable. Control of buildings, construction sites, information, and processes requires digital security expertise that is highly specialized, costly, and in high demand. There is no actor today who describes a holistic situation for the industry, focusing on threats, vulnerabilities, incidents, or security measures.


The government's goal is to have response units in all sectors of society. An important task for the sector-specific response unit is to ensure that all relevant actors receive correct information so that they can implement the necessary measures as quickly as possible. The sector-specific response units shall be the National Cyber Security Centre's (NCSC's) contact point for ICT security incidents. Today, there is no separate sector-specific response unit for the construction sector, and most actors rely on assistance from third parties to handle ICT security incidents. This paper intends to provide an understanding of how great the need for a common collaborative security environment for the construction sector is, and what services are needed in the industry. The paper is based on interviews and a review of relevant literature and documents, along with the authors' general competence and expertise in ICT security [1, 4, 9, 10]. Seven interviews have been conducted with experts with security responsibilities from the construction sector. In addition, three interviews were conducted with national response units for ICT security (CERTs), and one with an ICT security services provider. The rest of the paper is structured as follows. Section 2 presents background information on the construction sector and CERTs for other industries. Section 3 presents national frameworks for ICT security, and the responsibilities and tasks of sectoral response units. Section 4 presents the key findings from the interviews, and Sect. 5 summarizes the results and identifies future work.

2 Background Complex, comprehensive, and integrated digital infrastructures and systems create new dependencies and vulnerabilities. The solutions must meet requirements for security, individual privacy, and resilience. As operations become less manual and more controlled by technology, the importance of having an overview of potential vulnerabilities and risks increases. At the same time, this means an increased need for the assessment and management of threats. Companies' IT departments are faced with new tasks and methods in order to safeguard internal information, uptime, privacy, and effective work methodology within the company and together with other partners in the implementation of construction projects. At the same time, the threat landscape is changing; it is now assumed that foreign intelligence services devote considerable resources to breaking into Norwegian computer networks [11]. An increasing number of actors in different industries are experiencing attempts at external interference; state actors, contractors, organized criminals, and fraudsters are all hunting for information and attempting to exploit our infrastructure and services. Access to systems and access to premises are among the most important objectives of the threat actors [14]. The construction sector is a sector with many actors and a complicated supply and value chain. This makes the industry vulnerable. Control of buildings, construction sites, information, and processes requires digital security expertise that is highly specialized, in high demand, and costly. There is no actor who today


describes a holistic situation picture for the industry, focusing on threats, vulnerabilities, incidents, and security measures. Nor is there a joint resource and competence centre that can support actors with notifications, information sharing or competence building. Technical analyses and technical and methodical support are up to each individual actor, without the common industry focus found, for example, in KraftCERT (https://www.kraftcert.no/english/index.html), Nordic FinanceCERT (https://www.nfcert.org/), The Norwegian Maritime Cyber Resilience Centre (NORMA CYBER, https://www.normacyber.no/en/services) and others.

2.1 Challenges Specific to the Construction Sector One could argue whether the construction sector is any different from other sectors regarding cybersecurity challenges. However, Mantha et al. [5] highlight several areas where the construction sector differs from other sectors, and state some vulnerabilities that are specific to the industry:
• First of all, the supply chain of the construction sector is complex. A large portion of the construction work is usually performed by subcontractors who belong to small and medium-sized enterprises (SMEs), which increases the complexity of the construction supply chain networks and thereby the cyber-vulnerability of the construction process.
• The construction sites change from project to project, which implies a dynamic workplace and workflow. The ever-changing workforce makes it difficult to educate and train employees on the best cybersecurity practices.
• There are interoperability issues regarding the information that needs to be shared among different multidisciplinary teams across various platforms.
• Exchange of confidential or sensitive information may occur outside the company's network, for instance, using personal computers. Also, devices used on construction sites may not be validated or monitored by the company.
• Employees come from different socio-economic classes and have different education levels and cultural backgrounds, which causes varying levels of cybersecurity knowledge and awareness. Also, restricting access to project data by placing each employee in the right category may be challenging.
• Different stakeholders have different interests. Contractors are interested in maximizing the profit, whereas an owner tries to minimize the total budget.
• The project teams may vary even for similar projects. This is a major limitation in the context of cybersecurity, considering that the cybersecurity policies may differ among each of the participants, and developing a synergy every time with a new set of project teams is challenging and may impact productivity.



According to Skopik et al. [12], information provided by national CERTs, who often take the role of contact point for coordinating and aggregating security incident reports, is usually not targeted to vertical industry sectors. The authors, therefore, suggest that sector-oriented views, along with rich information and experience reports, are required to make CERTs more effective. Turk et al. [15] describe the construction sector as a sector with complex interactions, different stakeholder interests, and lower profit margins, which make it difficult for the sector to directly adopt existing cybersecurity standards and practices from other industries. They also state that "existing cybersecurity threat models do not correspond to the life cycle phases of a construction project due to the unique communication structure and corresponding cybersecurity challenges" [15]. Sonkor and García de Soto [13] highlight some characteristic challenges in the construction sector. They mention the changing environment and lack of stability on-site, and how this increases the challenge of providing cybersecurity during the construction phase. In addition, potential cyberattacks raise safety concerns with regard to human interaction with machines. The authors emphasize the need for understanding potential threats against operational technology on construction sites, detecting security vulnerabilities, and providing mitigation methods [13]. Oesterreich and Teuteberg [8] have studied the lack of innovation and technological progress in the construction industry. They investigate which technologies are taken into use and the current state of the art of these technologies. In the article, they emphasize some challenges specific to the sector that may have affected the integration of innovative technologies, such as tight collaboration with customers, subcontractors and other stakeholders, and on-site-based, complex projects which require specialist knowledge. In addition, small and medium-sized enterprises with limited capabilities for investment in new technologies dominate the sector [8].

2.2 Working Method and Analytical Framework The goal of this research was to investigate the need for a sector-specific CERT in the construction industry, to consider what challenges the industry faces today, and how incidents are managed. The findings are based on interviews and a review of relevant literature, and the paper builds on our previous work [4]. We invited twenty-four actors to participate in interviews. Among these, eight actors agreed to participate. Five of the actors felt that they were not relevant participants, and two did not have the resources or time to participate. The remaining nine did not respond to our requests. See Fig. 1 for a graphical representation. One of the scheduled interviews had to be postponed, and we were unable to agree on a new interview slot. In addition to the interviews with actors from the industry, we have interviewed three experts from national response units for ICT security (CERTs), as well as one provider of ICT security services. A total of 11 interviews have been conducted.


Fig. 1 Invitations to industry actors

Table 1 Number of employees at participating actors
Employees | #Actors
< 50 | 1
50–499 | 1
500–1499 | 2
1500–3500 | 2
> 3500 | 1

When recruiting participants for the interviews, we aimed to involve people from both smaller and larger construction companies. Despite the difficulties of recruiting participants, we obtained a variation in the size of the participating companies (see Table 1). 90% of the interviewees have responsibility for cybersecurity in their company, and the remaining 10% hold a high or leading position. All information from informants is anonymized in the paper. All information that can be associated with company names in the paper is taken from open sources. The interviews were semi-structured, and an interview guide with some pre-made questions was used (see Appendix 1). The interview guide was primarily used to ensure that the main topics of concern were discussed and to keep track of time during the interview sessions. Two of us conducted the interviews, one in the role of interviewer and the other as note taker. The roles stayed the same during the entire project. After each interview, we had a brief sum-up to discuss the findings and ensure that loss of information was kept to a minimum.


The notes from each interview were analysed using the qualitative analysis software NVivo. NVivo allowed us to code the findings from each interview, easing the process of structuring and connecting the findings from all of the interviews. The coding resulted in four main topics, which are presented in Sect. 4.

2.3 Limitations As with the majority of research with busy industry practitioners, participant numbers were low. However, we experienced data saturation in the form of the actors largely agreeing on the needs of the industry and the challenges they face. In addition, triangulation with published material has helped to strengthen the reliability of the findings [3, 5, 6, 8, 12, 13, 15, 16].

3 National Frameworks for ICT Security This section presents national frameworks for ICT security and the responsibilities and tasks of response units. Furthermore, there is an overview of Norwegian response units for ICT security, and examples of the types of tasks the different types of units have.

3.1 Framework for Handling ICT Security Incidents The Norwegian Ministry of Justice and Public Security has developed a framework for dealing with ICT security incidents as a key measure to strengthen the national ability to detect and deal with digital attacks [7]. The purpose of the framework is to uncover and clarify the division of effort between relevant actors when dealing with serious ICT security incidents that cut across sectors, as well as to contribute to a good situational overview through aggregation and coordination of information on all relevant ICT security incidents. The framework sets requirements for the tasks that response units must take care of and what characteristics the response units must have. The framework also describes the capabilities the enterprises themselves are expected to have related to handling ICT security incidents. The target group for the framework is public and private enterprises that are important for critical infrastructure and/or critical societal functions, sectoral response units, authorities that have a role related to the management of ICT security incidents, and the ministries. The framework is not binding on private legal entities, but


all ministries are encouraged to incorporate key private actors through agreements that ensure that enterprises (state administrative bodies and private legal entities) report incidents to NSM via sectoral response units.

3.2 Sectoral Response Units The National Strategy for Information Security [6], published in 2012, assumes that the sectoral response units will play a central role in incident management. In 2016, the EU issued a separate directive on cybersecurity (the NIS directive), stating that Member States should ensure that they have well-functioning Computer Security Incident Response Teams (CSIRTs) [3]. So far, the NIS Directive has not been included in Norwegian legislation, but a limited number of Norwegian sectoral response units have been established. The Government's goal is to ensure that there are response units in all sectors of society. An important task for a sector-specific response unit is to ensure that all relevant actors receive the correct notification information in order to initiate the necessary measures as quickly as possible. The sector-specific response units shall be the Norwegian Cyber Security Centre's (NCSC's) point of contact in connection with ICT security incidents. A sector-specific response unit has authority in the sector and can impose measures both in prevention and management, while the NCSC will have overall alerting and coordination responsibility. Communication with individual enterprises shall be safeguarded or coordinated with sectoral authorities. The enterprises themselves have a responsibility to ensure security and handle incidents. An example of a response unit is KraftCERT, which is a response unit for the energy sector, but in recent years also for other industries such as water & wastewater and oil & gas. Membership in KraftCERT is voluntary, and the business must pay a membership fee. The United Nations Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security (2015) [16] has provided a statement on cyber norms which also calls for establishing CERTs. As a confidence-building measure, States should consider additional measures to strengthen cooperation by, for instance, expanding the support practices in CERTs or CSIRTs, such as information exchange, and enhancing regional and sector-based cooperation [16]. Today, there is no formal response unit within the construction sector.

3.3 ICT Security Units (CERT) There are a number of different response units for ICT security in Norway. Some are at the national level, some are at the sector level, some are internally in a company, and


some suppliers offer ICT security services within emergency preparedness. Examples of Norwegian CERT functions are given in Table 2. These are entities that are all members of the international Forum of Incident Response and Security Teams (FIRST, https://www.first.org/). Based on the definition, handling incidents is the main focus of a CERT. However, the tasks of a CERT often involve far more than managing and recovering from IT incidents. Preventive activities such as mapping, protection, detection, and notification are often also included among a CERT's tasks.

3.4 International Collaboration Forums There are a number of collaborative forums for sharing information and experience regarding ICT security incidents. Internationally, the term "Information Sharing & Analysis Centre" (ISAC) is used for collaboration between the public and private sectors on sharing information and experience from combating and handling ICT security incidents; such centres can be sector specific. EE-ISAC is an example of a European cooperation forum that includes several European energy companies. FIRST is a global member forum for collaboration between trusted CERT actors. The forum currently has 605 members. Norwegian members of FIRST are shown in Table 2.

4 Results from Interviews We have conducted online interviews with construction sector experts with security responsibilities in five contractor companies, one engineering and advisory company, and a builder and property manager. In addition, we have spoken to three experts from different national CERTs, as well as a provider of ICT security services. In this section, we present the results of the interviews categorized into four main topics. The interviews took place during September–November 2021. The interview guide used in interviews with industry actors is given in Appendix 1.

4.1 Vulnerabilities Seven informants define an ICT security breach as unauthorized access to data. One of the informants uses a slightly different definition, in which an ICT security breach includes employees losing one of their devices without notifying the company, sharing their password with others, or observing something suspicious without reporting it.


Table 2 Norwegian actors that are members of FIRST

Short name | Full name | Host organisation | Type of organization
BF-SIRT | Basefarm SIRT | Basefarm AS | Data center/cloud service vendor
Defendable CERT | Defendable CERT | Defendable AS | ICT security service vendor
DNB CDC | DNB Cyber Defense Center (IRT) | DNB ASA | Finance sector
EkomCERT | Nkom EkomCERT | Norwegian Communications Authority (Nkom) | Public body
Equinor CSIRT | Equinor Computer Security Incident Response Team | Equinor ASA | Energy sector
HelseCERT | HelseCERT | Norsk Helsenett SF | Public body
KCSC | Kongsberg Cyber Security Center | Kongsberg Defence and Aerospace | Industry sector
KraftCERT | KraftCERT | KraftCERT AS | Energy sector
mIRT | Mnemonic Incident Response Team | mnemonic AS | ICT security service vendor
NCSC-NO | National Cyber Security Centre in Norway | Norwegian National Security Authority | Public body
Nordic Financial CERT | Nordic Financial CERT | Nordic Financial CERT association | Finance sector
Norges Bank CSIRT | Norges Bank CSIRT | Norges Bank | Finance sector
Tax-IRT | The Norwegian Tax Administration Operational Security Team | Skatteetaten | Public body, Finance sector
TCERT | Telenor CERT | Telenor Norge AS | ICT/Telecom
UiO-CERT | University of Oslo Computer Emergency Response Team | University of Oslo | Research and education
UNINETT CERT | UNINETT CERT | UNINETT AS | Research and education
2S-SOC | Sopra Steria SOC Nordics | Sopra Steria AS | ICT service vendor, including security
SpareBank 1 IRT | SpareBank 1 Incident Response Team | SpareBank 1 Utvikling DA | Finance sector
Visma CSIRT/CC | Visma Cyber Security Incident Response Team/Coordination Center | Visma AS | Software vendor


Three of the actors have agreements with Microsoft under which they are notified if something abnormal is detected in their systems, and this is one of the ways security breaches are usually detected. Security breaches are also reported by employees or users who, for instance, have been granted access to systems or documents they should not have access to. Seven out of eleven respondents also receive notifications from third parties that monitor network traffic to and from their systems. From the interviews, it is not clear what the most frequent cause of security breaches is. All the actors we spoke to could refer to ICT security breaches of varying severity to which either they or their supplier have been exposed. One of the actors says that they have been subjected to numerous attacks and that the attackers often make use of social engineering and go through employees. Employee behaviour is, therefore, something they focus on. Many of the attacks are also financially motivated. Another actor told of a security breach where social engineering was used as an entry point. Three of the actors have experienced being exposed to, or have had a provider that has been exposed to, ransomware. One of the actors reports that they received extortion claims, but that they did not pay them. They had a backup of the systems and would not have recovered any faster had they paid the claim. It is not evident what the entry point for the ransomware was.

4.2 Incident Management There is a large variation in internal IT resources among the actors. For example, one of the actors has no internal IT resources, only one self-appointed IT administrator, while another actor has an internal IT department with different service owners in charge of their services, and uses third-party providers for advice. Common to all of the actors is that they use third-party providers to handle ICT security incidents. All the actors who tell about specific ICT security incidents confirm this. Some of the actors have fixed agreements with their suppliers, while one actor reports that they have no fixed agreements, but only contact a supplier when they need assistance. According to one of the actors we have spoken to, a normal Norwegian company will not have the expertise needed to restore the systems in the event of an ICT security incident, and must, therefore, hire specialist expertise anyway. Through our interviews, we have not found actors experiencing challenges in cooperating and coordinating the handling of ICT security breaches.

4.3 Challenges Facing the Industry One of the challenges mentioned by 50% of the actors is that the industry is immature. Through interviews, we have the impression that the maturity when it comes to cybersecurity/ICT security in the industry varies, both between the actors, but also


within the companies. One of the actors mentions that the competence of the management is good and that they understand the importance of implementing security solutions, while further out in the line organization it is weaker. It is difficult for the management to communicate the importance of, for example, two-factor solutions. Employees do not understand why it is necessary, and they find it cumbersome. Another actor says that they had no challenges in adopting this because people are used to using it. A lack of IT expertise, regarding both outdated and newer IT systems, is also a challenge. It is often difficult to obtain expertise in the older and outdated systems, as this competence sits in the heads of an ageing workforce. It is often the younger ones who have expertise in the newer IT systems, but here too many lack this expertise. Another challenge, mentioned by almost 40% of the actors and possibly related to the point above, is that many do things themselves without thinking about what consequences it can have. Interviewees mention, for instance, that some of the solutions are a bit "cowboy-like", and that they have, among other things, seen remote control systems that have been quite accessible to outsiders, or servers that have been placed in a building without being secured. Having control over the suppliers and ensuring that they have addressed security properly can be a challenge, and the actors, therefore, depend on having a strict structure here. When the Internet of Things (IoT) is used in buildings, it is important to ensure security so that these devices are not used as a backdoor into their systems.

4.4 Sector CERT All the representatives who participated in the interviews say they would benefit from an industry-specific CERT. Two of the actors say that it might be appropriate to replace the Security Operations Center (SOC) that they have today with an industry-specific CERT, but that this depends, among other things, on technology, price, and functionality. The other actors seem to have the greatest need and interest in a forum where one can share experiences and information. Today, none of the actors share information about incidents among themselves, but everyone agrees that they would benefit from a forum for information sharing with other actors. The information they would like to share comprises ICT security incidents, industry-specific vulnerabilities, and industry-specific solutions to address these vulnerabilities. They are also interested in other efforts to raise security awareness in the industry. Of the actors we spoke to, only one had been in contact with a CERT or similar in connection with the handling of an incident. Because the actor was already receiving assistance from a vendor, the CERT in question considered that no further support was required from them. However, they had sporadic contact along the way, even though the CERT was not actively involved in handling the incident. The actor was interested in knowing whether the attack was one of many, or whether it was targeted at them, but this could not be


answered for sure; however, they assumed that it was not targeted, as they see that the frequency of such types of attacks is increasing. Currently, several of the CERTs may be relevant for some of the actors. The problem is that it can be difficult and unclear whom to contact and in what situations. Some believe that this problem speaks against having an industry-specific CERT, as it would make it even harder to know whom to deal with in different situations. On the other hand, the construction sector is large, and there is currently a lot going on on the technology side, which suggests either establishing a new unit or participating in existing CERTs. The methods used to attack systems in the construction sector are no different from those used to attack other industries, and one of the respondents could not see anything that makes the construction sector more special than other sectors that already have response units. The price of hiring consultants to assist in dealing with ICT security incidents is already high. One of the actors says that it is difficult to see how one could finance a team that is ready to assist around the clock. Another drawback mentioned is that IT security expertise is already in high demand, and that it will be difficult to get enough people with that expertise to operate an industry-specific CERT. One of the actors acknowledges that there may be challenges in bringing people along and avoiding a situation where only two or three actors actually contribute to the sharing of information. Through our interviews, however, it has emerged that all actors are positive towards information sharing. One of the actors also says that there is no reason not to cooperate in this area; they are competitors, but sharing this type of information will not come at the expense of the competition between them. When it comes to sharing information about their own ICT security breaches, they have a common interest in hearing about each other's incidents. From interviews with authorities and other CERT channels, it has emerged that creating an ISAC can be a good start. It is also easy for an ISAC to have a connection to the technical unit of NSM, and this contact can be maintained regardless of whether the forum is an ISAC, a sector response unit or a CERT; the only thing that changes is the requirements set by the NCSC. One of the established CERTs says that it is important to get a forum on security, regardless of whether it is a sector CERT or an ISAC. The CERT channels we have been in contact with do not make any recommendations on whether or not an industry-specific CERT should be created. The most important thing is that the actors have a place where they can share and get information. If they only work alone, they will become a much easier prey for attackers. Whether this place is one of the established CERTs or something industry-specific is of little significance. None of the actors we have spoken to use the traffic light protocol (TLP). Those familiar with the protocol have become acquainted with it through reports or security assessments from others. One of the actors mentions that they have used TLP in internal risk assessments.


5 Summary and Conclusions The summary and conclusions below are based on the main impressions from the interviews with actors in the construction industry, national emergency response units for ICT security (CERT), and a supplier of ICT security services.

5.1 The Needs of the Industry There is a large degree of variation in internal IT resources among the participants in the study. Common to all of them is that they use vendors to handle ICT security incidents, either through fixed agreements or by contacting vendors when an incident occurs. According to one of the respondents, a normal Norwegian company will not have the expertise needed to restore the systems in the event of an ICT security incident, and will, therefore, have to procure specialist expertise to assist with the handling anyway. Only one of the actors we have spoken to has an internal IT department that participates actively in handling incidents. This actor also uses vendors to assist with incident management. From the interviews, there is nothing to indicate that a large actor with an internal IT department is able to handle ICT security incidents better than a small actor who only uses vendors to handle the incident. The price for vendor services, on the other hand, is high, so the capacity of the actors to pay for these services will probably vary. If an incident is large enough, and it takes a long time to deal with it, the differences between small and large enterprises may become clearer. It seems that all the actors we have spoken to are satisfied with how incidents are handled today, using vendors who assist with incident management. Based on this, we currently do not see a need for a separate response unit. On the other hand, some of the major actors would consider replacing the vendor/SOC that they use today in favour of an industry-specific CERT. All the actors express that they would benefit from a forum to discuss industry-specific threats, incidents and security solutions, and all are positive towards information sharing. This can be a low-threshold measure that can either be operated by the larger construction sector actors, or by the actors joining forces, on a "best-effort" basis, to pay an external organization to do so. Interviews and experiences from other projects show that the construction sector is relatively immature, and that there is a high degree of variation in competence when it comes to cybersecurity/ICT security. There is a need for competence enhancement in this area throughout the industry. However, the findings also indicate that the construction sector is concerned with cybersecurity/ICT security, and wants an ISAC in order to raise competence in this area. There are several challenges in creating a separate response unit for the construction sector. The cost of hiring consultants to assist in dealing with ICT security


incidents is already high, and one of the respondents says that it is difficult to envisage how a team that is ready to assist around the clock could be managed and funded. ICT security is a highly sought-after expertise, and it is mentioned that it will be difficult to get enough people to operate an industry-specific CERT. Another challenge mentioned is that it can be difficult to attract enough actors to such a unit, and to ensure that everyone contributes information. There are currently several existing CERT units that may be relevant for the actors, but it can be difficult and unclear which of them to contact in different situations. Some believe that this problem speaks against having an industry-specific CERT, as it would make it even harder to know whom to deal with in different situations. From interviews with authorities and other CERT channels, it has emerged that creating an ISAC can be a good start. It is also easy for an ISAC to have a connection to the technical unit of NSM, and they can have this contact regardless of whether they are an ISAC, a sector-specific response unit or a CERT; the only thing that changes is the requirements set by the NCSC. One of the established CERTs says that it is important to get a forum on security, regardless of whether it is a sector-specific response unit or an ISAC.

5.2 Organization of an ISAC To run an ISAC, one needs a secretariat that is responsible for, among other things, organizing meetings and running a digital platform for information sharing. The secretariat's responsibility can be rotated among the members or outsourced to an external actor. Members of an ISAC must expect to set aside about two days a month. These days are used to participate in meetings, contribute to information sharing on a digital platform, and participate in collaborative activities such as organizing campaigns, developing products or tools, and conducting sector analysis. When starting up an ISAC, it is recommended to start with few actors, to build relationships and trust. These relationships can later be used to create trust among additional members. It is recommended that the membership does not exceed 20–25 members, as a large membership can make the administration of the ISAC difficult [2]. Representatives from member companies should have sufficient expertise to provide information and benefit from discussions, and they must have the authority to represent the company and speak freely during the meetings. In addition, they must be able to contribute and receive information at a relevant level (for example, strategic vs. technical level).
Acknowledgements This work is based on research funded by Oslo Construction City AS. The authors gratefully acknowledge the support from OBOS, AF Gruppen, Betonmast, and Statsbygg, and the anonymous interviewees from the participating organizations.


Appendix 1 Interview Guide

Background
• What is your role in the company?
• Can you describe how your role is linked to managing ICT security incidents?
• Is the term Operational Technology (OT) used in your company?
• What systems and routines fall under your responsibility?

CERT capacity in the construction sector
• What do you consider especially challenging in your sector regarding protection against and managing cyberattacks/ICT security incidents?
• What internal resources and roles are involved in ICT preparedness and incident management in your company?
• How do you define an ICT security breach?
• How are ICT security incidents usually discovered in your company?
• Do you have any plans for managing ICT security incidents? Are these plans included in trainings and exercises?
• Who is contacted in the event of serious ICT security breaches? When did you last update your contact lists?
• How do you collaborate with other actors on handling ICT security incidents?
• Do you see special challenges related to dealing with ICT security breaches in industrial process control systems and automation?
• Would you benefit from a sector CERT for your industry to better understand, detect, and deal with threats and vulnerabilities? If so, how would you benefit from such a CERT?
• What improvement needs do you think are the most important when dealing with ICT security breaches in your case?
• Can you tell us about the last ICT security breach you experienced? How was this handled? How did the handling work? Why did the handling work as it did?
• Do you experience challenges around cooperation and coordination in handling ICT security breaches? If this is the case, what kind of challenges are experienced?
• Would you benefit from participating in national exercises focusing on handling ICT security incidents? Feel free to elaborate on why


Operationalization of CERT Alerts
• What is your practice regarding information sharing about (own) ICT security breaches? What type of information is shared, and with whom?
• What tools are used for information sharing about ICT security breaches in your company?
• Do you know the term TLP (traffic light protocol)? If so, how is this used in your company when sharing information?
• Do you share information about your own ICT security breaches via CERT channels? If so, what type of information, and in what way?
• Do you receive information about new ICT security threats and vulnerabilities via CERT channels? If so, how is this information used in the company's internal ICT security and emergency preparedness work?
• What improvement needs do you think are the most important when it comes to sharing information on ICT security incidents and operationalizing CERT alerts?

General Closing Questions
• Are there topics we have not addressed in this interview that we should have addressed?

References
1. Bernsmed, K., Jaatun, M.G., Meland, P.H.: Safety critical software and security: how low can you go? In: 2018 IEEE/AIAA 37th Digital Avionics Systems Conference (DASC), pp. 1–6. IEEE (2018)
2. ENISA: Information Sharing and Analysis Center (ISACS)—Cooperative Models (2018). https://www.enisa.europa.eu/publications/information-sharing-and-analysis-center-isacs-cooperative-models
3. European Union: Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union (2016). http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016L1148&from=EN
4. Jaatun, M.G., Bodsberg, L., Grøtan, T.O., Moe, M.E.G.: An empirical study of CERT capacity in the North Sea. In: 2020 International Conference on Cyber Security and Protection of Digital Services (Cyber Security), pp. 1–8 (2020). https://doi.org/10.1109/CyberSecurity49315.2020.9138865
5. Mantha, B., de Soto, B.G., Karri, R.: Cyber security threat modeling in the AEC industry: an example for the commissioning of the built environment. Sustain. Cities Soc. 66, 102682 (2021)
6. Norwegian Government: Nasjonal strategi for informasjonssikkerhet (National Strategy for Information Security [In Norwegian]) (2012). https://www.regjeringen.no/globalassets/upload/fad/vedlegg/ikt-politikk/nasjonal_strategi_infosikkerhet.pdf
7. NSM: Rammeverk for håndtering av IKT-hendelser (Framework for handling ICT incidents [In Norwegian]) (2017). https://nsm.no/getfile.php/133853-1593022504/Demo/Dokumenter/rammeverk-for-handtering-av-ikt-sikkerhetshendelser.pdf


8. Oesterreich, T.D., Teuteberg, F.: Understanding the implications of digitisation and automation in the context of Industry 4.0: a triangulation approach and elements of a research agenda for the construction industry. Comput. Ind. 83, 121–139 (2016)
9. Okstad, E.H., Bains, R., Myklebust, T., Jaatun, M.G.: Implications of cyber security to safety approval in railway. In: Proceedings of the 31st European Safety and Reliability Conference, pp. 2120–2127 (2021)
10. Onshus, T., Bodsberg, L., Hauge, S., Jaatun, M.G., Lundteigen, M.A., Myklebust, T., Ottermo, M.V., Petersen, S., Wille, E.: Security and independence of process safety and control systems in the petroleum industry. J. Cybersecur. Priv. 2(1), 20–41 (2022)
11. PST: National Threat Assessment 2020 (2020). https://pst.no/alle-artikler/trusselvurderinger/annual-threat-assessment-2020/
12. Skopik, F., Settanni, G., Fiedler, R.: A problem shared is a problem halved: a survey on the dimensions of collective cyber defense through security information sharing. Comput. Secur. 60, 154–176 (2016)
13. Sonkor, M.S., García de Soto, B.: Is your construction site secure? A view from the cybersecurity perspective. In: ISARC. Proceedings of the International Symposium on Automation and Robotics in Construction, vol. 38, pp. 864–871. IAARC Publications (2021)
14. Telenor: Trusselrapport 2020—Trusselforståelse (Threat report 2020—Threat perception [In Norwegian]) (2020). https://www.telenor.no/om/digital-sikkerhet/2020/artikler/trusselforstaaelse.jsp
15. Turk, Ž., de Soto, B.G., Mantha, B.R., Maciel, A., Georgescu, A.: A systemic framework for addressing cybersecurity in construction. Autom. Constr. 133, 103988 (2022)
16. UN General Assembly: Group of governmental experts on developments in the field of information and telecommunications in the context of international security. UN Doc. A/70/174, vol. 22 (2015)

Holistic Approach of Integrated Navigation Equipment for Cybersecurity at Sea Clet Boudehenn, Jean-Christophe Cexus, Ramla Abdelkader, Maxence Lannuzel, Olivier Jacq, David Brosset, and Abdel Boudraa

Abstract Recent studies have demonstrated the interest of analyzing GNSS (Global Navigation Satellite System) and AIS (Automatic Identification System) data to improve the safety of naval infrastructures for a wide spectrum of maritime applications. However, in-depth analyses also underline the sensitivity of these systems to attacks such as jamming and spoofing. In this context, it is essential that researchers, specialized organizations and companies rely on realistic data to improve these types of systems so as to better detect and cope with potential threats. However, because of the lack of open data sets, or for financial, technical or operational reasons, the use of simulated data is in most cases preferred over real-life data, which can lead to biases. To cope with this challenge, we have developed a prototype called "HAPPINESS", for "Holistic APProach of Integrated Navigation Equipment for Cybersecurity at Sea". The main objective of this dedicated and autonomous embedded system is to collect navigation data in real time without using proprietary or restrictive protocols. The generated open data then continuously feed a cyber naval platform able to reproduce the functional and operational systems of a ship. This prototype makes it possible to reproduce the kinematics of a ship in various contexts (such as specific maneuvers, long tracks, docking…) in NMEA format in order to design highly realistic scenarios based on real-life data, and to obtain data that are more complete and richer in information than those freely accessible online, giving additional means to detect anomalies in navigation systems. Keywords Maritime systems · Navigation systems · GNSS · GPS · AIS · NMEA · Embedded systems · Data collection · Navigation data sets generation


Abbreviations
GNSS: Global Navigation Satellite Systems
AIS: Automatic Identification System
GPS: Global Positioning System
NMEA: National Marine Electronics Association
IMO: International Maritime Organization
GPIO: General Purpose Input Output
I2C: Inter-Integrated Circuit
VHF: Very High Frequency
HDOP: Horizontal Dilution of Precision
VDOP: Vertical Dilution of Precision
MMSI: Maritime Mobile Service Identity
ECS: Electronic Chart System
ECDIS: Electronic Chart Display Information Systems

1 Introduction Transportation by sea is the main means of worldwide goods exchange and represents an essential resource for economic exchanges all over the world. Nowadays, more than 90% of the world's trade in volume is carried out by sea, with nearly 290 tons of goods transported via maritime means every second. In this global context, the digital field has been widely deployed for several years now, especially on board merchant ships, driving the maritime sector into a continuous shift towards digitalization. A ship built during the last decade shows all the characteristics of a full information system. For instance, programmable logic controllers are used for engine and power management, while the bridge now relies heavily on digital sensors, networks and displays for navigation [5]. Internal systems are increasingly complex and not necessarily designed with a cybersecurity approach. Meanwhile, the number of cyberattacks targeting these types of systems is increasing, especially because criminal and non-state actors have a real interest in them. Thus, understanding how these systems work through the analysis of field data is a major challenge in order to implement efficient countermeasures and cyber threat detection policies. Generic and specific vulnerabilities are discovered every day, widening the attack surface of the sector and potentially endangering thousands of ships worldwide [10]. Experts have demonstrated that disrupting the operational functions and functional elements of a ship at sea is not fictitious.1 These demonstrations came some time after researchers spoofed and jammed the GPS and AIS sensors of a super yacht at low cost during her navigation [9]. The number of GNSS and AIS spoofing cases has increased

1 Available: https://www.reuters.com/article/us-cybersecurity-shipping-idUSBREA3M20820I40424 (accessed 09 2021).


steadily over the last decade, impacting the whole sector (civilian and military ships and harbors) and elevating the risk of accidents, especially in critical navigation areas such as straits or channels, where the geopolitical context is sometimes increasingly tense. Increasingly frequent events highlighting the vulnerabilities and potential consequences for this sector have put cybersecurity issues back on the table, and they are now part of the decision-making process [1, 8]. In addition, the International Maritime Organization (IMO), responsible for maritime space, has now applied guidelines2 to require shipowners to take into account appropriate cybersecurity regulations and policies on board their ships [4]. The obstruction of the Suez Canal for more than 6 days in March 2021 by the large container ship Ever Given impacted more than 10% of the world's freight transport and caused the Evergreen company to lose millions of dollars3; the cause of the accident has been attributed to a combination of environmental factors (like high winds) and human error in navigational inputs by the bridge team. Even though it is unlikely that a cyberattack was the cause, it is a reminder that this type of maritime incident can have high economic and geopolitical consequences. A ship is a complex system of systems: from the bow to the stern, from the engine room to the bridge, tens of Information Technology (IT) and Operational Technology (OT) systems are interconnected to ensure her mission and safety. Among the potentially vulnerable systems of the ship, we can cite more precisely various systems contributing to the navigation function, such as GPS and AIS, but also the NMEA (National Marine Electronics Association) interconnection standard [2, 6, 7]. In order to develop solutions to reduce the cyberattack surface of vulnerable systems, it is important to understand how this type of system works and to collect a large amount of data representing a wide spectrum of situations. Designing relevant scenarios to reproduce advanced cyberattacks is really useful, especially in a domain where data are not easily accessible due to proprietary systems [3]. It can definitely contribute to performing in-depth studies, detecting incidents in specific situations and supporting operators in decision-making, whatever the context may be. In this paper, we present an autonomous on-board system capable of collecting GNSS and AIS information in real time from a moving ship without impacting its onboard functional and operational systems. The prototype is presented in the form of a small suitcase capable of being embarked on any type of vessel and able to collect a huge amount of navigation data such as GNSS tracks and AIS data. In Sect. 2, we detail the architecture of this system. In Sect. 3, we describe the data collection strategy through the acquisition system and show how useful and relevant it can be for conducting in-depth analysis. In Sect. 4, before the conclusion, we detail some perspectives, as well as future improvements of the system.

2 https://www.imo.org/en/OurWork/Security/Pages/Cyber-security.aspx.
3 Available: https://www.bloomberg.com/news/features/2021-06-24/how-the-billion-dollar-evergiven-cargo-ship-got-stuck-in-the-suez-canal (accessed 09 2021).


2 Architecture of the System The onboard GNSS data acquisition system we designed had to respect several constraints: it had to be fully autonomous in terms of software operation, and it also had to be able to operate fully when powered only by a battery. Most of the components and elements used are very often exploited together, which made it easy to connect them and achieve the desired purpose. Figure 1 shows the complete architecture of our system with the different elements that compose it. Here is the complete list of the chosen elements:
• A nano computer. Nano computers are well established in the development board market. In terms of performance, while many boards would be more suitable for embedded operational needs, the Raspberry Pi 4, for example, remains very easy to use, has a very active and growing community, is affordable, and its Linux base is well suited to embedded projects and easy programming. Moreover, it is sufficiently powerful for this type of small data collection project.
• An RTL-SDR (Software Defined Radio) dongle, which can be used as a computer-based radio scanner for receiving live radio signals, ideal for GPS and AIS data reception. It is plugged into one of the USB ports of the Raspberry Pi.
• A GPS receiver with a remote antenna to increase accuracy, connected via USB to the Raspberry Pi.
• An AIS antenna, tuned for the maritime VHF (Very High Frequency) band where the AIS frequencies of 161.975 and 162.025 MHz are located, and connected to the RTL-SDR dongle.
• More traditional sensors: an MMA8451 accelerometer, a GYR03b gyroscope and a BME680 weather sensor to measure humidity, pressure and temperature values. All these sensors have low power consumption and are connected to the I2C (Inter-Integrated Circuit) pins on the GPIO (General Purpose Input/Output) header of the Raspberry board.
• A USB 4G stick, providing sufficiently large 4G coastal coverage in the navigation areas, with a SIM subscription to send information in real time via the Internet. This transmission is achieved in a secure way, through the use of a Virtual Private Network.
• An external battery of 20,000 mAh, which continuously supplies the power needed by the Raspberry board and its components. It gives the system an autonomy of about 24 h. The external battery is also constantly plugged into a 220 V power outlet on board the ship, ensuring a high level of power resilience, which has proven to be useful during maintenance operations.
• All elements are embedded into a hermetic and electrically insulated suitcase.
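To make the acquisition chain more concrete, the sketch below shows how frames from the USB GPS receiver could be logged on the nano computer. It is a minimal illustration rather than the authors' implementation: the serial device path, baud rate and output file name are assumptions, and the third-party pyserial package is an assumed dependency.

```python
# Minimal on-board logging sketch (illustrative, not the HAPPINESS code).
# Assumptions: the GPS receiver enumerates as /dev/ttyACM0 at 9600 baud and
# the "pyserial" package is installed (pip install pyserial).
import csv
import time

import serial  # pyserial


def log_gps_frames(port="/dev/ttyACM0", baudrate=9600, out_path="gps_frames.csv"):
    """Append raw NMEA-0183 sentences from the GPS receiver to a CSV file,
    together with a reception timestamp, mirroring the local storage step."""
    with serial.Serial(port, baudrate, timeout=1) as gps, \
            open(out_path, "a", newline="") as out:
        writer = csv.writer(out)
        while True:
            line = gps.readline().decode("ascii", errors="replace").strip()
            if line.startswith("$"):  # keep only complete NMEA sentences
                writer.writerow([time.time(), line])
                out.flush()           # limit data loss on sudden power cuts


if __name__ == "__main__":
    log_gps_frames()
```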


Fig. 1 HAPPINESS project architecture

3 Methodology for Data Collection This section describes the method used to receive, collect and interpret the data in real time, and also how to save it in a long-term retention database. Once the whole system was built, a thorough definition of the collection and visualization processes was necessary. Data flowing from the different sensors and receivers of


Fig. 2 Data collection methodology

the suitcase are first recorded on the embedded system for local storage and then sent ashore to a remote server to be ingested into a more global database. For the first phase of this prototype, we asked the shipping company "Morlenn Express"4 for authorization to set up the system on one of their boats. The boats of this company, specialized in the local transportation of passengers, can accommodate an average of 200 people travelling between different cities in Brittany, especially between the maritime cities of Brest and Crozon, on the French west coast. Those boats, 35 meters long and 7 meters wide, make simple and regular maneuvers throughout the day, achieving 4 round trips per day with an average duration of 1 hour per trip, which is suitable for the validation phases of the prototype. The following diagram (Fig. 2) shows the data collection methodology. This prototype was placed directly on the bridge of one of the company's boats so that it is electrically powered and easily accessible. Once the system is in place, the 4G LTE connection allows us to send data to the remote server. Ashore, data are sorted, parsed and cleaned in real time to ease and enrich further analysis before

4 http://www.morlenn-express.com/.

Table 1 Examples of NMEA messages

Talker ID | Code | Message description
GP | GGA | Global positioning fix data (GPS)
GP | GLL | Latitude/longitude data (GPS)
GP | RMC | Recommended minimum data (GPS)
GP | GSV | Satellites in view (GPS)
GP | GSA | GPS DOP and active satellites (GPS)
AI | VDM | Received data from other vessels (AIS)

being stored on the server. Finally, we send those data to an open-source ECS (Electronic Chart System) called OpenCPN.5

The data, whether from sensors, GPS or AIS, are all first stored using the ".csv" format, with different headers and timestamps. The reception of navigation data is NMEA-0183 standard compliant. This format, very similar in operation to the CAN bus protocol, was created to ease connections between navigation systems (logs, speedometers, echosounders, GPS and AIS receivers, …) over serial interfaces. Table 1 details some examples of NMEA frame Talker and Sentence IDs received for GPS and AIS. NMEA frames are coded in ASCII and composed of a Talker ID (the first two letters of the frame, indicating which system transmits the information: "GP" for GPS, "AI" for AIS, "II" for Integrated Instruments, ...). Then, the frame type gives indications on the content of the complete frame: for example, if we consider a frame with the "GP" talker, the type "GGA" will give satellite information, "RMC" will give speed and course information, while "RMB" will give waypoint information. These frames are dedicated to "Listeners" such as ECDIS (Electronic Chart Display and Information System), giving the operator precise information on the environment in which the boat evolves.

During the whole experimentation phase, the acquisition system was placed for more than 2 months on 3 different ships of the company "Morlenn Express", respectively named "Tibidy", "Louarn" and "Dervenn". Figures 3, 4 and 5 show an example of data collected on August 17, 2021 from 11:09 a.m. to 01:10 p.m. (about 2 h of data sending), with a round trip of the boat during this period. The following numbers of frames were collected:

• about 2200 NMEA-0183 GPS frames per hour,
• about 5750 NMEA-0183 AIS frames per hour,
• about 4250 frames from various sensors per hour.

Thus, since June 29, 2021, we have collected a substantial amount of data (GNSS tracks, NMEA frames and sensor data), thanks to this system that is later intended to remain on board for a long time (probably two years). Thus, for an average of 10 hours of data reception per day (the moments when the boat is sailing and not at the dock) over 2 months, we obtain a database of more than 19,738,300 raw frames, totaling a little less than 7 GB.

5 https://www.opencpn.org/.
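As a small illustration of the Talker ID/sentence-type convention described above, the sketch below splits a received sentence into its talker and type and, for GGA fixes, pulls out a few raw fields. The field positions follow the standard GGA layout; the sample sentence is invented for the example and checksum validation is omitted for brevity. This is not the parser used in the project.

```python
# Sketch: decompose an NMEA-0183 sentence into talker ID and sentence type,
# and extract a few raw fields from a GGA fix (checksum handling omitted).
def parse_sentence(sentence: str) -> dict:
    body = sentence.strip().lstrip("$").split("*")[0]  # drop '$' and checksum
    fields = body.split(",")
    talker, stype = fields[0][:2], fields[0][2:]       # e.g. 'GP' + 'GGA'
    parsed = {"talker": talker, "type": stype, "fields": fields[1:]}
    if stype == "GGA" and len(fields) > 7:
        parsed.update(
            utc_time=fields[1],
            latitude=f"{fields[2]} {fields[3]}",    # raw ddmm.mmmm + hemisphere
            longitude=f"{fields[4]} {fields[5]}",
            fix_quality=fields[6],
            satellites_in_use=fields[7],
        )
    return parsed

# Example with an illustrative GGA sentence:
print(parse_sentence(
    "$GPGGA,110932,4823.12,N,00429.54,W,1,08,0.9,12.1,M,49.0,M,,*47"))
```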


Fig. 3 Some samples of collected data

Fig. 4 Example of AIS message visualization (in black) and ship trajectory (in green) during 2 h


Fig. 5 Evolution of GPGGA and sensors data


These data are dedicated to the elaboration of a model representing the normal behavior of a ship of this size. Once cleaned and parsed, we identified 20 features to conduct data-driven analysis, such as:

• Latitude,
• Longitude,
• Reception_Status,
• Speed_Over_Ground,
• Track_Angle,
• GPS_Receiver_Quality,
• Number_Satellites_in_View,
• Orthometric_Height,
• Horizontal_Dilution_Of_Precision,
• Vertical_Dilution_Of_Precision,
• Position_Dilution_Of_Precision,
• Satellite_ID,
• Satellite_Elevation,
• Satellite_Azimuth,
• SNR,
• Maritime_Mobile_Service_Identity,
• Ais_Status,
• Course,
• Heading,
• Raim,
• Radio.

These characteristics are the ones that are likely to change when GNSS systems fall victim to cyberattacks such as spoofing. A long-term analysis of the evolution of these features would eventually allow us to determine the normal behavior of the GNSS signals and can help to detect outliers. The interest of collecting this type of data is that it provides richer information than what can be found on sites that simply give free access to AIS frames. Although these frames are interesting to analyze, the GPS data that we recover here give us additional precise information on the quality of the satellite reception, with, for example, indications on the constellation of satellites used and the respective satellite IDs (which can be useful in the case of a spoofing attack). In the same way, more detailed information on speed, direction and heading is collected. Here, the goal of combining data from AIS, GPS and different sensors is to allow correlation for further analysis.
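As one possible illustration of how such a baseline could be used to flag outliers, the sketch below computes a rolling z-score on Speed_Over_Ground from a cleaned CSV. The column names, window and threshold are assumptions chosen for the example; the modeling approach actually pursued in the project may differ.

```python
# Illustrative outlier flag (not the project's model): rolling z-score on
# speed over ground, assuming a cleaned CSV with the features listed above.
import pandas as pd

def flag_speed_outliers(csv_path: str, window: int = 120, z_max: float = 4.0):
    df = pd.read_csv(csv_path, parse_dates=["timestamp"])  # assumed column name
    sog = df["Speed_Over_Ground"].astype(float)
    mean = sog.rolling(window, min_periods=window).mean()
    std = sog.rolling(window, min_periods=window).std()
    df["sog_zscore"] = (sog - mean) / std
    df["sog_outlier"] = df["sog_zscore"].abs() > z_max
    return df[df["sog_outlier"]]

# Rows flagged here would warrant a closer look, e.g. against AIS reports or
# satellite-count/DOP features, before suspecting GNSS spoofing.
```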

4 Conclusion

In this paper, we present the concept of an embedded and autonomous on-board system that is able to collect and store local GNSS and AIS information. As of August 2021, 2 prototypes of this project are currently operating on 2 different ships from


the maritime transport company "Morlenn Express", with 2 others in the final step of development. This project allows us to gather a huge amount of GNSS track data (such as GPS and AIS data, and data from various kinetic and motion sensors such as accelerometers and gyroscopes, as well as weather sensors). It feeds a database for long-term retention, thanks to the real-time transmission to a remote shore server in a secure infrastructure, generating open data to feed a Naval Cyber Range. These data allow us to reproduce typical scenarios for this kind of passenger-carrying vessel, merchant ship or cargo ship. Since the beginning of the project, we have captured almost 4 full months of data and more than 19 million GPS and AIS frames. In addition, this project makes it possible to easily conduct dataset generation experiments and in-depth analysis of ship motion and kinetic features in the context of cyberattacks such as GNSS spoofing, which are increasing in this critical domain. Future work will include improvements in the quality of the antennas and receivers, the placement of more suitcases on different types of ships over a longer period, and more global studies in order to select and study the relevant features and to model the normal behavior of a ship.

Acknowledgements This work is supported by the Chair of Naval Cyber Defense and its partners Thales, Naval Group, French Naval Academy, IMT-Atlantique, ENSTA-Bretagne and the Region Bretagne. Special thanks to the maritime transport company "Morlenn Express", who agreed to collaborate with us for the testing and the validation of this project.

References

1. Alincourt, E., Ray, C., Ricordel, P.M., Dare-Emzivat, D., Boudraa, A.: Methodology for AIS signature identification through magnitude and temporal characterization. In: OCEANS 2016—Shanghai (2016)
2. Balduzzi, M., Pasta, A., Wilhoit, K.: A security evaluation of AIS automated identification system. In: Proceedings of the 30th Annual Computer Security Applications Conference. ACSAC'14 (2014)
3. Becmeur, T., Boudvin, X., Brosset, D., Heno, G., Coste, B., Kermarrec, Y., Laso, P.M.: Generating data sets as inputs of reference for cyber security issues and industrial control systems. In: 2017 11th International Conference on Research Challenges in Information Science (RCIS) (2017)
4. Blauwkamp, D., Nguyen, T.D., Xie, G.G.: Toward a deep learning approach to behavior-based AIS traffic anomaly detection. In: (DYNAMICS) Workshop, San Juan (2018)
5. Boudehenn, C., Cexus, J.C., Boudraa, A.: A data extraction method for anomaly detection in naval systems. In: International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA) (2020)
6. Boudehenn, C., Jacq, O., Lannuzel, M., Cexus, J.C., Boudraa, A.: Navigation anomaly detection: an added value for maritime cyber situational awareness. In: International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA) (2021)
7. Iphar, C., Napoli, A., Ray, C.: An expert-based method for the risk assessment of anomalous maritime transportation data. Appl Ocean Res (2020)
8. Iphar, C., Napoli, A., Ray, C., Alincourt, E., Brosset, D.: Risk analysis of falsified automatic identification system for the improvement of maritime traffic safety. In: 26th European Safety and Reliability Conference (2016)
9. Kelley, J.: From super-yachts to web isolation. Comput Fraud Security 2017(12) (2017)
10. Mileski, J., Clott, C., Galvao, C.: Cyberattacks on ships: a wicked problem approach. Maritime Bus Rev 3 (2018). https://doi.org/10.1108/MABR-08-2018-0026

Building Maritime Cybersecurity Capacity Against Ransomware Attacks

Georgios Potamos, Savvas Theodoulou, Eliana Stavrou, and Stavros Stavrou

Abstract Ransomware is considered among the top threats that organizations have to face, and one that is not expected to go away anytime soon. Cyber criminals have turned ransomware into a profitable business by targeting environments in which they can maximize the attack's impact and their profits. The Maritime domain is a lucrative environment as it supports many aspects of the supply chain, making it a high priority target for cyber criminals. Recent ransomware incidents in the Maritime domain have demonstrated the necessity to increase cyber risk awareness and readiness levels to effectively address this threat. To defend against ransomware, minimize the risk of infection and/or recover if impacted, the required cybersecurity capacity needs to be developed. This can be achieved by educating and training all Maritime stakeholders, according to their role and responsibilities, across a strategic, operational, and/or tactical level. The challenge is to determine the capabilities that the Maritime stakeholders need to develop across the different levels, so they can exercise sound judgement and procedures when faced with a ransomware incident. This work presents an innovative training curriculum that was developed to build cybersecurity capacity in the Maritime domain and defend against ransomware attacks. A highlight of the proposed curriculum is that it specifies structured walkthrough practice to promote active learning and make education memorable and actionable. The proposed curriculum aims to provide design directions to the cybersecurity community to develop new training curricula to address future ransomware attacks.

Keywords Maritime cybersecurity capacity · Ransomware attacks · Maritime cybersecurity training · Maritime cyber situational awareness · Cybersecurity curriculum

G. Potamos (B) · S. Theodoulou · S. Stavrou Open University of Cyprus, Nicosia, Cyprus e-mail: [email protected] E. Stavrou Applied Cyber Security Research Lab, University of Central Lancashire Cyprus, Larnaca, Cyprus © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 C. Onwubiko et al. (eds.), Proceedings of the International Conference on Cybersecurity, Situational Awareness and Social Media, Springer Proceedings in Complexity, https://doi.org/10.1007/978-981-19-6414-5_6


1 Introduction

Ransomware is considered among the prime threats that organizations have to face today, regardless of the sector they belong to [1]. This threat refers to a form of malware designed to encrypt files on a device, rendering operations unusable. The infected organization is then faced with a ransom demand in exchange for the decryption keys. During the past year, it was observed that the frequency and complexity of ransomware attacks (Fig. 1) have increased, maximizing the cyber criminals' profits and shifting into the golden era of ransomware. Cyber criminals have turned ransomware into a stable, profitable business by offering Ransomware-as-a-Service (RaaS). This service provides the means for cyber criminals to employ a wide variety of tactics, techniques, and procedures (TTPs) to compromise an infrastructure, aiming to render its operations unusable, steal information, and thus critically impact the organization. Such adversarial capabilities pose a great challenge to organizations to be able to protect their infrastructure from ransomware attacks and to mitigate their impact.

In recent years, cyber criminals have shifted their attention to critical infrastructure organizations, deploying a quadruple extortion scheme [2, 3] to maximize their profits. The quadruple ransomware extortion scheme, as depicted in Fig. 1, involves deploying a ransomware and encrypting the organization's files (single extortion), exfiltrating sensitive data and threatening to publicize them (double extortion), further disrupting operations with a DDoS attack (triple extortion), and finally establishing direct communication with customers and supply chain stakeholders (quadruple extortion) to force the organization to act upon the cyber criminals' demands.

Fig. 1 Ransomware extortion scheme


A critical infrastructure that has become a recent target of adversarial campaigns is the Maritime domain [4]. The first ransomware attack in this domain was reported in 2017 [5], with devastating operational and financial impact, and since then the attacks have been rising at an alarming rate. To defend against ransomware, minimize the risk of infection and/or recover if impacted, all Maritime stakeholders need to develop relevant knowledge and skills based on their role and responsibilities at a strategic, operational, and tactical level [4]. The ransomware extortion scheme (Fig. 1) that is often deployed relates to a range of business, operational, and tactical aspects that Maritime stakeholders need to be aware of. The challenge is to identify and build the necessary cybersecurity capabilities across the workplace hierarchy so that people can exercise sound judgment and procedures when faced with a ransomware incident.

This work presents an innovative training curriculum that was developed to build relevant knowledge and skills to defend against ransomware attacks in the Maritime domain. The authors envision that the proposed curriculum can inspire and guide other curriculum designers to develop relevant training resources, thus enhancing the cybersecurity capacity in the Maritime domain. Section 2 presents relevant work in this area. Section 3 analyses ransomware attacks in the Maritime domain and highlights the attack methodology that can be deployed in the context of a ransomware extortion scheme. Section 4 presents the training curriculum that was developed, providing insights as to the key aspects that could be considered for the development of future curricula to address ransomware attacks. Section 5 describes the training platform that was utilized to build the proposed curriculum and briefly presents one of the developed practical activities. Finally, Sect. 6 concludes the work.

2 Related Work

The Maritime domain is a lucrative environment for cyber criminals, thus defending against cyber threats should be a high priority for all Maritime stakeholders [6]. To this end, cybersecurity training of all stakeholders in the shipping industry is pursued [7], due to the necessity to increase cyber risk awareness and preparedness by developing material and practical exercises that build adequate skills [8]. For this purpose, several studies describe training methods and aspects covering most of the cybersecurity functional areas [9]. An essential training topic that should be promoted is how to manage the risk of ransomware. Several organizations have focused on issuing guidelines to increase awareness of ransomware attacks and address the threat. NIST special publication 8374 [10] specifies a ransomware profile that can be used as a guide to manage the risk of ransomware incidents, taking into consideration the Cybersecurity Framework. The "No More Ransom" project [11] is the first public–private partnership of law enforcement and IT security companies aimed at helping victims of ransomware recover their encrypted data without having to


pay the ransom. The project has also developed guidelines for regular users and for businesses to mitigate ransomware attacks. The Ransomware Task Force (RTF) was formed in early 2021 by the Institute of Security and Technology (IST) in partnership with experts in industry, government, academic institutions, etc. The RTF developed a framework of actions [12] to reduce the impact of ransomware attacks. Despite these efforts, the fact that organizations keep getting infected with ransomware at an alarming rate, and that many of them decide to pay the ransom, indicates that we have not reached the necessary readiness level to address this threat. Investigations and solutions related to developing specific capabilities to defend and respond against a ransomware extortion scheme are still limited, especially with regards to the Maritime domain, and need to be extended.

Cybersecurity competency frameworks have been proposed in the context of several EU pilot projects (FORESIGHT, SPARTA, Cyber4Europe, CONCORDIA and ECHO) launched to provide valuable insights as to the roles and the cybersecurity competencies that need to be developed in different sectors. A recent work [13] highlights the relationship between Maritime safety, security, and training, and presents a framework to develop a risk-aware culture in the Maritime domain. However, more training courses are needed towards building knowledge and skills to defend against ransomware attacks and minimize the risk of infection and/or impact across an organization. The delivery and continued development of effective training and exercises is critical for the development of a cyber situation-aware workforce. For practical exercises, cyber ranges are commonly used as the proper environment to perform offensive and defensive training drills [14, 15].

3 Profiling Ransomware Attacks in the Maritime Domain

This section aims to profile recent ransomware attacks in the Maritime domain to gain a better understanding of the attacks, the cyber criminals' motivation, their capabilities, the impact to a ship and/or ashore systems, etc. Such knowledge can then be utilized to inform the design of training curricula to build capabilities to manage the risk of ransomware incidents.

3.1 Analysis of Recent Ransomware Cases

Over the last few years, ransomware has been used against different Maritime targets, including shipping companies' servers and management systems, ports, ships, and supply chain stakeholders. Unfortunately, in many cases, the attacks were successful and impacted critical infrastructures, such as ports and energy transportation facilities, rendering part of their main services unusable and affecting the entire supply chain. The financial impact of these attacks was enormous for the shipping industry. The work in [16] analyzes the major cybersecurity incidents that occurred in the


Maritime domain since 2010, covering a range of attack vectors and the relevant affected Maritime assets/systems. Specifically, 41% of the reported incidents were relevant to ransomware attacks. Table 1 lists the ransomware incidents that concern the Maritime domain, providing insights related to the targets, the ransomware utilized, the impact, and the Maritime assets that have been affected. Valuable observations are extracted from Table 1:

• Ransomware impacted both onboard and ashore Maritime stakeholders and assets, including shipping companies, energy critical infrastructures, ports, etc.
• Both IT and OT systems were infected. Most of them are related to IT.
• Several ransomware strains were weaponized, using different initial access techniques. Further information on the attack methodology is analyzed in Sect. 3.2.
• The attackers managed to cause huge financial consequences using ransomware.
• The attackers' motivation went beyond getting ransom, focusing on the disruption of services.
• Attacks affected the supply chain.

In addition to the known incidents listed in Table 1, several scenarios have been constructed to demonstrate the possible implementation of ransomware attacks against Maritime OT systems:

(a) Disruption of ECDIS services [17] during the sailing of a ship. To be able to do this, an attacker could tamper with the communication between ECDIS and the navigation sensors/systems.
(b) Taking control of crucial peripheral systems such as cargo management systems and onboard machinery management systems [17]. For instance, a ransomware controlling the vessel could block any door or movement towards land, holding passengers as hostages at sea until a ransom is paid.
(c) Blocking the ECDIS system [4] and forcing the ship to sail with the use of radar and optical means. This incident may cause departure or arrival delays, and may also increase the possibility of collisions, especially in cases where visibility is limited.

3.2 Attack Methodology

This section presents the tactics and techniques from the MITRE ATT&CK framework that can be utilized to apply a ransomware attack in the Maritime domain. Initially, to facilitate the selection of adversarial tactics and techniques, the existing vulnerabilities of the Maritime IT/OT systems [4] have been identified. Moreover, it was taken into consideration that the Maritime domain includes ships and ashore Maritime infrastructures that have different IT/OT systems and operational capabilities. A ransomware attack can be executed against a ship's systems and/or a Maritime infrastructure's systems (including ports, shipping companies, shipyards, supply chain providers, etc.). A successful ransomware attack can compromise the operation of IT systems onboard or ashore and at the same time can directly impact


Table 1 Recent ransomware incidents in the Maritime domain

Maritime target | Incident description | Ransomware used | Infected system/insertion point
Shipping company | Maersk: 4000 servers and 45,000 Windows workstations had to be reinstalled (2017) | NotPetya | Update patch for the tax accounting software; Windows servers/workstations
Shipping company | COSCO shipping lines (2018) | Undisclosed ransomware | Email and network telephony
Shipping company | Diana Shipping company (2020) | Egregor | Limited details are known for this incident
Shipping company | French container carrier company CMA CGM (2020) | Ragnar Locker | Online services had to shut down
Shipping company | German cruise operator AIDA (2020) | DoppelPaymer | IT systems impacted
Shipping company | MSC HQs in Geneva (2020) | Undisclosed ransomware | Servers/PCs shut down for five days
Ports | Rotterdam (2017) | Petrwrap (version of NotPetya) | Container terminal
Ports | America (2019) | Ryuk | CCTV, access control systems
Ports | San Diego and Barcelona (2018) | Ryuk | IT systems
Ports | Marseilles (2020) | Mespinoza/Pysa | Limited details are known for this incident
Ports | Long Beach (2018), Kennewick (2020) | Undisclosed ransomware | Servers locked
Ports | Lanngsten (2020) | Undisclosed ransomware | Database breach by encryption
Ports | Nhava Sheva (2022) | Undisclosed ransomware | Container terminal
US pipeline operator | Gas pipeline infrastructure shut down for two days (2019) | Undisclosed ransomware (presumably Ryuk) | Via phishing email; natural gas compression facility was hit
Shipyards | Operational disruption in Norwegian Vard | Undisclosed ransomware | Online services
Marine service provider | James Fisher (2019) forced to shut down its digital systems | Undisclosed ransomware | IT/OT systems
Cruise operator | Carnival Corporation and PLC | Undisclosed ransomware | Personal info, card details


Fig. 2 Attack methodology (MITRE ATT&CK tactics mapped to ship-side and ashore techniques, covering Reconnaissance, Resource Development, Initial Access, Execution, Persistence, Defence Evasion, Discovery, Lateral Movement, Collection, Command and Control, Exfiltration, and Impact)

OT systems utilized in a Maritime context. To provide comprehensive coverage of the adversarial tactics and techniques that can be utilized to deliver a ransomware attack in the Maritime context, the MITRE ATT&CK Enterprise and ICS matrices have been utilized to extract the information provided in Fig. 2. It should be highlighted that, due to the integration of specific management systems on a ship with the corresponding ashore systems, an attacker can implement a combination of ship-to-shore attack techniques and maximize the attack's impact. As observed from Fig. 2, a range of techniques related to ransomware attacks have been identified, indicating which of the techniques can be applied against a ship or ashore systems. In addition, Fig. 2 shows that most of the techniques may impact the IT systems of a ship, including part of the OT systems. In such a case, a ransomware attack may have an operational impact on the ship, changing its maneuvering behavior/capability. In the context of creating training exercises, Fig. 2 can be utilized to guide the design of specific attack scenarios, demonstrating specific adversarial capabilities that aim to infect specific Maritime assets.
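To suggest how Fig. 2 might be turned into a concrete scenario skeleton, the sketch below encodes one ship-side attack path as an ordered list of tactics with candidate techniques. The MITRE ATT&CK technique IDs shown are an illustrative mapping chosen for this example and should be verified against the current ATT&CK release before being used in an exercise; this is not the authors' own mapping.

```python
# Illustrative scenario skeleton derived from Fig. 2 (ship-side path).
# Technique IDs are an indicative mapping only, to be checked against the
# current MITRE ATT&CK Enterprise matrix.
SHIP_RANSOMWARE_PATH = [
    ("Reconnaissance", "Gather victim host/network info (OSINT, datasheets)", None),
    ("Initial Access", "Compromise VSAT controller / communications router", None),
    ("Execution", "User execution of a malicious file", "T1204"),
    ("Persistence", "Scheduled task / startup execution", "T1053"),
    ("Discovery", "Network sniffing of bridge traffic", "T1040"),
    ("Lateral Movement", "Exploitation of remote services", "T1210"),
    ("Impact", "Data encrypted for impact on ECDIS directories", "T1486"),
    ("Impact", "Service stop", "T1489"),
]

def print_scenario(path=SHIP_RANSOMWARE_PATH):
    for tactic, technique, att_id in path:
        ref = f" [{att_id}]" if att_id else ""
        print(f"{tactic:>18}: {technique}{ref}")

if __name__ == "__main__":
    print_scenario()
```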

4 Situation-Aware Training Curriculum: Design Directions

This section discusses the training curriculum that was developed over a Maritime Cyber Range called MARTE [14], taking into consideration the guidelines provided


in [18]. The section aims to highlight key cyber aspects and to provide design directions for the development of future curricula related to building cyber capabilities to manage the risk of ransomware attacks.

4.1 Audience and Scope

Addressing ransomware attacks is very challenging, mainly because it is a topic that concerns business, operational, and tactical aspects. This means that all people across the entire organizational hierarchy should build relevant knowledge and skills to handle such an incident. The challenge is to determine the capabilities that people need to develop across the different levels so they can handle a ransomware incident. At a strategic level, senior decision-makers need to build knowledge and skills to handle ransomware from a business perspective. People that make day-to-day decisions regarding Maritime operations should build capabilities to address ransomware incidents that affect operational aspects. Lastly, personnel at a tactical level must be able to identify adversaries' actions and the assets that have been compromised, to inform and drive tactical decisions.

To mitigate a ransomware incident in the Maritime domain effectively and efficiently, training curricula should consider a unified approach, specifying the learning objectives, topics, and pedagogy utilized across strategic, operational, and tactical levels. Such an approach can assist all parties involved to understand their role in mitigating ransomware incidents, building relevant knowledge and skills at an individual and at a team level. The latter can greatly contribute to coordinating actions effectively across the workforce hierarchy during a ransomware incident, so that the right decision-maker can take the right actions to mitigate the impact of the attack and support the mission [18] of the organization.

4.2 Learning Objectives

The first aspect that needs to be specified when designing training curricula is to identify the learning objectives. Table 2 lists key learning objectives across a strategic, operational, and/or tactical level to develop a skillful workforce that can defend against a ransomware attack. The proposed learning objectives take into consideration the guidelines in [19, 20]. The list of learning objectives is not meant to be exhaustive, but rather indicative, and can be further elaborated depending on the training needs, the personnel's role, and the proficiency level of cybersecurity capabilities that need to be demonstrated by the personnel.


Table 2 Learning objectives

ID | Learning objective | Level
LO1 | Understand how ransomware works | S/O/T
LO2 | Discuss the impact to the operations and mission objectives of the organization from a ransomware attack | S/O
LO3 | Describe how ransomware can be delivered during the initial stages of an attack strategy | S/O
LO4 | Apply cyber hygiene to minimize the risk of ransomware infection | S/O
LO5 | Communicate information about a ransomware attack to the appropriate stakeholders/authorities | S/O
LO6 | Exercise judgment regarding the course of action at a business level when faced with a ransom demand | S
LO7 | Apply a ransomware incident response plan | S/O/T
LO8 | Understand adversaries' mindset and how data exfiltration can occur | T
LO9 | Implement network hardening and best practices to safeguard the Maritime assets from ransomware attacks | T
LO10 | Deploy and use intrusion detection, network monitoring and analysis tools to identify techniques utilized across the cybersecurity kill chain | T
LO11 | Apply monitoring rules to improve early detection of ransomware attacks and relevant adversarial activities | T

Level: S Strategic, O Operational, T Tactical
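For the tactical-level objectives LO10 and LO11, one way a practical activity could be framed is to have trainees express a simple early-warning rule in code. The sketch below is a hedged example of such a heuristic: it flags a host that rewrites an unusually large number of files sharing a single extension within a short window, a common (though by no means universal) ransomware indicator. The event format and thresholds are assumptions made for the exercise, not a production detection rule from the curriculum.

```python
# Illustrative training exercise for LO10/LO11: flag a burst of file writes
# that share one extension on the same host (a rough ransomware IoC).
# Event format and thresholds are assumptions made for the exercise.
from collections import defaultdict
from datetime import timedelta

def flag_encryption_bursts(events, window=timedelta(minutes=5), threshold=200):
    """events: iterable of dicts with 'time', 'host', 'path' keys, sorted by time."""
    alerts = []
    recent = defaultdict(list)  # (host, extension) -> recent timestamps
    for ev in events:
        ext = ev["path"].rsplit(".", 1)[-1].lower()
        key = (ev["host"], ext)
        bucket = recent[key]
        bucket.append(ev["time"])
        # keep only events that fall inside the sliding window
        recent[key] = [t for t in bucket if ev["time"] - t <= window]
        if len(recent[key]) >= threshold:
            alerts.append((ev["host"], ext, ev["time"]))
    return alerts
```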

4.3 Learning Content and Pedagogy

This section provides directions as to the learning content that should be developed to support the learning objectives specified in the previous section. A key element to consider is the pedagogy utilized to deliver the learning content and activities. As identified in [21], active learning should be pursued to improve cybersecurity awareness and training levels and build practical skills. As indicated in Fig. 3, active learning is empowered through kinesthetic learning. This aspect can be supported with the use of Maritime cyber ranges, delivering demonstrations and practical activities. This is taken into consideration when designing the proposed curriculum, indicating the topics where practical activities fit.

To promote the learning objectives, 4 thematic areas have been considered (Table 3 indicates the relevant Learning Content (LC) and pedagogy) to educate and train Maritime stakeholders, covering: (1) fundamentals of the Maritime domain and ransomware, to assist attendees to understand how ransomware works and identify signs of infection (LC1,2), (2) ransomware cases in the Maritime domain, to gain an understanding of what this attack entails and the impact it can have in the Maritime domain (LC3,4,5), (3) ransomware mitigation steps, to be able to minimize the risk of getting infected but also to be ready to recover if impacted by an attack (LC6–11), and (4) coordination activities, decision-making and communication during an attack in the context of a ransomware incident response plan (LC12), to empower stakeholders to effectively and efficiently address the attack.


Fig. 3 Pyramid of learning

Table 3 Learning content

No. | Learning Content (LC) | Pedagogy
1 | Maritime domain, mission objectives, assets, operations, and stakeholders | L/CD
2 | What is ransomware, how it works, signs of infection | L/AV/D
3 | Ransomware attacks in the Maritime domain, malicious objectives, and impact | CSA/CD
4 | Delivery of ransomware: the case of social engineering attacks | L/CD/D/PA
5 | To pay or not to pay the ransom: risks involved | L/CD
6 | Apply cyber hygiene practices | D/CD/PA
7 | MITRE ATT&CK matrices: data exfiltration; Tactics, Techniques, Procedures (TTP) | CSA/D/PA
8 | Industry guidelines, standards, and controls to mitigate ransomware | L/CD/SGA
9 | Network-based intrusion detection systems (NIDS), host-based intrusion detection systems (HIDS), network monitoring and analysis tools: configuration and use | D/CD/PA
10 | Ryuk ransomware walkthrough | CSA/D
11 | Ransomware attack simulation, Indicators of Compromise (IoC) and SIEM rule configuration | PA
12 | Ransomware incident response plan | TE/AV/SGA/CD

L Lecture, CD Critical Discussion, CSA Case Study Analysis, D Demonstration, SGA Small Group Activity, TE Tabletop Exercise, AV Audio-Visual, PA Practical Activity


A training course that deploys the proposed curriculum can deliver parts of the learning content to different audiences, depending on their role and background (technical versus non-technical), while certain parts can be applicable to all Maritime stakeholders working at a strategic, operational, and/or tactical level. Therefore, the curriculum developers should elaborate the content to the appropriate level considering the relevant audience.

5 Training Platform

This section presents the training platform that was utilized to develop the proposed curriculum. An example of a practical activity included in the curriculum is briefly presented.

5.1 High-Level Architecture

A training environment is proposed as part of a Maritime Cyber Range called MARTE [14], in order to execute offensive and defensive drills, including training exercises related to ransomware attacks. As depicted in Fig. 4, the platform contains the typical Command-and-Control (C2) systems found on a ship or in an ashore Maritime infrastructure. According to the profile of ransomware attacks in the Maritime domain (Sect. 3), several attack points and tactics can be considered in training scenarios and implemented in the proposed infrastructure. In particular, the adversary can implement a ransomware extortion scheme against:

Fig. 4 Training platform high-level architecture (ship segment with an integrated navigation systems network (GNSS, AIS, RADAR, ECDIS/plotter, INS router), an administrative/entertainment network, and an industrial ICS network (ICS router, access management, machinery monitoring and control system, PLC), connected through the ship gateway to the ashore administration/management C2 and C2/MSA systems; ransomware attack points and the ship-to-shore attack path are highlighted)


• Maritime IT/OT systems on a ship, including navigation and surveillance components. This category of training exercises contains adversarial techniques implemented against OT and/or IT components onboard a ship.
• Industrial (ICS/OT) systems on a ship and/or ashore Maritime assets (IT/OT/ICS) in ports, shipping companies' operation centers, shipyards, energy transportation infrastructures, etc. In this case, the attack can impact the Machinery Monitoring and Control System (MMCS). Moreover, a range of ICS and/or IT adversarial techniques can be implemented against the Maritime assets. To this end, the platform takes into consideration the MITRE ATT&CK matrices and supports training scenarios implementing techniques from the Enterprise and ICS matrices. In addition, it is taken into consideration that sophisticated attacks can take control of integrated systems between the ship and the shore. The platform supports the implementation of sophisticated offensive scenarios, including the execution of pivoting attacks that allow the attacker to move from one compromised Maritime asset to another, onboard a ship or at ashore infrastructures.
• IT systems utilized for administration and entertainment purposes. The platform supports the implementation of offensive and defensive scenarios that include attacks against these systems on ships and/or at ashore infrastructures.
• The C2/MSA system of an ashore Maritime infrastructure, capable of collecting navigation and surveillance data from a ship or a fleet of ships. The collected data are fused accordingly to compose the Maritime Situational Awareness (MSA). Attacks against this system may impact the running services and manipulate its connected databases. Training scenarios can consider single (data encryption), double (data exfiltration), and triple (DDoS) extortion (Fig. 1) against the C2/MSA system, maximizing the attacks' impact on the MSA.

5.2 Example of a Training Scenario

This section briefly discusses an attack scenario that is implemented over MARTE considering the attack methodology presented in Fig. 2. The training scenario (Fig. 5) considers the case where a ransomware attack is executed against the ECDIS onboard a ship. The trainees are expected to take the role of the attacker and implement a range of adversarial techniques across the cybersecurity kill chain so they can gain an understanding of the attackers' mindset.

Initially, it is assumed that the attacker managed to compromise the VSAT communication system due to weak authentication measures and has access to the ship's gateway. Then, via network sniffing, the attacker can discover the connections and the data flow between the sensors and the C2 system (the ECDIS workstation). This technique is feasible if the attacker monitors all the maritime protocols transmitted by the surveillance and navigation sensors, as the relevant traffic (e.g., GNSS NMEA, Radar NMEA and AIS NMEA packets) is communicated in plain text and collected by ECDIS. Once the ECDIS workstation is identified, a vulnerability is exploited


Fig. 5 Example of a training scenario (kill chain stages: Reconnaissance, Resource Development, Initial Access, Execution, Persistence, Discovery, Lateral Movement, Exfiltration, Impact; e.g. gathering victim network and host information on navigation and surveillance equipment, compromising the VSAT controller/communications router, user execution and scheduled-task persistence, network sniffing, modification of the navigational aid or surveillance system configuration, and denial of service)

to gain access to it and modify its configuration file to run ECDIS as a service and establish a persistent connection. Establishing a persistent connection will allow the attacker to investigate more vulnerabilities and move laterally across the bridge's systems. Finally, the remote execution of the malware impacts the track streaming services, encrypting the entire ECDIS directory and stopping the service. Due to the encryption, the attacker manages to cause a denial of the ECDIS service.

Building an attacker's mindset is essential to gain a strong understanding of the techniques that are utilized and how they can be applied. This can constitute the basis to advance the trainees' knowledge and skills through other training scenarios aiming to build competencies to detect, respond, and recover effectively from ransomware attacks.
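To make the network-sniffing step of this scenario tangible for trainees, a classroom demonstration could use a passive listener such as the sketch below, which collects plaintext NMEA sentences broadcast over UDP on the exercise network and tallies the talkers observed. Port 10110 is a common convention for NMEA-0183 over UDP and is an assumption here; the bridge network in a given exercise, or in MARTE itself, may use other transports.

```python
# Demonstration sketch: passively collect plaintext NMEA sentences broadcast
# on the exercise network and count the talkers observed.
# Port 10110 is an assumed/conventional NMEA-over-UDP port, not a MARTE detail.
import socket
from collections import Counter

def observe_nmea(port: int = 10110, max_packets: int = 500) -> Counter:
    talkers = Counter()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", port))
        for _ in range(max_packets):
            data, _addr = sock.recvfrom(1024)
            sentence = data.decode("ascii", errors="replace").strip()
            if sentence[:1] in ("$", "!"):           # '$GP...' or '!AI...'
                talkers[sentence[1:3]] += 1          # e.g. GP, GL, AI
    return talkers

if __name__ == "__main__":
    print(observe_nmea())  # e.g. Counter({'GP': 310, 'AI': 150, ...})
```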

6 Conclusions

Ransomware will continue to evolve as cyber criminals try to maximize their profits by going after lucrative targets such as critical infrastructures. The Maritime domain supports many aspects of the supply chain, making it a high priority target for cyber criminals. To effectively manage the risk of ransomware, all Maritime stakeholders need to understand this risk and their role in protecting the assets they are dealing with. This can be achieved through cybersecurity awareness and training sessions that incorporate structured walkthrough practice to promote active learning and make education memorable and actionable. Engaging all stakeholders, from the C-suite and end users to IT and technical personnel, is critical to develop the necessary capabilities to defend, recover, and respond when faced with a ransomware incident. Cyber


ranges can play a key role in promoting structured learning and supporting the use of different pedagogies to engage participants and achieve effective learning. Therefore, they should be considered in training curricula to develop specific cybersecurity competencies. This work proposed a new training curriculum and provided design directions to build activities over a Maritime Cyber range and develop capabilities to defend against ransomware attacks. Future work will include the development of new training activities to enhance the cybersecurity capacity in the Maritime domain. Acknowledgements This paper has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement no. 833673. The work reflects only the authors’ view, and the Agency is not responsible for any use that may be made of the information it contains.

References 1. European Union Agency for Cybersecurity., ENISA threat landscape 2021: April 2020 to mid July 2021. LU: Publications Office (2021). 25 February 2022. https://doi.org/10.2824/324797 2. Ransomware Double Extortion and Beyond: REvil, Clop, and Conti—Security News. https:// www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/ransomwaredouble-extortion-and-beyond-revil-clop-and-conti. Last accessed 25 February 2022 3. EU-Threat_Landscape_Report-Volume 1 4. The Guidelines on Cyber Security Onboard Ships. https://www.bimco.org/about-us-andour-members/publications/the-guidelines-on-cyber-security-onboard-ships. Last accessed 23 October 2021 5. Mos, M.A., Chowdhury, M.M.: The growing influence of ransomware. In: 2020 IEEE International Conference on Electro Information Technology (EIT), pp. 643–647. https://doi.org/10. 1109/EIT48999.2020.9208254 6. Jacq, O., Boudvin, X., Brosset, D., Kermarrec, Y., Simonin, J.: Detecting and hunting cyberthreats in a maritime environment: specification and experimentation of a maritime cybersecurity operations centre. In: 2018 2nd Cyber Security in Networking Conference (CSNet), pp. 1–8. https://doi.org/10.1109/CSNET.2018.8602669 (2018) 7. Canepa, M., Ballini, F., Dalaklis, D., Vakili, S.: Assessing the Effectiveness of Cybersecurity Training and Raising Awareness within the Maritime Domain, pp. 3489–3499. https://doi.org/ 10.21125/inted.2021.0726 (2021) 8. Tam, K., Jones, K.: MaCRA: a model-based framework for maritime cyber-risk assessment. https://doi.org/10.1007/s13437-019-00162-2 (2019) 9. Hatzivasilis, G., Ioannidis, S., Smyrlis, M., Spanoudakis, G., Frati, F., Goeke, L., Hildebrandt, T., Tsakirakis, G., Oikonomou, F., Leftheriotis, G., Koshutanski, H.: Modern aspects of cybersecurity training and continuous adaptation of programmes to trainees. Appl. Sci. 10, 5702 (2020). https://doi.org/10.3390/app10165702 10. Fisher, B., Souppaya, M., Barker, W., Scarfone, K.: Ransomware Risk Management: A Cybersecurity Framework Profile, NIST Interagency/Internal Report (NISTIR), National Institute of Standards and Technology, Gaithersburg, MD (2022) 11. https://www.nomoreransom.org. The No More Ransom Project. https://www.nomoreransom. org/en/index.html. Last accessed 25 February 2022 12. Combating Ransomware—A Comprehensive Framework for Action: Key Recommendations from the Ransomware Task Force, p. 81 13. Hopcraft, R.: Developing maritime digital competencies. IEEE Commun. Stand. Mag. 5(3), 12–18 (2021). https://doi.org/10.1109/MCOMSTD.101.2000073


14. Potamos, G., Peratikou, A., Stavrou, S.: Towards a Maritime Cyber Range training environment. In: 2021 IEEE International Conference on Cyber Security and Resilience (CSR), pp. 180–185. https://doi.org/10.1109/CSR51186.2021.9527904 (2021) 15. Tam, K., Moara-Nkwe, K., Jones, K.: The Use of Cyber Ranges in the Maritime Context: Assessing maritime-cyber risks, raising awareness, and providing training. Marit. Technol. Res. 3(1). https://doi.org/10.33175/mtr.2021.241410 (2020) 16. Meland, P.H., Bernsmed, K., Wille, E., Rødseth, Ø., Nesheim, D.A.: A retrospective analysis of maritime cyber security incidents. TransNav, Int. J. Mar. Navig. Saf. od Sea Transp. 15(3) (2021) 17. Caprolu, M., Pietro, R.D., Raponi, S., Sciancalepore, S., Tedeschi, P.: Vessels cybersecurity: issues, challenges, and the road ahead. IEEE Commun. Mag. 58(6), 90–96 (2020). https://doi. org/10.1109/MCOM.001.1900632 18. Potamos, G., Theodoulou, S., Stavrou, E., and Stavrou, S.: Maritime cyber threat detection framework: building capabilities. In: 15th World Conference on Information Security Education (WISE15), pp. 13–15 June, Copenhagen (2022) 19. Maritime Bulk Liquids Transfer Cybersecurity Framework Profile. United States. Coast Guard. https://www.hsdl.org/?abstract&did=797741. Last accessed 12 February 2022 20. Parrish, A., Impagliazzo, J., Raj, R., Santos, H., Asghar, M.R., Jøsang, A., Pereira, T., Sá, V.J., Stavrou, E.: Global perspectives on cybersecurity education. In: Proceedings of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE 2018). Association for Computing Machinery, New York, NY, USA, pp. 340–341. https://doi. org/10.1145/3197091.3205840 (2018) 21. Stavrou, E.: Back to basics: towards building societal resilience against a cyber pandemic. J. Syst. Cybern. Inf. (JSCI) 18(7) (2020)

Cyber Situational Awareness Applications

A Decade of Development of Mental Models in Cybersecurity and Lessons for the Future

Robert Murimi, Sandra Blanke, and Renita Murimi

Abstract Mental models are essential in learning how to adapt to new and evolving circumstances. The landscape of best practices in cybersecurity is a constantly changing area, as the list of best practices evolves in response to the increasing complexity and scope of threats. In response, users have adapted to the threats and corresponding countermeasures with mental models that simplify the complex networked environments that they inhabit. This paper presents an overview that spans over a decade of research in mental models of users when dealing with cybersecurity threats and corresponding security measures in different kinds of environments. The lessons from over a decade of research in mental models for cybersecurity offer valuable insights about how users learn and adapt, and how their backgrounds and situational awareness play a critical role in shaping their mental models about cybersecurity. Keywords Mental model · Cybersecurity · User engagement

1 Introduction A recent report from ThreatLocker found that 85% of cybersecurity breaches in 2021 were due to human errors [1]. This finding comes on the heels of similar reports that document that if the human were eliminated, 19 out of 20 breaches could have been eliminated [2]. One significant factor that contributes to the human error in cybersecurity breaches is complexity. Users are faced with increasingly complex decisions, such as the need to understand evolving advice on password hygiene, R. Murimi (B) · S. Blanke · R. Murimi University of Dallas, Irving, TX 75062, USA e-mail: [email protected] S. Blanke e-mail: [email protected] R. Murimi e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 C. Onwubiko et al. (eds.), Proceedings of the International Conference on Cybersecurity, Situational Awareness and Social Media, Springer Proceedings in Complexity, https://doi.org/10.1007/978-981-19-6414-5_7




scanning emails for suspicious headers and content, and refraining from clicking on benign-looking download buttons that carry malicious payloads. At the same time, the stakes keep getting riskier. Whereas once the clicking of links and buttons might have resulted in individual data loss at a lower level of risk, threat vectors carrying ransomware have the potential to result in massive data breaches that cause damage to the entire organizational infrastructure. Simultaneously, technology applications have diversified rapidly. For example, Web3, the third generation of the Web, which is built upon blockchain technologies has required users to learn about new concepts, terminologies, and applications. One way in which people adapt to the complex environments around them is through the development of mental models. In [3], the authors define a mental model as a “dynamic, planned action setting must be composed of (at least) these four elements: intentions, perceptions, system structures, and plans.” Mental models seek to simplify the complexity and extract elements that are useful to the users in order to understand, adapt, and engage with the tools and platforms of these environments, and develop with experience [4]. Mental models are dynamic and are based on individual experiences [5], as they influence people’s decisions and actions [6]. However, certain challenges to mental models exist. Mental models have the potential to alter behavior, but not necessarily for the better since they can be incomplete or inaccurate [7]. An incorrect mental model can make users mistrust insecure technologies [8]. Further, since mental models exist in the mind, one cannot directly measure which mental model is active in a person’s mind and the extent of the model’s performance. An effective mental model that keeps up with the advances in cybersecurity requires a human-centered foundation, with consideration of the technology, situational awareness, and human behavior. In cybersecurity, secure decisions rely on users having correct mental models of security issues. In general, computer users have difficulty anticipating cybersecurity threats, or perhaps may take incorrect actions and end up making themselves less secure. While cybercrime has a diverse array of outcomes, such as data theft, fraudulent financial transactions, stolen intellectual property, or software or hardware destruction, hacking often plays a significant role in the execution of cybercrime [9]. Organizations routinely roll out improved threat detection and intrusion prevention systems to bolster cybersecurity, but hackers have begun targeting humans in addition to hardware or software. The issue of how to increase understanding of, and compliance with, security measures has been one of the hallmarks of cybersecurity research and has necessitated some attention to the psychology of users. In response to a warning or suggested activity, a typical user has four ways to engage with the warning or activity. These ways lie on a spectrum of engagement, as shown in Fig. 1, which illustrates how users interact with password hygiene advice. At one end of the spectrum, a user might ignore the warning or security activity and ignore the warnings about changing default passwords. For example, generic SSL/TLS (Secure Socket Layer/Transport Layer Security) warning does little to encourage or discourage these mental models, so users are relatively free to adopt whatever mental model they like. 
As found in [10, 11], users’ responses to TLS warnings are relatively consistent: they usually ignore them. A slightly more involved form of engagement is weak engagement, where the

A Decade of Development of Mental Models in Cybersecurity …

107

Fig. 1 User engagement with cybersecurity activities with examples of password hygiene

user might change the default password but only substitutes it for a weak password. Further along, a more cautious user would engage exactly as recommended by the warnings and security recommendations and use passwords that are strong, unique, and frequently changed. At the extreme end of the spectrum, motivated users with strong engagement would go above and beyond the recommendations for password hygiene and leverage additional tools for password security such as vaults and VPNs. For each of these levels of engagement, the driving factor is the mental model of the user. It should be noted that the modes of engagement vary determined by multiple factors such as expert guidance on best practices, cultural norms, risk-averse behavior, and familiarity with technology. For example, the National Cyber Security Center (NCSC) issued password guidance in 2015 that no longer endorsed frequent change of passwords, citing the burden imposed on users in changing and remembering the new passwords [12]. This paper offers a review of mental model research in cybersecurity over the past decade, where users have developed ways (ignore, weak, exact, and strong) of interacting with security recommendations and security activities in their networks. Although we refer to a “decade” of research, we have included research extending as far back as the early 2000s in order to include some seminal work referring to the role of mental models in how we think about secure online practices. The research that we profiled for this paper was derived from a Google Scholar search of the terms, “mental models” and “cybersecurity”. The rest of this paper is organized as follows: Sect. 2 presents two categories of mental models, folk metaphors, and formal methods, to classify the ways in which users engage with the technologies. Section 3 presents findings in mental model research of interfaces that people interact with and how that influence cybersecurity. Section 4 provides cybersecurity-related mental model research in specific platforms, technologies, user types, social factors, tools, or applications. The implications of this research for emerging technologies are discussed in Sect. 5, and Sect. 6 concludes the paper.

2 Folk Metaphors and Formal Models Since a mental model is a representation in working memory, the mental models of experts and non-experts vary. The mental models of two types of users—those who had informal exposure to cybersecurity topics and those who had formal exposure

108

R. Murimi et al.

were studied in [13]. Starting with a hypothesis of inverse relationship between cybersecurity knowledge and perceived security, the authors constructed mental models of end users’ cybersecurity knowledge. They found that Users with formal cybersecurity exposure offered longer responses and more domain-specific words than users with informal exposure. Thus, prior formal exposure was shown to produce different mental models between the two groups of users. This has important implications for cybersecurity training and awareness campaigns, where end users may possess varying levels of domain-specific expertise in cybersecurity topics. While the role of mental models in cybersecurity is undisputable, there is a wide range of models in existing literature. Categorizing them and systematically analyzing them will be the theme of the rest of this paper. To start with, this section approaches mental models in terms of their underlying philosophy. The ones derived from analogies to common social phenomena such as medical infections or warfare are referred to as folk models, while those derived from engineering disciplines are referred to as formal models. This categorization is not absolute, since there are some studies whose findings about mental models would not neatly fit in either or have significant overlap between the two.

2.1 Folk Models

Among the earliest research in folk mental models for cybersecurity is the work in [14]. Here, the authors proposed a framework with predominantly five kinds of mental models for communicating complex security risks. The models take the form of analogies or metaphors to other similar situations: physical security, criminal behavior, medical infections, warfare, and market failure. The use of metaphors for mental model nomenclature was intended to make security less virtual and more tactile in order to increase risk awareness. Associating virtual risks with more tactile risks such as wild animals, disease, crime, and war has been shown to increase sensitivity to and awareness of risks [26]. A comparison of responses by experts and non-experts to these metaphors showed that non-experts found the physical and criminal models to be the most accessible for cybersecurity risk communication. Additional work in [15] implemented the mental models from [14] by using agents that simulate human behavior within a network security testbed. The simulation exercise specifically analyzed four security activities: using antivirus software, exercising caution in visiting websites, making regular backups, and updating patches regularly. The findings suggested that mental models are not necessarily self-consistent. Other early work in [16] investigated the mental models of home computer users in the context of computer security. The users’ mental models were categorized as follows: buggy (due to software flaws), mischief (due to mischief-mongers), crime (intended to obtain sensitive information), burglar (stealing financial data), vandal (causing damage for showing off), and big fish (targeting rich or important individuals for attacks). The authors noted that the majority of home computer users have little computer security knowledge and that most of the decisions they make
about computer security are guided by how they think about computer security, which may not be technically correct enough to lead to desirable security behavior. In other words, sometimes even “wrong” mental models produce good security decisions. Thus, this work called for additional research to investigate the connection between mental models and actual security behaviors, since not all mental models lead to positive security behaviors. A key finding of this research was that non-expert users such as home computer users could still navigate computer security-related issues efficiently, thus eliminating the constraint that nontechnical users must become more like computer security experts to properly protect themselves. The authors argued for two action items to successfully change people’s mental models. First, research needs to identify how people form these mental models, and how these mental models can be influenced. Second, the relationship between these mental models and associated security behaviors needs to be analyzed to identify which mental models are good for home computer users. The authors recommended three approaches to assist users in computer security. First, the stupid approach attempted to create security solutions by removing the user from the decision-making process since they are seen as the weakest link [27]. This approach was not recommended due to its one-size-fits-all stance to security problems in spite of the fact that people use computers for a variety of different purposes. Second, the education approach allows users the freedom to choose and provides them with appropriate training to enable them to make good security choices. However, this approach was constrained by the fact that home computer users are rarely interested in learning the details of how security software works. Finally, the understand-how-users-think approach involves working to understand how computer users think about security, and how they make security decisions. This approach leads to an understanding of user thinking (mental models) in order to understand user behavior. A key finding of their paper was that mental models are neither correct nor incorrect; rather, they result in different potential benefits from associated behaviors. Thus, the emphasis should not be on teaching people “correct” mental models, but on finding ways to encourage models that lead to valuable security behaviors. Work in [17] explores how people use mental models to understand and experience privacy and security as they engage in online and computer actions and activities in their daily lives. Here, the authors emphasize the need for effective cybersecurity training and education for end users to navigate the online environment safely. However, effective cybersecurity training requires that we first understand how people think about and experience online privacy and security in their daily lives. In their findings, the authors identified multiple mental models: the brave new world model, the fatalistic model, the little value model, the maintenance model, the not-my-job model, the optimistic model, the reputation model, and the verification model. These models were identified as coping mechanisms for users to adapt to their rapidly changing technological environment. Their findings also suggested that these mental models were sometimes partially formed, and that multiple mental models were used simultaneously, indicating the lack of a single overarching model among users.
Among the various mental models that were identified, the authors found that the brave new world model was the most common model among users.
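Looking back at the agent-based simulation of [15] described earlier in this subsection, a minimal, hypothetical sketch of that style of experiment is given below. The folk-model names and all probabilities are illustrative assumptions introduced here, not values from the cited study.

```python
# Hypothetical sketch of agents driven by folk mental models, loosely
# inspired by the simulation of four security activities in [15].
# All model names and probabilities below are illustrative assumptions.
import random

ACTIVITIES = ["antivirus", "cautious_browsing", "backups", "patching"]

# Each folk model maps to a propensity to perform each activity.
FOLK_MODELS = {
    "burglar": {"antivirus": 0.8, "cautious_browsing": 0.5, "backups": 0.3, "patching": 0.4},
    "mischief": {"antivirus": 0.4, "cautious_browsing": 0.6, "backups": 0.2, "patching": 0.3},
    "big_fish": {"antivirus": 0.2, "cautious_browsing": 0.3, "backups": 0.1, "patching": 0.2},
}

def simulate(model_name, rounds=1000, seed=0):
    """Count how often an agent with the given folk model performs each activity."""
    rng = random.Random(seed)
    propensity = FOLK_MODELS[model_name]
    counts = {a: 0 for a in ACTIVITIES}
    for _ in range(rounds):
        for activity in ACTIVITIES:
            if rng.random() < propensity[activity]:
                counts[activity] += 1
    return counts

for name in FOLK_MODELS:
    print(name, simulate(name))
```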

2.2 Formal Models

Formal methods in mental models are derived from a range of disciplines, including control systems, human factors, and state models. Mental models of users at the intersection of formal methods and human factors engineering [18] were analyzed in the form of a proposed framework that combines the error and blocking state architecture of [28] with human factors engineering. This framework builds upon the folk models of computer security threats initially developed in [29] and uses error and blocking states to reveal insights about user and system model mismatches. The use of formal methods coupled with human factors engineering offers powerful tools for discovering the hard-to-anticipate failure modes that threat actors leverage through social engineering and other attack strategies in which humans are the targets. Work building upon the mental model frameworks developed in [15, 30, 31] suggested that the use of mental models in cybersecurity serves two purposes. First, mental models could result in “strong intervention”, which states that mental models are necessary in order to understand the Internet security situation [15]. Second, mental models could also result in “weak intervention”, which states that mental models supplement understanding of the Internet security situation [30]. The difference between the strong and weak claims is that the strong claim predicts that understanding and performance in cybersecurity situations is improved by the use of mental models, whereas the weak claim predicts only that understanding and performance in cybersecurity situations will be changed by the use of mental models. The findings of [32] indicated that while there was little evidence for the strong intervention hypothesis, the use of any mental model or metaphorical framing of the context improved people’s understanding of Internet security situations. They also found that while the weak intervention hypothesis predicted systematic changes in performance across different mental models, there was no specific change in performance. In general, there was better overall performance when using the cybersecurity context than when employing any of the mental models. Two approaches to risk communication, the communication-human information-processing approach (C-HIP) and the mental model approach to risk communication (MMARC), were analyzed as approaches for risk communication in cybersecurity [31]. The C-HIP approach has been taken by researchers in cybersecurity to study user perception of security risks and how risk perception influences risk-taking behavior [33]. This approach characterizes the human as a communication system, with risk-communication information from a source delivered to the receiver, who processes the information in various stages [34]. In the C-HIP approach, it is assumed that effective communication must trigger the attention of the receiver, achieve comprehension, and influence the decision-making and behavior of the receiver [35]. This model not only considers the traditional information-processing stages of humans, but also accounts for social-cognitive and cognitive-affective components. Work in [36] used C-HIP as an investigation tool and designed a survey to understand users’ attitudes towards software warnings and updates, and to explain users’ hesitation in applying software updates.
While memory and comprehension are key components in the C-HIP model [37–39], the MMARC approach emphasized the importance of understanding and comparing the mental models of experts and non-experts, and of drafting and evaluating risk-communication messages. MMARC has been widely applied to studies concerning risk communication, ranging from communications for tourists [40] and flash flood risks [41] to medical risks [42]. The concept behind the MMARC approach is that security designs and educational efforts should align with the users’ mental models that direct their decisions and actions [5, 43]. This approach follows a five-step process: developing the expert model; eliciting the public model and comparing it with the expert model; conducting confirmatory surveys among the broader population to determine the prevalence of the public model; drafting risk-communication messages based on the knowledge gap between experts and non-experts; and evaluating the effectiveness of the messages. The role of cognition in cybersecurity activities was studied in [19], where the authors noted that although technology plays an important role in cybersecurity, it is humans who play key roles in cyberspace as attackers, defenders, and users. The authors proposed a cognitive security model with three layers: Knowledge, Information, and Cognitive. Their model supports the process of modeling mental maps, the generation of knowledge, and the fusion and handling of large datasets. The proposed cognitive security model includes control techniques, OODA (Observation, Orientation, Decision, Analysis), and HITL (human in the loop). OODA was employed to infer patterns from the analysis of the datasets by generating mental models based on profiles of attacks, threats, and user behaviors. The outcomes of OODA would be used to influence the situation awareness of the organization and the associated activities for maintaining a secure cybersecurity posture. HITL integrates human interaction with the technological solutions of the three layers, and informs the generation of false alerts and other indicators of performance. The use of this proposed model affords the security analyst the ability to integrate experience and knowledge for a dynamic analysis of the organization’s cybersecurity posture and the support of decision-making processes. As shown in Table 1, not all models fall neatly within the folk or formal categories. For example, a basic model of cyberspace consisting of three layers (technical, socio-technical, and governance), which creates a foundation with three corresponding mental models, was proposed in [44]. An extension with eight additional mental models (crown jewels, kill chain, situational awareness, risk assessment, risk response, institutional, direct and indirect social contract, and triple bow tie) was further suggested. (See [44] for an explanation of each of these models.) In the basic model with three layers, each layer has its own security requirements. The inner layer of the cyberspace model concerns all IT that enables cyber activities. The middle layer concerns the socio-technical cyber activities executed by people and smart IT to accomplish their personal, business, or societal goals. Finally, the outer layer concerns the governance layer of rules and regulations that should be put in place to properly organize the two underlying layers, including their security.
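To make the comparison step at the heart of the MMARC approach above concrete, the following minimal sketch treats the expert and public mental models as sets of concepts and computes their difference. The concept lists are hypothetical and serve only to illustrate how a knowledge gap could be derived.

```python
# Minimal sketch of the expert/public model comparison step in the MMARC
# approach discussed above. The concept lists are hypothetical examples,
# not taken from any of the cited studies.
expert_model = {"phishing", "tls", "password reuse", "patching", "backups", "2fa"}
public_model = {"viruses", "hackers", "password reuse", "backups"}

# Concepts the expert model contains but the public model lacks: candidates
# for targeted risk-communication messages.
knowledge_gap = expert_model - public_model
# Concepts in the public model absent from the expert model: potential
# misconceptions worth probing in confirmatory surveys.
possible_misconceptions = public_model - expert_model

print("knowledge gap:", sorted(knowledge_gap))
print("possible misconceptions:", sorted(possible_misconceptions))
```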

Table 1 Summary of mental models in cybersecurity

Article | Mental model developed/tested | Type | Focus of mental model(s)
[14] | Physical, criminal, medical, warfare, market | Folk | Security risks
[15, 16] | Crime, burglar, mischief, vandal, buggy, medical | Folk | Common security activities
[15] | Stupid, education, understand-how-users-think | Folk | Home users
[17] | Brave new world, fatalistic, little value, maintenance, not-my-job, optimistic, reputation, verification | Folk | Privacy and security
[18] | Error and blocking state model | Formal | Formal methods
[19] | Control methods, OODA (Observation, Orientation, Decision, Analysis), HITL | Formal | Cognition
[20] | Incorrect, incomplete, partially correct, complete | Formal | Firewalls
[21] | Access control, black box, cipher, iterative encryption | Formal | Encryption technologies
[22, 23] | Usable security models | Formal | Usability
[24] | Human factors ontology (HUFO) | Formal | Trust
[25] | Incomplete, inaccurate | Formal | Cryptocurrencies

3 Interface Design and Perceptions of Security

People interact with technology of any kind using interfaces, and this holds true for activities related to cybersecurity as well. Thus, interface design is a crucial element in the design of secure systems and networks. This section explores the existing work on how mental models of interfaces influence the activities of users, and the importance of interface design in altering the perception of security.

3.1 Interface Design

Human factors engineering was studied in [45] for exploring the human dimension of cybersecurity by employing a human factor integration (HFI) framework. Defining HFI as a “systematic process for identifying, tracking and resolving the wide range of human related issues in the development of capability”, the authors used HFI to consider the multiple ways in which the human can differentially affect the security of a system. The key mandate of the HFI process is to characterize and address the
risks to a system generated by the humans. The HFI framework is divided into seven domains: social and organizational factors, manpower, personnel, human factors engineering, system safety, training, and health hazard assessment. Noting that the complexity of current technical systems is a major source of vulnerabilities, the authors suggest that the introduction of technical security measures in such complex environments can lead to unforeseen human consequences, including a reduction in effectiveness or efficiency. Similar to the work in [46, 47] that counters the notion of humans being the weakest link in the fight against cyberattacks, their findings indicate that HFI has the potential to prioritize and act upon the greatest risks originating from the human factor while also adhering to time and budget constraints. The focus on the human as the introducer of risk in networks is countered in [47], where the authors address the misconstrued notion that the “user is the enemy”. Many factors, such as lack of clarity, uncertain consequences, use history, system expertise, shared responsibility, and the conventional reliance on technical experts for providing cybersecurity, have been identified as contributing to this notion. However, labeling users as the problem is not a solution; this displacement of responsibility takes a costly toll on our economy and on our safety. The direct relationship between usability and security dictates that users cannot be left out of the equation of solving cybersecurity issues. Changing user expectations mean that all users, and not just experts, now have an active role to play in digital security. By shifting focus and starting to look at users as the greatest hope for system security, interface designers can explore how compromising behavior can be designed out of the system. By considering users’ profiles in terms of what they know, how they use the system, and what their needs are, designers will be better positioned to empower them in their digital security roles. The findings of [47] pointed to the need for appropriate mental models, increased transparency, and effective interface design for situational cybersecurity awareness. Additional work on the use of mental models for the correct interpretation of displayed information is in [48], where the emphasis is on the design of cybersecurity dashboards that leverage data visualization and consequently guide policymaking. Existing literature on dashboard design suggests the construction of different types of dashboards for different people, which might be due to the dominance of different mental models in different user groups. Based on expert interviews, the authors demonstrated that there exists a difference in the perception of cyberattacks by different categories of users such as managers, operators, and analysts. Aligning design guidelines for usable cybersecurity systems with standardized security controls has been studied in [49]. Here, the authors propose the design of context-independent guidelines that do not focus on specific areas within cybersecurity such as authentication, access control, encryption, firewalls, secure device pairing, and secure interaction. The rationale here is that broad context-independent guidelines can be adapted readily to various domains, and offer scope for innovation and customization while also being responsive to core problems and evaluation methods. Interface design in applications using anonymous credentials was studied in [50].
Here, the authors explored ways in which mental models of data minimization can
be evoked in end users of online applications. Existing literature on mental models of anonymous credentials is sparse, due both to their novelty and to their complexity, which has further complicated the design of easily understandable interfaces for end users. The authors found that users have grown accustomed to believing that their identity cannot remain anonymous when acting online, and so they lack the right mental model to understand how anonymous credentials work or how they can be used to protect their privacy. In their study on mental models of anonymous credentials, the authors explored different user interface approaches for anonymous credentials based on three different metaphors: card-based, attribute-based, and adapted card-based approaches. The authors found that successful adoption of novel technologies such as anonymous credentials requires a comprehension of their advantages and disadvantages, and that inducing adequate mental models is a key issue in successful deployment. Out of the three metaphors, the authors found that the adapted card-based approach came closest to a comprehensive mental model for anonymous credential applications by helping users understand that attributes can be used to satisfy conditions without revealing the values of the attributes. The importance of effective risk communication using mental models that incorporate human-centered security was studied in [5]. The authors extend the narrative that end users cannot be blamed for being the weakest link in cybersecurity, given the number of warnings that end users receive on a daily basis. The high volume of warnings, coupled with the inability to understand the nature of these warnings, leads users to ignore them, which eventually turns into a habit since users do not perceive the risk of ignoring these warnings. To rectify this malformed habit of ignoring warnings, the authors propose that the design of warnings should be aligned with the mental models of end users, and not just those of the developers and designers. The decision fatigue imposed by warnings was also studied in [51], where the authors found that users were overwhelmed by the constant need to be alert, which required them to make more decisions than they could process. The authors suggested the need for simpler user interfaces that could aid users in mitigating the decision fatigue caused by complex security advice. While most research in mental models is focused on the development of mental models, work in [52] investigated the assessment of mental models. Here, the authors developed an interface called Sero using concept mapping. Concept mapping enables advanced assessment techniques for mental models, with its capacity to blend recall, recognition, and reasoning techniques in the context of a nonlinear assessment. In this manner, concept mapping allows for more efficient data collection than interviews do, presents an advantage over writing-based assessments, and facilitates self-monitoring. The authors note that although other techniques such as think-aloud protocols, narrative text, causal diagrams, pretest–posttest comparison, and the lunar phases concept inventory (LPCI) represent other modes of assessment, they often fall short of the necessary requirements for the assessment of mental models and are not feasible for practical implementation. Software designers and security architects continue to face issues in developing a model that is both secure and usable.
This is because security and usability seem to conflict in their goals. The main challenge in designing usable security is
finding a balance between protecting the system from unauthorized disclosure and cognitively designing the system to conform to the user’s expectations and satisfaction. In [53], the authors developed a holistic meta-model that combines security, usability, and mental models. This meta-model applies knowledge nurturing, mobilization, and sharing concepts in its development. The authors found that the degree of usable security depends on the ability of the designer to capture and implement the user’s tacit knowledge. The authors advocate for user interface design that aligns with the user’s mental and conceptual models and is consistent with the user’s expectations for the functionality of the system. Mental models help to bridge the incongruity gap between the security and usability expectations of users, and thus are a crucial foundational element of cybersecurity. Recognizing that not all security measures are user-friendly, [22] called for usability tests that differ from conventional software testing activities, with a focus on rooting out any impediments that might affect user experience. Specifically, usability testing calls for a focus on user perceptions, characteristics, needs, and abilities as essential inputs for effective and robust system design. In their follow-up to this article published almost a decade later [23], the authors noted that user resistance, the ineffectiveness of password-based security measures, and the high volume of breaches indicate a pressing need for revisiting usable security. Their recommendations for usable security include the need for improved user experience, codification of best practices, graceful recovery procedures, and thinking of users not as adversaries but as part of the solution in cybersecurity. Interface design for risk communication was also studied in [54], where the authors analyzed the mental models of security experts and non-experts. Typical risk communication consists of a message that has been formulated by security experts to warn non-experts of looming threats. The gap between the mental models of the security experts who create the risk communication and the mental models of the non-experts who are expected to act upon it can decrease the efficacy of the risk communication. The authors note that the purpose of risk communication is not to convey a perfect truth to the users, but rather to prompt them to take appropriate action in defending their systems against risk. Their findings showed that mental models based on physical security were appropriate for non-experts while the medical infection mental model was not, and that the opposite was true for experts, leading the authors to suggest that risk communications should be driven by the mental models of non-expert users.
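As a minimal sketch of how a concept-map-based assessment such as Sero (discussed above) might compare a user’s mental model against an expert reference, the following code represents each map as a set of directed concept links and computes a simple overlap score. The maps, the link structure, and the scoring rule are illustrative assumptions, not the cited tool’s actual method.

```python
# Hypothetical sketch: scoring a user's concept map against an expert
# reference map, in the spirit of concept-map-based assessment tools
# such as Sero. The maps and the scoring rule are illustrative only.
expert_map = {
    ("password", "protects", "account"),
    ("phishing", "steals", "password"),
    ("2fa", "mitigates", "phishing"),
    ("update", "fixes", "vulnerability"),
}
user_map = {
    ("password", "protects", "account"),
    ("phishing", "steals", "password"),
    ("antivirus", "blocks", "phishing"),   # link not present in the expert map
}

shared = expert_map & user_map
missing = expert_map - user_map            # expert links the user did not express
extra = user_map - expert_map              # user links absent from the expert map

# A naive coverage score: fraction of expert links the user reproduced.
coverage = len(shared) / len(expert_map)

print(f"coverage: {coverage:.2f}")
print("missing links:", sorted(missing))
print("extra links:", sorted(extra))
```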

3.2 Perceptions of Security

While user interface design has garnered a lot of interest in motivating users to perform desirable activities to improve their risk posture, interface elements also tend to be ignored by users, leading to increased risk to their security and privacy. The authors of [55] investigate why users choose to follow or not follow common computer security advice. The impact of security advice on
users in four well-known areas was studied: keeping software up to date, using password managers, using 2FA (two-factor authentication), and changing passwords frequently. The authors used a cost–benefit framework in their study, which was supplemented by risk perception and social motivation constraints. Their findings indicated that risk perception is central to security behavior, while social motivation is much stronger and independent of instrumental motivation. Unlike other studies that separated experts from non-experts, their work simply compared those who followed the advice with those who did not. The authors found that social considerations were largely trumped by individualized rationales, and each group viewed their decision as the rational one. Negative perceptions of security were explored in [56]. The authors examined how security advocacy can attempt to overcome negative perceptions that security is scary, confusing, and dull. Although cyber threats are evolving, users are falling behind in defending their systems and networks. Users often fail to implement and effectively use basic cybersecurity practices and technologies, due in part to negative feelings about security. The findings called for security advocates, who must first establish trust with their audience and address concerns by being honest about risks while striving to be empowering, in order to overcome these negative perceptions. Differences in cybersecurity perception and behavior between experts and non-experts were studied in [57], where the authors found that part of the challenge of cybersecurity has been understanding the ways in which different groups of people think about and interact with it. By examining the similarities and differences between experts and non-experts and identifying what characteristics influence their attitudes and behaviors, the authors provided insights into how to help non-experts understand and protect themselves online. In their study, the authors did not treat experts and non-experts as two separate groups; rather, participants could fall anywhere on a spectrum between the two. Their findings showed that non-experts did not have a solid mental model related to cybersecurity, and that they drew on multiple mental models that were ill-formed and only partially helped them understand and navigate cybersecurity. Experts, on the other hand, used different mental models and tended to be proactive in their online security practices. Additional work in [58] explores the reasons why non-experts choose not to protect themselves from cyber threats by investigating the role of catalogued warning messages. In their evaluation, the authors organized their study around five elementary components of Technology Threat Avoidance Theory (TTAT): perceived susceptibility, perceived severity, perceived effectiveness, perceived costs, and self-efficacy. The authors observe that non-experts who choose not to protect themselves have several reasons for not taking warnings seriously, such as a view that the threats are probably not real or not harmful, a view of threat countermeasures as likely ineffective, costly, and difficult to implement, and a view of the task as not their job.
Their findings point to the need to describe users’ actions, include information about threat probability, use color to represent threat severity, include information about threat consequences, provide users with specific instructions about how to avoid the threat, directly contrast potential losses from the attack with estimates of how much time will be required to implement the
recommended actions to prevent the attack, and provide users with information about what their response accomplished once they respond to the warning message.
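A minimal sketch, under the assumption that the warning-design recommendations above are realized as fields of a structured warning object, might look as follows; the field names and example values are hypothetical and serve only to show how the recommended elements could be combined in one message.

```python
# Hypothetical sketch: a structured warning message incorporating the
# design recommendations summarized above (threat probability, severity,
# consequences, concrete instructions, and a cost/loss comparison).
from dataclasses import dataclass, field

SEVERITY_COLORS = {"low": "yellow", "medium": "orange", "high": "red"}

@dataclass
class SecurityWarning:
    action_observed: str          # describe what the user just did
    threat_probability: float     # estimated likelihood the threat is real
    severity: str                 # mapped to a color for display
    consequences: str             # what could happen if the warning is ignored
    instructions: list = field(default_factory=list)  # specific avoidance steps
    estimated_effort_minutes: int = 0   # time required to follow the instructions
    estimated_loss: str = ""            # potential loss if the attack succeeds

    def render(self) -> str:
        color = SEVERITY_COLORS.get(self.severity, "gray")
        steps = "; ".join(self.instructions)
        return (f"[{color.upper()}] You {self.action_observed}. "
                f"Estimated threat probability: {self.threat_probability:.0%}. "
                f"Possible consequence: {self.consequences}. "
                f"To avoid it ({self.estimated_effort_minutes} min): {steps}. "
                f"Potential loss if ignored: {self.estimated_loss}.")

warning = SecurityWarning(
    action_observed="opened an attachment from an unknown sender",
    threat_probability=0.3,
    severity="high",
    consequences="ransomware could encrypt your files",
    instructions=["do not enable macros", "report the email to IT"],
    estimated_effort_minutes=2,
    estimated_loss="days of lost work",
)
print(warning.render())
```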

4 Mental Models in Specific Cybersecurity Domains

This section offers perspectives on mental model development in several domains. Figure 2 shows the taxonomy of the various domains, which are further elaborated upon below. Each of these domains focuses on specific platforms, technologies, user types, social factors, tools, or applications.

Fig. 2 Taxonomy of mental models in cybersecurity domains

4.1 Platform

In this subsection, we review existing work in mental models of cybersecurity that investigates how platforms such as the Internet, the Web, and media influence mental model development. Mental Model of the Internet. An analysis of cybersecurity mental models by delving into mental models of the underlying Internet itself was performed in [59], where the work examined users’ mental models of how the Internet works and their privacy and security behavior in today’s Internet environment. The authors compared users with a computer science or related technical background against users with no technical or computational background. The findings were consistent with other previous studies, where users with no technical background (otherwise referred to as non-experts in previous studies) had simpler mental models that omitted Internet levels, organizations, and entities in their design. Users with more articulated
technical models perceived more privacy threats, possibly driven by their more accurate understanding of where specific risks could occur in the network. In observing the experts, the authors identified patterns in their conceptual models of the network and awareness of network-related security and privacy issues. The authors suggest that user perceptions vary as a function of their personal experiences and technical education level. Users’ technical knowledge partly influences their perception of how their data flows on the Internet. However, their technical knowledge does not seem to directly correlate with behaving more securely online. The authors further suggest that regardless of their technical knowledge, participants seem to have made most of their privacy-related decisions based on their experiences and cues. The authors found that there is mixed and indirect evidence of whether an accurate mental model and more advanced Internet knowledge are associated with more secure online behavior. Additional work on risk assessment by expert and non-expert users of the Web is in [60]. Building upon work in [16], the authors explored how the mental model approach can be combined with the individualization of security interventions. Here, the authors use card sorting to qualitatively study how users (expert and non-expert) perceive risks on websites. The authors propose four strategies for effectively improving security interventions through individualization. The first strategy deals with emphasizing unknown risks, where an emphasis on unknown risk is suitable for behavioral data. In the second strategy, the mental model is enhanced by concrete implications to make the communication more effective. The third strategy is related to the perception of risk communication; the emphasis is on making the communication relatable to the users to make it effective. Finally, the fourth strategy increases the granularity of mental models, where the level of detail of the user’s mental model can identify the gaps in knowledge, the concreteness of current knowledge, and the individual’s perception of the risk. The findings suggest that the granularity of the mental models needs to go beyond a lay–expert-user dualism. Their findings on users’ mental models of security interventions support the notion that for the most comprehensive and effective risk communication, security interventions need to be individualized. Role of Media. The role of media in forming mental models was studied in [7], where the authors examined the relationship between computer security and fictional television and films. Participants in their study were shown six clips from television series and films depicting computer security topics. The authors found that merely exposing users to the concept of computer security may improve their understanding or awareness. However, inaccurate and exaggerated portrayals could also harm the development of healthy mental models. This is because people’s ability to correctly recognize evidence of security breaches depends on their idea of what security incidents look like. Users draw conclusions about what is (not) realistic about computer security in fictional media using a variety of heuristics, most of which are either entirely non-technical or only partially grounded in technical understanding.
The findings of their work indicated that security researchers and educators should take the effects of fictional portrayals into account when trying to teach users about security concepts and behaviors, supporting the findings of [61, 62] that fictional media can be a major source of security information for users.

Work in [63] analyzes three issues that users face when dealing with cybersecurity. These issues are related to users’ conceptualization of passwords, antivirus protection, and mobile online privacy. Although the security industry provides users with ample security advice to help them stay informed about the latest threats and the best security practices, many users remain vulnerable because of noncompliance with security policies and the recommended security advice. While most security communications focus on the action level, the authors suggest a supplementary approach that uses metaphors and graphical explanations to facilitate users’ understanding of new security concepts. Specifically, the work proposed an online interactive comic series called Secure Comics for this purpose. The authors identified three challenges unique to usable security: (1) users are typically not interested in security, (2) security systems are complex and abstract, and (3) users have poor mental models of security. The goal of the interactive comic series was to help motivate learners’ interest in light of these challenges, and the work emphasized the role of educational efforts as supplementary to technical, legal, and regulatory approaches for a holistic solution to securing computer systems.

4.2 Technology

Cybersecurity is a vast domain comprising a plethora of technologies, and users accordingly develop mental models of the different technologies that they interact with. In this subsection, we review literature on mental models of various technologies (both beneficial and detrimental) encountered in cybersecurity, such as phishing, firewalls, single sign-on (SSO), and HTTPS. Mental Models of Phishing Security. The mental models of experts and novices in relation to the prevention of phishing attacks were studied in [64], where the authors generated ten terms (updates, anti-malware, training, red team, warnings, passwords, software, authentication, encryption, and blacklist) and asked participants to rate the strength of association between each pair of concepts. The authors used Pathfinder, a statistical software tool that represents pairwise proximities in a network, to determine the relatedness of the pairings and consequently the relationship between expertise and performance [65–68]. The authors found that the mental models of experts were more complex than those of novices in the prevention of phishing attacks. Mental Models of Firewalls. In [20], the authors explored users’ mental models of the Vista Firewall (VF), where they investigated changes to the users’ mental models and their understanding of the firewall settings after working with both the basic VF interface and the authors’ prototype. Their findings acknowledged the tension between the complexity of the interface and the security of the system, leading to the conclusion that transparency is needed. That is, if the security of the application changes because of the underlying context, then the changes should be revealed to the users. The authors categorized their participants’ mental models based on their drawings of VF functionalities. In the incorrect mental model, users had an incorrect basic understanding of the inner workings of a firewall, while in
the incomplete mental model, users had a correct basic understanding of the firewall operation without the context of network location and connection. Users with a partially correct model had a correct basic understanding of the firewall operation, with either the context of network location or that of connection. The complete mental model included the context of both network location and connection. The authors suggested that designers should consider the impact of contextual factors when designing the user interface of any security application, and that users should be provided with contextual information for understanding application functionality. A mismatch between users’ computer-centric perspective of their security and the firewall’s changing security states could lead to dangerous errors, which in turn could change the state of the firewall. The authors suggested that providing an interactive tutorial for the firewall may help provide a platform for users to learn about the firewall and the impact of network context on firewall configuration. Mental Models of SSO. Single sign-on (SSO) has the potential to increase the security and usability of the systems it covers by making authentication across multiple applications convenient and easy for the user. While previous studies [27, 69] have documented the issues with password authentication, citing human cognitive limitations as the main problem, work in [70] analyzed the complexities of aligning mental models in an SSO system. They found that there was a mismatch between the users’ understanding of the operation of the SSO system and its actual operation. Correcting the users’ mental models of SSO to match the actual functionality of the application is effective in getting users to successfully enroll in SSO and to better perceive what SSO is doing for them. In addition, with matching models, users could more easily navigate the process when new applications are added to the single sign-on, or when they are asked to perform routine maintenance on their passwords. The authors found that users have different mental models; hence, it is important for HCI designers to consider this diversity when designing effective and usable SSO systems. Mental Models of HTTPS. The mental models of end users and administrators concerning HTTPS and the types of attackers that HTTPS protects against were studied in [71]. The authors found that the mental models of both users and administrators are developed based on the protocols and user interfaces with which they interact. They also found that many non-expert participants significantly underestimate the level of protection that HTTPS offers, whereas administrators generally have a good understanding of what HTTPS can or cannot protect against. In addition, most administrators had little conceptual knowledge of how the protocol works internally but were familiar with the different steps of establishing a communication. The authors found differences in the mental models of HTTPS between the two groups: administrator mental models were generally protocol-based and correct even if sparse, whereas the mental models of end users were sometimes not only sparse but simply wrong or non-existent.
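As a minimal sketch of the protocol-level view that administrators in [71] tended to hold, the following code uses Python’s standard ssl and socket modules (with example.com as an arbitrary endpoint) to open a TLS connection, print the negotiated protocol version, and inspect the server certificate. It illustrates what HTTPS protects, namely the channel, rather than the endpoints themselves.

```python
# Minimal sketch: establishing a TLS connection and inspecting the server
# certificate with Python's standard library. The host is an arbitrary
# example; HTTPS here protects the channel, not the endpoints themselves.
import socket
import ssl

HOST, PORT = "example.com", 443

context = ssl.create_default_context()  # verifies certificates against system CAs
with socket.create_connection((HOST, PORT), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("negotiated protocol:", tls_sock.version())       # e.g. 'TLSv1.3'
        print("cipher suite:", tls_sock.cipher())
        cert = tls_sock.getpeercert()
        print("certificate subject:", cert.get("subject"))
        print("valid until:", cert.get("notAfter"))
```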

4.3 Type of User

The preceding discussion shows that the bulk of existing work has focused on the mental models of experts and non-experts. This section reviews current work on the cybersecurity-related mental models of additional categories of users: children, journalists, and chief information officers (CIOs). These differing demographics showcase the breadth of existing cybersecurity mental model research among different types of users; this is not a comprehensive summary of all kinds of users. For example, studies on the privacy and security-related mental models of the elderly [72], incident responders [73], ER medicine [74], and MTurk workers [75] offer a wide range of cybersecurity-related mental models from the perspective of different types of users. Mental Models of Privacy and Security in Children. The increased usage of technology by children has attracted the attention of researchers in studying their mental models of privacy and security. Work in [76] examined how children aged 5–11 manage their privacy and security when they are online. The authors found that children have a reasonable understanding of some privacy and security components, but children aged 5–7 had some gaps. Children have some strategies to manage privacy and security online but rely heavily on their parents for support. Parents use mostly passive strategies to mediate their children’s device use, and they largely defer addressing privacy and security concerns to the future. The authors recommended several strategies, among them: building apps and websites for children aged 5–11 that incorporate learning opportunities children can encounter through regular use of the service; developing educational resources to help children understand that other actors are involved in online activities and that these actors affect people’s privacy and security online; and creating educational resources focused on how the Internet functions, which may complement the above by helping children better understand how and why certain online activities raise privacy and security concerns. The authors suggest that resources should promote direct engagement between parents and children and should focus on helping children grasp why certain practices protect privacy and security online; parents, in turn, may benefit from guidance on how to help children develop good privacy and security practices before they reach adolescence. Further work in [77] showed that children had adequate mental models of passwords, where they understood that passwords provided access control and offered privacy and protection. Mental Models of Information Security Among Journalists. Most legal and technological security risks to journalists and their sources in recent years have centered on digital communications technology. Journalists’ mental models of information security were studied in [78]. The motivation for this study lay in shield laws, under which journalists had operated with the understanding that their communications with sources were effectively protected from government interference. These shield laws prevented law enforcement from using the legal system to compel journalists to reveal their sources. The authors found that among reporters and editors, the perceived need for security precautions depended on both coverage area and reporters’
lack of first-hand experience with security incidents. These two areas indicate that participants’ perceived security risk was primarily related to how sensitive or visible one’s subject of reporting may be to powerful actors, rather than to the vulnerabilities of the technologies through which that reporting is done. One security strategy referenced by the participants was the use of face-to-face conversation. This strategy of avoiding the use of technology as a privacy or security measure has previously been categorized as a privacy-enhancing avoidance behavior [79]. The authors describe journalists’ mental models of information security as “security by obscurity” to reflect journalists’ thinking about security risk and avoidance in relation to digital communications technology. This mental model treats as “secure” any type of journalism that is sufficiently “obscure” to not be of interest to powerful actors, such as nation-states. However, the authors noted two prerequisites to the security-by-obscurity mental model: first, that being “obscure” as a journalist or journalistic organization is possible, and second, that being lower profile in this way offers a measure of security. Mental Models of CIOs. The mental models of individuals who are responsible for managing the security culture of the organization were studied in [80]. Four mental models based on four growth phases were used to explain the evolution of security management. In the growth phase, CIOs recognize the security needs and acquire and implement security tools. In the integration phase, as the size of the installed base of tools increases, integration problems start to appear. In the formalization phase, required resources, including security mechanisms, are mobilized to be ready when they are needed. In the involvement phase, people deal with security mechanisms actively, and so effective design is necessary to prevent these security mechanisms from turning into constraints, since security measures that are perceived as obstacles cannot be maintained over time. Their findings showed that a combination of all the efforts (technical, integration, formal, and involving) was essential to lead the firm towards the desired security level.

4.4 Social Factors

The implications of effective (or ineffective) cybersecurity are anchored in social factors such as trust, privacy, and security. This subsection looks at research on mental models of these social factors in relation to cybersecurity. Trust. A trust-based human factors ontology (HUFO) for cybersecurity was studied in [24]. Since humans are part of virtually all networks, whether as users, defenders, or attackers, they are capable of introducing risk into the network even if they are not attackers. The HUFO cybersecurity model proposed by the authors considers humans as risk factors and as risk mitigators, and incorporates metrics that go beyond the classic CIA (Confidentiality, Integrity, Availability) framework. Another trust-based holistic risk assessment framework for users, defenders, and attackers is in [81]. While the primary focus of their risk assessment framework was on defenders, it also aimed to identify the differences in characteristics between trust
in defenders, trust in users, and trust in attackers. An interesting perspective was provided on how attacks are perceived by attackers and defenders. The authors noted that attacks were easier to design, create, and launch from origins of the attackers’ choosing, while cyber defense efforts were instead challenged with predicting and detecting attacks. In their framework, the authors suggested that trust in the human factors is composed of two main categories: inherent characteristics (that which is part of the individual) and situational characteristics (that which is outside of the individual). Their proposed trust-based framework also accounted for differing mental models, risk postures, and inherent biases. The motivation for this framework was derived from the 1996 Presidential/Congressional Commission framework for risk management, which incorporated standards from environmental and human health risk assessment. Their trust-based holistic assessment framework comprised context and problem formulation; the assessment of systems, humans, risks, and threats; and agility and decision-making options. Privacy. Users’ mental models of the privacy of data flows were studied in [82] by analyzing the perception of privacy in three applications: Endomondo, an application that supports users in being active in all sports; MobilePay, a payment application that works between peers and in many stores; and the Roskilde Festival application, which users use to get information about the festival and the best places to eat and visit during the festival. For each of these applications, the authors studied interface design from the user’s perspective, as well as alignment with GDPR privacy laws. Their findings indicated that the participants did not see privacy as a significant challenge. For example, the participants did not see anything wrong with sharing private data like heart rate and GPS locations, but they were wary about unwanted exchange of data, where the service provider might preserve the collected data for use after the transaction. The work concluded that the mental models could be used to discuss privacy challenges and as a basis for suggesting notifications and consent forms relating to privacy.

4.5 Tools

Tools such as encryption and machine learning have come to define the burgeoning complexity of cybersecurity. This subsection presents a summary of mental model research in cybersecurity in these two areas. Mental Models of Encryption. Cryptography is a pillar of modern cybersecurity, offering the capabilities of confidentiality, integrity, and non-repudiation. Work in [21] explores mental models of encryption by examining three facets of users’ perception of encryption: what it is, how it works, and its role in their lives. The rationale for this work was derived from the potential of encryption technologies to circumvent the possibility of user error by transparently incorporating encryption into software and thus bypassing the user entirely. The authors identified four mental models of encryption. First, the access control model was offered as the basic model of encryption, which provides only the most minimal abstraction of access
control. The black box model was developed as an extension of existing credential-based access. In the next level of abstraction, labeled the “cipher” model, participants had a clear sense of how the “transformation” process of encryption works. Finally, the iterative encryption model was the most advanced, where participants explicitly described the encryption process as an iterative one involving multiple passes over the source data to produce the encrypted output. Although these four mental models of encryption vary in detail and complexity, their ultimate role is in access control. The authors suggest that the perception of encryption as access control can be useful in the right contexts, since a majority of participants adhered to this mental model, and it offered a base model upon which other models could be layered. The authors noted that improving the how of encryption was not sufficient in itself to drive adoption; rather, both the how and the why were necessary in developing and using effective mental models. To aid in the how and why, users would need to understand the potential benefits of encryption to themselves and to society at large. Since cryptography tends to be mathematically and computationally complex, the authors recommend that rather than offering users the technical details of encryption, an effective model should focus on aligning designs and communication efforts with the functional models that the users already possess. Their findings called for security and warning indicators to be carefully designed, with the aim of being noticed, trusted, and validated by users. Additional work in [83] explores users’ and non-users’ mental models of end-to-end encryption in communication tools. Specifically, the authors investigated the use of secure communication tools to empower people to resist surveillance. In this study, the authors examined mental models of a hypothetical encrypted communication tool to avoid introducing bias about well-known tools. Prior work in [84] has shown that incorrect mental models are a key obstacle to the adoption of secure communication tools and other privacy-enhancing technologies. The authors found that end-to-end encrypted tools are widely used but not accurately understood, which leads to users unknowingly selecting insecure communication tools in situations where they most require privacy. The authors suggested that it is important to communicate the security properties of end-to-end encrypted communication tools so that users know which encrypted tools to choose for security-sensitive environments. Other important factors in the adoption of encrypted communication tools were the size of the user base and focused training sessions that reached specific categories of users, such as activists and dissidents, who would most benefit from the use of these encrypted tools. Mental Models of Security of Machine Learning Systems. Work in [85] explored the perception of vulnerabilities in machine learning (ML) applications. Specifically, the authors conducted a first study to explore mental models of adversarial machine learning (AML). The authors focused on developers’ mental models of the ML pipeline and potentially vulnerable components. Despite ML being increasingly used in industry, very little is known about ML security in practice. They identified four characteristic ranges that described practitioners’ mental models of AML.
The first range was concerned with the intertwined relationship of AML and standard security. Here, the distinction between AML
and security was not clear, security threats were often taken for granted, and practitioners were less aware of AML attack scenarios. The second range was concerned with structural components and functional relations: by crafting inputs, an attacker might deduce architectural choices within the functional structure, or a key parameter of the model could be accessed unlawfully. The third range was concerned with variations in the pipelines, attacks, and defenses, where the attacker injected either specific inputs or general malicious input into the application. Finally, the fourth range corresponded to technical depth, where industrial practitioners perceived ML-specific threats and defenses at varying levels of technical depth.
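As a minimal, self-contained sketch of the kind of adversarial input crafting discussed above, the following code applies a fast-gradient-sign-style perturbation to a single input of a hand-written logistic regression model. The weights, input, label, and step size are made-up values used only for illustration; no real ML pipeline or library model is involved.

```python
# Minimal sketch of crafting an adversarial input against a toy logistic
# regression model (fast-gradient-sign-style perturbation). The weights,
# input, and epsilon are made-up values for illustration only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained" model: fixed weights and bias.
w = np.array([2.0, -1.5, 0.5, 3.0])
b = -1.0

def predict(x):
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.2, 0.4, 0.1, 0.3])   # benign input, true label y = 0
y = 0.0

# Gradient of the logistic loss with respect to the input is (p - y) * w.
p = predict(x)
grad_x = (p - y) * w

# FGSM-style perturbation: step in the direction of the sign of the gradient.
epsilon = 0.35
x_adv = x + epsilon * np.sign(grad_x)

print(f"original score: {predict(x):.3f}")         # below 0.5 -> class 0
print(f"adversarial score: {predict(x_adv):.3f}")  # pushed above 0.5 -> class 1
```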

4.6 Applications

In this subsection, mental models of specific applications such as smart homes, IoT devices, and cryptocurrencies are surveyed. Mental Models of Smart Homes and IoT Security. Although there has been growing interest in smart home technologies, the level of acceptance varies among potential users. Work in [86] explored end users’ perceptions, expectations, and concerns regarding smart homes. The authors found that users’ understanding of smart homes was superficial and that their mental models seemed to be affected by the technologies advertised in the media. While users appreciated the convenience offered by smart homes, their major concerns were burglary, hacker attacks, data theft, data abuse, and the collection and storage of sensitive data like bank details. Comparing the mental models of experienced and non-experienced users, the authors found that most mental models of smart homes had two foci. First, users were concerned about their own technical deficiencies and anticipated issues due to perceived dependence on the technology, or had difficulties with the device that rendered it useless. Second, users were concerned about the privacy and security of smart homes. A key recommendation of this study was access control, such as limiting Internet access to a predetermined subset of devices. Work in [87] explored how users interact with the security features of IoT devices, and found that users’ mental model errors and biases, as well as their perceptions of complex conflicting system features, were significant drivers of their interaction with these devices. Mental Models of Security in Cryptocurrency Systems. The rise of blockchain-enhanced frameworks has prompted the development of Web3 technologies, which include diverse individual applications such as cryptocurrencies, non-fungible tokens, token economics, and decentralized digital solutions. Among these applications, cryptocurrencies (notably Bitcoin) were the first blockchain application to gain prominence. Work in [25] explored mental models of security and privacy risks in cryptocurrencies. The authors studied two mental models: the incomplete mental model, where participants knew most of the details, such as the requirement for a sender to sign the transaction with a generated key, and knowledge of addresses

126

R. Murimi et al.

as the payment destination. The inaccurate mental model on the other hand explored misconceptions of cryptocurrency systems such as anonymity, cryptographic keys, and fees. In the inaccurate model, some participants assumed that cryptocurrencies were based on central management entities or direct end-to-end connections between users. The authors found although many misconceptions did not jeopardize users’ security or privacy, major misconceptions related to the functionality and management of cryptographic keys are not compensated by the cryptocurrency tools. The authors observed that wallet interfaces shaped the way participants perceived the blockchain location (centralized vs. decentralized), its functionality (persistent, transparent), and the users’ role within the cryptocurrency system. The authors recommend that modifications of the interface of cryptocurrency management tools can prevent security and privacy threats caused by incorrect mental models. Inaccurate mental model can lead to devastating loss in term of monetary value. The authors suggest that cryptocurrency tools should perform encryption by default and inform the users about this safety measure, while advocating for designers to add cues and visualizations to explain to the users which security measures (e.g., encryption) are implemented so that users can make informed trust decisions.
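As a concrete illustration of the accurate fragments of these models (the sender signs with a generated key; an address derived from a public key serves as the payment destination), the sketch below uses the Python `cryptography` package to generate a key pair, derive a toy address, and sign and verify a transaction payload. The address scheme and payload format are simplified assumptions for illustration and do not correspond to any particular cryptocurrency.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

# Sender generates a key pair; the private key is what wallets must protect.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Toy "address": a hash of the raw public key bytes (real chains add
# versioning, checksums, and different encodings).
pub_bytes = public_key.public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
address = hashlib.sha256(pub_bytes).hexdigest()[:40]

# The sender signs the transaction with the generated (private) key.
transaction = f"pay 0.5 coins to {address}".encode()
signature = private_key.sign(transaction)

# Anyone holding the public key can verify the signature;
# verify() raises InvalidSignature if the payload was tampered with.
public_key.verify(signature, transaction)
print("transaction verified; sender address:", address)
```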

5 Discussion

This paper has provided a brief overview of existing research in the mental models that dictate our cybersecurity activities. These models span a spectrum of tools, technologies, platforms, user types, and applications. Below, we discuss some implications of the role of standardization, social factors, workforce development, and emerging technologies such as Web3 in influencing users' mental models of cybersecurity.

Standardization: Drawing upon metaphorical nomenclature as well as formal methods, researchers have found that people think about cybersecurity in different ways. While the distinct ways of thinking about cybersecurity are a function of an individual's expertise and background, they also pose a challenge for the designers and developers of security tools. Further, a single user might use different mental models for different applications or activities; for example, a mental model about password security might differ from a mental model about blockchain keys. One approach to addressing the challenge of a plethora of models would be curation and standardization efforts, similar to the MITRE Common Vulnerabilities and Exposures (CVE) database that is maintained by a consortium of organizations in academia, industry, and government as a central point of information dissemination about security vulnerabilities. Similar efforts in the identification and development of mental models, with inputs from diverse disciplines such as computer science, cyberpsychology, criminology, economics, marketing, information systems, and others, would help to identify widely used mental models among different types of users for differing applications.

Social factors: Mental models are an outcome of several attributes of an individual, including but not limited to experience with the artefact, linguistic interpretation of cybersecurity advice, and socio-cultural factors. For example, the findings of [88] indicate that the big five personality traits are an important predictor of cybersecurity-related behavior. Further, studies of technology usage show distinct trends in adoption and usage behavior, such as Twitter usage in three major cities around the world [89], demographic patterns of Facebook usage [90], and interactive radio formats for audience engagement in Africa [91]. Further research is required to identify patterns in the adoption of cybersecurity-related activities around the world and to find their antecedents in socio-cultural cues.

Workforce development: Cyber workforce development is a pressing need. As identified in [92], social fit in the highly complex and heterogeneous cyber workforce is crucial to building secure networks. The authors identified six assumptions for the future of cybersecurity workforce development: the requirement for systemic thinkers, team players, a love of continued learning, strong communication ability, a sense of civic duty, and a blend of technical and social skill. Since cybersecurity is a critical component enabling social aspects of human behavior in networks [93], cybersecurity professionals must understand both the technical and human aspects of interaction with networks.

Web3: The rise of blockchain-fueled applications such as cryptocurrencies, non-fungible tokens, and digital records for provenance has ushered in new technologies for users. The social media platforms of Web2 have undergone a parallel evolution, with newer platforms such as Discord, Notion, and others heavily leveraged by the Web3 ecosystem. Blockchain-based applications have steadily moved into mainstream discourse, starting with their initial foray into cryptocurrencies such as Bitcoin and Ethereum. Since then, Ethereum has become a blockchain platform enabling the development of distributed applications (dApps). Additionally, blockchain architectures such as public, private, and consortium allow varying levels of participation, security, and privacy, which in turn require the development of a new vocabulary and associated mental models to understand the evolving terminology [94]. These new technologies need new mental models to process concepts related to public key cryptography and the security of private keys, the visibility of wallet addresses, hashing, and terms such as immutability and consensus, which form the foundation of blockchain systems. The security threats confronting these technologies are diverse and evolving in complexity as well [95]; examples include ransomware strains that attack IoT devices [96] and niche supply chains [97], and botnets, once at the forefront of spam distribution, that are now actively used to mine cryptocurrencies [98].
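To ground the terms hashing and immutability mentioned above, the following minimal sketch builds a toy hash chain in which each block commits to its predecessor's hash. It is an illustrative, free-standing example under simplified assumptions, not the data structure of any specific blockchain: there is no consensus, signing, or proof-of-work here.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def verify(chain: list) -> bool:
    """Recompute the links; any edit to an earlier block breaks them."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain: list = []
append_block(chain, "alice pays bob 5")
append_block(chain, "bob pays carol 2")
print(verify(chain))                      # True

chain[0]["data"] = "alice pays bob 500"   # tamper with history
print(verify(chain))                      # False: the chain no longer links up
```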

6 Conclusions

Users engage differently with security recommendations; their activities range from ignoring them to engaging weakly, exactly, or strongly with them. Each of the models surveyed in our paper offers users the ability to ignore (not engage) or to engage to some degree (weak, exact, or strong) with the security advice. Ultimately, it is users' mental models that determine their level and intensity of engagement. This paper discussed mental models in cybersecurity from over a decade of research that explored how people perceived threats and the associated security measures when interacting with technologies of different kinds. The findings of this survey indicate that mental models are diverse, and that no single mental model can be used to describe a definitive way of thinking about cybersecurity. This points to the need for continuous research to discover the mental models that users develop in response to their situational awareness, newer technologies, and threats, so that security countermeasures can be designed in response to these mental models. Further, the design and implementation of security measures have to be driven by inputs from a variety of stakeholders, since the measures are often developed by experts for non-experts. Users leverage their cybersecurity mental models as a primary defense mechanism in understanding and responding to the protection of their data and networks, and so mental model research in cybersecurity will need rigorous modeling and an increased understanding of how non-experts navigate environments of increasing technological complexity.

References

1. Threatlocker: 12 steps to protect against ransomware. https://www.threatlocker.com/12-steps-to-protect-against-ransomware/. Accessed 16 May 2022 2. IBM Cyber Security Intelligence Index Report. https://www.ibm.com/security/threat-intelligence/ (2021). Accessed 16 May 2022 3. Richardson, G.P., Andersen, D.F., Maxwell, T.A., Stewart, T.R.: Foundations of mental model research. In: Proceedings of the 1994 International System Dynamics Conference (1994) 4. Rowe, A.L., Cooke, N.J., Hall, E.P., Halgren, T.L.: Toward an online knowledge assessment methodology: Building on the relationship between knowing and doing. J. Exp. Psychol. Appl. 3–47 (1996) 5. Volkamer, M., Renaud, K.: Mental models—general introduction and review of their application to human-centered security. In: Number Theory and Cryptography, pp. 255–280. Springer, Berlin, Heidelberg (2013) 6. Morgan, G., Fischoff, B., Bostrom, A., Atman, C.J.: Creating an expert model of the risk. In: Risk Communication: A Mental Models Approach, pp. 34–61 (2002) 7. Fulton, K.R., Gelles, R., McKay, A., Abdi, Y., Roberts, R., Mazurek, M.L.: The effect of entertainment media on mental models of computer security. In: Proceedings of the Fifteenth Symposium on Usable Privacy and Security ({SOUPS} 2019), pp. 79–95 (2019) 8. Castelfranchi, C., Falcone, R.: Trust is much more than subjective probability: mental components and sources of trust. In: Proceedings of the 33rd Annual Hawaii International Conference on System Sciences (2000) 9. FBI: 2016 Internet crime report. https://www.fbi.gov/news/stories/ic3-releases-2016-internet-crime-report. Accessed 16 May 2022 10. Akhawe, D., Felt, A.P.: Alice in warning-land: a large-scale field study of browser security warning effectiveness. In: Proceedings of the 22nd USENIX Security Symposium, pp. 257–272 (2013)
11. Porter-Felt, A.P., Reeder, R.W., Almuhimedi, H., Consolvo, S.: Experimenting at scale with google chrome’s SSL warning. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2667–2670 (2014) 12. NCSC: The problems with forcing regular password expiry. https://www.ncsc.gov.uk/blogpost/problems-forcing-regular-password-expiry#:~:text=The%20NCSC%20now%20reco mmend%20organisations,of%20long%2Dterm%20password%20exploitation. Accessed 16 May 2022 13. Cotoranu, A., Chen, L.C.: Applying text analytics to examination of end users’ mental models of cybersecurity. In: AMCIS 2020 Proceedings, vol. 10 (2020) 14. Camp, L.J.: Mental models of privacy and security. IEEE Technol. Soc. Mag. 28(3), 37–46 (2009) 15. Blythe, J., Camp, L.J.: Implementing mental models. In: 2012 IEEE Symposium on Security and Privacy Workshops, pp. 86–90 (2012) 16. Wash, R., Rader, E.: Influencing mental models of security: a research agenda. In: Proceedings of the 2011 New Security Paradigms Workshop, pp. 57–66 (2011) 17. Prettyman, S.S., Furman, S., Theofanos, M., Stanton, B.: Privacy and security in the brave new world: the use of multiple mental models. In: Proceedings of the International Conference on Human Aspects of Information Security, Privacy, and Trust, pp. 260–270 (2015) 18. Houser, A., Bolton, M.L.: Formal mental models for inclusive privacy and security. In: Proceedings of SOUPS (2017) 19. Andrade, R.O., Yoo, S.G.: Cognitive security: a comprehensive study of cognitive science in cybersecurity. J. Inf. Secur. Appl. 48, 102352 (2019) 20. Raja, F., Hawkey, K., Beznosov, K.: Revealing hidden context: improving mental models of personal firewall users. In: Proceedings of the 5th SOUPS (2009) 21. Wu, J., Zappala, D.: When is a tree really a truck? Exploring mental models of encryption. In: 14th Proceedings of ({SOUPS} 2018), pp. 395–409 (2018) 22. Theofanos, M.F., Pfleeger, S.L.: Guest editors’ introduction: shouldn’t all security be usable? IEEE Secur. Priv. 9(2), 12–17 (2011) 23. Theofanos, M.: Is usable security an oxymoron? Computer 53(2), 71–74 (2020) 24. Oltramari, A., Henshel, D.S., Cains, M., Hoffman, B.: Towards a human factors ontology for cyber security. Stids 26–33 (2015) 25. Mai, A., Pfeffer, K., Gusenbauer, M., Weippl, E., Krombholz, K.: User mental models of cryptocurrency systems—a grounded theory approach. In: Proceedings of the Sixteenth Symposium on Usable Privacy and Security ({SOUPS}), pp. 341–358 (2020) 26. Tversky, A., Kahneman, D.: The framing of decisions and the psychology of choice. Science 211(4481), 453–458 (1981) 27. Adams, A., Sasse, M.A.: Users are not the enemy. Commun. ACM 42(12), 40–46 (1999) 28. Degani, A., Heymann, M.: Formal verification of human-automation interaction. Hum. Factors 44(1), 28–43 (2002) 29. Wash, R.: Folk models of home computer security. In: Proceedings of the Sixth Symposium on Usable Privacy and Security, pp. 1–16 (2010) 30. Wash, R., Rader, E.: Too much knowledge? Security beliefs and protective behaviors among united states internet users. In: Proceedings of SOUPS (2015) 31. Chen, J.: Risk communication in cyberspace: a brief review of the information-processing and mental models approaches. Curr. Opin. Psychol. 36, 135–140 (2020) 32. Brase, G.L., Vasserman, E.Y., Hsu, W.: Do different mental models influence cybersecurity behavior? Evaluations via statistical reasoning performance. Front. Psychol. 8, 1929 (2017) 33. Agrawal, N., Zhu, F., Carpenter, S.: Do you see the warning? Cybersecurity warnings via nonconscious processing. 
In: Proceedings of the 2020 ACM Southeast Conference, pp. 260–263 (2020) 34. Proctor, R.W., Vu, K.P.L.: Human information processing: an overview for human-computer interaction. In: The Human-Computer Interaction Handbook, pp. 69–88 (2007) 35. Breakwell, G.M.: Risk communication: factors affecting impact. Br. Med. Bull. 56(1), 110–120 (2000)

36. Fagan, M., Khan, M.M.H., Buck, R.: A study of users’ experiences and beliefs about software update messages. Comput. Hum. Behav. 51, 504–519 (2015) 37. Wogalter, M.S., Laughery, K.R., Mayhorn, C.B.: Communication-human information processing stages in consumer product warnings. In: Human Factors and Ergonomics in Consumer Product Design, pp. 41–67. CRC Press (2011) 38. Wogalter, M.S.: Communication-human information processing (C-HIP) model in forensic warning analysis. In: Bagnara, S., Tartaglia, R., Albolino, S., Alexander, T., Fujita, Y. (eds.) Proceedings of the 20th Congress of the International Ergonomics Association, Advances in Intelligent Systems and Computing, p. 821 (2019) 39. Conzola, V., Wogalter, M.: A communication–human information processing (C–HIP) approach to warning effectiveness in the workplace. J. Risk Res. 4(4), 309–322 (2001) 40. Aliperti, G., Nagai, H., Cruz, A.M.: Communicating risk to tourists: a mental models approach to identifying gaps and misperceptions. Tour. Manag. Perspect. 33, 100615 (2020) 41. Lazrus, H., Morss, R.E., Demuth, J.L., Lazo, J.K., Bostrom, A.: “Know what to do if you encounter a flash flood”: mental models analysis for improving flash flood risk communication and public decision making. Risk Anal. 36(2), 411–427 (2016) 42. Stevenson, M., Taylor, B.J.: Risk communication in dementia care: family perspectives. J. Risk Res. 21(6), 692–709 (2018) 43. Norman, D.A.: Some Observations on Mental Model Models. Hillsdale, NJ (1983) 44. Van den Berg, J.: Grasping cybersecurity: a set of essential mental models. In: European Conference on Cyber Warfare and Security, p. 534 (2019) 45. Nixon, J., McGuinness, B.: Framing the human dimension in cybersecurity. EAI Endorsed Trans. Secur. Saf. 1(2) (2013) 46. Still, J.D.: Cybersecurity needs you! Interactions 23(3), 54–58 (2016) 47. Hernandez, J.: The human element complicates cybersecurity. Defense Systems. https:// defensesystems.com/cyber/2010/03/the-human-element-complicates-cybersecurity/189831/. Accessed 16 May 2022 48. Maier, J., Padmos, A., Bargh, M.S., Wörndl, W.: Influence of mental models on the design of cyber security dashboards. In: Proceedings of VISIGRAPP (3: IVAPP), pp. 128–139 (2017) 49. Nurse, J.R., Creese, S., Goldsmith, M., Lamberts, K.: Guidelines for usable cybersecurity: past and present. In: Proceedings of the 3rd International Workshop on Cyberspace Safety and Security, pp. 21–26 (2011) 50. Wästlund, E., Angulo, J., Fischer-Hübner, S.: Evoking comprehensive mental models of anonymous credentials. In: Proceedings of the International Workshop on Open Problems in Network Security, pp. 1–14. Springer, Berlin, Heidelberg (2011) 51. Stanton, B., Theofanos, M.F., Prettyman, S.S., Furman, S.: Security fatigue. IT Prof. 18(5), 26–32 (2016) 52. Moon, B., Johnston, C., Moon, S.: A case for the superiority of concept mapping-based assessments for assessing mental models. In: Proceedings of the 8th International Conference on Concept Mapping. Universidad EAFIT, Medellín, Colombia (2018) 53. Mohamed, M., Chakraborty, J., Dehlinger, J.: Trading off usability and security in user interface design through mental models. Behav. Inf. Technol. 36(5), 493–516 (2017) 54. Asgharpour, F., Liu, D., Camp, L.J.: Mental models of security risks. In: Proceedings of the International Conference on Financial Cryptography and Data Security, pp. 367–377. Springer, Berlin, Heidelberg (2007) 55. Fagan, M., Khan, M.M.H.: To follow or not to follow: a study of user motivations around cybersecurity advice. 
IEEE Internet Comput. 22(5), 25–34 (2018) 56. Haney, J.M., Lutters, W.G.: “It’s Scary… It’s Confusing… It’s Dull”: how cybersecurity advocates overcome negative perceptions of security. In: Proceedings of the Fourteenth Symposium on Usable Privacy and Security ({SOUPS}), pp. 411–425 (2018) 57. Theofanos, M., Stanton, B., Furman, S., Prettyman, S.S., Garfinkel, S.: Be prepared: how US government experts think about cybersecurity. In: Proceedings of the Workshop on Usable Security (USec), Internet Society (2017)

58. Jones, K.S., Lodinger, N.R., Widlus, B.P., Namin, A.S., Hewett, R.: Do warning message design recommendations address why non-experts do not protect themselves from cybersecurity threats? A review. Int. J. Hum. Comput. Interact. 1–11 (2021) 59. Kang, R., Dabbish, L., Fruchter, N., Kiesler, S.: “My data just goes everywhere”: user mental models of the internet and implications for privacy and security. In: Proceedings of 2015 SOUPS, pp. 39–52 (2015) 60. Bartsch, S., Volkamer, M.: Effectively communicate risks for diverse users: a mentalmodels approach for individualized security interventions. In: INFORMATIK 2013–Informatik angepasst an Mensch, Organisation und Umwelt (2013) 61. Abu-Salma, R., Redmiles, E.M., Ur, B., Wei, M.: Exploring user mental models of end-to-end encrypted communication tools. In: Proceedings of the 8th USENIX Workshop on Free and Open Communications on the Internet (2018) 62. Ruoti, S., Seamons, K.: Johnny’s journey toward usable secure email. IEEE Secur. Priv. 17(6), 72–76 (2019) 63. Zhang-Kennedy, L., Chiasson, S., Biddle, R.: The role of instructional design in persuasion: a comics approach for improving cybersecurity. Int. J. Hum. Comput. Interact. 32(3), 215–257 (2016) 64. Zielinska, O.A., Welk, A.K., Mayhorn, C.B., Murphy-Hill, E.: Exploring expert and novice mental models of phishing. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 59(1), pp. 1132–1136 (2015) 65. Day, E.A., Arthur, W., Jr., Gettman, D.: Knowledge structures and the acquisition of a complex skill. J. Appl. Psychol. 86(5), 1022 (2001) 66. Dorsey, D., Campbell, G.E., Foster, L.F., Miles, D.E.: Assessing knowledge structures: relations with experience and post training performance. Hum. Perform. 12(1), 31–57 (1999) 67. Goldsmith, T.E., Johnson, P.J., Acton, W.H.: Assessing structural knowledge. J. Educ. Psychol. 83(1), 88 (1991) 68. Rowe, A.L., Cooke, N.J.: Measuring mental models: choosing the right tools for the job. Hum. Resour. Dev. Q. 6(3), 243–255 (1995) 69. Van der Veer, G., Melguize, M.: Mental models. In: Jacko, J.A. Sears, A. (eds.) The Human Computer Interaction Handbook, pp. 52–80. Lawrence Associates, Mahwah, NJ (2003) 70. Heckle, R., Lutters, W.G., Gurzick, D.: Network authentication using single sign-on: the challenge of aligning mental models. In: Proceedings of the 2nd ACM Symposium on Computer Human Interaction For Management of Information Technology, pp. 1–10 (2008) 71. Krombholz, K., Busse, K., Pfeffer, K., Smith, M., von Zezschwitz, E.: “If HTTPS were secure, I wouldn’t need 2FA”—end user and administrator mental models of https. In: Proceedings of the 2019 IEEE Symposium on Security and Privacy, pp. 246–263 (2019) 72. Fritsch, L., Tjostheim, I., Kitkowska, A.: I’m not that old yet! the elderly and us in HCI and assistive technology. In: Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI) (2018) 73. Floodeen, R., Haller, J., Tjaden, B.: Identifying a shared mental model among incident responders. In: Proceedings of the 2013 Seventh International Conference on IT Security Incident Management and IT Forensics (2013) 74. Stobert, E., Barrera, D., Homier, V., & Kollek, D.: Understanding cybersecurity practices in emergency departments. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (2020) 75. Shillair, R.: Talking about online safety: a qualitative study exploring the cybersecurity learning process of online labor market workers. 
In: Proceedings of the 34th ACM International Conference on the Design of Communication (2016) 76. Kumar, P., Naik, S.M., Devkar, U.R., Chetty, M., Clegg, T.L., Vitak, J.: ‘No telling passcodes out because they’re private’ understanding children’s mental models of privacy and security online. In: Proceedings of the ACM on Human-Computer Interaction (CSCW), vol. 1, pp. 1–21 (2017) 77. Choong, Y.Y., Theofanos, M.F., Renaud, K., Prior, S.: “Passwords protect my stuff”—a study of children’s password practices. J. Cybersecur. 5(1) (2019)

78. McGregor, S.E., Watkins, E.A.: “Security by obscurity”: journalists’ mental models of information security. In: Quieting the Commenters: The Spiral of Silence’s Persistent Effect, p. 33 (2016) 79. Caine, K.E.: Supporting privacy by preventing misclosure. In: Proceedings of the CHI’09 Extended Abstracts on Human Factors in Computing Systems, pp. 3145–3148 (2009) 80. Sarriegi, J.M., Torres, J.M., Santos, J.: Explaining security management evolution through the analysis of CIOs’ mental models. In: Proceedings of the 23rd International Conference of the System Dynamics Society, Boston (2005) 81. Henshel, D., Cains, M.G., Hoffman, B., Kelley, T.: Trust as a human factor in holistic cyber security risk assessment. Proc. Manuf. 3, 1117–1124 (2015) 82. Sørensen, L.T.: User perceived privacy: mental models of users’ perception of app usage. Nord. Balt. J. Inf. Commun. Technol. 1, 1–20 (2018) 83. Abu-Salma, R., Sasse, M.A., Bonneau, J., Danilova, A., Naiakshina, A., Smith, M.: Obstacles to the adoption of secure communication tools. In: Proceedings of the IEEE Symposium on Security and Privacy, pp. 137–153 (2017) 84. Renaud, K., Volkamer, M., Renkema-Padmos, A. Why doesn’t Jane protect her privacy? In: Proceedings of the International Symposium on Privacy Enhancing Technologies Symposium, pp. 244–262 (2014) 85. Bieringer, L., Grosse, K., Backes, M., Krombholz, K.: Mental models of adversarial machine learning (2021). arXiv preprint arXiv:2105.03726 86. Zimmermann, V., Bennighof, M., Edel, M., Hofmann, O., Jung, J., von Wick, M.: “Home, smart home”—exploring end users’ mental models of smart homes. In: Mensch und Computer 2018-Workshopband (2018) 87. Yarosh, S., Zave, P.: Locked or not? Mental models of IoT feature interaction. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 2993–2997 (2017) 88. Shappie, A.T., Dawson, C.A., Debb, S.M.: Personality as a predictor of cybersecurity behavior. Psychol. Popul. Media 9(4), 475 (2020) 89. Adnan, M., Leak, A., Longley, P.: A geocomputational analysis of Twitter activity around different world cities. Geo-Spat. Inf. Sci. 17(3), 145–152 (2014) 90. Gil-Clavel, S., Zagheni, E.: Demographic differentials in Facebook usage around the world. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 13, pp. 647–650 (2019) 91. Srinivasan, S., Diepeveen, S.: The power of the “audience-public”: interactive radio in Africa. Int. J. Press Polit. 23(3), 389–412 (2018) 92. Dawson, J., Thomson, R.: The future cybersecurity workforce: going beyond technical skills for successful cyber performance. Front. Psychol. 9, 744 (2018) 93. Garvin, D.A., Wagonfeld, A.B., Kind, L.: Google’s Project Oxygen: Do Managers Matter? Harvard Business School Review, Boston, MA (2013) 94. Yao, W., Ye, J., Murimi, R., Wang, G.: A survey on consortium blockchain consensus mechanisms (2021). arXiv preprint arXiv:2102.12058 95. Carlin, D., Burgess, J., O’Kane, P., Sezer, S.: You could be mine (d): the rise of cryptojacking. IEEE Secur. Priv. 18(2), 16–22 (2019) 96. Yaqoob, I., Ahmed, E., ur Rehman, M.H., Ahmed, A.I.A., Al-Garadi, M.A., Imran, M., Guizani, M.: The rise of ransomware and emerging security challenges in the Internet of Things. Comput. Netw. 129, 444–458 (2017) 97. Jarjoui, S., Murimi, R., Murimi, R.: Hold my beer: a case study of how ransomware affected an Australian beverage company. In: Proceedings of the International Conference on Cyber Situational Awareness, Data Analytics and Assessment (2021) 98. 
Murimi, R.: Use of Botnets for Mining Cryptocurrencies. In: Botnets, pp. 359–386. CRC Press (2019)

Exploring the MITRE ATT&CK® Matrix in SE Education Rachel Bleiman, Jamie Williams, Aunshul Rege, and Katorah Williams

Abstract Cybersecurity is a multidisciplinary field that requires understanding of human behavior. To reinforce this idea and encourage non-technical students to participate in cybersecurity, an experiential learning project was implemented in an upper-level undergraduate criminal justice class. This paper is focused on that proof-of-concept class project, in which groups of students mapped a social engineering case study onto the MITRE ATT&CK framework to understand the adversarial mindset. The paper provides background information on the ATT&CK framework, compares groups' mappings to others within the class as well as against a mapping done by an ATT&CK representative, and offers a discussion of the lessons learned and opportunities to expand our application and understanding of educational cybersecurity principles. This paper emphasizes that while someone with more knowledge of and experience using a framework that focuses on the technical aspects of cybersecurity may map a SE case study differently than multidisciplinary students who are experiencing it for the first time, there is not a single correct way to interpret, and correspondingly defend against, adversary behaviors. Having students experience this mapping project allows them to understand the breakdown of an adversary's behavior and contextualize key tactics and techniques in a way that fits their perspective and skillset. This paper also demonstrates how a SE case study can be mapped onto the ATT&CK framework despite SE not being the focus of the framework, and that SE uses tactics and techniques that are also prevalent within more technical cyber campaigns. The authors hope to encourage more interdisciplinary cybersecurity education by sharing this experiential learning course project.

Keywords Cybersecurity education · Social engineering · Experiential learning · Adversarial frameworks · MITRE ATT&CK

R. Bleiman (B) · A. Rege · K. Williams, Temple University, Philadelphia, PA, USA; e-mail: [email protected]; A. Rege e-mail: [email protected]; K. Williams e-mail: [email protected]; J. Williams, The MITRE Corporation, Bedford, USA; e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 C. Onwubiko et al. (eds.), Proceedings of the International Conference on Cybersecurity, Situational Awareness and Social Media, Springer Proceedings in Complexity, https://doi.org/10.1007/978-981-19-6414-5_8

1 Introduction

Cybersecurity is often thought to be a technical field, in which only those with technical knowledge, such as coding, can contribute. However, that is not the case. In fact, cybersecurity should be a holistic, multidisciplinary field. Cyber intrusions and activity are committed by humans, and thus understanding human behavior is an important element in defending against such security threats. Furthermore, according to the National Initiative for Cybersecurity Education (NICE), which is led by the National Institute of Standards and Technology (NIST), the global shortage of cybersecurity professionals is estimated to be 2.72 million [1]. As current students are the next-generation workforce, it is vital that they learn the necessary skills to participate in the cybersecurity field, whether in a technical or non-technical position. While students in technical fields are typically aware of opportunities within the cybersecurity field, many non-technical students are not aware that cybersecurity is even a career option. One way to promote and advertise cybersecurity careers to non-technical students is through education on social engineering (SE), which is the act of manipulating others into taking actions that may not be in their best interest [2]. This can take many forms that bridge into the technical domain, such as clicking on a link in a malicious phishing email, plugging in a malware-infected USB stick, or allowing an unauthorized individual entry into a secure server room. SE is a non-technical tool that is used throughout cyber campaigns and can be studied through understanding human behavior; in fact, understanding the human behavior behind SE can better help to defend against it. Cyber intrusions may be carried out with only the use of SE, or SE may be used throughout and alongside a campaign to enable technical measures. Regardless, if only technical skills are emphasized in cyber education, defenders are left in the dark on a major part of adversary behaviors: the human element. One way for non-technical students to study SE in a way that trains them to be better defenders is through experiential learning. Instead of learning about SE via a lecture, experiential learning allows students to have hands-on experience in learning about the topic. While there are several ways to implement hands-on SE exercises, both from the mindset of the offender and the defender, one ethical way is to seek to highlight and understand adversary behaviors by mapping these behaviors onto an existing framework of cyber adversarial tactics and techniques, such as MITRE's ATT&CK® [3].

Thus, this paper will detail such an implementation and provide a proof-of-concept of an experiential learning project in which multidisciplinary students map the adversarial behaviors from a SE case study onto the ATT&CK framework. The following section describes other frameworks that exist in the cybersecurity field and extensively describes the ATT&CK framework to justify its use in this project. Next, the authors provide a description of the current study, including an overview of the class in which the project was implemented and a description of the assigned SE case and course project instructions. Afterwards, the authors analyze the results from the course project, including a comparison of the results from the various groups with those of an ATT&CK representative who also completed specific parts of the project. The authors conclude with a discussion of the implications and takeaways from this proof-of-concept project.

2 Adversarial Frameworks

In addition to ATT&CK, there are numerous other frameworks within the cybersecurity community for various uses. Such frameworks are offered by the Open Web Application Security Project (OWASP), with both the OWASP Security Knowledge Framework and the OWASP Risk Assessment Framework [4, 5]. Both of these OWASP frameworks are focused on coding and software security. Another framework is MITRE's Common Vulnerabilities and Exposures (CVE) framework, which identifies, defines, and catalogs publicly disclosed cybersecurity vulnerabilities [6]. MITRE's Common Attack Pattern Enumeration and Classification (CAPEC) is another framework that describes adversarial behavior and attack patterns, such as the common attributes and approaches known adversaries use [7]. Some well-known attack patterns that CAPEC describes are HTTP Response Splitting and SQL Injection. Another example of a useful framework is NIST's Mobile Threat Catalogue (MTC), which examines the mobile environment in particular [8]. The MTC contains a structured repository of threats against mobile information systems. While this list is not comprehensive of all adversarial behavioral frameworks, a final framework that the authors will discuss is the ATT&CK framework, which was chosen for use in this case study [3].
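For readers less familiar with the attack patterns named above, the short sketch below contrasts a query built by string concatenation (the pattern CAPEC classifies as SQL Injection) with a parameterized query, using Python's built-in sqlite3 module. The table and input values are hypothetical and exist only to illustrate the pattern.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

# Attacker-controlled input crafted to widen the query's scope.
user_input = "nobody' OR '1'='1"

# Vulnerable: the input is spliced directly into the SQL text,
# so the OR clause is interpreted as part of the query.
vulnerable = f"SELECT name FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())            # returns every row

# Safer: a parameterized query treats the input as a literal value.
safe = "SELECT name FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # returns []
```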

2.1 MITRE ATT&CK

The MITRE ATT&CK framework [9] enables practitioners to understand and track cyber adversary behaviors. The globally accessible knowledge base [3] is maintained by the MITRE Corporation as a means of organizing and analyzing the tactics, techniques, and procedures (TTPs) used by real adversaries. The content within ATT&CK is informed by real-world observations and is continuously maintained via analysis of publicly available cyber threat intelligence (CTI) as well as contributions from the community of ATT&CK users. ATT&CK is appropriately structured around organizing adversary behaviors into TTPs. Specifically, for each included behavior ATT&CK captures and contextualizes the adversary's:

1. Tactic(s), or "why" the behavior was performed (Fig. 1)
2. Technique, or "how" the adversary attempted to achieve their tactical goal by performing the behavior (Fig. 2)
3. Procedure, or "what" the adversary specifically did to implement the technique (Fig. 3).

ATT&CK also includes sub-techniques, which are functional equivalents to techniques but describe the specific behavior at a lower level than a technique (where applicable) (Fig. 3).
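As a sketch of how this tactic/technique/procedure structure can be represented when recording a mapping (for example, during the course project described later), the hypothetical Python data model below captures one observed behavior with its tactic, technique, optional sub-technique, procedure, and supporting excerpt. The field names and the example record are assumptions for illustration, not an official ATT&CK schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MappedBehavior:
    """One observed behavior mapped onto ATT&CK-style TTPs."""
    tactic: str                   # "why" the behavior was performed
    technique_id: str             # e.g., "T1566"
    technique: str                # "how" the tactical goal was pursued
    sub_technique: Optional[str]  # lower-level variant, where applicable
    procedure: str                # "what" was specifically done
    evidence: str                 # excerpt from the case study / CTI report

# Example record using the Phishing technique highlighted in Fig. 2;
# the procedure and evidence text are made up for illustration.
example = MappedBehavior(
    tactic="Initial Access",
    technique_id="T1566",
    technique="Phishing",
    sub_technique="Spearphishing Link",
    procedure="Sent a tailored email containing a link to a fake portal",
    evidence="Excerpt from the case study would go here.",
)

print(f"[{example.tactic}] {example.technique_id} {example.technique} "
      f"({example.sub_technique})")
```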

Fig. 1 ATT&CK for enterprise matrix [3]

Fig. 2 Example ATT&CK technique, phishing (Technique ID T1566) of the initial access tactic

Fig. 3 Sub-Techniques of the phishing ATT&CK technique (Technique ID T1566)

The organization provided by ATT&CK not only captures the adversary perspective of malicious cyber operations, but also facilitates the creation of a common, shared language for tracking these behaviors. This enables users to apply ATT&CK towards many operational use cases, including (but not limited to):

• Tracking and organizing new as well as known behaviors observed in threat intelligence/case studies,
• Developing and aligning defensive countermeasures for specific behaviors,
• Prioritizing and communicating behaviors used by a red or other form of offensive assessment team, and
• Engineering and documenting an organization's current defensive posture, including strengths and potential gaps relative to specific adversary behaviors.

Though ATT&CK is selectively scoped to cyber activities (i.e., those directly involving victimized systems modeled into the framework as platforms), the structure and way of organizing TTPs is applicable to wider domains. This structure, specifically modeling behaviors into why (tactics), how (techniques), and what (procedures) were performed by an adversary, is conducive towards modeling and connecting behaviors to defensive countermeasures. Researchers have explored and used the blueprint of ATT&CK to create similar ATT&CK-like representations of differing adversary behaviors [10]. Concepts such as social engineering are not directly captured in the current version of ATT&CK as an individual technique/object, though the application of social engineering is relevant to many technical behaviors. For example, the previously highlighted Phishing technique (T1566), as well as many other behaviors, exists where the "human element" associated with the targeted system is emphasized. These techniques specifically incorporate or even rely on elements of persuasion, manipulation, elicitation, and impersonation for successful execution. Utilizing the ATT&CK framework in an educational setting, specifically within a class largely focused on social engineering, allows students to explore adversarial behavior through experiential learning and understand how social engineering is relevant within cybersecurity. The structure of the framework allows students to see and describe each behavior from the perspective of the adversary, motivating them to compile and question "why" and "how" each individual action contributes to the operational objectives. ATT&CK also creates a common language to describe these behaviors, which is a vital requirement of the greater goal of understanding, tracking, and measuring each behavior against potential defenses as well as corresponding defensive gaps. The following section provides an overview of a multidisciplinary undergraduate cybercrime class and experiential learning project in which students mapped a SE case study onto the ATT&CK framework.

3 ATT&CK Mapping Project

3.1 Cybercrime Class Overview

The current case study project was implemented in an undergraduate upper-level criminal justice elective, Cybercrime, during the Fall 2020 and Spring 2021 semesters. The Cybercrime class consisted of students from multidisciplinary backgrounds including liberal arts and technical fields. Throughout each semester, students were divided into groups to engage in experiential learning projects covering a range of SE topics such as shoulder surfing, pretexting, and open-source intelligence, among others. Groups consisted of 5–7 students; some groups were multidisciplinary (with students from both the College of Liberal Arts and the College of Science and Technology), while others had students all from the same disciplinary background. One such project that the students were assigned was the MITRE ATT&CK mapping course project. The logistics, results, and discussion of this project are presented in the following sections.

3.2 Project Description

For the ATT&CK mapping course project, student groups were each assigned a SE case study to read and map onto the ATT&CK framework. This paper will examine and compare the project completed by 3 of the groups across the two semesters, which were each assigned the same case study: Mission Not Impossible, from the book Social engineering: The art of human hacking by Christopher Hadnagy. This case study tells the story of a professional social engineer, Tim, who was tasked with infiltrating a server containing sensitive information that was highly protected due to the potential dangers associated with it falling into the wrong hands. The case study provides a step-by-step account of Tim's planning and actions, allowing students to map the techniques he used onto the ATT&CK framework.

The students were given a 2-part project and had approximately two weeks to complete each part. In Part 1, students had to identify the techniques and subtechniques that mapped onto the case study, providing excerpts from the case study as evidence. They also had to identify techniques from the case study that they could not map to the framework. In Part 2, students had to identify the proportion of techniques for each tactic that they were able to map and explain what their proportions may indicate about the ATT&CK framework or the case study. Further, students had to redesign the framework in a way that they could attain a better mapping. To add to the analysis of the results of the project, one of the authors, who is a representative from MITRE focusing on the ATT&CK framework, also completed a mapping of the tactics, techniques, and subtechniques onto the same case study. Their mapping is compared to the student group mappings throughout the following results section.
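Part 2's proportion calculation is simple to express in code. The sketch below computes, for each tactic, the fraction of available techniques that a group mapped, mirroring the ratios later reported in Table 2; the counts shown are a small made-up subset used only to illustrate the arithmetic.

```python
# Number of techniques available per tactic in the framework version used
# (illustrative subset; see Table 2 for the full denominators).
available = {"Reconnaissance": 10, "Initial Access": 9, "Collection": 17}

# Number of techniques a hypothetical group managed to map per tactic.
mapped = {"Reconnaissance": 5, "Initial Access": 3, "Collection": 2}

for tactic, total in available.items():
    hits = mapped.get(tactic, 0)
    print(f"{tactic:15s} {hits}/{total}  ({hits / total:.0%})")

overall_mapped, overall_available = sum(mapped.values()), sum(available.values())
print(f"Total mapped: {overall_mapped}/{overall_available}")
```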

4 Results

4.1 Part 1: (Sub)Techniques Identified and Excerpt Evidence

Each tactic had at least one group map a technique to it; however, while there were some similarities and overlaps among the mappings, not all groups identified techniques from every category, and the techniques that the groups identified varied. For example, under the tactic of Reconnaissance (see Table 1), Group A mapped the following 5 techniques: Gather Victim Identity Information, Gather Victim Network Information, Gather Victim Organization Information, Search Open Websites/Domains, and Phishing for Information. While mapping from the exact same case study, Group B identified a set of 5 slightly different techniques: Active Scanning, Gather Victim Identity Information, Gather Victim Organization Information, Phishing for Information, and Search Open Websites/Domains. Their mappings were fairly similar, overlapping in 3 of the techniques and differing in 2 techniques. Still within the tactic of Reconnaissance, all 3 student groups along with the ATT&CK representative agreed on the mapping of 3 techniques: Gather Victim Identity Information, Gather Victim Org Information, and Search Open Websites/Domains. Interestingly, of the Reconnaissance techniques that were mapped, three were only mapped once: Active Scanning, Search Open Technical Databases, and Search Victim-Owned Websites. While all groups read the same case study, these three techniques were each only identified by one of the groups. Two of these three unique techniques were mapped by Group C, which had the highest proportion of techniques mapped from the Reconnaissance tactic as well as of total techniques across all tactics. This shows differences in how Group C was able to interpret these techniques to better fit the case study and their mapping.

Table 1 Reconnaissance techniques mapping distribution

| Reconnaissance techniques          | Total | Group A | Group B | Group C | ATT&CK |
|------------------------------------|-------|---------|---------|---------|--------|
| Active scanning                    | 1     |         | X       |         |        |
| Gather victim identity information | 4     | X       | X       | X       | X      |
| Gather victim org information      | 4     | X       | X       | X       | X      |
| Phishing for information           | 3     | X       | X       | X       |        |
| Search open websites/domains       | 4     | X       | X       | X       | X      |
| Gather victim host information     | 2     |         |         | X       | X      |
| Gather victim network information  | 3     | X       |         | X       | X      |
| Search open technical databases    | 1     |         |         | X       |        |
| Search victim-owned websites       | 1     |         |         | X       |        |
| Total                              |       | 5       | 5       | 8       | 5      |

Further, within each technique, groups identified various subtechniques. For instance, within the technique of Gather Victim Identity Information, under the tactic of Reconnaissance, Group B identified all of the subtechniques, which were Credentials, Email Addresses, and Employee Names. Meanwhile, Group C only identified the subtechniques of Email Addresses and Employee Names (but not Credentials). As with the mappings at the tactic and technique levels, the subtechnique mappings again show how groups interpreted the case study differently and how certain groups were able to identify subtechniques to a much greater extent than other groups.

In instances where the groups identified the same technique, the excerpts they provided as evidence had similarities and differences. For example, Groups B and C both identified the technique of Gather Victim Identity Information under the tactic of Reconnaissance, and both groups used the same excerpt as evidence: "Tim went full-bore, collecting information such as the e-mail layout scheme, open requests for quotes, all employee names he could find, plus any social media sites they belong to, papers they wrote and published, clubs they were part of, as well as service providers they used." In contrast, Groups B and C used different excerpts as evidence of identifying the technique of Gather Victim Org Information under the Reconnaissance tactic; the excerpts from both groups each demonstrated the technique sufficiently. The differences in excerpts show how groups interpret and map onto the framework in a way that makes sense to them, while still justifying their interpretation through evidence.

In Part 1 of this project, Groups B and C were also tasked with identifying techniques in the case study that did not map onto the ATT&CK framework. Group B identified the subtechniques Improvised Pretext and Dumpster Dive as part of the Gather Victim Org Information technique, and a technique they called Method of Obtaining Physical Access with a subtechnique called Shove Knife. Meanwhile, Group C identified Dumpster Diving, which would belong to the tactic of Reconnaissance, and the technique of improvisational pretext, which they argued would fall under the tactic of reconnaissance, initial access, or a "much-needed social engineering tactic."
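The agreement and uniqueness observations above (three techniques mapped by everyone, three mapped only once) reduce to simple set operations. The sketch below recomputes them from the Reconnaissance rows of Table 1; the dictionary is just an illustrative way of encoding that table.

```python
# Reconnaissance techniques mapped by each party (encoding of Table 1).
mappings = {
    "Group A": {"Gather victim identity information",
                "Gather victim network information",
                "Gather victim org information",
                "Search open websites/domains",
                "Phishing for information"},
    "Group B": {"Active scanning",
                "Gather victim identity information",
                "Gather victim org information",
                "Phishing for information",
                "Search open websites/domains"},
    "Group C": {"Gather victim identity information",
                "Gather victim org information",
                "Search open websites/domains",
                "Phishing for information",
                "Gather victim host information",
                "Gather victim network information",
                "Search open technical databases",
                "Search victim-owned websites"},
    "ATT&CK": {"Gather victim identity information",
               "Gather victim org information",
               "Search open websites/domains",
               "Gather victim host information",
               "Gather victim network information"},
}

# Techniques everyone agreed on: intersection of all four sets.
consensus = set.intersection(*mappings.values())

# Techniques mapped exactly once: count appearances across all mappings.
all_techniques = set.union(*mappings.values())
unique = {t for t in all_techniques
          if sum(t in m for m in mappings.values()) == 1}

print("Agreed by all:", sorted(consensus))   # 3 techniques
print("Mapped once:  ", sorted(unique))      # 3 techniques
```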

Table 2 Group mapping proportions¹

| Tactic               | Group A | Group B | Group C | ATT&CK | Count # |
|----------------------|---------|---------|---------|--------|---------|
| Reconnaissance       | 5/10    | 5/10    | 8/10    | 5/10   | 23      |
| Resource development | 2/6     | 0/6     | 3/6     | 0/7    | 5       |
| Initial access       | 2/9     | 4/9     | 3/9     | 3/9    | 12      |
| Execution            | 1/10    | 0/10    | 1/10    | 0/12   | 2       |
| Persistence          | 1/18    | 1/18    | 2/18    | 2/19   | 6       |
| Privilege escalation | 1/12    | 0/12    | 1/12    | 1/13   | 3       |
| Defense evasion      | 2/37    | 0/37    | 0/37    | 4/34   | 6       |
| Credential access    | 1/14    | 1/14    | 3/14    | 1/14   | 6       |
| Discovery            | 1/25    | 0/25    | 3/25    | 0/24   | 4       |
| Lateral movement     | 2/9     | 1/9     | 2/9     | 0/9    | 5       |
| Collection           | 11/17   | 1/17    | 5/17    | 2/15   | 19      |
| Command and control  | 6/16    | 0/16    | 2/16    | 3/16   | 11      |
| Exfiltration         | 7/9     | 2/9     | 0/9     | 3/8    | 12      |
| Impact               | 2/13    | 1/13    | 0/13    | 0/13   | 3       |
| Total                | 44/205  | 16/205  | 33/205  | 24/203 |         |

¹ Note: ATT&CK representative denominators do not match the other groups' denominators for some techniques as a newer version of the ATT&CK framework was used.

Identifying such techniques that are not covered by the ATT&CK framework encourages students to expand their focus past what was provided to them. This can help them become better defenders when they read about or observe novel and developing adversarial behavior.

4.2 Mapping Ratios, Indications, Redesign

In Part 2 of the project, students identified the proportion of techniques they mapped for each tactic (see Table 2). Interestingly, these proportions show that, even within the same case study, some groups mapped drastically different numbers of techniques within each tactic. For example, within the Collection tactic, Group A identified 11 techniques, while Group B only identified 1 technique. In some cases, groups differed over whether a tactic was present in the case study at all. For example, the ATT&CK representative did not map any techniques in 5 of the tactics; comparatively, Group A mapped at least one technique in every tactic, Group B did not map any techniques in 6 of the tactics, and Group C did not map any techniques in 3 of the tactics. Groups also varied in the tactics in which they did not map any techniques; for example, of the 3 tactics that Group C did not map to,


only 1 of those was the same as any of the 6 that the ATT&CK representative did not map to. The ‘Total’ ratio in the bottom row of Table 2 shows some stark differences in how many total techniques groups were able to map, including differences both within the course groups and compared to the ATT&CK mapping. Group A had the highest mapping proportion with 44/205 techniques identified. Meanwhile, Group B had the lowest mapping proportion with only 16/205 techniques identified. The ATT&CK representative’s mapping proportion is much closer to Group B, with a proportion of 24/203, and Group C’s mapping proportion is closer to Group A’s proportion, with 33/205 techniques identified. This does not mean that the groups which differed from the ATT&CK representative’s mapping are wrong; they interpreted differently but were able to justify their mapping through excerpt evidence. Among all the tactics, the ‘Count #’ column shows that the most frequently mapped tactics were Reconnaissance, Collection, Initial Access, and Exfiltration. However, in some instances, there is only one group that is skewing the count column, making certain tactics seem more frequent. For example, the Collection tactic is the second most frequently mapped tactic; however, breaking down the mapped techniques across each group, it is clear that Group A, which mapped to Collection 11 times, is carrying the weight of that variable’s frequency, as groups B, C, and the ATT&CK representative only mapped to it 1 time, 5 times, and 2 times, respectively. This is similarly seen with the tactic of Exfiltration. Group A mapped it 7 times, while groups B, C, and the ATT&CK representative only mapped it 2, 0, and 3 times, respectively. In this case it is interesting that one of the most frequent tactics when looking at the count was not mapped at all by one of the groups. While there are some stark differences in the mappings, there are some tactics in which all the groups were in agreement. For example, the Tactics of Execution, Privilege Escalation, and Impact all had less than 3 mappings, each with multiple groups identifying either zero or one applicable techniques. As noted earlier, Reconnaissance has slight variation in the mappings between the groups, but for the most part shows agreement among each group, including the ATT&CK representative. Looking at the differences in the ratios of the student-groups compared to those of the MTIRE representative, tactics such as Resource Development are interesting to examine. For example, under the tactic of Resource Development, the ATT&CK representative identified 0 techniques. However, group A identified 2, which is a third of all techniques in that tactic, and group C identified 3, which is half of all techniques in that tactic. Perhaps those two student groups have a different understanding of the techniques in that tactic than the ATT&CK representative has. Additionally, the similarity between Group B and the ATT&CK representative in this example (both with 0 mappings to Resource Development) might stem from Group B having low ratios of techniques mapped for nearly every category, apart from Reconnaissance. After identifying the proportion of tactics that the groups could map onto the ATT&CK framework, the groups offered insight on what their mappings might indicate, both about the case study and about the ATT&CK matrix. Group A reported that

The attacker in our case study did not rely on many technical methods to complete his job, therefore only a small proportion of the techniques mapped onto the matrix … he relied on more hands-on tactics to conduct the attack … Since he had to rely on more hands-on approaches, there is only a small proportion of the MITRE ATT&CK techniques that map over.

Group A believed that only a small proportion of techniques from the case study were able to be mapped onto the framework, but, interestingly, this group had the highest number of techniques mapped across all groups. This also indicates that Group A believed that the framework does not adequately cover hands-on approaches, which are commonly seen in SE behavior. Next, Groups B and C provided implications of their mappings specifically for the ATT&CK framework. In regard to the ATT&CK framework, Group B reported that

We are able to see a majority of the techniques reside during the Reconnaissance and Initial Access tactics. Based on that notion we can assume that … Tim … worked diligently to obtain the information he gathered … we are able to see the disparity in techniques used, starting more prevalent in the tactics regarding gathering information and becoming more scarce towards the tactics in which he could potentially be noticed and compromise himself.

As Group B noted, their mapping focused heavily on the early stages of the attack, specifically in the Reconnaissance and Initial Access tactics. In fact, of the group's 16 total techniques mapped, 9 of them are in the first 3 tactics in the framework. The group inferred that the attention to these two early tactics indicated that a much more careful approach was utilized during the attack. Thus, they believe that techniques toward the right-hand side of the matrix correlate with riskier behavior, while tactics on the left-hand side of the matrix include behavior more focused on information gathering. Meanwhile, Group C's implications about the ATT&CK matrix were that

The ATT&CK matrix more than sufficiently covered all of the technical aspects of this case study, which speaks to its strength. Our group had no trouble finding a technique or [subtechnique] for every technical maneuver. There were many techniques that exist on the matrix but were not applicable to our case study, which speaks to its depth and universal application. However, the matrix lacks depth when it comes to social engineering tactics. As social engineering is [a] critical [component] of cyberattacks, the framework could definitely improve in this area.

The thoughts expressed above imply that the matrix has considerable depth, especially in its technical measures, but lacks coverage of social engineering-related techniques. This is similar to the implications that Group A gathered from their own mapping. It is interesting that the 2 groups with the highest ratio of techniques mapped both spoke to the matrix not providing enough techniques regarding SE tactics; perhaps their large number of techniques mapped can be attributed to an attempt to make up for the lack of SE techniques by heavily mapping technical techniques. Lastly, Groups B and C provided implications for the completeness and richness of the case study. Regarding the case study, Group B reported that

The current ATT&CK Framework suggests that Tim had a good pretext and was able to adequately social engineer others without becoming compromised. The case study as presented dictates itself as very cautious ... In order to map this case study any better, more information would need to be provided.

Group B inferred that the ability to map this case study onto the ATT&CK framework implied that the behaviors of the adversary were well thought out, since the techniques he used during his attack are commonly used ones that appear in the framework. They believe that being able to map behaviors to the framework makes for a sophisticated attack. Meanwhile, Group C said that

Since we had no [difficulties] mapping the case study onto the matrix, the case study has shown [to] be of quality and complete. The amount of tactics and techniques that were able to be mapped shows the depth of the case study. The case study was not vague at any point and allowed our group to easily map every technical technique. If our case study contained more techniques, there would have been a stronger mapping, but that is not a fault of the case study as this was a [relatively] simple 'attack.'

Here, Group C acknowledges their strong technical mapping. However, they attribute the lack of an even stronger mapping to the task being 'simple'. This might be attributable to the lack of SE techniques in the framework: because such SE behavior does not map onto the framework, it is being thought of as a 'simple attack'. Groups B and C were also tasked with redesigning the ATT&CK matrix to achieve a better mapping (e.g., increase the mapping proportion, improve simplicity, minimize redundancy) and with justifying how their redesign is better. Group C noted that

The low proportion of one case study allows for many different kinds of studies to be easily mapped to the matrix. There are many different ways attacks can be carried out, and the [framework] reflects this well … we would not attempt to reduce redundancy because the places where techniques are repeated are purposeful and reflect the different stages of an attack where a strategy can be used.

Thus, the group decided to strengthen the matrix by adding an additional tactic called 'Social Engineering', which would fit into the matrix between the tactics of resource development and initial access. They justify this placement with the pattern that creating a pretext and engaging in social engineering techniques typically occurs during the early stages of an attack. Their 'Social Engineering' tactic would contain 7 techniques: Phishing, Vishing, Baiting, Spear Phishing, Quid Pro Quo, and Farming. Meanwhile, Group B's redesign did not create any new tactics or techniques, but they suggested other ways to improve the framework. First, they suggested that the framework keep up to date with advancing technology and physical methods. They also suggested adding an appropriate place in the framework for unplanned strategies. Lastly, Group B suggested that the framework use language that is more digestible for readers who are not familiar with terms from the technical domain, or provide clear definitions for the technical terms used. Perhaps the nature of the language might explain why the groups interpreted techniques differently and produced different mappings from each other. Regardless, having students reflect on both the case study and the ATT&CK matrix encourages them to consider the foundations upon which people examine adversarial behavior, and learning to improve those foundations will better allow people to defend against such behavior.
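Group C's redesign amounts to inserting a new column into the ordered list of tactics. A minimal sketch of that idea is shown below; the tactic ordering is taken from the matrix in Table 2, while the placement and the technique list follow Group C's proposal. This is a representation of the students' suggestion, not an official ATT&CK change.

```python
# Ordered tactics of the Enterprise matrix as used in the course project.
tactics = [
    "Reconnaissance", "Resource Development", "Initial Access", "Execution",
    "Persistence", "Privilege Escalation", "Defense Evasion",
    "Credential Access", "Discovery", "Lateral Movement", "Collection",
    "Command and Control", "Exfiltration", "Impact",
]

# Group C's proposed tactic and its techniques, placed between
# Resource Development and Initial Access.
proposed_tactic = "Social Engineering"
proposed_techniques = ["Phishing", "Vishing", "Baiting",
                       "Spear Phishing", "Quid Pro Quo", "Farming"]

insert_at = tactics.index("Initial Access")
redesigned = tactics[:insert_at] + [proposed_tactic] + tactics[insert_at:]

print(redesigned[:4])
# ['Reconnaissance', 'Resource Development', 'Social Engineering', 'Initial Access']
```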


5 Discussion

This section offers a discussion on the lessons learned from analyzing the results of the course project, student-suggested revisions for the ATT&CK matrix, biases to be mindful of when using mappings, and some limitations of this study.

5.1 Lessons Learned This proof-of-concept experiential learning project taught students three main lessons: (i) mappings are up for interpretation, as long as they can be justified with evidence, (ii) it is important to expand focus past the adversarial behaviors that are provided within a framework, and (iii) it is important to assess the foundations upon which people examine adversarial behavior. One of the most evident lessons learned is the notion that there is no single correct way to map adversarial behavior onto the ATT&CK matrix. Despite each group reading and mapping the behavior from the same SE case study, each group produced its own unique mapping with evidence from the text as empirical support. Mappings can also vary depending on the target audience and the objective of creating the mappings, whether that is for industry, government, or education, and whether that is for better understanding a specific adversary, creating defensive protocols/playbooks, or general practice with understanding human adversarial behavior, among any other objectives. Regardless of the reason, each group was able to provide excerpt evidence as proof that the mapped technique was in fact present in the case study, highlighting various potentially unique perspectives and approaches to defending these techniques. In looking at the selected text chosen to support the mapping to each tactic or technique, it also became clear that groups had different interpretations on each tactic and technique. Regardless of the official definition for tactic or technique that ATT&CK lists on their matrix, groups were able to interpret these terms to have definitions more relevant to their case study and to SE in particular. For example, ATT&CK’s official definition for the technique of ‘Trusted Relationship’, which is under the tactic ‘Initial Access’ includes “Organizations often grant elevated access to second or third-party external providers in order to allow them to manage internal systems as well as cloud-based environments.” This description indicates that the technique is in regard to using a trusted relationship to gain technical access; however, student groups interpreted this term differently in a way that fits the case study. This was often done in the context of social engineering behavior, which otherwise did not map onto the framework according to the official definitions. In this example, Group C also mapped onto the ‘Trusted Relationship’ technique and provided the following excerpt as evidence: “At this point a little friendly chit chat ensued and before you know it they were laughing and exchanging pleasantries.” In this context, the group interpreted the technique differently than the description provided on the ATT&CK
framework, so that it would be more applicable to SE adversarial behavior. Regardless of the interpretation, each group’s use of a technique was able to be backed up with excerpt evidence from the text. These interpretations and the student revisions and expansion of the framework lead to the second lesson from this project, which was the importance of widening one’s scope to look at behaviors that fall outside the ATT&CK framework. This case study affirmed that SE is relevant, both in the ATT&CK framework and within cybersecurity as a whole. The various mappings demonstrate the focus of SE on the reconnaissance stage, but also that techniques used throughout a SE engagement can apply to techniques or tactics that are typically used for technical cyber campaigns, and that techniques and tactics that are commonly used for technical intrusions are also relevant in SE activity. Further, even when certain behaviors, such as those involving SE, cannot be mapped onto the framework, they are still important to examine to properly understand and be able to defend against adversarial behavior. Finally, by requiring students to reflect on the richness of the case study and the usefulness of the ATT&CK matrix for such a case study, the students can begin to learn how to assess such resources. Assessment of such tools is necessary to improve them and subsequently improve how people understand adversarial behavior, leading to a better defense against it.

5.2 ATT&CK Matrix Revisions and Extensions

While teams were able to take different interpretations of some of the ATT&CK tactics and techniques to fit an SE perspective, it was clear that there was a gap in some social engineering techniques. As noted in part 1 of the results, some teams discussed various SE tactics and techniques that they would add onto the matrix to make it more useful for SE case studies, including some behaviors that they found within their case study that they could not map onto the matrix (even with reinterpreting terms), with some groups going as far as suggesting tactics dedicated solely for SE. This reaffirms that the matrix is not specifically made for SE adversarial behavior; nonetheless, groups were able to successfully map an entire SE case study onto the framework. This highlights the opportunity to further invest in explicitly capturing SE, and corresponding defensive practices, within ATT&CK and/or to augment the framework with a derivative parallel that represents the SE aspects of cyber (and potentially broader) adversary behaviors.

5.3 Biases

While it is important to note which tactics and techniques were mapped across this case study and any that are mapped in the future, it is also important to keep in mind biases associated with reporting, sharing, and analyzing adversarial behavior.


These include novelty bias, visibility bias, producer bias, victim bias, and availability bias [11]. Novelty bias occurs when techniques that are newer, more interesting, or more exciting tend to get reported more than standard techniques, which might get ignored. Visibility bias occurs when certain techniques are not visible to the person mapping; some techniques might only be visible at a certain stage of the attack, whether that is before, during, or after, and others may not be visible at all. Thus, these issues with visibility may result in biased mappings. Next, producer bias arises when reported mappings reflect a limited scope, depending on the extent to which organizations choose to publish and what their objectives are. Another type of bias is victim bias, which holds that certain victim organizations might be more likely to report incidents or mappings than others. The final type of potentially relevant bias is availability bias: people are more easily able to recall certain techniques over others, so those techniques will be reported more frequently. Because of any or all combinations of these biases, it may be deceptive to only view the prevalence of certain techniques when examining a set of mappings manually produced by analysts. Further, as this paper provided mappings of the same technique by different groups that were backed by different case study excerpts, certain techniques may reoccur during a campaign, yet the frequency or repetition of certain techniques cannot easily be deduced. Thus, these biases must be kept in mind, especially when using these mappings to generate defensive operations.

5.4 Limitations

It is important to note that there were some limitations to this study. First, this analysis is based on mappings of a single case study that was short in length and not as thorough as it ideally could have been. Thus, these findings cannot be generalized, as they could be attributable to particular attributes of the case study. A second limitation is the number of mappings that were compared in this analysis: the sample size is small, with only three mappings, which also makes it difficult to generalize the findings. A final limitation is that this course project was implemented across multiple semesters during which project instructions and elements varied. This, along with variations in the version of the ATT&CK framework that was used across semesters, complicated some of the comparisons. Along the same line, the comparisons between the student groups and the ATT&CK representative were limited, as the ATT&CK representative was not asked to complete the entire course project, only the mapping of tactics, techniques, and sub-techniques.


6 Conclusion

The authors encourage others to use the findings and discussion in this article to inspire future work, particularly exploring multi-disciplinary approaches to understanding and solving cybersecurity problems. This may include academics implementing similar projects in their classrooms with other, or more detailed, case studies. Additionally, further research could be done on mapping other cybercrimes outside the realm of strictly SE, although SE is incorporated into various cybercrimes. Finally, the authors suggest that future work give a stronger focus to highlighting, documenting, and sharing lessons learned regarding possible mitigations associated with relevant tactics and techniques. Educators could also encourage students to utilize their mapping of adversarial behavior and turn it into a defender mindset by identifying the various ways their unique knowledge, skills, and resources can stop or mitigate such behavior. For instance, the defensive approaches students could delve into span a broad spectrum of operations both in and outside of SE; this case study alone touched on techniques potentially mitigated by controls such as Multi-factor Authentication, Encrypting Sensitive Information, and User Training. Educators and practitioners should also allow students to develop their own mitigation strategies that may or may not be part of ATT&CK to allow for creativity and novel approaches. By sharing the logistics, results, and benefits of this course project, the authors hope to help educators in developing their own course projects and teaching various aspects of cybersecurity (including human factors) via experiential learning.

References

1. NICE: Cybersecurity Workforce Demand. Available at https://www.nist.gov/system/files/documents/2021/12/03/NICE%20FactSheet_Workforce%20Demand_Final_20211202.pdf (2021)
2. Hadnagy, C.: Social Engineering: The Science of Human Hacking. Wiley (2018)
3. ATT&CK Enterprise Matrix. https://attack.mitre.org/matrices/enterprise. Last accessed 31 March 2022
4. OWASP Security Knowledge Framework. https://owasp.org/www-project-security-knowledge-framework/. Last accessed 31 March 2022
5. OWASP Risk Assessment Framework. https://owasp.org/www-project-risk-assessment-framework/. Last accessed 31 March 2022
6. CVE Overview Page. https://www.cve.org/About/Overview. Last accessed 31 March 2022
7. CAPEC Homepage. https://capec.mitre.org/. Last accessed 31 March 2022
8. NIST Mobile Threat Catalogue. https://pages.nist.gov/mobile-threat-catalogue/background/. Last accessed 31 March 2022
9. Strom, B., Applebaum, A., Miller, D., Nickels, K., Pennington, A., Thomas, C.: MITRE ATT&CK: Design and Philosophy. Available at https://attack.mitre.org/docs/ATTACK_Design_and_Philosophy_March_2020.pdf (2020)
10. AMITT Design Guides. https://github.com/cogsec-collaborative/AMITT. Last accessed 31 March 2022
11. ATT&CK Sightings. https://attack.mitre.org/resources/sightings/. Last accessed 31 March 2022

Municipal Cybersecurity—A Neglected Research Area? A Survey of Current Research

Arnstein Vestad and Bian Yang

Abstract Municipalities are tasked with ensuring the cybersecurity of critical public services and functions in diverse areas such as safe water supply, healthcare, child protective services, and education with vastly different security requirements—all usually served from a common infrastructure with limited technical and organizational cybersecurity capabilities. This literature review identifies recent research on municipal and local government cybersecurity to identify current research areas, the state of the art, and the research methods used so far. We found research in the areas of smart cities, elections, human factors, operational technology, and crisis management. We also give suggestions for further research to develop better models for cybersecurity in cross-disciplinary organizations.

Keywords Cybersecurity · Municipalities · Governance · Literature review

1 Introduction

Municipalities have complex, interwoven ICT infrastructures created to support an equally diverse range of public services. But this complexity is in some way unavoidable; given the diverse set of requirements, municipal cybersecurity must address areas such as the running of schools, child protective services, critical water supply services, and health and social services. From a security point of view, complexity is an enemy, creating dark areas where attackers may infiltrate and establish footholds, posing great risks to the municipality’s ability to serve its public. Incidents like the cyberattack on the Norwegian municipality of Østre Toten [1] illustrate dramatically the great responsibility for protecting highly sensitive information and critical services placed on organizations that may be woefully
inadequately furnished to take on this responsibility. Cyberattacks on municipal infrastructures create risks of debilitating operational technology systems in water and sewage services, patient alerting systems in nursing homes, case management systems serving vulnerable populations in child protective services, or electronic patient journal systems in primary health care. New and increasingly stringent legal requirements and regulations, like the EU privacy regulation GDPR, the EU NIS Directive (the directive on security of network and information systems—the first EU-wide cybersecurity regulation), and other sector-specific regulations are also affecting the municipalities, increasing the burden of regulatory compliance and giving rise to legal risk and large economic penalties for non-compliance. At the same time, municipalities face increased needs for public services, such as increased spending on healthcare for the elderly.

Many established cybersecurity knowledge domains are already relevant for municipal activities. A municipal perspective would seem to add little to areas such as malware analysis, firewall configurations, intrusion detection, etc. Nevertheless, the consequences of breaches in security, and evidence from incidents, show both the challenge of establishing a necessary level of cybersecurity controls, the risks of highly interconnected infrastructures in multidisciplinary organizations, as well as the need for suitable methods of analyzing, communicating and understanding risk, and choosing and implementing cost-effective security strategies. This issue will be further developed in the discussion section.

A similar question might be raised when it comes to addressing the municipality as a whole, rather than specific areas of responsibility, for example, health care, water supplies, education, etc., with their differing concerns. The educational domain often has a focus on the rapid adoption of new tools, software, etc., to ensure a good pedagogical environment, with less emphasis on confidentiality except for certain types of information (in particular student-related health, social, and child protective services-related information communicated with other relevant authorities). Water supplies have operational technology solutions, often challenged by geographical dispersion and hardware/software solutions that are hard to secure and update, where reliability is a high concern, while confidentiality is less so. The municipal health care sector not only processes large amounts of sensitive personal information from health care journals, making confidentiality of prime importance, but is also investing heavily in remote health solutions where reliability will be important to ensure secure patient care. Nevertheless, in municipalities, these systems are usually tightly connected through shared infrastructure and reliance on the shared IT staff supporting the infrastructure and applications. These unique conditions lead to equally unique cybersecurity challenges—and this paper argues for the need for a better understanding of this complexity and how it might best be managed.

A related work [2] performed a review of research and the contributions of professional associations and industry to the cybersecurity of local government organizations—they also focus on cross-linkages that go outside or above the municipal level into urban infrastructures in general.
The study is from an American perspective, and while most, if not all, countries have a form of local government, the responsibilities and supporting governance structures, such as sectoral directorates and authorities,
vary from country to country. They conclude that there is a need for more research on what works and why, and suggest action research as a methodology for this. They also recommend more comparative studies between municipalities, as well as more government-industry-university partnerships to support cybersecurity innovation for the sector.

1.1 Research Motivation

In similarity with this study, [2] highlights the high level of interconnectedness and overlapping, open-ended systems as a source of risk for municipal infrastructures. The consequences of cybersecurity failures in municipal infrastructures, as evidenced in Norway by the ransomware attack on Østre Toten, show that the societal consequences of a worst-case scenario are real and dramatic. The need to understand both what has been researched and to identify new avenues of research has led to the following research questions that this study aims to answer:

• Q1: What is the state of the art in municipal and local government cybersecurity research?
• Q2: What are the research areas or concerns that current research investigates?
• Q3: What are the research methods used in research on municipal cybersecurity?
• Q4: When considering municipal cybersecurity, what areas require more research?

The rest of this paper will, in Sect. 2, describe the methodology of this literature study; Sect. 3 will present the identified studies, while Sect. 4 will give a discussion and propose further research areas.

2 Methodology

This study is guided by the principles and steps for performing a literature review as presented in [3], as well as the guidelines for conducting a systematic mapping survey in [4]. In comparison with a systematic literature review, which is focused on gathering and synthesizing evidence, a systematic mapping study is used to structure a research area. The two approaches are similar when it comes to searching and study selection, but have different goals, and the research questions of a mapping study are more general, as they focus on discovering research trends and research gaps as well as mapping and categorizing research contributions. The steps, following the methodology described in [4], consist of identifying the need for a mapping survey and the appropriate research questions; developing and evaluating the search, the inclusion and exclusion criteria, and the quality criteria; performing the data extraction and classification; and conducting and reporting the mapping.


2.1 Selection of Studies

The databases used in this study were ScienceDirect, IEEE digital library, ACM digital library, Springer database, Web of Science, and AIS electronic library. The libraries were chosen to give a broad but technically focused source of material. The following search term was used: (“municipal” OR “municipality”) AND “cybersecurity”.

2.2 Inclusion and Exclusion Criteria

We wanted this study to focus on papers that emphasize the municipality itself. Papers focusing primarily on state/national levels are, therefore, excluded, unless the municipal focus is significant. The papers should also have the municipalities as a central actor or theme. We also exclude papers that do not treat cybersecurity as the main topic or concern of the paper (for example, papers mainly about big data, where cybersecurity is one of many concerns). To focus on recent research, the study was limited to 2018–2021, with peer-reviewed papers from academic journals and conference proceedings as quality criteria. The following inclusion and exclusion criteria were used (Table 1).

Table 1 Inclusion and exclusion criteria

Inclusion criteria:
• Focused on local/municipal government as an important subject AND focused on cybersecurity as the main concern
• Published paper from peer-reviewed journal or conference
• From the year 2018 to 2021

Exclusion criteria:
• State/national level as the subject of interest
• Cybersecurity as a peripheral concern
• Municipal subject merely a background theme

2.3 Search Results and Reduction Process

The keyword search resulted in Table 2. 244 papers, including 1 retracted paper, were removed after title screening of the original search result that consisted of 627 papers. After title screening, 383 papers were left for record (including abstract) screening. Abstract screening left 34 papers for full-text screening. The relatively significant reduction illustrates that the search terms are quite generic and frequently used in papers on other topics. In the full-text screening step, the full-text papers were read and further considered for inclusion or exclusion based on the criteria. Through this step, another 13 papers were considered out of scope, leaving a final count of 21 papers. In addition to these 21 papers, constructive input from peer review pointed out other terms used for similar local government organizations instead of “municipalities”—we performed an additional search of the databases and identified 6 papers to a total of 27.

Table 2 Search results

Database               Number of results
ScienceDirect          276
IEEE digital library   28
Web of Science         43
ACM Digital Library    38
AIS Digital Library    28
SpringerLink           214
Total                  627 papers for deduplication and title review
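As a plain-arithmetic cross-check of the reduction process described above (an illustrative sketch only, not part of the authors' methodology), the stage counts reconcile as follows:

```python
# Screening funnel reported in Sect. 2.3; the asserts simply re-check the arithmetic.
database_hits = 627                   # combined results across the six databases (Table 2)
after_title = database_hits - 244     # title screening removed 244 records (incl. 1 retracted)
assert after_title == 383
after_abstract = 34                   # records retained for full-text screening
after_fulltext = after_abstract - 13  # 13 papers excluded as out of scope
assert after_fulltext == 21
final_sample = after_fulltext + 6     # additional search with alternative local-government terms
assert final_sample == 27
```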

3 Literature Review

Of the 27 papers in the final review, we identified several topics by a keywording strategy [5] based on identifying keywords from the abstracts of the chosen papers. By far the largest was “smart city” with 10 papers, followed by management and governance with 5, human factors with 4, and elections with 4. In the following review, the findings are divided according to these categories.

3.1 Smart Cities

The most frequently explored theme by far is smart cities. Smart city is a nebulous term, used for diverse themes concerning social, environmental, and economic development in an urban setting, often designed around information technology and Internet of things-enabled sensor technologies, supporting mobility, efficient city management, interconnected health services, etc. [6]. Since smart cities are by nature also municipalities, this link is a natural one. Pelton and Singh [7] give a general overview of security issues that smart city planners should consider, especially those connected to network security and vulnerabilities. In [6], the authors present a literature review of security, privacy, and risk of smart cities, identifying several clusters of research themes, such as privacy and security of mobile devices and services, smart city infrastructure, smart power systems, smart healthcare, frameworks, algorithms and protocols, operational threats, use and adoption by citizens, as well as the use of blockchain. From this, they also develop a “smart city interaction framework” where security, privacy, and risk are discussed

Fig. 1 Distribution by year and by thematic content

in a more holistic manner as it relates to key challenges for smart cities such as trust, operational and transitional issues, and technological and sustainability issues (Fig. 1). Also, from a more policy-oriented perspective, [8] investigates the underdeveloped focus on management and policy when it comes to securing smart cities and the need for a dual focus on both the technological and the policy level. The paper also provides a review of privacy and security vulnerabilities imparted by the generic architecture of the smart city, such as physical level vulnerabilities connected to device level protection and mobile crowd sensing, communication level vulnerabilities, data processing, and storage level vulnerabilities. This is followed by an overview of domain-specific security challenges such as smart health, smart transportation, smart grid, smart home, and public safety and emergency management. They discuss the privacy and security of smart cities from the perspective of policymaking and regulation and technical aspects, pointing out the need for a holistic approach incorporating legal and organizational issues and technology. Smart cities are made up of a multitude of organizations, stakeholders, technological standards, protocols, and solutions as well as vendors that produce them. Third-party risk management and well-defined security requirements, as well as who is responsible for meeting the requirements, are necessary. Vitunskaite et al. [9] reviewed 93 different standards relevant to the smart city, of which 13 consider security, and performed a comparative case study of three large smart city projects to investigate their governance models, security measures, technical standards, and third-party management. They suggest that government should mandate standards and minimum security requirements and requirements for third party and supply chain management. The inherent complexity and high level of integration on technical, organizational, and societal level of smart cities, and their inherent risk suggest the need for a holistic risk management process. Ullah et al. [10] reviewed 796 papers to propose a multilayered technology-organization-environment (TOE-based) risk management framework for sustainable smart city governance. They identify 56 key risks grouped
into three categories: technological, organizational, and external environment, to help both researchers and practitioners focus on the top risks of smart city governance. Cybersecurity needs to be “built-in” and not “bolted on” as an afterthought. The authors of [11] conducted a case study of four smart city projects to identify a common set of principles for security and privacy to serve as best practices and guidelines that communities could use. The guidelines identified the areas of specific technology usage, implementation of a cybersecurity management process and framework, and cybersecurity expertise and public–private partnerships. In [12], the authors identify cyber situational awareness as a critical issue in securing smart cities. They investigate through a literature survey the availability and sufficiency of data-driven techniques to support cyber situational awareness in the context of smart cities. The techniques are classified as “system abstraction”, “risk and vulnerability assessment”, and “attack detection methods”, looking into the theoretical background (such as graph theory, neural networks, simulations, etc.), data input, accuracy, and scope of the techniques as well as their support for visual representation. Smart cities are arguably distributed, and the use of blockchain as a distributed mechanism to address security requirements for smart cities was addressed by several authors including [6, 8], and [10]. Paul et al. [13] proposes a smart access control framework in a public and a private blockchain for smart city applications, taking into account the need for low resource consumption for IoT devices in the smart city. The authors of [14] conducted a bibliometric review of literature on blockchain in the context of smart cities, identifying key research and influential studies. They identified research in key areas such as the use of IoT for security in sensor data collection, privacy for machine learning, smart contracts for transparent and reliable data sharing, and blockchain use for empowering smart communities and fostering sustainability in smart cities. Privacy is also an essential issue in smart city development and is also heavily regulated. The EU General Data Protection Regulation (GDPR) imposes strict rules that affect how smart city technology can be utilized when it processes personal data and high fines for lack of compliance. One of the primary measures the GDPR imposes is the need to perform Data protection impact assessments (DPIA) when processing entails high risks to individuals’ rights and freedoms. Developing the DPIA can be a complex and costly undertaking, [15] suggests a smart city topology that aids in clustering services based on data protection to make the DPIA process more efficient.

3.2 Operational Technology

Operational technology (OT) is heavily utilized in the municipalities’ responsibility for water supply and wastewater treatment. Lindstrom et al. [16] point out that the cybersecurity of OT systems is an important issue: OT systems are often required to be dependable and have high up-time, are often rarely patched, and have other typical security vulnerabilities. While many organizations have an IT security policy, few have an OT policy, and the researchers, through an in-depth qualitative study and action research, develop an OT policy for a Swedish municipality. Gouglidis et al. [17] also discuss OT technologies and provide a game-theoretical approach to the problem of choosing an optimal defense strategy based on a threat model for a water utility system. The framework was demonstrated using data from an industrial control system (ICS) test-bed.

3.3 Elections

Municipalities are a part of the democratic structure of the nation and are managed through political and democratic processes, of which elections are a key issue. While the cybersecurity requirements and other aspects of election integrity are not governed directly by the municipalities, municipalities are often responsible for the implementation and running of the elections. Three papers addressed the theme of election cybersecurity. The author of [18] describes the potential value of electronic voting and the cybersecurity responsibilities, including incident preparedness plans, that municipalities would need to have, mainly from a legal perspective. Also from a legal perspective, the authors of [19] present findings from a review of online voting in Ontario, showing issues with weak voter authentication, poor transparency of election results, and a general lack of disaster preparedness. The authors of [20] describe the development of cybersecurity awareness training specific to election integrity for poll workers in the municipality, and how specific and relevant training increases the effectiveness of cybersecurity awareness.

3.4 Human Issues and Cybersecurity Awareness

Accounting for the human element in cybersecurity is critical, and no less so for municipalities. While [20] discussed cybersecurity awareness in connection with elections, [21] looked into how the Swedish public sector, including municipalities, responded to the changing threat landscape connected with the Covid-19 pandemic, and used communication to enhance employee cybersecurity awareness. Among the findings were data showing that 74% of municipalities have outsourced cybersecurity or have less than one dedicated staff member for it, and 74% of municipalities report not yet having implemented cybersecurity work. The skills shortage in cybersecurity has been a frequent theme in news media; the author of [22] describes the application of competency-based education (CBE) and the use of the NIST NICE (National Initiative for Cybersecurity Education) framework in a project to enhance cybersecurity capabilities in a metropolitan region in the U.S. By using a formal training framework, the local government will be able to assess and direct training activities and to assess what competencies are most needed. CBE allows this training to be outcomes-based and organized around the relevant knowledge, skills, abilities, and tasks defined by the NICE framework. The research also describes creating clear learning pathways and using digital badging and e-portfolios to give motivation, clarity, and a good fit between the cybersecurity needs of the organization and training outcomes. Also connected to human issues, [23] investigated the relationship between municipalities affected by a ransomware attack and the security behaviors of the population in or near the municipality, suggesting an effect of cybersecurity incidents extending outside the municipality: people who live close to an attacked community are more likely to take preventive actions to reduce their susceptibility to ransomware.

3.5 Crisis Management

Despite all protective measures, cybersecurity incidents occur, and preparedness is key to managing the ensuing crisis. The authors in [24] aimed to support the development of educational simulations and related experiential learning exercises that help prepare city and public infrastructure personnel to respond effectively to cybersecurity attacks. They conducted 8 expert interviews involving 12 cybersecurity experts from federal, state, and city organizations, as well as academics with relevant expertise. They organized their findings into crucial learning outcomes, scenarios, roles, and issues that simulation designers should consider. The authors in [25] analyzed municipalities’ responsibilities when handling crises in general and cyber-incidents in particular, and suggest a crisis management model and a tentative design to be tested when executing cyber training and exercises in a training environment such as the Norwegian Cyber Range.

3.6 Management and Governance

Managerial aspects, such as governance, investments, and sourcing, have also been studied. The authors of [26] conducted a nationwide survey of the cybersecurity practices of local governments in the United States. They found inadequate investments; low use of tools, recommended practices, and appropriate policies; low awareness of standards; and limited ability to address cyber events. The paper calls not only for more management awareness and investments, but also for researchers to investigate cybersecurity at the grassroots level. The call for investments is also supported by [27], which utilized data from a cyber incident database with 865 incidents affecting U.S. local governments and departments between 2006 and 2017, finding a significant reduction in incidents correlated with IT investments in cybersecurity, and that this effect was increasing over time (making investments more effective).


The authors of [28] investigated the decisions of local governments to outsource cybersecurity services and how cybersecurity outsourcing differs from other IT outsourcing activities because of complexity and information asymmetry—they found a clear trend toward outsourcing despite arguments against this. One specific type of outsourcing is cloud services—the authors of [29] describe a case study of Australian local government authorities and the factors used to assess cloud requirements proposing a conceptual cloud computing security requirements model with four components—data security, risk assessment, legal and compliance requirements, and business and technical requirements. The authors of [30] used the NIST CSF to target three levels, executive, management, and technical to ascertain an organization-wide understanding of cybersecurity risks. The paper also describes other related cybersecurity standards. The research describes a process to evaluate cybersecurity maturity in local government organizations and presents measurable metrics and improvement steps.

3.7 Municipal Technology

The authors of [31] investigated the use of the security protocol HTTPS by Portuguese municipalities by scanning the municipalities’ web pages and grading their use of certificates and protocols. The study was a follow-up of a previous study to identify drivers of or correlations with municipal cybersecurity performance, where a weak correlation between population and tax level was found. No similar correlation could be found in the follow-up, suggesting that other factors, such as awareness, have a greater effect.
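To illustrate the kind of web-facing measurement such studies perform, the hedged Python sketch below (not the tooling used in [31]; the domain shown is hypothetical) connects to a municipal web server and records whether HTTPS is offered and which protocol and certificate it presents:

```python
import socket
import ssl

def check_https(hostname: str, timeout: float = 5.0) -> dict:
    """Attempt a TLS handshake on port 443 and report basic certificate/protocol facts."""
    context = ssl.create_default_context()
    try:
        with socket.create_connection((hostname, 443), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
                return {
                    "host": hostname,
                    "https": True,
                    "protocol": tls.version(),          # e.g. 'TLSv1.2' or 'TLSv1.3'
                    "expires": cert.get("notAfter"),    # certificate expiry date
                    "issuer": dict(item[0] for item in cert.get("issuer", ())),
                }
    except (OSError, ssl.SSLError) as exc:
        # No HTTPS, unreachable host, or an invalid/expired certificate.
        return {"host": hostname, "https": False, "error": str(exc)}

if __name__ == "__main__":
    # Hypothetical municipal domain, for illustration only.
    print(check_https("www.example-municipio.pt"))
```

In practice, a study of this kind would iterate over the full list of municipal domains and grade the collected results against a scoring rubric.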

3.8 Research Methods

A variety of research methods were used in the identified studies, as listed in Table 3. Qualitative methods such as descriptive case studies, interviews, and document studies seem to be the most common approach. While there is a need for quantitative data to compare and contrast what works and what does not, many of the specific issues of municipal cybersecurity, such as organization, culture, capabilities, and policies, are likely more suited to qualitative methods.

Table 3 Research methods

Research method                                                                    Counts  References
Technical information gathering                                                    1       [31]
Literature studies                                                                 7       [2, 6, 8–10, 12, 14]
Case studies                                                                       9       [9, 11, 15, 17, 19, 20, 22, 29, 30]
Quantitative studies (surveys)                                                     4       [21, 23, 26, 27]
Other qualitative methods (expert interviews, document studies, action research)   6       [11, 15, 16, 18, 24, 28]

4 Discussion

Leavitt [23], in his study of organizational change, developed the model later known as Leavitt’s diamond. The model is frequently used in socio-technical analysis, as it incorporates both social (structures, people, tasks) as well as technical elements, and how they interact. Since municipal cybersecurity is, to a great extent, an organizational issue, concerned with an organization tasked with ensuring the security of its operation, the diamond model serves as an interesting analysis framework to understand how the papers contribute to the different parts of the model:

• Structure: Papers with a focus on organizational structure, communication, policies, responsibilities, as well as risks at the organizational level and (non-technical) standards
• Tasks: Papers focusing on (security-related) tasks and activities that the organization is performing to fulfill (or support) its mission
• People: Papers with a focus on the human aspects of cybersecurity, awareness, culture, as well as attitudes and skills
• Technology: Computer systems, software, and devices, but in addition methods, frameworks, etc., used as tools to support cybersecurity work in the organization

4.1 Thematic Contributions

Of the 27 papers, 20 were considered to contribute primarily to structure (in relation to the areas of structure, task, people, and technology). They present a high-level overview of risks in their area (smart cities, elections, etc.), discuss policy issues, present regulations and management frameworks, give a legal perspective, or address organizational topics such as roles and responsibilities and outsourcing strategies. Four papers were more focused on the human aspect, such as awareness training and communication about cybersecurity to municipal employees and the effect of awareness among citizens after municipal cybersecurity incidents. Eight papers contributed to the technology aspect, as defined above—of these, four could be considered technology in a narrower sense, such as analysis of technical issues, technical methods for security analysis, game-theoretic approaches, as well as strategies for the use of blockchain. Four other papers contributed under a more open definition of technology, with a management framework or a more analytical framework for cybersecurity and frameworks for risk analysis and data protection analysis.


4.2 Identified Gaps and Need for Research

While the identified literature is highly relevant to municipal cybersecurity and approaches the issue from differing perspectives, several areas have received less attention. A major focus has been placed on “smart cities” and related privacy and security challenges, while several authors point to the risks of interconnecting legacy technologies not built for a new threat landscape. This illustrates a focus on the “new and shiny”, while both incidents and surveys demonstrate that many municipalities are not yet able to cope with today’s challenges. Cybersecurity in the municipal sector is of vital importance to the delivery of critical services in a democratic society, and research needs to address the fundamental nature of the municipal organization, with its high interconnectedness of infrastructure, governance, and personnel performing tasks in areas with widely differing security requirements, cultures, and maturity.

Lack of focus on cross-organizational interactions
In the reviewed literature, limited attention is given to the structural issue that most significantly describes municipalities—the wide span in tasks, and how this affects cybersecurity. On the organizational level, we need a better understanding of how organizations with a wide span of tasks, with associated variety in technical solutions, cybersecurity threat landscapes, and applicable cybersecurity solutions (technical, human, and organizational), need to organize their security management and operational capabilities. The municipalities’ need to provide services, from schools to health and water supplies, with hugely different cybersecurity requirements, as economically efficiently as possible, is a complex socio-technical issue. And if they are not able to ensure the security of existing services, building smart cities on top of a weak foundation might not turn out to be very smart.

Lack of focus on tasks and capabilities
None of the papers contribute significantly to the tasks component of Leavitt’s diamond by discussing activities and tasks relevant to cybersecurity, such as vulnerability management, access control management, security monitoring, and security event analysis, especially in cross-functional organizations. Research could be improved by addressing these tasks from a socio-technical perspective as a set of cybersecurity capabilities, including both human and technological aspects, in a municipal enterprise cybersecurity architecture. There is a need for a better understanding of how to manage the broad scope of systems and responsibilities in a municipality, which poses an extra challenge both in relation to technical integration and cross-functional cooperation (for example, between IT departments and OT departments). The high complexity of cybersecurity in municipal organizations also raises the need for better tools to understand and manage this complexity, including tools that enable a better understanding of cross-organizational risks and risks connected to the high interdependence of municipalities on complex supply chains and ecosystems. Simulation technology can be a possible avenue of research in this area by allowing the study of cybersecurity as an emergent phenomenon in a complex environment.

5 Conclusion

This mapping study was conducted to provide an overview of the state of research on municipal cybersecurity. Municipalities are tasked with managing cybersecurity in complex interconnected infrastructures, not only within their own sphere of control but also in connection with others, such as governmental IT services, third-party vendors of cloud services, and, in providing remote health services, even private homes. The survey showed that while cybersecurity issues relevant to municipalities are discussed, most significantly in the context of the emerging area of smart cities, research on important areas, such as the management of cybersecurity in cross-functional organizations and the cybersecurity capabilities, complexity, and risk such organizations entail, is still lacking.

Acknowledgements This work has received funding from the Research Council of Norway through the SFI Norwegian Centre for Cybersecurity in Critical Sectors (NORCICS) project no. 310105.

References 1. KPMG: IKT-sikkerhet i Østre Toten kommune forut for dataangrepet 9. januar 2021. IKT-sikkerhet i Østre Toten kommune forut for dataangrepet 9. januar 2021, Aug. 26, 2021. https://www.ototen.no/_f/p1/i5689ceb7-72b4-44d0-970c-a5c4828047e5/endeligrapport-26082021-kpmg_sladdet.pdf. Accessed 2 Sept 2021 2. Preis, B., Susskind, L.: Municipal cybersecurity: more work needs to be done. Urban Aff. Rev. (2020). https://doi.org/10.1177/1078087420973760 3. Fink, A.: Conducting Research Literature Reviews: From the Internet to Paper, 5th edn. Sage, Los Angeles (2020) 4. Petersen, K., Vakkalanka, S., Kuzniarz, L.: Guidelines for conducting systematic mapping studies in software engineering: an update. Inf. Softw. Technol. 64, 1–18 (2015). https://doi. org/10.1016/j.infsof.2015.03.007 5. Petersen, K., Feldt, R., Mujtaba, S., Mattsson, M.: Systematic mapping studies in software engineering (2008). https://doi.org/10.14236/ewic/EASE2008.8 6. Ismagilova, E., Hughes, L., Rana, N.P., Dwivedi, Y.K.: Security, privacy and risks within smart cities: literature review and development of a smart city interaction framework. Inf. Syst. Front. (2020). https://doi.org/10.1007/s10796-020-10044-1 7. Pelton, J.N., Singh, I.B.: Cyber defense in the age of the smart city. In: Smart Cities of Today and Tomorrow, pp. 67–83. Springer International Publishing, Cham (2019). https://doi.org/10. 1007/978-3-319-95822-4_4 8. Habibzadeh, H., Nussbaum, B.H., Anjomshoa, F., Kantarci, B., Soyata, T.: A survey on cybersecurity, data privacy, and policy issues in cyber-physical system deployments in smart cities. Sustain. Cities Soc. 50, 101660 (2019). https://doi.org/10.1016/j.scs.2019.101660 9. Vitunskaite, M., He, Y., Brandstetter, T., Janicke, H.: Smart cities and cyber security: are we there yet? A comparative study on the role of standards, third party risk management and
security ownership. Comput. Secur. 83, 313–331 (2019). https://doi.org/10.1016/j.cose.2019.02.009
10. Ullah, F., Qayyum, S., Thaheem, M.J., Al-Turjman, F., Sepasgozar, S.M.E.: Risk management in sustainable smart cities governance: a TOE framework. Technol. Forecast. Soc. Chang. 167, 120743 (2021). https://doi.org/10.1016/j.techfore.2021.120743
11. Dickens, C., Boynton, P., Rhee, S.: Principles for designed-in security and privacy for smart cities. In: Proceedings of the Fourth Workshop on International Science of Smart City Operations and Platforms Engineering, pp. 25–29, New York, NY, USA (2019). https://doi.org/10.1145/3313237.3313300
12. Neshenko, N., Nader, C., Bou-Harb, E., Furht, B.: A survey of methods supporting cyber situational awareness in the context of smart cities. J. Big Data 7(1), 92 (2020). https://doi.org/10.1186/s40537-020-00363-0
13. Paul, R., Ghosh, N., Sau, S., Chakrabarti, A., Mohapatra, P.: Blockchain based secure smart city architecture using low resource IoTs. Comput. Netw. 196, 108234 (2021). https://doi.org/10.1016/j.comnet.2021.108234
14. Rejeb, A., Rejeb, K., Simske, S.J., Keogh, J.G.: Blockchain technology in the smart city: a bibliometric review. Qual Quant (2021). https://doi.org/10.1007/s11135-021-01251-2
15. Vandercruysse, L., Buts, C., Dooms, M.: A typology of Smart City services: the case of data protection impact assessment. Cities 104, 102731 (2020). https://doi.org/10.1016/j.cities.2020.102731
16. Lindstrom, J., Viklund, P., Tideman, F., Hallgren, B., Elvelin, J.: Oh, no—not another policy! Oh, yes—an OT-policy!, vol. 81, pp. 582–587 (2019). https://doi.org/10.1016/j.procir.2019.03.159
17. Gouglidis, A., König, S., Green, B., Rossegger, K., Hutchison, D.: Protecting water utility networks from advanced persistent threats: a case study. In: Rass, S., Schauer, S. (eds.) Game Theory for Security and Risk Management, pp. 313–333. Springer International Publishing, Cham (2018). https://doi.org/10.1007/978-3-319-75268-6_13
18. Ivanova, K.: Online voting as an element of cybersecurity of megacities. Pravoprimenenie-Law Enforcement Rev. 3(2), 31–37 (2019). https://doi.org/10.24147/2542-1514.2019.3(2).31-37
19. Cardillo, A., Akinyokun, N., Essex, A.: Online voting in Ontario municipal elections: a conflict of legal principles and technology? vol. 11759, pp. 67–82 (2019). https://doi.org/10.1007/978-3-030-30625-0_5
20. Schürmann, C., Jensen, L.H., Sigbjörnsdóttir, R.M.: Effective cybersecurity awareness training for election officials. In: Krimmer, R., Volkamer, M., Beckert, B., Küsters, R., Kulyk, O., Duenas-Cid, D., Solvak, M. (eds.) Electronic Voting, vol. 12455, pp. 196–212. Springer International Publishing, Cham (2020). https://doi.org/10.1007/978-3-030-60347-2_13
21. Andreasson, A., Artman, H., Brynielsson, J., Franke, U.: A census of Swedish public sector employee communication on cybersecurity during the COVID-19 pandemic. In: 2021 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), pp. 1–8 (2021). https://doi.org/10.1109/CyberSA52016.2021.9478241
22. Pike, R.: Enhancing cybersecurity capability in local governments through competency-based education. In: Hawaii International Conference on System Sciences 2021 (HICSS-54) (2021). https://aisel.aisnet.org/hicss-54/dg/cybersecurity_and_government/3
23. Marett, K., Nabors, M.: Local learning from municipal ransomware attacks. In: AMCIS 2020 Proceedings (2020). https://aisel.aisnet.org/amcis2020/data_science_analytics_for_decision_support/data_science_analytics_for_decision_support/6
24. Gedris, K., et al.: Simulating municipal cybersecurity incidents: recommendations from expert interviews. In: Hawaii International Conference on System Sciences 2021 (HICSS-54) (2021). https://aisel.aisnet.org/hicss-54/dg/cybersecurity_and_government/5
25. Østby, G., Katt, B.: Cyber crisis management roles—a municipality responsibility case study. In: Murayama, Y., Velev, D., Zlateva, P. (eds.) Information Technology in Disaster Risk Reduction, vol. 575, pp. 168–181. Springer International Publishing, Cham (2020). https://doi.org/10.1007/978-3-030-48939-7_15


26. Norris, D.F., Mateczun, L., Joshi, A., Finin, T.: Managing cybersecurity at the grassroots: Evidence from the first nationwide survey of local government cybersecurity. J. Urban Aff. 43(8), 1173–1195 (2021). https://doi.org/10.1080/07352166.2020.1727295
27. Kesan, J.P., Zhang, L.: An empirical investigation of the relationship between local government budgets, IT expenditures, and cyber losses. IEEE Trans. Emerg. Top. Comput. 9(2), 582–596 (2021). https://doi.org/10.1109/TETC.2019.2915098
28. Nussbaum, B., Park, S.: A tough decision made easy? Local government decision-making about contracting for cybersecurity. New York, NY, USA (2018). https://doi.org/10.1145/3209281.3209368
29. Ali, O., Shrestha, A., Chatfield, A., Murray, P.: Assessing information security risks in the cloud: a case study of Australian local government authorities. Gov. Inf. Q. 37(1), 101419 (2020). https://doi.org/10.1016/j.giq.2019.101419
30. Ibrahim, A., Valli, C., McAteer, I., Chaudhry, J.: A security review of local government using NIST CSF: a case study. J. Supercomput. 74(10), 5171–5186 (2018). https://doi.org/10.1007/s11227-018-2479-2
31. Gomes, H., Zúquete, A., Dias, G.P., Marques, F., Silva, C.: Evolution of HTTPS usage by Portuguese municipalities. In: Rocha, Á., Adeli, H., Reis, L.P., Costanzo, S., Orovic, I., Moreira, F. (eds.) Trends and Innovations in Information Systems and Technologies, vol. 1160, pp. 339–348. Springer International Publishing, Cham. https://doi.org/10.1007/978-3-030-45691-7_31

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Novel and Emerging Cyber Techniques

Near-Ultrasonic Covert Channels Using Software-Defined Radio Techniques

R. Sherry, E. Bayne, and D. McLuskie

Abstract Traditional cybersecurity practices rely on computers only communicating through well-defined, expected channels. If malware were developed to use covert channels, such as one created using ultrasonic sound, then this could bypass certain security measures found in computer networks. This paper aims to demonstrate the viability of acoustic covert channels by creating a low-bandwidth ultrasonic frequency channel utilising software-defined radio (SDR) techniques. Previous work was evaluated to identify the strengths and weaknesses of its implementations. Software-defined radio techniques were then applied to improve the performance and reliability of the acoustic covert channel. The proposed implementation was then evaluated over a range of hardware and compared to previous implementations based on the attributes of throughput, range, and reliability. The outcome of this research was an ultrasonic covert channel implemented in GNU Radio. The proposed implementation was found to provide 47% higher throughput than previous work while using less signal bandwidth. Utilising software-defined radio techniques improves the performance of acoustic covert channels over previous implementations. It is expected that this technique would be effective in an office environment, but less effective in high-security or server environments due to the lack of audio equipment available in these spaces.

Keywords Ultrasonic data transfer · Covert channel · Software-defined radio · SDR · GNU radio



1 Introduction When a malicious actor attacks a computer system, often their goal is to obtain and extract data. Compromised data can be sold on the black market or leveraged for strategic and economic advantages. The United States Council of Economic Advisers (CEA) estimated that malicious cyber activities cost the US economy between 57 and 109 billion US dollars in 2016 alone [1]. The large reputational and economic cost of cyberattacks has caused businesses to take cybersecurity much more seriously. However, as cybersecurity is an adversarial field, this will naturally lead to attackers using more advanced techniques in attempts to successfully infiltrate networks. Lampson [2], in the first recorded work on covert channels, defines covert channels as a “communication channel that is not intended for information transfer at all”. Covert channels could be used to extract data from air-gapped networks—where a device is physically separated from the network to prevent leakage of valuable data. The ability to cross air-gapped networks has been seen in attacks such as Stuxnet, where malware was spread through non-network means to air-gapped SCADA operator systems. However, malware would not be able to perform data extraction without the use of covert channels. Beyond bridging air-gapped networks, covert channel attacks have the potential to be an effective attack vector due to the lack of consideration of covert channels in standard security policies. Acoustic covert channels have the potential to be particularly effective due to the common existence of channel requirements (i.e., a speaker and a microphone), and their potential for high rates of data transfer. The use of a covert channel in an attack has the potential to be particularly devastating, as it makes discovery and incident response considerably more difficult as command and control signalling and data exfiltration could be orchestrated without any data being sent over a monitored network. This would allow it to avoid communication traces and logs being left on network monitor hardware, such as intrusion detection systems (IDS). This paper aims to improve on previous work in the field by utilising softwaredefined radio techniques to create a high-throughput method of transferring data utilising a near-ultrasonic audio spectrum. Previous work in this area has neglected to use modern digital signal processing techniques, which limits the potential range and throughput of the demonstrated solutions of the work. Therefore, it can be hypothesised that by utilising modern software-defined radio techniques and digital signal processing techniques, the performance of an acoustic covert channel can be increased. In this paper, we present three main research contributions: (i) Presenting a novel acoustic covert channel approach to transferring data, (ii) Evaluating the proposed model to previous covert channel work, and (iii) Examining the threats that an acoustic covert channel poses to traditional and high-security networks. The background section will explore the digital signal processing techniques required to create a communication channel and introduce GNU Radio [3]—the


software used to present the proposed SDR approach. The section will then present related work in covert data transfer. The methodology section will outline the development process and the testing method used to establish the performance of the covert channel. The results section will summarise the performance testing, while the discussion section will analyse the results of the proposed method and compare them with previous work. Finally, conclusions are drawn from the research findings.

2 Background

In the first recorded work on covert channels, Lampson discusses the difficulty of restricting a program’s ability to transmit information due to covert channels. These channels have the potential to bypass security measures implemented on a system because it is difficult to consider every potential avenue in its design. For an acoustic channel to be covert, the frequency of sound it uses must be inaudible over ambient noise. The highest frequency that a human can hear varies significantly with age; however, this study specifically targets the adult (over 18 years old) hearing range. For this age group, sensitivity begins to drop off at around 16 kHz, with frequencies over 20 kHz being imperceptible over average ambient background noise [4]. However, there is also an upper limit to the frequency that can be used by a standard computer. The maximum frequency that can be reproduced by a piece of equipment is given by half of its sample rate, known as the “Nyquist limit”. Most audio equipment in commodity computers operates at a 48 kHz sample rate; the Nyquist limit is therefore 24 kHz. Acoustic covert channels have been studied in prior literature and require only commonplace hardware to construct—a speaker and a microphone. A study in 2018 demonstrated that, by using low-level sound card driver functions, an output device could be used as an input device: a sound wave hitting a speaker induces a current much as it would in a microphone. This allows an acoustic covert channel to be constructed even in higher-security environments where microphones are prohibited. Three main properties can be varied in a wave to encode data: phase, frequency, and amplitude. For example, a signal at 19 kHz could represent a “0” symbol and a signal at 20 kHz a “1” symbol. This type of modulation is known as “binary frequency shift keying” (BFSK) and is the basis for most previous work in this field. BFSK has the issue of frequency changes occurring while the amplitude of the wave is not zero. The resulting discontinuities cause spectral distortion, as seen in Fig. 1, which can extend far beyond the original transmission frequency and cause audible artefacts throughout the transmission. These audible artefacts greatly reduce the covertness of the channel, can interfere with adjacent channels, and decrease reliability.
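As a concrete illustration of the encoding just described, the following minimal numpy sketch generates a BFSK signal from a bit string using the 19 kHz/20 kHz tone pair and the 48 kHz sound-card rate mentioned above; it is not the authors' implementation, and the baud rate is only an assumed example value.

```python
import numpy as np

FS = 48_000               # commodity sound-card sample rate (Nyquist limit 24 kHz)
BAUD = 480                # assumed symbol rate for illustration
F0, F1 = 19_000, 20_000   # tone frequencies for "0" and "1"

def bfsk_modulate(bits):
    sps = FS // BAUD                       # samples per symbol
    t = np.arange(sps) / FS                # per-symbol time base
    tones = [np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits]
    # Naively concatenating per-symbol tones leaves phase discontinuities at
    # symbol boundaries -- the source of the spectral splatter shown in Fig. 1.
    return np.concatenate(tones)

signal = bfsk_modulate([1, 0, 1, 1, 0])    # ready to be written to the sound card
```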


Fig. 1 BFSK signal

Minimum-shift keying (MSK), illustrated in Fig. 2, improves upon BFSK by picking the frequencies so that frequency changes occur when the amplitude of the wave is zero. MSK effectively solves this issue because the frequency changes no longer cause any sudden change in amplitude, which maximises the signal-to-noise ratio and maintains covertness. Modulation is the process of transmitting data and, in the case of an acoustic covert channel, is a simple process of turning binary data into a set of tones that can be emitted by the speaker. However, performing demodulation—turning the audio back into binary data—is often significantly more complex due to three key factors: (i)

Fig. 2 MSK signal


the environment will add noise to the transmission, (ii) the transmissions are received asynchronously, and (iii) demodulation can never be considered fully reliable and may produce invalid data. These three issues affect all forms of wireless communication, and as such significant research is available on how to optimally recover the original signal. As the baud rate (the rate at which symbols are transmitted) is increased, the accuracy of each of these steps becomes more important for recovering a transmission successfully. Optimally demodulating a signal is algorithmically and mathematically complex, and previous literature has not implemented modern recovery techniques, significantly limiting the maximum transmission speed and reliability. However, traditional electromagnetic communication and sound are both waveforms and hence mathematically equivalent. This means it is possible to reuse existing radio projects that have already implemented these algorithms and apply them to the acoustic medium. The program this paper utilises is GNU Radio [3], a free and open-source software development toolkit that provides signal processing blocks to implement software-defined radios. These blocks can be linked together in a flowchart to modulate and demodulate a signal. Additionally, it provides a means to interface with the operating system, allowing access to important functions such as sockets and sound devices, and has a large community which has developed a wide range of custom open-source blocks to extend its functionality. While GNU Radio is designed for use with software-defined radios, it can also be leveraged to create an acoustic modem.
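The equivalence between radio and acoustic waveforms noted above can be sketched in a few lines of numpy: a complex baseband signal is mixed up to a near-ultrasonic carrier before playback and mixed back down on the receive side. This is only an illustrative sketch (the 18 kHz carrier is one of the frequencies tested later); a real implementation would add filtering and synchronisation.

```python
import numpy as np

FS = 48_000          # sound-card sample rate
FC = 18_000          # assumed near-ultrasonic carrier frequency (Hz)

def to_passband(baseband_iq):
    """Mix a complex baseband signal up to the carrier and keep the real part,
    which is the only thing a speaker can reproduce."""
    n = np.arange(len(baseband_iq))
    return np.real(baseband_iq * np.exp(2j * np.pi * FC * n / FS))

def to_baseband(audio):
    """Receive side: mix microphone samples back down to 0 Hz.
    A low-pass filter would normally follow to reject the image."""
    n = np.arange(len(audio))
    return audio * np.exp(-2j * np.pi * FC * n / FS)

# A 200 Hz complex tone at baseband appears at 18.2 kHz after mixing.
iq = np.exp(2j * np.pi * 200 * np.arange(4_800) / FS)
audio = to_passband(iq)
```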

2.1 Related Work

At the time of writing, there are six papers on acoustic covert channels [5–10]. All of them follow a similar process: data is modulated using binary frequency shift keying, transmissions are synchronised by cross-correlating the signal with a known preamble, and the data is demodulated by examining the magnitude of different frequencies over a static number of samples. Hanspach and Goetz [5] repurpose a communication system originally developed for robust underwater communication to create a mesh network of computers using common computer audio hardware. They evaluate the range of a single link and its performance over multiple hops. The results show that the approach is limited, achieving only 20 bit/s of throughput with 6 s of latency per hop, which severely limits its utility as a means of transferring moderately large corpora of data. Deshotels [6] presents work on acoustic covert channels between mobile devices using ultrasonic frequencies. During testing, Deshotels was able to demonstrate a throughput of 345 bit/s between two mobile devices placed back-to-back. This is the highest throughput backed by empirical evidence in covert channel literature. The authors note that above this speed they encountered audible clicking that is likely


caused by sudden changes in frequency during modulation. Below this baud rate, they solved the issue with pulse shaping to minimise the amplitude of the waves when they shift frequency. The solution presented by Deshotels is sub-optimal, as it does not scale to higher baud rates and reduces the signal-to-noise ratio of the channel, which affects reliability at higher ranges and throughputs. Carrara [7] provides a basis for measuring the covertness of covert channels, with a section that focuses on the optimisation and utility of an acoustic covert channel. The testing presented is in-depth, covering 10 different devices and analysing multiple channel characteristics that consider the background noise, reverberation, and frequency responses of the devices used for testing. Channel parameters are also tested in depth and optimised for performance; however, this data does not appear to inform subsequent tests, which may limit the maximum throughput possible with the proposed model. While Carrara’s individual channel implementation is low performance, they combine multiple channels in parallel within the ultrasonic range to achieve 230 bit/s. Wong [8] discusses the design and implementation of an ultrasonic covert channel, experimenting with different modulation schemes within the acoustic medium. They validate their results on a wide range of hardware and measure the throughput and reliability at different distances. Wong’s data is used to analyse the effectiveness of an acoustic covert channel and discuss its limitations. However, the paper appears over-ambitious in its claims. A throughput of 500 bit/s using Quaternary Frequency Shift Keying (QFSK) is stated throughout the paper; however, it appears to have been explored only theoretically and is not experimentally validated beyond demonstrating that there is enough bandwidth for it to fit in the ultrasonic spectrum. Wong also claims to have discovered the use of pulse shaping to eliminate the clicking issues found in previous work; however, the very paper cited as exhibiting this issue already proposes and implements the same solution. Additionally, the throughput measurements provided by Wong neglect the overhead from packetization and error loss, meaning that the performance data is inflated and not representative of real-world effectiveness. Wong’s paper was initially the basis for this research; however, when the work was reproduced, the synchronisation technique began to fail above 50 bit/s on high-end audio hardware, which does not support the results provided in Wong’s paper. Smye [9] investigates ultrasound as a communication method, commenting on potential security issues with its use. They note several existing frameworks which claim to provide high throughput over an ultrasonic channel, and observe that signal strength degrades significantly under realistic test conditions. Smye’s project documents a similar independent implementation of this research using GNU Radio, with the author achieving a throughput of 50 bit/s over 2 m. Zarandy et al. [10] investigate the use of ultrasound signals as a censorship-resistant mesh network and COVID contact-tracing physical layer between mobile devices. They demonstrated a throughput of 685.7 bit/s and a maximum range of 8 m using an 8-PSK modulation method. The limiting factor in the Zarandy et al.
study was the overhead from the error correction required to maintain a reliable connection and correct for carrier phase drift.


3 Methodology

To validate and build upon previous work in this research domain, GNU Radio [3] is used to create an audio signal processor that can generate and decode sound signals sent in the near-ultrasonic frequency range. GNU Radio is a free and open-source toolkit used to rapidly develop signal processing systems. It is traditionally used for SDR systems; however, in this paper it is used to process sound signals to and from data streams.

3.1 Implementation

Figure 3 shows the final, functional implementation of the duplex ultrasonic covert channel in GNU Radio. This includes packetization, synchronisation, equalisation, and automatic gain control, which all work to maximise the reliability of the channel. The channel is accessed through a Linux TAP virtual network interface, which allows full-duplex Transmission Control Protocol (TCP) communication across the link by virtualising an Ethernet interface. Figures 4 and 5 show the flowgraphs that were used to explicitly test the covert channel in the experiments presented in this research. These flowgraphs are stripped-down versions of the full model presented in Fig. 3 that use the channel in a simplex configuration, with communication accessed through a socket server. They were used to establish the throughput and error rate of a single link and to eliminate additional errors caused by the extra channel used for packet acknowledgement. The key components of these flowgraphs that allow the model to be used for acoustic data transfer are the “Frequency Xlating FIR Filter” and the “Complex to

Fig. 3 Final flowgraph utilising a duplex Linux TAP interface


Fig. 4 Transmission flowgraph

Fig. 5 Receiving flowgraph

Real” blocks. The former shifts the signal from baseband (centred around 0 Hz) to ultrasonic frequencies. The block is intended to extract a signal and bring it down to baseband; however, by providing it with a negative centre frequency, it can be used to do the reverse. The latter converts the complex samples to real samples that audio devices can understand. The NGHam Encoder/Decoder blocks [11] are an open-source addition to GNU Radio which provides a robust scheme for forward error correction and packetization.
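A rough GNU Radio 3.7 sketch of a transmit chain in the spirit of Fig. 4 is shown below. It is not the authors' exact flowgraph; the carrier, baud rate, and filter parameters are assumed values chosen only to illustrate how the GMSK modulator, the Frequency Xlating FIR Filter (with a negative centre frequency), and the Complex to Real block fit together before the audio sink.

```python
from gnuradio import gr, blocks, digital, audio
from gnuradio import filter as gr_filter
from gnuradio.filter import firdes

SAMP_RATE = 48000        # commodity sound-card rate
BAUD = 480               # assumed symbol rate
CARRIER = 18000          # assumed near-ultrasonic carrier (Hz)

class UltrasonicTx(gr.top_block):
    def __init__(self, path="payload.bin"):
        gr.top_block.__init__(self, "ultrasonic_tx")
        src = blocks.file_source(gr.sizeof_char, path, False)
        mod = digital.gmsk_mod(samples_per_symbol=SAMP_RATE // BAUD)
        taps = firdes.low_pass(1.0, SAMP_RATE, BAUD, BAUD / 2.0)
        # A *negative* centre frequency moves the baseband signal up to CARRIER.
        shift = gr_filter.freq_xlating_fir_filter_ccc(1, taps, -CARRIER, SAMP_RATE)
        to_real = blocks.complex_to_real(1)
        sink = audio.sink(SAMP_RATE, "")
        self.connect(src, mod, shift, to_real, sink)

if __name__ == "__main__":
    UltrasonicTx().run()
```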


3.2 Channel Bandwidth

The bandwidth of the channel was determined in GNU Radio using a dedicated flowgraph. This parameter is important to know as it determines the number of channels that can fit within the near-ultrasonic range. For GMSK, the bandwidth is directly linked to the baud rate, so this value was measured for each baud rate used in the experiments. The flowgraph continuously modulates data read in from a file, which is then demodulated with the result output to a terminal. The “sideband” value was initialised at 100 Hz and increased by 5 Hz until the signal began to demodulate correctly. This value determines the range of frequencies above and below the carrier frequency that pass through the filter, and is therefore half the total channel bandwidth.
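The sweep just described can be summarised in a few lines of Python. This is only a sketch of the procedure: `demodulates_correctly` is a hypothetical helper standing in for running the modulate/demodulate flowgraph with the filter sideband set to the given value and comparing the decoded output with the known input file.

```python
def find_channel_bandwidth(demodulates_correctly, start_hz=100, step_hz=5):
    """Increase the filter sideband until the test signal decodes correctly."""
    sideband = start_hz
    while not demodulates_correctly(sideband):
        sideband += step_hz
    # The sideband spans frequencies above *and* below the carrier,
    # so the total channel bandwidth is twice its value.
    return 2 * sideband
```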

3.3 Testing

A standardised OS test environment containing the project was created using 32 GB USB sticks to ensure consistency across the machines used in testing. Each USB stick contained an identical instance of Ubuntu 19.10 with GNU Radio 3.7.14.4 and the NGHam packetization module installed.

3.3.1 Equipment

To determine the viability of this system in a realistic attack scenario where the equipment may be known but out of the control of the attacker, the system was tested over a range of hardware at three different quality points, classified as L (low), M (medium), and H (high) quality by the researchers. Every combination of hardware was tested, giving a total of 9 test sets. The hardware used in testing can be found in Table 1.

Table 1 Equipment used for experiments

              Quality   Model
  Speakers    L         Viobyte DH-660 (Headphones)
              M         Genius SP-S110
              H         Logitech Z370
  Microphones L         Generic Office Headset (no brand)
              M         Tonor TN120072BL
              H         Blue Yeti Nano

3.3.2 Test Method

Testing was conducted in an Abertay University computer lab. The experiments were carried out while other people were present in the lab; the added noise from people talking created a realistic environment, and the interference was deemed unlikely to affect results because it lies far outside the expected audio transmission range. A test file was generated by reading 10,800 bytes of random data from “/dev/urandom”, which was reused for each test and was verifiable. A test harness was built on top of GNU Radio’s Python processes to automate testing and take measurements of speed and error rate. A checksum was calculated at each side of the transmission to validate the integrity of received packets. The reliability of the link is therefore calculated as

reliability percentage = (bytes received / 10,800) × 100

and the effective throughput as

effective throughput = bytes received / time taken

The speaker and microphone were placed 3, 6, and 9 m apart on opposing office chairs, measured with a tape measure, for all combinations of equipment (Fig. 6). The transmitting computer was configured to 80% volume and to play audio only through the left channel. The exception was the headphones, which were set to the system’s maximum volume level of 150%, as their power is significantly lower than that of a speaker system. The left headphone cup was placed over the top of the right cup to prevent it from physically interfering with the signal. Without these alterations, the reliability of the channel using headphones was too low at the tested ranges.

Fig. 6 Lab experiment setup

Each combination of audio hardware was tested at three baud rates—480, 600, and 800 baud. If a baud rate failed at a given range, it was not tested at subsequent ranges, as it was assumed that the data would continue to be irrecoverable. While this assumption may not always hold, access to the lab for extended experiments was limited by time restrictions. Testing was repeated at three different frequencies—18, 20, and 22 kHz—to eliminate any issues caused by the individual frequency response curves of the equipment. Each test was conducted only once, again because of limited lab access. Overall, 9 combinations of hardware were tested with the devised covert channel over 3 distances, 3 baud rates, and 3 frequencies each, providing 243 tests in total.
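The two measurements above can be computed directly from what the test harness records; the sketch below is illustrative only, and the function names are not from the original harness.

```python
TEST_FILE_BYTES = 10_800        # random test file read from /dev/urandom

def reliability_percentage(bytes_received):
    return bytes_received / TEST_FILE_BYTES * 100

def effective_throughput(bytes_received, seconds_taken):
    # Bytes per second as defined above; multiply by 8 for bit/s,
    # which is the unit used in Figs. 7-9.
    return bytes_received / seconds_taken
```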

4 Results

4.1 Results at 18 kHz

18 kHz provided very good performance overall (Fig. 7). While the error rates of many devices increased as the range and baud rate increased, only the lowest-quality microphone had issues receiving signals for demodulation.

4.2 Results at 20 kHz

Performance at 20 kHz is considerably lower, with only the higher-end devices able to successfully decode the data at higher baud rates and ranges (Fig. 8). However, at the lower range of 3 m and a baud rate of 480, many of the devices are still able to consistently demodulate a signal.

4.3 Results at 22 kHz

Performance at 22 kHz is very weak, with only the highest-end equipment able to receive data at high ranges and throughputs (Fig. 9). However, at the lower baud rate of 480, the high-end microphone was able to receive data at short ranges from all three speakers.


Fig. 7 Effective throughput 18 kHz with 95% confidence intervals


Fig. 8 Effective throughput 20 kHz with 95% confidence intervals


Fig. 9 Effective throughput 22 kHz with 95% confidence intervals


5 Discussion

Testing showed that transmission was possible at all three baud rates of 480, 600, and 800 baud, but performance varied significantly with the frequency and range of communication, as well as with the quality of the equipment. Table 2 demonstrates that as the baud rate increases, so does the bandwidth required for the channel to operate, decreasing the number of channels that could be implemented in the near-ultrasonic range from 18 to 24 kHz. Communication at 18 kHz was by far the strongest demonstrated by these experiments, being feasible in most tested configurations. Communication at 20 and 22 kHz drops off considerably, with only the highest-end equipment able to transmit data reliably at these frequencies. However, at close ranges and the lowest baud rate, transmission was still shown to be viable in this frequency range. This is likely due to both the speaker and the microphone having a poor frequency response at high frequencies, as distortion in inaudible frequencies is unlikely to be noticed by most users.

5.1 Input Devices

The low-end microphone was almost completely unable to demodulate data consistently because the error rate was too high; it was considered usable in only one experiment—480 baud, 20 kHz, at a 3 m distance. This is likely due to both the frequency and the distance of the audio source being outside the device’s design parameters. The mid-range microphone was able to decode data successfully at short ranges; however, error rates became unusable at distances beyond 6 m. This is again likely due to its design and intended use case, which is for employees gathered around a table. The high-end microphone performed extremely well, being able to demodulate signals at most ranges, baud rates, and frequencies. This is likely due to the high fidelity of this equipment, which greatly helps with signal recovery.

Table 2 Sideband measurement results

  Baud rate   Channel bandwidth (Hz)   No. of channels
  480         440                      14
  600         560                      11
  800         740                      8


5.2 Output Devices

The low-end headphones had surprisingly high performance, with solid reliability rates at all three baud rates when paired with the high-end microphone at 18 kHz. This is likely because they have decent audio fidelity but poor volume, requiring the sensitivity of the high-end microphone to recover the signal. The mid-range speakers had overall poor performance. This is potentially due to the amplifier having poor fidelity at ultrasonic frequencies, causing error rates to increase significantly; this can be inferred from the high-end microphone’s uncharacteristically poor performance when communicating with this device. The high-end speakers performed very well, likely due to a combination of high fidelity and high volume, providing a best-case scenario for the receiver. This kind of speaker is unlikely to be found on most office employees’ computers, but similar-quality audio equipment could exist in conference rooms or studio environments.

5.3 Observations The quality of the microphone used appeared to have the greatest effect on performance, with the lowest quality microphone consistently having the worst performance, and the highest quality microphone having the best performance. The headset microphone is designed to be very close to the user to operate effectively, whereas the other two microphones tested are cardioid microphones, designed to pick up audio from a distance. As such, similarly designed cardioid microphones will likely be effective at comparable ranges. Additionally, the headset microphone used in this testing was of particularly poor quality, with significant noise in the base signal. This was done to represent an absolute worst-case scenario. As such, further testing with the headset microphone could be conducted at closer ranges ( aW + bW + cW . Otherwise, if cW > c Q , W has received more strong accepts. (4.3) Plurality: With the voting paradigm now followed being plurality, C A f creates another p sized vector Const f containing ciphertexts which are encrypted using public key of EC O = h EC O such that Dec(Const f (w) ) = 1 and Dec(Const f (k) ) = 0 for k ∈ [ p], k = w. C A f creates a block for this created encrypted vector. Here, C A f invokes algorithm 3.1 for range proof with V A f acting as a verifier. However, this only proves that a binary vector is encrypted. We also need to ensure that only one value is 1 and rest are 0s. For this, we just need to prove that sum of all the entries in the entire vector in 1 for which zero-knowledge sum proof (ZKSP) logg (c1 ) == logh EC O (c2 /g) is also provided. Hence, this does not force ECO to verify the legality by decryption using its secret key as V A f does it in the encrypted domain.


(5) Final aggregation (Pluralistic): ECO reads the blockchain and aggregates all the constituency ballots to compute the p-sized vector Ans of aggregated and encrypted pairs, where Ans_j = ∑_{i=0}^{t−1} Const_{i,j} = Enc(m_j) = {c_{j,1}, c_{j,2}} for j ∈ [p]. The CAs collaborate to compute the common public key PK = ∏_{i=0}^{t−1} h_i = g^{∑_{i=0}^{t−1} x_i}. Decryption can be performed either by ECO directly using x_ECO or through collaboration amongst all t CAs. An array Fin is created where Fin_j = m_j. This process is repeated for all p political parties to completely populate the Fin array. Party u has won the election if Fin_u == max(Fin), i.e., u is the index of the maximum value in Fin. It claims a majority by itself or by forming a coalition government with its allies.
(5.1) Tie-breaker_2: As at least two parties are tied on the same seat count, the total scores Result = ∑_{f=0}^{t−1} {Res_f}_{Fin[f]=1} are taken into account to break the tie. Party w wins the election if Result_w == max(Result). It is easy to observe that this tie-breaker favours parties having a national presence over their regional counterparts participating from a limited number of constituencies.
The self-explanatory mini numerical example in Fig. 8 describes the workflow of Algorithms 3, 4 and 5 diagrammatically. In this example, regional party P3, participating from only two constituencies, defeats the national-level parties P2 and P4 in a fiercely contested tie-breaker.
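Continuing the toy ElGamal setup from the previous sketch, the snippet below illustrates the pluralistic aggregation step: the constituency ciphertexts for each party are multiplied together, the aggregate is decrypted, and the count m_j is recovered by a brute-force discrete logarithm, which is feasible because the count is bounded by the number of constituencies. All parameters are illustrative only.

```python
import random

P, Q, G = 2039, 1019, 4                 # same toy group as before
x_eco = random.randrange(1, Q)
h_eco = pow(G, x_eco, P)

def enc(m):
    r = random.randrange(1, Q)
    return (pow(G, r, P), (pow(G, m, P) * pow(h_eco, r, P)) % P)

def tally(column):
    """Homomorphically sum one party's ciphertexts and recover the plaintext."""
    c1 = c2 = 1
    for a, b in column:
        c1, c2 = (c1 * a) % P, (c2 * b) % P
    gm = (c2 * pow(c1, Q - x_eco, P)) % P     # g^(sum of plaintexts)
    m = 0
    while pow(G, m, P) != gm:                 # small-exponent discrete log
        m += 1
    return m

# Three constituencies, four parties; each row is a one-hot constituency result.
rows = [[enc(v) for v in row] for row in ([0, 1, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0])]
fin = [tally([row[j] for row in rows]) for j in range(4)]
winner = fin.index(max(fin))                  # ties would go to the tie-breaker step
```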

4 Security Analysis

This section explains the arguments for a multitude of security requirements being satisfied for the two phases, as follows:
A. Image processing-based voter authentication
(A.1) Visual Cryptography Algorithm: The security of the visual cryptography algorithm is analysed in order to check whether one of the two shares reveals any information about the other.
Proof: Of the two shares, the owner and the master share, one is completely randomized and the second depends on it and on the resulting image through an XOR operation. The adversary has no knowledge of the resulting watermarked image, leaving him/her no choice other than trying close to 256^(n²) possible shares through a brute-force approach. Here, n is both the length and breadth of the square grayscale image share, whose n² pixels each take values in [0, 255].
(A.2) Owner Share integrity: The biometric images are embedded inside a QR code via a 2-level discrete wavelet transform (DWT). The resultant watermarked image and a randomized master share (MS) are used to create a legitimate owner share (OS) before its distribution. It is essential to address the scenario in which the OS/MS gets compromised.
Proof: Only a valid owner share can generate a valid scannable watermarked image which can be read in order to ascertain the identity of the share owner. Its


image hash is stored in a database which can be used to perform a validity check in order to detect any kind of minute tampering. If an adversary steals the MS, he/she will still not be able to create a valid owner share because of the gibberish confidential content encoded in the QR code by the CA. Even if that happens, the hash equality check will fail, as his/her invalid OS will not match any database entry.
(A.3) Eligibility check and prevention of multiple voting: Only eligible users should be able to cast their votes, but this should not happen twice from the same or different constituencies.
Proof: The partially hashed details encoded in the carrier QR code resulting from the superimposition of the MS and the voter’s OS act as proof of citizenship, eligibility, age and, more importantly, constituency. The OS hash present in the enrollment database further confirms this argument. The same OS cannot be reused because of its deactivation and the invalidation of its hash in the database, ruling out any duplicate voting.
(A.4) Prevention of fake votes through user impersonation: The adversary impersonates an eligible voter by compromising his/her valid OS before the victim votes.
Proof: In this scenario, the biometric images embedded in the superimposed image will not be similar to the attacker’s corresponding physical attributes. Hence, the authentication fails in the first phase itself. Also, the normally expensive process of 1-to-n verification through n comparisons is replaced by an efficient single superimposition, watermark extraction, and scannability check. So, the proposed scheme assures data non-repudiation by relying on the combined strength of what the user has (a valid OS) and is (biometrics).
B. Homomorphic encryption and blockchain-based voting
(B.1) Confidentiality and Completeness: Vote privacy preservation along with accurate aggregation are enabled jointly by additive ElGamal encryption.
Proof: Due to the additive homomorphic nature of the encryption, the final tally must be the correct sum of all valid ballots while ensuring their privacy in parallel. Any external adversary A cannot learn the vote score from the additive ElGamal ciphertext c_{f,i} = (g^{r_{f,i}}, g^{m_{f,i}} · y_{CA_f}^{r_{f,i}}), which is semantically secure and indistinguishable under chosen-plaintext attack (IND-CPA). As per the Decisional Diffie-Hellman (DDH) assumption, this reveals nothing about the plaintext m_{f,i}, which can be recovered only with the valid decryption key x_{CA_f}, as y_{CA_f} was used for encryption. Only the intra-constituency aggregate, rather than individual scores, is decrypted after gathering all voter score ciphertexts from a constituency. For the inter-constituency scenario, the ballot is encrypted with the public key of ECO (y_ECO = g^{x_ECO}), who shares its secret key x_ECO amongst the t CAs. This ensures that no fewer than t CAs must collude to decrypt the final aggregate, which ECO can decrypt directly.
(B.2) Ciphertext privacy and legality: It must be ensured that only valid vote ciphertexts are aggregated, while preserving their privacy.
Proof: The vote ciphertext validity check on an individual basis is enabled via a suitable NIPKRP. This partial knowledge set membership proof (PKRP), along


with the zero-knowledge sum proof (ZKSP), helps VAs discard all illegal ciphertext vectors sent by the CA and voters of their respective constituencies in the encrypted domain. Also, vote ciphertext privacy against fellow constituency voters, ECO, and the CAs needs to be ensured. This is achieved via the distribution of secret multiplicands by the VA, which prohibits the CA and other entities from learning the ciphertext or the underlying plaintext even if they possess or leak the secret key used for decryption. Also, the VA, although holding all parameters, cannot decrypt the ciphertexts placed in the root nodes of the aggregation subtrees, as its public key is not used while encrypting the data and it is oblivious of the corresponding CA secret key. In this manner, receipt-freeness and non-coercion are ensured to a considerable extent.
(B.3) Data integrity, non-repudiation and FDI robustness: Any data forgery or injection of false data is detectable. Also, the voter cannot deny ownership of the ciphertext it signs and sends.
Proof: The eligible voters VAL_f from constituency f and CA_f hash their bit ciphertext b_{f(j,x)} with timestamp TS_{f,i}. Any online data modification by an internal or external adversary can be detected by VA_f through comparison of the anti-collision vote hashes, ensuring data integrity. VA_f is assumed not to conspire with VAL_f to report fraudulent data or a false batch verification status. Also, data non-repudiation is ensured through the distributed multiplicand shares, which help the VA authenticate the source of the ciphertexts on a one-to-one basis. After intra-constituency aggregation, all transaction records in the constituency blockchain are made tamper-proof through timestamps and the hash of the aggregation tree root. Therefore, inter-constituency aggregation is resistant to FDI as well as replay attacks. Hence, a robust and decentralized voting scheme preventing common online attacks is realized.
(B.4) Fairness and Verifiability: The voters and CAs can verify the fairness of the election result from the submitted ballots. Intermediate results should be completely unknown to any entity.
Proof: The append-only property of the blockchain platform supports verifiability, as the posted data can never be forged. The tamper-proof records ensure dispute-freeness by enabling public verification that the voting result is correct as per the protocol. All ballots are kept secret during the entire course of the voting process and are controlled by several administrators, making partial result learning and the illegal addition of extra ballots impossible. All CAs passing a valid PKRP can collaborate to verify the final election results through a secret sharing mechanism and are able to compute the final tally using the discrete logarithm method. Hence, verifiability and fairness together ensure the soundness of the proposed cryptosystem. Moreover, a voter can easily verify whether his/her vote has been taken into account in the final result computation by invoking Algorithm 4.1 on his/her constituency aggregation tree.
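The tamper-evidence argument in (B.3) rests on committing to the leaves of the constituency aggregation tree on-chain. The sketch below shows a generic Merkle-style root computation over (ciphertext, timestamp) leaves; it is an assumed illustration of the idea, not the paper's exact tree construction.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(ciphertext: bytes, timestamp: str) -> bytes:
    # A leaf commits to a vote ciphertext together with its timestamp.
    return h(ciphertext + timestamp.encode())

def tree_root(leaves):
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(a + b) for a, b in zip(level[0::2], level[1::2])]
    return level[0]

# Changing any leaf (e.g. tampering with one ciphertext) changes the stored root.
root = tree_root(leaf(b"ciphertext-%d" % i, "2022-06-20T10:00:0%d" % i) for i in range(5))
```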

Table 1 Theoretical performance

  Algorithm   Computation CV                     Communication CV
  2           <t, |VAL_f| − 1, 0>                <|VAL_f| + 1, 1>
  3           4·p·<3, 2, 0>                      <4·p, 0>
  3.1         |VAL_f|·<14, 8, 2>                 |VAL_f|·<3, 4>
  4           <2·p, 4·|VAL_f| + 3·p − 4, 0>      <0, 0>
  4.1         <0, log2(|VAL_f|), 0>              <|VAL_f| − 1, 0>
  4.3         p·<3, 1, 1>                        p·<2, 1>
  5           <1, 2·t − 1, 0>                    <2·p, 0>

5 Performance Analysis

In this section, the robustness of the watermarking algorithm used in the biometric authentication phase is tested. This is followed by the theoretical and practical analysis of the algorithms mentioned under the voting phase.

5.1 Theoretical Analysis

Let the parameters p and w denote the total number of candidates and voters, respectively. The execution times for a multiplication and an exponentiation in group G are denoted t_m and t_e, respectively; the time for a hash calculation is t_h. |Z_p| and |G| represent the element sizes of Z_p and G, respectively. Let A = <t_e, t_m, t_h> and B = <|G|, |Z_p|>. The cost vectors (CV) are given in Table 1 so that the computation and communication costs can be computed through the dot products CV · A and CV · B, respectively.
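As a worked example of this notation (with made-up operation timings and parameter values), the computation cost of Algorithm 2 from Table 1 is simply the dot product of its cost vector with A:

```python
def dot(cv, basis):
    return sum(c * b for c, b in zip(cv, basis))

t, val_f = 5, 100                          # assumed: 5 CAs, 100 voters in constituency f
t_e, t_m, t_h = 2.1e-3, 4.0e-6, 1.5e-6     # assumed per-operation timings in seconds
alg2_cv = (t, val_f - 1, 0)                # <t, |VAL_f| - 1, 0> from Table 1
print(dot(alg2_cv, (t_e, t_m, t_h)))       # estimated computation time for Algorithm 2
```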

5.2 Experimental Analysis

We have implemented and tested the efficiency of our proposed scheme in the Charm crypto platform, an extensible Python-based framework for rapidly prototyping advanced cryptosystems [52]. We employ a symmetric curve with a 512-bit base, ‘SS512’; the group G used is a 512-bit multiplicative cyclic group of prime order. We conducted the experiments on a system with an Intel(R) Core(TM) i3-5005U CPU @ 2.00 GHz x64-based processor, 4.00 GB RAM, and Ubuntu 20.04.2 LTS (WSL) as the operating system. The experiments are performed by varying the number of constituencies and the maximum number of candidates or voters per constituency while keeping the other parameters fixed. Figure 9 presents the computation overheads of the different algorithms after


taking the average of the values obtained over 50 different trials. For a constituency, the number of candidates is fixed at four while varying the number of voters, whereas the number of voters is fixed at 20 while varying the number of candidates. Also, the number of constituencies is varied keeping the maximum number of voters and candidates fixed at 100 and 5, respectively. It can easily be observed from the three graphs that the time complexities associated with voting and with the constituency aggregation tree creation vary linearly with the number of voters and candidates. The conversion time from a binary to a decimal vector in the encrypted domain depends only on the number of candidates; this is intuitive, as the computation occurs after the votes have already been aggregated, ruling out any dependency on the number of voters. It is also indicated that the number of operations required to check whether a vote was taken into account for aggregation varies logarithmically with the number of voters. Lastly, the time complexity of inter-constituency aggregation varies linearly with the number of candidates as well as the number of constituencies, keeping other parameters fixed.
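A rough micro-benchmark in the spirit of this evaluation is sketched below using Charm with the 'SS512' curve; absolute numbers depend on the machine, and the number of trials simply mirrors the 50-trial averaging described above.

```python
import time
from charm.toolbox.pairinggroup import PairingGroup, G1, ZR

group = PairingGroup('SS512')
g = group.random(G1)
x = group.random(ZR)

def time_op(op, trials=50):
    start = time.time()
    for _ in range(trials):
        op()
    return (time.time() - start) / trials

t_e = time_op(lambda: g ** x)      # group exponentiation
t_m = time_op(lambda: g * g)       # group multiplication
print("t_e = %.6f s, t_m = %.6f s" % (t_e, t_m))
```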

5.3 Usability Analysis

The proposed scheme executes the overall vote ciphertext aggregation procedure in a hierarchical manner. As the number of users increases, it becomes extremely important to keep the processing overhead associated with these operations within a certain bound. Homomorphic encryption is employed to decrease the total number of decryption operations required to obtain the plaintext sum, and a few of the phase operations are performed for an entire batch at once in order to optimize the verification procedure. Other techniques can also be integrated to make the system scalable, which remains an open problem in this research domain. To fix a myriad of issues associated with biometric template storage, our proposed biometric authentication mechanism uses the combination of visual cryptography and a two-level discrete wavelet transform to rule out the need to compare a user’s live captured fingerprint against millions of biometric templates. This makes our system usable in a large-scale application scenario. Nonetheless, deploying the entire system in online mode may still encounter many non-technical challenges. Firstly, the setup-related technical jargon, even if simplified through system abstraction, may not be comprehensible to a sizeable portion of the population. Moreover, millions of voters, including rural voters, cannot be expected to have easy access to the proposed Internet-based technology. These limitations may mandate a hybrid approach in which such voters cast their votes in online mode while physically present at a nearby polling booth. The booth may appoint guides and provide all the prerequisite technical setup and facilities to make the entire procedure simple and hassle-free.


Fig. 9 Performance of various algorithms

6 Future Work and Conclusion

In summary, the advocated novel blockchain-based election framework, oriented towards a biometric authentication mechanism, integrates image processing methodologies, smart contracts, digital signatures, verifiable secret sharing, and partial homomorphic encryption to achieve data unforgeability and non-repudiation for privacy-preserving, verifiable, and cardinal e-voting. Our scheme uses an appropriate tie-breaking policy which is more effective than lottery-based selection. The proposed novel e-voting scheme is premised on the combined strength of blockchain and other techniques and is designed to fulfil the key security requirements of the voting process under any large-scale governance. To achieve source authentication, we advocate a simple yet efficient (2,2)-visual cryptography and QR watermarking-based secured biometric template data creation mechanism. This is important, as a small security flaw in such scheme designs might result in massive election fraud. Our scheme employs the score-based voting paradigm with a tie-breaker protocol where the vote is stored as an array of score ciphertexts for all the constituency parties, with NIPKRP proving the range legality of the values. The privacy, integrity, verifiability, and computability of the score while encrypted are ensured through the combination of secret distribution, a homomorphic cryptosystem, and aggregation tree construction. Hence, decentralized and secure vote casting and valid score aggregation are realized with a strong perimeter, making FDI and other attacks highly improbable under the stated security assumptions. This scheme lays the foundation for a much more robust election mechanism featuring an efficient algorithm design. Such a design may combine the capability to tackle unknown and unaddressed e-voting challenges with QR code reconstruction and minutiae-based extraction, enabling fingerprint recognition and comparison. This is left as a major direction for future work.


References 1. Gibson, J.P., Krimmer, R., Teague, V., Pomares, J.: A review of e-voting: The past, present and future. Ann. Telecommun. 71, 279–286 (2016) 2. https://www.wionews.com/india-news/indias-new-electoral-reform-bill-linking-aadhaar-tovoter-ids-438840 3. Ethereum: A Next-Generation Smart Contract and Decentralized Application Platform, Jun ’20, online: https://github.com/Ethereum/wiki/wiki 4. Underwood, S.: Blockchain beyond bitcoin. Commun. ACM 59(11), 15–17 (2016). (WISE’18), pp 18–35, Nov 5. Zhang, W. et al.: A privacy-preserving voting protocol on blockchain. In: 2018 IEEE 11t ICCC (CLOUD), (2018), pp. 401–408 6. Yu, B. et al.: Platform-independent secure blockchain-based voting system. In: Proceedings of the International Conference on Information and Security, pp. 369–386 (2018) 7. Kim, T., Oh, Y., Kim, H.: Efficient privacy-preserving fingerprint-based authentication system using fully homomorphic encryption. SCN 11 (2020) 8. Karabat, C., Kiraz, M.S., Erdogan, H., et al.: THRIVE: threshold homomorphic encryption based secure and privacy preserving biometric verification system. EURASIP J. Adv. Signal Process. 2015, 71 (2015) 9. Catak, F.O., Yildirim Yayilgan, S., Abomhara, M.: A privacy-preserving fully homomorphic encryption and parallel computation based biometric data matching. Preprints (2020) 10. Ross, A., Othman, A.: Visual cryptography for biometric privacy. IEEE Trans. Inform. Forensics Secur. 6(1), 70–81 (2011). https://doi.org/10.1109/TIFS.2010.2097252 11. Naor, M., Shamir, A.: Visual cryptography. In: Proceedings of Annual International Conference on the Theory and Applications of Cryptographic Techniques, Saint-Malo, pp. 1–12 (1995) 12. Hartung, F., Kutter, M.: Multimedia watermarking techniques. Proc. IEEE 87(7), 1079–1107 (1999) 13. Hämmerle-Uhl, J., Raab, K., Uhl, A.: Watermarking as a means to enhance biometric systems: a critical survey. In: Information Hiding: Lecture Notes in Computer Science, vol. 6958. Springer, Berlin, Heidelberg (2011) 14. Vashistha, A., Joshi, A.M.: Fingerprint based biometric watermarking architecture using integer DCT. In: 2016 IEEE Region 10 Conference (TENCON), pp. 2818–2821 (2016). https://doi. org/10.1109/TENCON.2016.7848556 15. Jain, A.K., Uludag, U.: Hiding biometric data. IEEE Trans. Pattern Anal. Mach. Intell. 25(11), 1494–1498 (2003) 16. Gunjal, B.L., Mali, S.N.: Secure E-voting system with biometric and wavelet based watermarking technique in YCgCb color space. In: IET International Conference on Information Science and Control Engineering 2012 (ICISCE 2012), pp. 1–6 (2012) 17. Olaniyi, O.M., Folorunso, T.A., Aliyu, A., Olugbenga, J.: Design of secure electronic voting system using fingerprint biometrics and crypto-watermarking approach. Int. J. Inform. Eng. Electron. Bus. 8(5), 9 (2016) 18. Bousnina, N., Ghouzali, S., Mikram, M., Abdul, W.: DTCWT-DCT watermarking method for multimodal biometric authentication. In: Proceedings of the 2nd International Conference on Networking, Information Systems & Security (NISS19), pp. 1–7. Association for Computing Machinery, New York, NY, USA (2019) 19. Chow, Y., Susilo, W., Tonien, J., Zong, W.: A QR code watermarking approach based on the DWT-DCT technique. ACISP 2017, Auckland, New Zealand (2017) 20. Agrawal, A., Sethi, K., Bera, P.: IoT-based aggregate smart grid energy data extraction using image recognition and partial homomorphic encryption. In: IEEE International Conference on Advanced Networks and Telecommunications Systems (pp. 322–327) (2021) 21. 
Knirsch, F., Unterweger, A., Unterrainer, M., Engel, D.: Comparison of the Paillier and ElGamal Cryptosystems for Smart Grid Aggregation Protocols. ICISSP (2020) 22. Liu, Y.N., Zhao, Q.Y.: E-voting scheme using secret sharing and k-anonymity. World Wide Web 2, 1657–1667 (2018)


23. Li, Y.J., Ma, C.G., Huang, L.S.: An electronic voting scheme(in Chinese). J. Softw. 16(10), 1805–1810 (2005) 24. Using the following range voting system the green party of Utah elected a new slate of officers. Independent Political Rep (2017) 25. Cruz, J.P., Kaji, Y.: E-voting system based on the bitcoin protocol and blind signatures. Trans. Math. Model. 10(1), 14–22 (2017) 26. Jonker, H., Mauw, S., Pang, J.: Privacy and verifiability in voting systems: methods, developments and trends. Comput. Sci. Rev. 10, 1–30 (2013) 27. Chaum, D.L.: Untraceable electronic mail, return addresses, and digital pseudonyms. Commun. ACM 24(2), 84–90 (1981) 28. Aziz, A.A., Qunoo, H.N., Samra, A.A.: Using homomorphic cryptographic solutions on evoting systems. Int. J. Comput. Netw. Inform. Secur. 10(1), 44–59 (2018) 29. Jabbar, I., Saad, N.A.: Design and implementation of secure remote e-voting system using homomorphic encryption. Int. J. Netw. Secur. 19(5), 694–703 (2017) 30. Hirt, M., Sako, K.: Efficient receipt-free voting based on homomorphic encryption. (EUROCRYPT’00), pp. 539–556 (2000) 31. Kiayias, A., Yung, M.: Self-tallying elections and perfect ballot secrecy. In: Proceedings International Workshop Public Key Cryptography, pp. 141–158 (2002) 32. Wu, W.C., Lin, Z.W., Wong, W.T.: Application of QR-code steganography using data embedding technique. In: Information Technology Convergence, pp. 597–605. Springer: Berlin, Germany (2013) 33. Seenivasagam, V., Velumani, R.: A QR code based zero-watermarking scheme for authentication of medical images in teleradiology cloud. Comput. Math. Methods Med. 516465, 16 (2013) 34. Kang, Q., Li, K., Yang, J.: A digital watermarking approach based on DCT domain combining qr code and chaotic theory. In: 2014 Eleventh International Conference on WOCN, pp. 1–7 (2014) 35. Cardamone, N., d’Amore, F.: DWT and QR Code Based Watermarking for Document DRM. IWDW 36. Tkachenko, I., Puech, W., Destruel, C., Strauss, O., Gaudin, J., Guichard, C.: Two level QR code for private message sharing and document authentication. IEEE Trans. IFS 11(3), 571–583 (2016) 37. Barmawi, A.M., Yulianto, F.A.: Watermarking QR Code. In: 2015 2nd International Conference on Information Science and Security (ICISS) (2015) pp. 1–4. https://doi.org/10.1109/ICISSEC. 2015.7371041 38. Chow, Y., Susilo, W., Tonien, J., Vlahu-Gjorgievska, E., Yang, G.: Cooperative secret sharing using QR codes and symmetric keys. Symmetry 10(4), 95 (2018) 39. Fathimal M., Jansirani A.: New fool proof examination system through color visual cryptography and signature authentication. Int. Arab J. Inf. Technol. 16, 66–71 40. Wang, D., Zhang, L., Ma, N., Li, X.: Two secret sharing schemes based on Boolean operations. Pattern Recogn. 40(10), 2776–2785 (2007) 41. Nir, K., Jeffrey, V.: Blockchain-enabled e-voting. IEEE Softw. 35(4), 95–99 (2018) 42. Li, Y. et al.: A blockchain-based self-tallying voting protocol in decentralized IoT. IEEE Trans. Depend. Sec. Comput. (2020) 43. Lee, K., James, J.I., Ejeta, T.G., Kim, H.J.: Electronic voting service using block-chain. J. Digit. Forensics, Secur. Law 11(2), 123–136 (2016) 44. Zhao, Z., Chan, T.H.: How to vote privately using bitcoin. 9543, pp. 82–96 (2015). https://doi. org/10.1007/978-3-319-29814-6_8 45. Linh, V.C.T., Khoi, C.M., Chuong, D.L.B., Tuan, A.N.: Votereum: an ethereum-based evoting system. In: IEEE-RIVF’19, pp. 1–6 (2019) 46. Mccorry, P., Shahandashti, S.F., Hao, F.: A smart contract for boardroom voting with maximum voter privacy. In: FC’17, pp. 357–375 (2017) 47. 
Revenkar, P.S., Anjum, A., Gandhare, W.Z.: Survey of visual cryptography schemes. Int. J. Secur. Appl. 4 (2010)


48. Nakamoto, S.: Bitcoin: A Peer-to-Peer Electronic Cash System. BN Publishing: Hawthorne, CA, USA (2019) 49. ElGamal, T.: A public key cryptosystem and a signature scheme based on discrete logarithms. IEEE Trans. Inf. Theory 31(4), 469–472 (1985) 50. Score Voting (2021). https://en.wikipedia.org/wiki/Scorevoting 51. Fiat, A., Shamir, A.: How to prove yourself: Practical solutions to identification and signature problems. In: Proceedings of the Conference on the Theory and Application of Cryptographic Techniques, pp. 186–194 (1986) 52. Akinyele, J.A., Garman, C., Miers, I., Pagano, M.W., Rushanan, M., Green, M., Rubin, A.D.: Charm: a framework for rapidly prototyping cryptosystems. J. Crypt. Eng. pp. 111-128 (2013)

Neutralizing Adversarial Machine Learning in Industrial Control Systems Using Blockchain Naghmeh Moradpoor , Masoud Barati , Andres Robles-Durazno , Ezra Abah, and James McWhinnie

Abstract The protection of critical national infrastructures such as drinking water, gas, and electricity is extremely important as nations are dependent on their operation and steadiness. However, despite the value of such utilities their security issues have been poorly addressed which has resulted in a growing number of cyberattacks with increasing impact and huge consequences. There are many machine learning solutions to detect anomalies against this type of infrastructure given the popularity of such an approach in terms of accuracy and success in detecting zero-day attacks. However, machine learning algorithms are prone to adversarial attacks. In this paper, an energy-consumption-based machine learning approach is proposed to detect anomalies in a water treatment system and evaluate its robustness against adversarial attacks using a novel dataset. The evaluations include three popular machine learning algorithms and four categories of adversarial attack set to poison both training and testing data. The captured results show that although some machine learning algorithms are more robust against adversarial confrontations than others, overall, the proposed anomaly detection mechanism which is built on energy consumption metrics and its associated dataset are vulnerable to such attacks. To this end, a blockchain approach to protect the data during the training and testing phases of such machine learning models is proposed. The proposed smart contract is deployed in a public blockchain test network and their costs and mining time are investigated. N. Moradpoor (B) · A. Robles-Durazno · E. Abah · J. McWhinnie Edinburgh Napier University, Edinburgh EH10 5DT, UK e-mail: [email protected] A. Robles-Durazno e-mail: [email protected] E. Abah e-mail: [email protected] J. McWhinnie e-mail: [email protected] M. Barati Newcastle University, Newcastle Upon Tyne NE1 7RU, UK e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 C. Onwubiko et al. (eds.), Proceedings of the International Conference on Cybersecurity, Situational Awareness and Social Media, Springer Proceedings in Complexity, https://doi.org/10.1007/978-981-19-6414-5_24


Keywords Adversarial attacks · Machine learning · Critical national infrastructure · Industrial control systems · Water treatment systems · Blockchain

1 Introduction

Critical National Infrastructure (CNI), such as transportation, communication, police systems, national health services, and utilities like oil, gas, electricity, and drinking water, are a country’s public assets. The nation’s health and safety, and its ability to continue day-to-day jobs and businesses without interruption, depend on the continuous operation of those assets without failure. However, despite the importance of such assets, their cybersecurity issues are poorly addressed. Additionally, the increased level of connectivity of the devices that form a given CNI and the appearance of Industry 4.0 [1] lead to a growing number of cyberattacks against such systems, both in occurrence and impact. Criminals and state-sponsored hackers are increasingly going after CNI to disturb society and harm nations. In 2020, 56% of utility sectors, which include electricity, natural gas, and drinking water, reported at least one cyberattack on their infrastructure that caused either loss of data or an operations shutdown [2]. For example, hackers targeted a U.S. water supply system located in Oldsmar, Florida in 2021 [3] and raised the amount of sodium hydroxide, also known as lye, from 100 parts per million to 11,100 parts per million. Luckily, the attempt was identified by the operator, who successfully reversed the change before a toxic level of the chemical reached the drinking water. Machine learning algorithms have proven their success in detecting known and unknown attacks and producing reliable, repeatable decisions and results in a wide range of networks, from traditional computer networks and CNI to wireless technologies. This includes a variety of attacks and applications such as phishing emails [4], insider threat detection [5], Internet of Things (IoT) attacks [6], mobile malware detection [7], water services [8], and fake news detection [9], as well as predictive maintenance and business process automation. However, ML techniques are known to be vulnerable to adversarial attacks, where hackers and criminals employ adversarial perturbations during the training and/or testing phases to exploit a given model and cause misclassification, for example classifying benign events as malicious and vice versa, leading to attack detection evasion and disturbance of the systems, which forces the entire model to fail. In order to address such adversarial attacks, blockchain-based techniques provide a secure, transparent, and immutable way of storing the training or testing data. The third generation of blockchain technology introduces smart contracts, which combine computer protocols with a user interface to execute the conditions/terms proposed in a real contract. Smart contracts also extend the usability of blockchain-based approaches in various domains or infrastructures (e.g., CNI), recording generated critical data in a blockchain network while ensuring its protection under different policies or regulations [10, 11]. The combination of blockchain and machine learning has recently been investigated in depth in order to


improve the security of both training and testing data. In [12], a blockchain-based federated learning architecture was presented through which local learning model updates are securely exchanged and verified using a blockchain network. Moreover, a blockchain-empowered secure data sharing architecture was designed for multiple parties within an industrial IoT environment [13]. The architecture developed a privacy-aware data sharing model using the integration of blockchain and federated learning. Although the aforementioned blockchain-based approaches have attempted to enhance the security of training/testing data, none of them focused on clean water treatment systems. Additionally, there is no blockchain development or proposal to protect the energy consumption metrics of CNI endpoints (e.g., sensors and actuators). These features can be employed to detect anomalies against such systems, and therefore their protection is hugely important. To realize such a level of protection, this paper presents the following contributions:
– A virtual testbed representing a clean water treatment system, called VNWTS, is presented. This was designed, implemented, and evaluated during the UK COVID-19 lockdown, when accessing a physical testbed was not possible.
– A systematic architecture that supports both ML-based engines and a smart contract factory for improving data security against adversarial attacks is designed.
– A set of energy consumption features to assist in detecting anomalies against clean water treatment systems using machine learning algorithms is defined, and a novel energy-based dataset is captured using various benign and malicious scenarios on the testbed.
– An energy-consumption-based machine learning approach to detect anomalies against clean water treatment systems is implemented.
– Various adversarial attacks are implemented, and the proposed energy-consumption-based machine learning approach and its associated dataset are tested against them. The impact of such attacks on performance is presented.
– A blockchain-based technique to protect the proposed energy-consumption-based machine learning approach and its related dataset during the training and testing phases is proposed with the aid of smart contracts.
The rest of the paper is structured as follows. Section 2 designs a blockchain-based and ML-supported architecture for the virtual clean water treatment system and gives the details of its layers. Section 3 presents the architecture’s implementation and describes the interaction among the proposed components. Section 4 provides experimental results, and finally Sect. 5 concludes the paper.

2 System Architecture

The proposed system architecture includes six layers: VNWTS Testbed, Data Management, ML Training Engine, Blockchain Virtual Machine, ML Testing Engine, and Interface, as shown in Fig. 1.

Fig. 1 System architecture


2.1 Layer 1: VNWTS Testbed

The first layer includes a virtual clean water treatment testbed called VNWTS, which stands for Virtual Napier Water Treatment System, and includes sensors, actuators, a PLC, a SCADA system, an HMI, and novel Python code providing communication between the above components. The VNWTS testbed was designed, implemented, and evaluated during the UK COVID-19 lockdown, when accessing a physical testbed was not possible because of the strict restrictions. The testbed, which is explained below and fully documented in [14], was designed to let us collect a dataset based on energy-based features for anomaly detection in a given clean water treatment system. An alternative route would have been to use available data logs, such as those from the SWaT physical testbed [15]. However, the approaches proposed in [16–21] have not been followed in this paper, since they do not include any energy features and thus do not address our needs. Furthermore, the newly collected dataset includes novel attacks on system components such as level and temperature sensors and hot and cold pump controllers, as well as PLC memory attacks, including changing the level and temperature setpoints in the working memory, which are not present in the existing datasets. The VNWTS testbed, Fig. 2 (right), is implemented in Simulink, a MATLAB-based graphical programming environment, and emulates chlorine treatment of drinking water. Each component of this testbed is a virtual representation of a real element found in the MPA Compact Workstation Rig [22] shown in Fig. 2 (left), which represents a scaled-down version of a one-of-a-kind water treatment system. These virtual components have the same characteristics and dynamics as the physical elements of the MPA Compact Workstation Rig and include Pipes, a Pressure Vessel (×1), Pumps (×2), a Proportional Valve (×1), a Water Reservoir Tank (×1), Flow Sensors (×2), and Water Supplies (×2). The Pipes used in the virtual model have an 18.621 mm diameter. The Pressure Vessel acts like a normal pipe, but because of its different shape it creates a small decline in water pressure. The Pumps, which include a voltage supplier, a DC Motor,


Fig. 2 MPA compact workstation rig and virtual water treatment system

The Pumps, which include a voltage supplier, a DC motor, and a centrifuge pump, deliver fluid from the reservoir tanks to another tank (TANK1 in Fig. 2, right). The Proportional Valve simulates the water demand model of a small city. The Water Reservoir Tank is a virtual representation of the physical tank shown in Fig. 2 (left); this tank has the shape of a truncated pyramid. The Flow Sensors allow the rate of fluid to be measured at specific points of the virtual plant, and the two Water Supplies represent raw water and chlorine, respectively. Additionally, the VNWTS testbed employs a virtual SIMATIC S7-1500 PLC, which is available in the SIMATIC S7-PLCSIM Advanced V3.0 software distributed by SIEMENS. Using this software, the operation of this particular PLC and of its internal elements, such as input, output, working memory, and network functionalities, is successfully emulated. Furthermore, four PI controllers are implemented: two regulate the speed of the pumps delivering the raw water and chlorine, one regulates the delivery rate for each pump, and one regulates the water level in the reservoir tank. A PI controller is a control mechanism, derived from mechanical and electronic controllers, which combines two control techniques: proportional and integral. Moreover, a Python Communication Module acting as an OPC server is implemented to allow the exchange of information between the testbed components, for example between the PLC and Simulink, where the PLC receives the readings from the virtual sensors and controls the actuators such as the pumps discussed above.
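To make the control behaviour concrete, the following is a minimal sketch of a discrete PI controller of the kind described above. The gains, sampling interval, and variable names are illustrative assumptions, not the actual values used in the VNWTS testbed.

```python
# Minimal discrete PI controller sketch (illustrative gains and names only).
class PIController:
    def __init__(self, kp: float, ki: float, dt: float,
                 out_min: float = 0.0, out_max: float = 1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement          # input to the proportional term
        self.integral += error * self.dt        # accumulated error for the integral term
        output = self.kp * error + self.ki * self.integral
        # Clamp to the actuator range, e.g. a pump command between 0 and 1.
        return max(self.out_min, min(self.out_max, output))


# Example: regulate the reservoir tank level by adjusting a pump command.
level_controller = PIController(kp=0.8, ki=0.2, dt=0.1)
pump_command = level_controller.update(setpoint=0.5, measurement=0.42)
```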

2.2 Layer 2: Data Management

The second layer, Data Management, contains two components for gathering and handling the project data: Data Collection and Data Pre-processing. The Data Collection component gathers the energy traces of the sensors that compose the VNWTS testbed during the simulation run time. The value of each sensor is sampled every 0.1 s and saved to a file for later processing.


To make the model realistic, a water demand model of a small city over the seven days of a week, based on a real model of UK energy consumption, is implemented. This model is fully detailed in previous work [23] and is implemented in the proportional valve of the VNWTS, which is regulated according to the water demand: a fully open valve represents high water demand, while a slightly open valve represents low water consumption. Higher energy consumption is expected during high water demand because the speed of the pumps increases to maintain the level of the reservoir tank. During the benign and attack scenarios, the Data Collection component captures a unique dataset of 3,132,651 events with eight features (Cold Flow Rate, Hot Flow Rate, Temperature, Tank Level, Voltage in the warm water pump, Voltage in the cold water pump, Current in the warm water pump, and Current in the cold water pump), along with a Classification label (0 for benign and 1 for attack) and a Type of Attack label (attack on the level setpoint, attack on the temperature setpoint, or attack on multiple sensors).

The Data Pre-processing component is employed to improve the quality of the raw data previously gathered by the Data Collection module. This phase is extremely important, as it has a significant impact on the performance of the machine learning algorithms used in the upper layers. For instance, feature selection can greatly reduce the computational cost of building a predictive model while also improving its performance. Normalization, together with three popular feature selection techniques (information gain, chi-square, and Pearson's correlation), is chosen for the data pre-processing phase. The feature selection removed four features and thus reduced the total number of features from eight to four: Temperature, Tank Level, Cold Flow Rate, and Voltage in the cold pump, along with the Classification and Type of Attack labels.
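A minimal sketch of this pre-processing stage using scikit-learn is shown below. The column names mirror the features listed above, but the way the three selection scores are combined and the number of features kept are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch of the Data Pre-processing step: normalization plus three feature-scoring
# techniques (information gain, chi-square, Pearson's correlation). Assumes the raw
# traces are available as a pandas DataFrame with the eight features and a binary label.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import mutual_info_classif, chi2

FEATURES = ["cold_flow", "hot_flow", "temperature", "tank_level",
            "volt_warm_pump", "volt_cold_pump", "curr_warm_pump", "curr_cold_pump"]

def preprocess(df: pd.DataFrame, keep: int = 4) -> pd.DataFrame:
    X = pd.DataFrame(MinMaxScaler().fit_transform(df[FEATURES]), columns=FEATURES)
    y = df["label"]  # 0 = benign, 1 = attack

    info_gain = pd.Series(mutual_info_classif(X, y), index=FEATURES)
    chi_sq = pd.Series(chi2(X, y)[0], index=FEATURES)   # chi2 requires non-negative inputs
    pearson = X.corrwith(y).abs()

    # Combine the three rankings (simple average of per-technique ranks, an assumption).
    combined = (info_gain.rank() + chi_sq.rank() + pearson.rank()) / 3
    selected = combined.nlargest(keep).index.tolist()
    return pd.concat([X[selected], y], axis=1)
```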

2.3 Layer 3: ML Training Engine

The third layer includes an ML Training Engine in which the selected machine learning algorithms build models from sample data, also known as "training data", in order to make predictions or decisions (e.g., to classify an event as benign or malicious) without being explicitly programmed to do so. Three popular algorithms are chosen: Logistic Regression (LR), Support Vector Machine (SVM), and Artificial Neural Networks (ANN). 80% of the collected data is used for training and 20% for testing. In this layer, the ML Training Engine therefore passes the 80% split of the pre-processed data to the three ML models built by the above algorithms purely for training, as the name suggests. Simultaneously, the engine passes the remaining 20% to the next layer, the Blockchain Virtual Machine, where it is stored and set aside for the testing phase.
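A compact sketch of this training step is given below. It assumes the pre-processed data are already in arrays X and y; the hyperparameters, the MLP architecture standing in for the ANN, and the fixed random seed are illustrative assumptions rather than the configuration reported by the authors.

```python
# Sketch of the ML Training Engine: 80/20 split, then training LR, SVM and an ANN.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

def train_models(X, y):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)   # 80% training, 20% held back for testing

    models = {
        "LR": LogisticRegression(max_iter=1000),
        "SVM": SVC(kernel="rbf"),
        "ANN": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)

    # In the proposed architecture the held-back 20% would be handed to the
    # blockchain layer for storage and retrieved later by the ML Testing Engine.
    return models, (X_test, y_test)
```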


2.4 Layer 4: Blockchain Virtual Machine

This layer uses a blockchain-based virtual machine, such as Ethereum, hosting a smart contract that stores the testing data on a blockchain. The contract encompasses two functions, store() and get(). The former records "Cold Flow Rate, Hot Flow Rate, Temperature, Tank Level, Voltage in the warm water pump, Voltage in the cold water pump, Current in the warm water pump, Current in the cold water pump" on the blockchain. The get() function enables users to retrieve the records (testing data) from the blockchain. The reason for using a public blockchain here is to make the data available to users in a transparent way.
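The snippet below sketches how a client might call such a contract's store() and get() functions from Python using web3.py. The RPC endpoint, contract address, ABI, account, and the exact function signatures are placeholders for illustration; the contract described in the paper may differ, and the web3.py method names can vary slightly between library versions.

```python
# Sketch of interacting with the storage contract's store()/get() functions via web3.py.
from web3 import Web3

CONTRACT_ABI: list = []  # replace with the ABI emitted by the Solidity compiler
CONTRACT_ADDRESS = "0xYourContractAddress"  # placeholder address

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # e.g. a local node or a Ropsten endpoint
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)

def store_record(sample, account):
    """Push one testing-data record (the eight feature values) to the chain."""
    tx_hash = contract.functions.store(*sample).transact({"from": account})
    return w3.eth.wait_for_transaction_receipt(tx_hash)

def get_records():
    """Read the stored testing data back; a call() is free since it does not mine a transaction."""
    return contract.functions.get().call()
```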

2.5 Layer 5: ML Testing Engine

The fifth layer includes an ML Testing Engine in which the performance of the three fully trained LR, SVM, and ANN models is evaluated on testing data consisting of the remaining 20% of the pre-processed dataset. Accuracy, recall, F1 score, and precision are considered as the four main metrics to evaluate the performance of the built models. The ML Testing Engine's responsibilities are to (1) connect to the blockchain; (2) download the testing data stored previously by the ML Training Engine; (3) pass the testing data to the LR, SVM, and ANN models; and (4) capture the performance of the built models.

2.6 Layer 6: Interface

This layer enables users to communicate with the system and monitor its functionality as a whole. This includes monitoring that the VNWTS testbed functions correctly (e.g., ensuring that the Python Communication Module allows the exchange of information between testbed components such as the PLC and the sensors in order to control the pumps). It also monitors that the two components of the Data Management layer, Data Collection and Data Pre-processing, work properly; for example, it ensures that Data Collection captures the raw data from the VNWTS testbed, taking into consideration the pre-defined energy consumption features, and passes them to the Data Pre-processing component for normalization and feature selection, eventually making the data ready for the ML training and ML testing engines. Additionally, the interface layer observes the data split of 80% for the ML Training Engine and 20% for the ML Testing Engine, ensuring that the training split is successfully passed to the ML models built by the chosen algorithms while the testing split is successfully uploaded to the blockchain and downloaded later for testing purposes right after the successful completion of the training phase.


The interface is directly connected to a DApp so as to call the get function in the proposed smart contract and show the retrieved block contents.

3 Architecture Overview

The data flow between the different layers of the system is depicted in Fig. 3. These layers are the VNWTS Testbed (Layer 1), Data Management (Layer 2), the Machine Learning Engines (Layers 3 and 5), and the Blockchain Virtual Machine (Layer 4). The Layer 1 data flow between the system components (Control Station, HMI, PLC, and the Process Under Control, i.e., the water treatment system), which together form a SCADA system, is as follows. The Control Station loads the program that regulates the water treatment system into the PLC; the Control Station and the PLC communicate over a LAN. The PLC sends diagnostic information to the Control Station, for example confirming whether the program that regulates the water treatment system has loaded successfully (step 1 in Fig. 3). The Control Station enables the HMI to give directions to the SCADA system and receive feedback from system components such as the PLC. The HMI allows a human operator to control and monitor the water treatment process. The Control Station and the HMI also communicate over the LAN, and the HMI sends diagnostic information to the Control Station, for example confirming whether there is a communication issue between itself and the system components (step 2 in Fig. 3). The sensors associated with the system, such as the ultrasonic sensor, flowmeters, and pressure sensor, which are hard-wired to the PLC, provide the status of the water treatment process to the PLC. For example, the ultrasonic sensor provides the water level inside the B102 tank, while the flowmeters measure the volumetric flow in the pipes. The PLC implements control techniques such as PID, cascade, and feedforward control, which manage actuators such as pumps and valves based on the information received from the hard-wired sensors (step 3 in Fig. 3).

Fig. 3 Interactions within the architecture

The PLC sends information about the water treatment process to the HMI, so that line operators can ensure the process is working properly. The HMI is capable of controlling the behavior of the water treatment process by sending control signals to the actuators or modifying process variables such as setpoints (step 4 in Fig. 3). Ten system features, captured by the ultrasonic and flowmeter sensors and now forming a dataset with millions of benign and malicious events, pass from the VNWTS testbed to the Data Manager (step 5 in Fig. 3). The features are Cold Flow Rate, Hot Flow Rate, Temperature, Tank Level, Voltage in the warm water pump, Voltage in the cold water pump, Current in the warm water pump, Current in the cold water pump, the class feature (0 for benign and 1 for attack), and the Type of Attack (attack on the level setpoint, attack on the temperature setpoint, or attack on multiple sensors). The dataset then goes through a pre-processing phase in the Data Manager, which includes normalization along with three popular feature selection techniques (information gain, chi-square, and Pearson's correlation). The pre-processed dataset is then split into 80% for training and 20% for testing. The 80% split passes to the ML Engine for creating the machine learning models (using the LR, SVM, and ANN algorithms) and training them (step 6 in Fig. 3). The remaining 20% is stored on the blockchain (step 7 in Fig. 3) by deploying the contract and activating the store function (step 8 in Fig. 3). After the machine learning models are built and trained, this final 20% of the dataset, previously stored on the blockchain, is retrieved through the contract's get function and employed to test the ML Engine component (steps 9 and 10 in Fig. 3).

4 Experimental Results

The experiments have two parts: an adversarial machine learning evaluation and a blockchain-based evaluation. The latter estimates the gas required to deploy and execute the proposed smart contract and investigates the average time taken by the mining process.

4.1 Adversarial Machine Learning

For the experimental analysis, a water demand model for a small city, inspired by a real model of UK energy consumption and covering the duration of a week, is implemented. This includes both normal operation and malicious behavior of the VNWTS testbed. For the malicious scenarios, attacks on VNWTS system components, including the level and temperature sensors, the hot and cold pump controllers, and the PLC memory, are developed. The implemented attacks are categorized into three groups: attack on the level setpoint, attack on the temperature setpoint, and attack on multiple sensors.


Given the focus of this paper, which is anomaly detection based on energy consumption metrics, and following a comprehensive study and the authors' previous research in the field, eight energy-based features are considered and their values captured during the benign and malicious scenarios: (1) Cold Flow Rate, (2) Hot Flow Rate, (3) Temperature, (4) Tank Level, (5) Voltage in the warm water pump, (6) Voltage in the cold water pump, (7) Current in the warm water pump, and (8) Current in the cold water pump. The dataset also includes a binary classification ("0" for a benign and "1" for a malicious event) and a Type of Attack ("1" for an attack on the level setpoint, "2" for an attack on the temperature setpoint, and "3" for an attack on multiple sensors). In total, 3,132,651 malicious and benign events are captured, forming a unique energy-based dataset.

The dataset is pre-processed using normalization and three popular feature selection techniques (information gain, chi-square, and Pearson's correlation). While the former reduces data redundancy and improves data integrity, the latter reduce the number of input variables used when developing a predictive model. The feature selection techniques reduced the eight features to four: (1) Temperature, (2) Tank Level, (3) Cold Flow Rate, and (4) Voltage in the cold pump. The binary classification and Type of Attack labels remain unchanged.

The pre-processed dataset is passed to three popular machine learning algorithms, Logistic Regression (LR), Support Vector Machine (SVM), and Artificial Neural Networks (ANN), to build predictive models, each using 80% of the dataset for training and the remaining 20% for testing. Accuracy, recall, f1-score, and precision are considered as performance metrics for the three algorithms; however, only the f1-score and accuracy results are presented in this paper, given the page limitation and the fact that the f1-score already incorporates recall and precision in its calculation.

Regarding adversarial machine learning attacks, four categories are considered: random label flipping, targeted label flipping, the Fast Gradient Sign Method (FGSM), and the Jacobian Saliency Map Attack (JSMA). While the flipping techniques (both random and targeted) focus on the training data, FGSM and JSMA target the testing data. These attacks are applied against only one type of classification in the dataset, namely the binary classification ("0" for a benign and "1" for a malicious event); the impact of adversarial attacks on the multiclass classification, where the type of attack is known, will be discussed in future publications due to lack of space.

The f1-scores for the selected ML algorithms, captured after applying the random flipping, targeted flipping, FGSM, and JSMA attacks to the binary classification, are depicted in Figs. 4 and 5. Overall, SVM outperforms LR by resisting the targeted flipping attack for longer, while both algorithms show the same performance under random flipping (Fig. 4). Similarly, ANN shows longer resistance than LR against FGSM and JSMA (Fig. 5). The same pattern holds for the accuracy reductions of all three algorithms after the adversarial attacks: overall, ANN and SVM show longer resistance than LR against all the attacks (random and targeted flipping, FGSM, and JSMA), Figs. 6 and 7, and targeted flipping has a greater impact on accuracy than random flipping (Fig. 6).

Fig. 4 Comparing f1-score in random and target flipping for LR versus SVC

Fig. 5 Comparing f1-score in FGSM (left) and JSMA (right) for LR versus ANN
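To illustrate the two poisoning attacks used here, the sketch below flips training labels at random and in a targeted way. The flip fraction and the targeting rule are assumptions for illustration, not the settings used in the paper, and FGSM/JSMA would typically be applied with a dedicated library such as the Adversarial Robustness Toolbox rather than hand-rolled.

```python
# Sketch of the two label-flipping poisoning attacks applied to binary training labels.
import numpy as np

def random_label_flip(y: np.ndarray, fraction: float, seed: int = 0) -> np.ndarray:
    """Flip a random subset of binary labels (0 <-> 1)."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

def targeted_label_flip(y: np.ndarray, fraction: float,
                        target_class: int = 0, seed: int = 0) -> np.ndarray:
    """Flip only labels of a chosen class (e.g. relabel benign events as attacks)."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    candidates = np.flatnonzero(y == target_class)
    idx = rng.choice(candidates, size=int(fraction * len(candidates)), replace=False)
    y_poisoned[idx] = 1 - target_class
    return y_poisoned

# Example workflow: poison a fraction of the training labels, retrain the models, and
# compare f1-score/accuracy on the clean test split to observe the degradation
# reported in Figs. 4-7.
```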

4.2 Blockchain-Based Evaluation

A blockchain-based prototype using the Ethereum virtual machine and the Remix IDE is implemented in order to write and compile the proposed smart contract [24].


Fig. 6 Comparing accuracy in random and target flipping for LR versus SVC

Fig. 7 Comparing accuracy in FGSM (left) and JSMA (right) for LR versus ANN

The contract was written in Solidity, a popular programming language for encoding contracts on Ethereum [25]. A public Ethereum test network (Ropsten) was used to deploy the contract and its transactions on a blockchain network [26]. After the contract deployment, the amount of gas used for its execution was calculated as 244,340. (Gas is the fee required to successfully run a transaction or deploy a contract on the Ethereum blockchain; its price is expressed in wei or Gwei.) The average gas consumed was 52,700 for the store function and 35,455 for the get function. These results were calculated after executing each function five times with different parameters.

Table 1 presents the average costs and mining times for executing the transactions and creating blocks.

Table 1 Transaction costs and mining time

Gas price (Gwei)   | 80        | 160       | 320
Store (cost: ETH)  | 0.004     | 0.008     | 0.016
Store (cost: Gwei) | 4,637,600 | 8,432,000 | 16,864,000
Get (cost: ETH)    | 0.003     | 0.006     | 0.011
Get (cost: Gwei)   | 2,836,400 | 5,672,800 | 11,345,600
Mining time (s)    | 4857      | 300       | 28

The gas prices for the cheap, average, and fast modes for miners were, respectively, 80, 160, and 320 Gwei. These values were captured from ETH Gas Station (https://ethgasstation.info/), which reflects the miners' motivation, in terms of gas price, for executing transactions and creating blocks on the day of the experiment. The average costs for running the store function in ETH and Gwei are given in the table; the cost in Gwei is calculated as used gas × gas price (for example, 35,455 gas × 80 Gwei = 2,836,400 Gwei for the get function). The same evaluation was carried out for the get function. Because the store function contains more opcodes than the get function, its cost is higher. As the table shows, when the gas price increases, the average time taken to mine transactions and blocks falls sharply: at a gas price of 320 Gwei, miners create blocks in roughly 28 s on average.
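As a quick sanity check of the relationship above, the following few lines reproduce the cost calculation for the get function; the only assumption beyond the reported gas figure is the standard conversion of 1 ETH = 10^9 Gwei.

```python
# Cost in Gwei = gas used x gas price (Gwei); 1 ETH = 1e9 Gwei.
gas_used_get = 35_455
for gas_price_gwei in (80, 160, 320):
    cost_gwei = gas_used_get * gas_price_gwei
    cost_eth = cost_gwei / 1e9
    print(f"get(): {gas_price_gwei} Gwei -> {cost_gwei:,} Gwei ({cost_eth:.3f} ETH)")
# 80  Gwei -> 2,836,400 Gwei (0.003 ETH)
# 160 Gwei -> 5,672,800 Gwei (0.006 ETH)
# 320 Gwei -> 11,345,600 Gwei (0.011 ETH)
```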

5 Conclusion and Future Work

In this paper, an energy consumption-based machine learning approach built on a novel dataset is proposed to detect anomalies in a virtual model of a water treatment system named VNWTS, and its robustness against adversarial attacks is evaluated. The evaluation of the proposed anomaly detection approach against adversarial machine learning covers four attack categories (random label flipping, targeted label flipping, the Fast Gradient Sign Method, and the Jacobian Saliency Map Attack) applied to three popular machine learning algorithms: Support Vector Machine, Logistic Regression, and Artificial Neural Networks. Two popular metrics are considered for the performance comparison: f1-score and accuracy. Based on the captured results, the Artificial Neural Network and Support Vector Machine models resist all four attack categories for longer than Logistic Regression under both performance metrics, and targeted flipping has a greater impact than random flipping. It is concluded that, although the three algorithms show different levels of resistance in terms of f1-score and accuracy reduction, the proposed energy consumption-based machine learning approach, built on the novel energy-based dataset, is vulnerable to such attacks. A smart contract for logging data to and retrieving data from a blockchain network was deployed on Ropsten, and the results showed that an increase in the gas price leads to a noticeable decrease in the average mining time.



Future work will focus on implementing the architecture on the real testbed. Moreover, investigating the proposed method in more scalable and decentralized systems, using federated machine learning tools and multichain, remains another direction for future work.

Acknowledgements This research is supported by Edinburgh Napier University. The data presented in this study are available on request.

References

1. Semwal, P.: A multi-stage machine learning model for security analysis in industrial control system. In: AI-Enabled Threat Detection and Security Analysis for Industrial IoT, pp. 213–236. Springer, Cham (2021)
2. Analysis of Top 11 Cyber Attacks on Critical Infrastructure [Online]. https://www.firstpoint-mg.com/blog/analysis-of-top-11-cyber-attackson-critical-infrastructure/. Accessed 04 Nov 2021
3. U.S. Water Supply System Being Targeted by Cyber-Criminals [Online]. https://www.forbes.com/sites/jimmagill/2021/07/25/us-water-supply-system-being-targeted-by-cybercriminals/?sh=34c2aa4a28e7. Accessed 18 Oct 2021
4. Alhogail, A., Alsabih, A.: Applying machine learning and natural language processing to detect phishing email. Comput. Secur. 110, 102414 (2021)
5. Yuan, S., Wu, X.: Deep learning for insider threat detection: review, challenges and opportunities. Comput. Secur. 102221 (2021)
6. Raman, D.R., Saravanan, D., Parthiban, R., Palani, D.U., David, D.D.S., Usharani, S., Jayakumar, D.: A study on application of various artificial intelligence techniques on internet of things. Eur. J. Mol. Clin. Med. 7(9), 2531–2557 (2021)
7. Arif, J.M., Ab Razak, M.F., Mat, S.R.T., Awang, S., Ismail, N.S.N., Firdaus, A.: Android mobile malware detection using fuzzy AHP. J. Inf. Secur. Appl. 61, 102929 (2021)
8. Li, L., Rong, S., Wang, R., Yu, S.: Recent advances in artificial intelligence and machine learning for nonlinear relationship analysis and process control in drinking water treatment: a review. Chem. Eng. J. 405, 126673 (2021)
9. Jindal, R., Dahiya, D., Sinha, D., Garg, A.: A study of machine learning techniques for fake news detection and suggestion of an ensemble model. In: International Conference on Innovative Computing and Communications, pp. 627–637. Springer, Singapore (2022)
10. Faber, B., Michelet, G., Weidmann, N., Mukkamala, R.R., Vatrapu, R.: BPDIMS: a blockchain-based personal data and identity management system. In: Proceedings of the 52nd Hawaii International Conference on System Sciences, Hawaii, USA, pp. 6855–6864 (2019)
11. Barati, M., Rana, O., Petri, I., Theodorakopoulos, G.: GDPR compliance verification in Internet of things. IEEE Access 8, 119697–119709 (2020)
12. Kim, H., Park, J., Bennis, M., Kim, S.: Blockchained on-device federated learning. IEEE Commun. Lett. 24(6), 1279–1283 (2020)
13. Lu, Y., Huang, X., Dai, Y., Maharjan, S., Zhang, Y.: Blockchain and federated learning for privacy-preserved data sharing in industrial IoT. IEEE Trans. Industr. Inf. 16(6), 4177–4186 (2020)
14. Durazno, A.R., Moradpoor, N., McWhinnie, J., Porcel-Bustamante: VNWTS: a virtual water chlorination process for cybersecurity analysis of industrial control systems. In: 2021 14th International Conference on Security of Information and Networks (SIN), vol. 1, pp. 1–7. IEEE (2021)
15. Mathur, A.P., Tippenhauer, N.O.: SWaT: a water treatment testbed for research and training on ICS security. In: IEEE International Workshop on Cyber-Physical Systems for Smart Water Networks (CySWater), pp. 31–36 (2018)
16. Inoue, J., Yamagata, Y., Chen, Y., Poskitt, C.M., Sun, J.: Anomaly detection for a water treatment system using unsupervised machine learning. In: IEEE International Conference on Data Mining Workshops (ICDMW), pp. 1058–1065 (2017)
17. Goh, J., Adepu, S., Junejo, K.N., Mathur, A.: A dataset to support research in the design of secure water treatment systems. In: Critical Information Infrastructures Security, pp. 88–99 (2017)
18. Goh, J., Adepu, S., Tan, M., Lee, Z.S.: Anomaly detection in cyber physical systems using recurrent neural networks. In: IEEE 18th International Symposium on High Assurance Systems Engineering (HASE), pp. 140–145 (2017)
19. Schneider, P., Böttinger, K.: High-performance unsupervised anomaly detection for cyber-physical system networks. In: Proceedings of the Workshop on Cyber-Physical Systems Security and Privacy, pp. 1–12 (2018)
20. Yau, K., Chow, K.-P., Yiu, S.-M.: Detecting attacks on a water treatment system using one-class support vector machines. In: IFIP International Conference on Digital Forensics, pp. 95–108. Springer, Cham (2020)
21. Gomez, A.L.P., Maimo, L.F., Celdran, A.H., Clemente, F.J.G.: MADICS: a methodology for anomaly detection in industrial control systems. Symmetry 12(10), 1583 (2020)
22. MPS PA Filtration Learning System [Online]. https://www.festo-didactic.com/int-en/learning-systems/process-automation/mps-pa-stations-and-complete-systems/mps-pa-filtrationlearning-system.htm?fbid=aW50LmVuLjU1Ny4xNy4xOC4xMDgyLjQ3ODU. Accessed 18 Oct 2021
23. Robles-Durazno, A., Moradpoor, N., McWhinnie, J., Russell, G., Maneru-Marin, I.: Implementation and detection of novel attacks to the PLC memory of a clean water supply system. In: International Conference on Technology Trends, pp. 91–103. Springer, Cham (2018)
24. Ethereum [Online]. https://www.ethereum.org/. Accessed 10 Oct 2021
25. Solidity [Online]. https://solidity.readthedocs.io/en/v0.5.3. Accessed 10 Oct 2021
26. Ropsten Testnet Pow Chain [Online]. https://github.com/ethereum/ropsten. Accessed 10 Oct 2021

A User-Centric Evaluation of Smart Contract Analysis Tools in Decentralised Finance (DeFi) Gonzalo Faura, Cezary Siewiersky, and Irina Tal

Abstract Blockchain and smart contract technology have led to the creation of an alternative financial system called Decentralised Finance (DeFi), which has grown exponentially in the last year alone to a current value of $76B. Without a central custodian or regulator, non-technical users may find it difficult to assess the security of their favourite projects. In this trustless environment, can the current state-of-the-art smart contract analysis tools be used by non-technical users to protect investors from incurring losses and improve security in the space? In this paper, we review the literature focusing on well-known vulnerabilities of financial smart contracts and show the scale of successful DeFi attacks. By analysing the root causes of recent contract exploits, we assess the feasibility of detecting these vulnerabilities through automatic verification. We investigate 21 analysis tools for detecting vulnerabilities in smart contracts, with an in-depth evaluation of six tools: Slither, Mythril, DerScanner, Manticore, Oyente and Securify v2. The tools were evaluated for their efficiency and accuracy against a custom dataset containing 28 vulnerable and 16 healthy smart contracts and are ultimately rated based on how useful they may be from a DeFi user perspective. The results indicate that, while Slither received the highest rating, none of the existing tools can successfully assist DeFi users at present, due to a lack of reliability or simplicity for the targeted market.

Keywords Smart contracts · Solidity · Blockchain · Ethereum · DeFi · Security · Tools · Vulnerability analysis

G. Faura · C. Siewiersky School of Computing, Dublin City University, Dublin, Ireland e-mail: [email protected] C. Siewiersky e-mail: [email protected] I. Tal (B) Lero, School of Computing, Dublin City University, Dublin, Ireland e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 C. Onwubiko et al. (eds.), Proceedings of the International Conference on Cybersecurity, Situational Awareness and Social Media, Springer Proceedings in Complexity, https://doi.org/10.1007/978-981-19-6414-5_25


Fig. 1 Total value locked in DeFi as of July 2021 (Source defipulse.com)

1 Introduction

Blockchain, or Distributed Ledger Technology, is still in its infancy and is evolving at a rapid pace. It was initially brought to the spotlight with Bitcoin [1] in 2009 and was empowered by the Ethereum Network [2] when it launched in 2015, as it allowed smart contracts, immutable decentralised applications, to run on its network. With the use of smart contracts, new applications for blockchain technology have emerged. A prominent use case has been that of Decentralised Finance (DeFi), where users can operate financial instruments such as lending, borrowing and the exchange of assets without having to rely on centralised intermediaries such as banks, exchanges or brokerages. The DeFi ecosystem, as can be seen in Fig. 1, has grown at an incredibly fast pace in the past years, from a total value locked (TVL) of $3B in July 2020 up to $76.84B in July 2021 according to DeFi Pulse,1 and it is heavily reliant on the security of the smart contracts that make the ecosystem possible. As with other technologies, innovation brings risks, because the improvements often cannot cover all possible attack scenarios that may be uncovered as time passes. As a consequence, many smart contracts have become vulnerable to attacks due to flaws in the smart contract's source code (e.g. software bugs), usability issues (e.g. issues with the custody of private keys), DeFi protocol attacks (e.g. exploitation of a price oracle affecting another protocol) or attacks on the underlying network (e.g. mempool congestion). Based on the above-mentioned figures, the consequences of an attack on or exploit of smart contracts in the DeFi space can lead to losses topping tens of millions of USD. Some recent examples of such consequences are an exploit of the DAI lending pool on Yearn Finance2 in early February 2021 (11 million USD) or the exploit of a simple math bug introduced to

1 https://defipulse.com/.
2 https://blog.defiyield.app/yearn-finance-exploit-explained-a10b07c280c8.


the UraniumPair contracts used by Uranium Finance3 at the end of April 2021, which led to losses of up to 57 million USD. Many static and dynamic smart contract analysis tools have been developed over the past few years to automate the analysis of smart contracts and to simplify the detection of vulnerabilities. These tools are actively used by smart contract developers and auditors. During our research, we noticed that there are many studies that identify smart contract vulnerabilities and provide an overview of smart contract analysis tools. This paper focuses on researching the vulnerabilities of smart contracts and extending them with vulnerabilities related to the DeFi space. The paper also focuses on evaluating the existing smart contract analysis tools based on how useful they can be for DeFi users. It is important to note that, to the best of our knowledge, there is currently no research that covers testing DeFi contracts with analysis tools and evaluating them at a user-centric level. Hence, in this paper, we set out to answer the following research questions:

RQ1. What analysis tools are currently available for the verification and detection of vulnerabilities in Solidity smart contracts, and how do they compare?

RQ2. To what extent can existing tools empower DeFi users to verify the trustworthiness and security of the smart contracts that will handle their investment?

In order to address these questions, a thorough review was performed on the state of the art of smart contract analysis tools, followed by a selection of the most representative ones using well-defined criteria. We then tested the selected tools against a custom dataset of vulnerable and healthy smart contracts and evaluated them not only with well-known metrics used in the literature but also against custom metrics defined in this research from the perspective of DeFi users. This paper is organised as follows. Section 2 presents the related work with a focus on identifying the most relevant smart contract analysis tools, Sect. 3 details the research methodology, while Sect. 4 analyses the results. Section 5 draws the conclusions.

2 Related Work

The literature review focuses on understanding the vulnerabilities of smart contracts, with a particular emphasis on the DeFi space, and on identifying the existing Solidity smart contract analysis tools that can be used to identify and/or prevent such vulnerabilities. Smart contracts have only a short history, having first been introduced in 2015 when the Ethereum Network [2] was launched. Since then, many other blockchain networks that also support smart contracts have been launched, such as Tron,4 EOS,5

2 Related Work The literature review focuses on understanding the vulnerabilities of smart contracts, with a particular emphasis on the DeFi space, and identifying the existing Solidity smart contract analysis tools that can be used to identify and/or prevent such vulnerabilities. Smart contracts have a short lifetime since they were first introduced in 2015 when the Ethereum Network [2] was launched. Since then, many other blockchain networks have been launched that also support smart contracts such as Tron,4 EOS,5 3

https://www.coindesk.com/binance-smart-chain-defi-exchange-uranium-finance-exploit. https://tron.network/static/doc/white_paper_v_2_0.pdf. 5 https://github.com/EOSIO/Documentation/blob/master/TechnicalWhitePaper.md. 4


Terra6 or Solana,7 etc. In this paper, we focus on Ethereum smart contracts written in Solidity due to their high adoption and their exponential growth in the Decentralised Finance space over the past year.

2.1 Smart Contract Vulnerabilities

Despite the few years since smart contracts were introduced, there is a wealth of literature available that describes and analyses the security of Solidity smart contracts. In the literature, there seems to be a consensus around the usefulness of the taxonomy of vulnerabilities proposed in [3], where vulnerabilities are organised in a custom taxonomy based on whether the root cause lies in the Blockchain, the EVM or the Solidity language (i.e. a software bug). In describing the taxonomy, [3] also provides examples of actual attacks that exploited smart contract vulnerabilities for each class (e.g. matching the reentrancy vulnerability and the DAO Hack [4]). This taxonomy is largely adopted in the literature (e.g. [5–7]) and has also been extended. In [7], the taxonomy was extended with severity levels associated with the vulnerabilities, ranging between 1 (low severity) and 3 (high severity). There are other taxonomies proposed in the literature, such as the one proposed in [8]. This taxonomy is built around the software lifecycle and defines three broad classes of vulnerabilities depending on the main cause behind the vulnerability: the Solidity language, the Blockchain platform and misunderstanding of common practices. In [9], another taxonomy is introduced that classifies smart contract vulnerabilities or exploits into four categories: malicious acts, weak protocol, defraud and application bugs. In [10], in addition to investigating formal verification methods, the authors proposed different approaches to verifying smart contracts and itemised a very comprehensive list of vulnerabilities known at the time of its submission in early 2020. A smaller subset of vulnerabilities is described in other papers such as [11, 12] and [9], which list Reentrancy, Timestamp dependence, Integer Overflow/Underflow, Revert-based DoS, Dependence on transaction ordering, Throwing an uncontrollable exception, Linearizability (a.k.a. call to the unknown), SC misbehaviour (i.e. business logic issues), Short address attack, Delegate Call and Default visibilities. However, the only vulnerabilities common to all three papers are Reentrancy and Timestamp dependence. Finally, in [13], a number of vulnerabilities are listed, but the focus of the research is on a random number vulnerability in Fomo3d-like game contracts, as well as the attack and defence methods applied. The discrepancy in the vulnerabilities included in the different studies shows that Solidity is a continuously evolving language and that new vulnerabilities are being found (and introduced) as time progresses, which highlights the importance of a centralised source for listing smart contract vulnerabilities.

6 https://www.terra.money/Terra_White_paper.pdf.
7 https://solana.com/solana-whitepaper.pdf.


The Smart Contract Weakness Classification Overview (SWC) [14] is a live, open-source database in which a comprehensive list of vulnerabilities, with examples of vulnerable contracts and their fixed equivalents, can be found. This seems to be the most comprehensive classification of smart contract vulnerabilities, and hence we have decided to use it in our research.

2.2 Decentralised Finance (DeFi) and Related Smart Contract Attacks

DeFi is an alternative financial system in which there is no need for intermediaries to access the cryptocurrency equivalents of a bank account (i.e. wallets), borrowing and lending services [15], and other products such as yield-bearing deposit accounts. As mentioned in [16], DeFi "aims to disrupt finance by replacing the security and integrity provided by traditional central bookkeepers such as banks with trustless, transparent protocols executed by a smart contract". When conducting our research, apart from numerous articles, blogs and white papers, we found three academic papers which focus specifically on DeFi smart contracts, [12, 17] and [16], and realised that DeFi smart contracts face an additional attack vector arising from the complex integrations between the different existing DeFi protocols, which can enable smart contract exploits. In this subsection, we provide a brief overview of the challenges and some of the attack vectors in DeFi, analyse recent attacks in the space, and provide a summary of the DeFi attacks between April and July 2021, which led to a total loss of USD 343.6 m.

As already noted in [16], where attacks specific to the DeFi space are analysed, DeFi protocols are risky and highly complex. The vulnerabilities can stem from multiple factors, such as flawed financial and business logic, mathematical errors, code errors, errors in protocol implementation, mismanagement of private keys and others. In their research, the authors built a dataset of exploited vulnerabilities and gathered a list of incidents up to November 2020.

The authors of [12] focused specifically on one of the biggest lending protocols, Maker. By conducting tests and research, they discovered additional vulnerabilities related to the logic of the protocol. All three popular attack types (Flashloan, governance and exploitation of minting) could potentially happen. They place a particular emphasis on an attack on the governance mechanism of the Maker protocol.

Based on multiple blogs, articles and [16], early attacks were widespread due to early software weaknesses. This was caused both by the unreliability of early versions of Solc, the Ethereum compiler,8 and by the fact that early adopters were learning a new technology and developers lacked knowledge of serious vulnerabilities. These weaknesses allowed many successful attacks, including the well-known DAO attack [4] in 2016. Since then, we have witnessed a significant contribution from the smart contract community, developers and academics, which has allowed the development of well-vetted

8 https://blog.trailofbits.com/2019/01/18/empire-hacking-ethereum-edition-2.


code solutions for different scenarios and usages. An example of such resources is the library for secure smart contract development offered by OpenZeppelin.9 Developers are encouraged to use it instead of creating their own code from scratch. Doing so will not only shorten the time needed to create a functioning contract, but will also more likely lead to safer, better-quality code. Moreover, Solidity, the Ethereum programming language, is becoming more mature and has received many security improvements in the last few years. There are also more educational resources, official documentation and web posts which describe various known and recently discovered vulnerabilities and techniques for avoiding or mitigating them.

The DeFi space is especially vulnerable to many attack vectors, and a successful breach of a protocol can potentially bring attackers substantial economic benefits. This was one of the reasons why several companies started to specialise in conducting security audits for smart contracts. However, as we have learnt, a positive result of an audit, or in some cases multiple audits, does not guarantee that a DeFi protocol is safe and that attacks are impossible. In many cases, the protocols were audited (in some cases by more than one audit company, e.g. Alpha Finance audited by QuantStamp and Peckshield)10 and still fell victim to attackers, who managed to find and exploit vulnerabilities. In the Alpha Finance case, a carefully crafted Flashloan attack was successfully executed and USD 37.5 m was lost. Even a high security rating awarded by smart contract experts does not assure the safety of funds, and users may lose their funds even days after a positive audit. Another example is the exploit of USD 30 m from the Spartan Protocol after a positive audit by PeckShield Inc.11 In Table 1, we list several such attacks which happened from mid-April until the end of July 2021; many of the affected protocols had passed one or more audits.

Looking at the post-mortem analyses of recent DeFi exploits, it is clear that many such attacks were possible only due to private key mismanagement, where the owner of such a secret could withdraw the funds to any address. A separate type of attack, a code error (or human error), can happen when a developer makes a relatively simple mistake. The problem in this case is partially related to the fact that almost all DeFi protocols are maintained as open-source libraries. Code from one, usually well-established, protocol can be changed and used in another project. Such modifications are sometimes the root cause of the attacks (e.g. Uranium Finance, where USD 57 million was lost). These cases are usually not conclusive: there is often speculation that certain modifications could have been made deliberately and the knowledge shared with potential attackers. It has even happened that developers included a comment in the code hinting at a potential vulnerability, the code was used anyway, and a later bad actor only needed to exploit this weakness (for example, the Thorchain attack from July 2021, where a non-standard fake-token strategy was used and USD 8M was lost).

9 https://docs.openzeppelin.com/contracts/2.x.
10 https://github.com/AlphaFinanceLab/homora-v2/tree/master/audits.
11 https://peckshield-94632.medium.com/the-spartan-incident-root-cause-analysis-b14135d3415f.


Table 1 List of successful DeFi attacks in a 3-month period in 2021

Date | USD amount | Protocol | Attack type
20/04/2021 | 53,000,000.00 | EASY Finance | Private key
28/04/2021 | 57,000,000.00 | Uranium Finance | Code flaw in forked smart contract
02/05/2021 | 30,000,000.00 | Spartan protocol | Flawed logic + flashloan
06/05/2021 | 7,000,000.00 | Value DeFi 1 | Human error
08/05/2021 | 11,000,000.00 | Value DeFi 2 | Code error
08/05/2021 | 10,000,000.00 | RARI Capital | Fake token
12/05/2021 | 24,000,000.00 | Xtoken | Flashloan
17/05/2021 | 18,000,000.00 | bEarnFi | Error in code logic + Flashloan
20/05/2021 | 45,000,000.00 | Bunny Finance | Flashloan
25/05/2021 | 745,000.00 | Autoshark | Flashloan
26/05/2021 | 680,000.00 | Merlin lab | Flashloan
27/05/2021 | 550,000.00 | Merlin lab | Flashloan
28/05/2021 | 7,200,000.00 | BurgerSwap | Removed check in code
29/05/2021 | 6,300,000.00 | Belt Finance | Flashloan
17/06/2021 | 170,000.00 | Iron Finance | Minting bug
18/06/2021 | 6,500,000.00 | Alchemix | Removed check in code
23/06/2021 | 4,500,000.00 | Eleven Finance | Flashloan
24/06/2021 | 27,000,000.00 | StableMagnet | Private key
28/06/2021 | 250,000.00 | SafeDollar | Bug in reward mechanism
29/06/2021 | 330,000.00 | Merlin lab | Minting bug
13/07/2021 | 7,900,000.00 | Anyswap | Error in private key generation
13/07/2021 | 5,200,000.00 | chainswap | Minting bug
15/07/2021 | 5,900,000.00 | Bondly Finance | Minting bug
16/07/2021 | 5,000,000.00 | THORChain | Bug in DeFi bridge code
18/07/2021 | 2,400,000.00 | PancakeBunny | Flashloan + minting bug
26/07/2021 | 8,000,000.00 | ThorChain | Fake token which upon approval drains wallets using tx.origin
Total: 343,625,000.00 | Average: 13,216,346.15

2.3 Smart Contract Analysis Tools

As with the smart contract vulnerabilities, there is extensive academic literature on the available smart contract analysis tools. While some publications simply mention the tools and provide a brief description, as in [8, 9, 11, 13, 18] or [10], the authors of other publications such as [5–7, 19, 20] or [10] focus on testing whether the selected tools can find the vulnerabilities in the datasets used. While there are papers that describe specific tools, such as FairCoin [21], Ethainter [22], Gasper [23], SmartCheck [24], Oyente [25], Mythril [26], Slither [27] or Securify [28], in this section we focus on the literature that describes and/or analyses a range of tools.


In general, as with the vulnerabilities, there is a lack of consistency in the academic literature regarding the tools identified by the different papers. For example, [13] includes Oyente, Porosity and Mythril and compares their advantages and disadvantages; [8] mentions Oyente, Securify, Zeus, Octopus, Teether, Mythril, SmartCheck and Manticore, with the main emphasis on Oyente; [11] briefly mentions some tools, including Mythril, Solc-verify and others outside the scope of our research; and the only relevant tools for our research mentioned in [18] are Oyente, Zeus and Scilla. In [10], the different methods for analysing the security of smart contracts are well explored and detailed. Such methods include both verification of correctness and the use of vulnerability detection tools that rely on symbolic execution (Oyente, Maian, SmartCheck, Slither, Gasper, Mythril and Osiris), abstract interpretation or fuzzing techniques. The work also summarises which vulnerabilities each tool is able to detect as well as the type of analysis (static/dynamic) used.

Beyond the above studies, which do not run any tests on the tools, we can highlight a few papers such as [19], which tested the effectiveness of six tools (Oyente, Securify, Mythril, SmartCheck, Manticore and Slither) using a custom dataset of 50 contracts that contained 9369 distinct bugs. The tools were evaluated based on their effectiveness (i.e. how many bugs they were able to detect) and accuracy (i.e. how many false positives were found). As part of their work, the authors introduced a technique for systematically evaluating Ethereum smart contract analysis tools by bug injection. The study presented in [5] focuses on testing the effectiveness of detecting EVM bytecode vulnerabilities, as well as the tools' performance when analysing contracts directly from the Ethereum Mainnet. They evaluated three tools (Echidna (a fuzzer), Mythril and Manticore) using two sets of contracts available online, from Capture the Ether12 and Ethernaut,13 which already contain vulnerable code. In [6] and [9], the authors do not run custom tests to evaluate the efficiency of the tools discussed. However, they provide a valuable comparison of the vulnerabilities expected to be covered by those tools based on their documentation, using the taxonomies from [7] and [3] as a comparison method. The authors concluded that "some tools, such as Remix and Securify focus on severe vulnerabilities only, while others, such as SmartCheck, Mythril, and Oyente, feature a more comprehensive set of vulnerability detection capabilities" [6]. The conclusion in [6] is that most vulnerabilities are caused by human error and that the tools can assist with early detection and prevention. However, as highlighted in [9], the tools can give a false sense of security, as not all vulnerabilities can be found by any one tool. The tests carried out in [7] were run using 23 vulnerable and 21 audited smart contracts to test effectiveness. They evaluated four tools (Securify, Oyente, Remix and SmartCheck) based on their accuracy, effectiveness and consistency and found that SmartCheck excels in identifying all of the vulnerabilities.

12 https://capturetheether.com/.
13 https://ethernaut.openzeppelin.com/.


We took a particular interest in the thorough research carried out in [20], where the authors tested 9 analysis tools (Honey Badger, Maian, Manticore, Mythril, Osiris, Oyente, Securify, Slither and SmartCheck) using a dataset of 47,587 smart contracts sourced from the Ethereum Mainnet. They developed an execution framework called SmartBugs to simplify using different tools to test the same contracts with little effort, and which allows new tools to be added as "plugins" based on Docker images. A subset of the dataset called SBcurated, with 69 vulnerable contracts containing 115 vulnerabilities, was also used to validate the effectiveness of the tools. Surprisingly, the tools (together) were only able to detect 42% of the vulnerabilities in the SBcurated dataset, which clearly shows the need for better tools and detection mechanisms.

It is important to note that many smart contracts in the DeFi space pass through security audits before their go-live date. Studies such as [7] have used audited contracts in order to remove false positives while testing the above tools. There are also studies that focus on formal verification of smart contracts [29], proposing to analyse and verify smart contracts for functional correctness and safety by translating them from Solidity into F* [30], or that use model checking for verification of smart contracts by translating the Solidity code into a SPIN model [18]; these approaches provide substantial insight into underlying issues of the source code but do not fit our scope, as they are limited to runtime evaluation. Finally, it is worth noting that none of the tools has been developed to identify business-logic-related vulnerabilities such as the ones described in [12] and [16] and discussed above, for example Flashloan attacks, which are of the utmost importance in decentralised finance.

Table 2 summarises all the tools that we initially considered for our evaluation on the basis of the literature review conducted. Further criteria were applied to these tools to select the final set used in our analysis, as described in the next section.

3 Research Approach

This section describes the methodology employed to address the aforementioned research questions. We first describe the selection of the tools that were evaluated and the criteria behind this selection. We then discuss the testing dataset in terms of the vulnerabilities it illustrates.

3.1 Methodology Used for Selecting Tools

In this paper, we focus on a number of tools which are not only available to us, but also relatively easy to use and provide clear, understandable output. This approach stems from the fact that we want to rate and assess each tool from the perspective of a user who is a potential DeFi investor and does not have the deep technical knowledge needed to inspect and verify the code of the contracts. Table 3 shows the types of tools we considered for testing.


Table 2 Full list of analysis tools

No. | Tool candidate | Brief description | Open source/proprietary (O/P) | Included for testing [Y/N] (excln criteria)
1 | ZeppelinOS (OpenZeppelin SDK)a | Provides security products to build, automate and operate decentralised applications | O/P | N (B)
2 | SolCoverb | Analyses Ethereum code to find common vulnerabilities | O | N (B)
3 | Oyentec | Static analysis of Solidity source code for security vulnerabilities and best practices | O | Y
4 | SmartCheck [24] | Fully automated static analyser for smart contracts, providing a security report based on vulnerability patterns | O | Y
5 | Securify v2 [29] | Securify 2.0 is a security scanner for Ethereum smart contracts. Version 1.0 is discontinued | O | Y
6 | Remixd | Web-based IDE with add-ins | | N (B)
7 | Mythril [27] (ConsenSys) | Security analysis tool for EVM bytecode. It uses symbolic execution, SMT solving and taint analysis to detect a variety of security vulnerabilities. It is also used (in combination with other tools and techniques) in the MythX security analysis platform | O | Y
8 | MythXe | MythX is a professional-grade cloud service that uses symbolic analysis and input fuzzing to detect common security bugs and verify the correctness of smart contract code. Using MythX requires an API key from mythx.io | P | N (A)
9 | Octopusf | Security analysis tool for blockchain smart contracts with support of EVM and (e)WASM | O | N (B)
10 | Slither [28] | Static analysis framework with detectors for many common Solidity issues. The tool also detects code optimizations and summarises contract information | O | Y
11 | Contract-Libraryg | Decompiler and security analysis tool for all deployed contracts | | N (B)
12 | Echidnah | The only available fuzzer for Ethereum software. Uses property testing to generate malicious inputs that break smart contracts | O | N (B)
13 | Manticore [31] | Dynamic binary analysis tool with EVM support | O | N (B)
14 | sFuzzi | Efficient fuzzer inspired by AFL to find common vulnerabilities | O | N (D)
15 | Vertigoj | Mutation testing for Ethereum smart contracts | O | N (B)
16 | FairCoin [2] | Tool aims to verify the fairness in smart contracts | P | N (A)
17 | Ethainter [11] | Authors argue that its "balance of precision and completeness offers significant advantages over other tools such as Securify, Securify2, and teEther" | P | N (A)
18 | F* [17] | Functional programming language | O | N (A)
19 | Gasper [16] | Gas cost analysis. Not available publicly anymore | P | N (A)
20 | DerScannerk | Commercial, static code analyser capable of identifying vulnerabilities and undocumented features in source code and binaries | P | Y
21 | EY smart contract review tooll | Online tool to analyse vulnerabilities in smart contracts. Currently only valid for ERC20 contracts | O | N (E)

a https://openzeppelin.com/sdk/
b https://github.com/sc-forks/solidity-coverage
c https://github.com/enzymefinance/oyente
d https://remix.ethereum.org/
e https://mythx.io/
f https://github.com/quoscient/octopus
g https://contract-library.com/
h https://github.com/trailofbits/echidna
i https://sfuzz.github.io/
j https://github.com/JoranHonig/vertigo
k https://derscanner.com/
l https://review-tool.blockchain.ey.com/


Table 3 Solidity analysing tools

Description | Count
Total number of tools | 21
Open-source tools | 15
No. of commercial tools where access was temporarily granted | 1
Proprietary tools where access is free upon registration | 1
No. of tools that were selected for further testing | 6

Below are the prerequisites (acceptance criteria) that were used to determine whether a tool should be used in the evaluation stage:

A. The tool is available to us. Some of the tools are not available to us free of charge or have been discontinued.
B. The tool is available to general users, does not require additional software such as an IDE, and generates output with warnings and recommendations. We do not focus on tools intended for developers only, e.g. SDKs or add-ins to IDEs, or tools that do not generate a report of vulnerabilities.
C. The tool is relatively new. Because the smart contract space is new and fast evolving, we decided to test tools that were created or updated in the last 3 years (latest updates on GitHub no older than 2018).
D. Successful installation and successful run. In our approach, we aimed to simulate a user with some knowledge of smart contracts but without a deep understanding of the code. For each tool, we took the fewest possible installation steps to run the software and produce a relevant output.
E. The tool is usable for all Ethereum smart contracts. Tools that can only be used against a specific type of smart contract (e.g. ERC20 tokens) have been excluded.

As can be seen in Table 2, the right-most column shows in brackets the acceptance criterion used to exclude each of the tools. Following the application of the acceptance criteria, six tools were selected for the tests and rating process explained in the following section.

3.2 Survey

In addition to the above criteria, we also sought feedback from the community of Solidity smart contract developers and auditors by publishing a survey that allowed participants to rate the usefulness of each tool and to suggest any tools that may not have been listed initially. Only nine participants took part in the survey. From the feedback received, 45% of the participants had less than 2 years' experience working with smart contracts, 33% between 2 and 5 years, and only 22%


had been working with smart contracts for longer than 5 years, as can be seen in Fig. 2. We asked participants to rate the tools based on their experience, from 0 (no previous experience) to 5 (very useful); the outcome can be seen in Fig. 3. Surprisingly, 78% of participants had never used Oyente despite its widespread presence in academic research, making it the least used of the tools presented, followed closely by DerScanner, Echidna and Ethainter at 67%. On the other hand, Mythril, MythX, Remix and Slither were rated as the most useful by the participants.

Fig. 2 Participants’ experience working with Ethereum smart contracts

Fig. 3 Participants’ opinions on the tools


The survey shows that most participants have used more than one tool, a clear sign that no single tool can find all the vulnerabilities in a smart contract and that using several tools therefore increases the number of vulnerabilities identified. It also gives a developer better assurance that the code does not manifest obvious problems.

3.3 Evaluated Tools

Based on the insights of the survey and the exclusion criteria above, the final tools that we considered for our tests are: Slither, Mythril, DerScanner, Oyente, Securify v2 and SmartCheck.

Slither [27]: A static analysis framework written in Python. The tool uses a multi-stage approach: the source is first translated into the Abstract Syntax Tree generated by the compiler, then into an intermediate representation (SlithIR) and finally into Static Single Assignment (SSA) form for analysis.

Mythril [26]: The tool employs a symbolic execution back-end called LASER-Ethereum. LASER organises programme states via Control Flow Graphs (CFG), which allows the use of path formulas, a logical method of encoding the constraints along each execution path.

DerScanner: A commercial, static application code analyser capable of identifying vulnerabilities and undocumented features. The product can scan and analyse tens of different programming languages, including compiled binary files.14

Securify [28]: The tool is supported by the Ethereum Foundation and uses compliance and violation patterns, written in a domain-specific language, to detect safe and unsafe behaviour. Semantic facts are then compared with security patterns derived from known attacks and smart contract best practices.

Oyente [25]: The tool uses symbolic execution to find vulnerabilities through a multi-step process. The last component, the Validator, is used to remove false positive results.

SmartCheck [24]: An extensible static analysis tool implemented in Java. The tool translates contract code into an XML-based intermediate representation (IR), which is then used to detect vulnerability patterns specified as XPath expressions.

It should be noted that we initially included three more tools in the assessment but encountered difficulties running the scans. They either produced results that were not readable for non-developers (Manticore) or displayed error messages related not to the Solidity contracts but to internal tool libraries (sFuzz). sFuzz (renamed Contract Guard) is an online tool with a clear and user-friendly interface; unfortunately, any attempt to scan contracts outside of the demo samples failed at the initialisation phase. Another tool that we tested but excluded was Echidna. It is an effective property-based fuzzing tool; however, it requires test cases and test conditions to be prepared in advance, which breaches one of our prerequisites.

14 https://derscanner.com/.
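To illustrate the kind of up-front preparation Echidna expects, the sketch below shows a property-based test in the style the tool consumes: invariants are written as Solidity functions prefixed with echidna_ that return a boolean, and the fuzzer searches for call sequences that make them return false. The contract and property names are hypothetical examples, not samples from our dataset.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical contract under test: a token with a hard supply cap.
contract CappedToken {
    uint256 public constant CAP = 1000000;
    uint256 public totalSupply;
    mapping(address => uint256) public balanceOf;

    function mint(uint256 amount) external {
        require(totalSupply + amount <= CAP, "cap exceeded");
        totalSupply += amount;
        balanceOf[msg.sender] += amount;
    }
}

// Echidna repeatedly calls mint() with fuzzed arguments and reports the
// property as broken if it ever observes a false return value.
contract CappedTokenTest is CappedToken {
    function echidna_supply_never_exceeds_cap() public view returns (bool) {
        return totalSupply <= CAP;
    }
}
```

Writing and maintaining such properties for every contract is precisely the advance effort that conflicted with our prerequisite of minimal setup.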


Table 4 Vulnerabilities in dataset and their SWC/CWE references

Vulnerability type | Vulnerabilities in dataset | SWC Ref. | CWE Ref.
Overflow/underflow | 8 | 101 | 682
Unchecked calls | 1 | 104 | 252
Unprotected Ether withdrawal | 7 | 105 | 284
Reentrancy | 2 | 107 | 841
tx.origin | 1 | 115 | 477
Improper cryptographic signature | 1 | 117 | 347
Improper constructor name | 2 | 118 | 665
Gas griefing | 1 | 126 | 691
Irrelevant code | 2 | 131 | 1164
Unexpected Ether balance | 1 | 132 | 667
Function transfer/send or hardcoded gas | 4 | 134 | 655


3.4 Vulnerabilities Dataset

A custom vulnerability dataset was built based on samples of identified vulnerabilities from [14] and the ConsenSys known attacks site.15 Our initial objective was to build a set of vulnerabilities exploited in recent attacks on DeFi contracts. We analysed multiple attacks from the current year (2021) using several crypto news portals and a site specifically dedicated to reporting attacks in the crypto space.16 Recent audit reports were also reviewed in search of typical problems and bugs. Only after this work did we realise that the usual vulnerabilities detectable by analysis tools were not the cause of those incidents and that the tools would not have prevented the exploits from happening. For the dataset, we therefore decided to use a selection of vulnerabilities that are applicable to all smart contracts, not exclusively to DeFi. The vulnerability dataset can be divided into the categories listed in Table 4. The prepared dataset contains a total of 44 smart contracts, of which 28 contained 30 confirmed vulnerabilities and 16 were considered healthy. Results from our tests with these vulnerabilities can be found in Table 5.

15 https://consensys.github.io/smart-contract-best-practices/known_attacks.
16 https://www.rekt.news.
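The dataset files themselves are not reproduced here; as a purely illustrative example of one of the largest categories in Table 4, the hypothetical fragment below contains an unprotected Ether withdrawal (SWC-105/CWE-284), the kind of flaw the tools were expected to flag.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative only: withdraw() performs no access control check,
// so any caller can drain the contract's balance (SWC-105 / CWE-284).
contract OpenWallet {
    address public owner;

    constructor() {
        owner = msg.sender;
    }

    receive() external payable {}

    // Vulnerable: missing a check such as require(msg.sender == owner).
    function withdraw() external {
        payable(msg.sender).transfer(address(this).balance);
    }
}
```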

Table 5 Tools rating, based on our experience and test results

Tool | Installation | Additional features | Effectiveness | Liveliness | Friendly output | Open source | Accuracy | Total | Total %
Slither | 4 | 5 | 2 | 5 | 5 | 5 | 4 | 30 | 86
Mythril | 4 | 5 | 3 | 3 | 5 | 5 | 4 | 29 | 83
DerScanner | 5 | 5 | 1 | 2 | 5 | 1 | 4 | 23 | 66
Oyente | 5 | 3 | 2 | 3 | 3 | 5 | 3 | 24 | 69
Securify v1/v2 | 3 | 2 | 1 | 2 | 4 | 2 | 4 | 18 | 51
SmartCheck | 3 | 1 | 1 | 1 | 2 | 2 | 4 | 14 | 40



4 Evaluation of Results

4.1 Evaluation Criteria

To evaluate the tools, we not only use well-known metrics for effectiveness and accuracy but also propose a set of custom user-centric metrics focused on the usefulness of the tools for DeFi users. For each tool, we assigned a rating from 1 (lowest) to 5 (highest) in each of the categories below:

• Installation: evaluates how easy the process of installation and usage is.
• Additional Features: evaluates the selection of additional options and the extendibility that make a tool easy to use. This may include easy access to additional documentation related to a vulnerability, the option to scan a contract by providing its mainnet address or GitHub link, presentation of data in different formats (including graphs), showing function dependencies, etc.
• Effectiveness: evaluates how many of the vulnerabilities the tool detects when scanning the custom smart contract dataset.
• Liveliness: evaluates whether the tool is actively maintained, measured by the commits and activity on GitHub.
• Open Source: evaluates whether the tool is open source and, if so, the number of community members involved.
• Friendly Output: evaluates how easy the output would be for an average DeFi user to understand.
• Accuracy: evaluates the accuracy of the tool based on the number of false positives reported.

For a better understanding of these ratings, refer to the Appendix.

It is worth noting, for the evaluation of effectiveness and accuracy, that Solidity recommendations change as the language matures and developers keep reporting specific problems and security issues. For example, the functions transfer() and send() were introduced to address the cause of the reentrancy vulnerability by forwarding only a 2,300 gas stipend, which is enough to emit a log entry but not sufficient to allow reentrancy. After the Istanbul hard fork, gas costs are subject to change, so these functions are no longer considered sufficiently safe. Instead, call{value:}() should be used. However, this function does not protect against reentrancy either, so developers need to address this vulnerability separately. ConsenSys recommends using the "Checks-Effects-Interactions" pattern, which is described in the official Solidity documentation,17 or, for more complex contracts, mutex techniques can be used.18 Another change introduced in Solidity 0.8 concerns overflow/underflow errors. Previously, developers had to use the SafeMath library or verify whether any of the arithmetic operations could cause an overflow.

17 https://docs.soliditylang.org/en/develop/security-considerations.html?highlight=check%20effects#use-the-checks-effects-interactions-pattern.
18 https://docs.soliditylang.org/en/v0.5.3/contracts.html#function-modifiers.


In the new version, if wrapping overflow/underflow behaviour is intended, it can be explicitly enabled with the keyword "unchecked{…}";19 otherwise, an overflowing operation reverts at run time. These are only some examples of changes introduced recently that affect the code. It is easy to demonstrate that the same block of code will behave differently under different versions of the Solidity compiler, selected with the directive "pragma solidity x.x.x".
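The minimal sketch below (not taken from our dataset) puts these recommendations together: Ether is sent with call{value: …}("") rather than transfer(), state is updated before the external call following the checks-effects-interactions pattern, and intentionally wrapping arithmetic is marked with unchecked, which is only accepted by Solidity 0.8 and later.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract SafeVault {
    mapping(address => uint256) public balances;
    uint256 public wrappingCounter;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    // Checks-effects-interactions: the balance is zeroed before the external
    // call, so a re-entrant call finds nothing left to withdraw.
    function withdraw() external {
        uint256 amount = balances[msg.sender];                     // checks
        require(amount > 0, "nothing to withdraw");
        balances[msg.sender] = 0;                                  // effects
        (bool ok, ) = payable(msg.sender).call{value: amount}(""); // interactions
        require(ok, "transfer failed");
    }

    // Since 0.8, arithmetic reverts on overflow by default; wrapping
    // behaviour has to be requested explicitly with unchecked { ... }.
    function bump() external {
        unchecked {
            wrappingCounter += 1;
        }
    }
}
```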

4.2 Effectiveness—Test Results

We ran the selected tools against our dataset of vulnerable contracts. To simplify the process and allow easy comparison of results, a simple bash script was prepared for each tool (except DerScanner, which is an online application) and used to iterate through all the files and generate a report for each source file. Table 6 shows the results of running the scan with each tool. It is easy to notice that some tools did not detect any potential overflow/underflow bugs. This does not reflect poor effectiveness on their part; rather, some vulnerability classes are simply outside the scope of those tools. This also confirms that multiple tools should be used to verify smart contracts, as no single tool is effective in every category. We can also see that certain vulnerabilities are not detected by any tool in our set; for example, improper constructor names and improper cryptographic signatures were not picked up at all.
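For context on the categories no tool reported, the hypothetical fragment below shows an improper constructor name (SWC-118). It assumes the pre-0.4.22 convention in which the constructor is a function named after the contract, so a misspelt name silently turns it into a public function that anyone can call after deployment.

```solidity
pragma solidity ^0.4.21;

// Illustrative only (SWC-118): the intended constructor is misspelt
// ("Misable" instead of "Missable"), so it is just an ordinary public
// function and ownership can be re-claimed by any caller.
contract Missable {
    address public owner;

    function Misable() public {   // intended constructor, name misspelt
        owner = msg.sender;
    }

    function withdraw() public {
        require(msg.sender == owner);
        msg.sender.transfer(address(this).balance);
    }

    function() public payable {}
}
```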

4.3 Accuracy—False Positives

While running our tests, we also aimed to identify critical and high-severity issues that were reported by the tools but found to be invalid. To do this, we reviewed the results and also ran another set of files containing healthy contracts (i.e. where the vulnerabilities had been fixed). We used both our own knowledge and code samples marked as "fixed" in the vulnerability database from [14]. For some tools the results were unambiguous; Mythril, for example, shows a plain "No issues were detected" message. For other tools, many warnings and recommendations, as well as critical, high and medium vulnerability indicators, were received. Such reports were used as the basis for rating the accuracy of the tools.

19 https://docs.soliditylang.org/en/v0.8.3/080-breaking-changes.html#silent-changes-of-the-semantics.
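As a hypothetical illustration of what the "healthy" files look like, the fragment below is a fixed counterpart of the earlier unprotected-withdrawal example: withdrawal is restricted to the owner, so a tool that still reports it as unprotected would count towards its false positives.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative "healthy" contract: the withdrawal is owner-only, so a
// report of unprotected Ether withdrawal here would be a false positive.
contract OwnedWallet {
    address public owner;

    constructor() {
        owner = msg.sender;
    }

    receive() external payable {}

    function withdraw() external {
        require(msg.sender == owner, "not owner");
        (bool ok, ) = payable(msg.sender).call{value: address(this).balance}("");
        require(ok, "transfer failed");
    }
}
```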


Table 6 Results of running the scan using each tool

Vulnerability type/tool | Total cases | Slither | Mythril | DerScanner | Securify v1/v2 | Oyente | SmartCheck
Overflow/underflow | 8 | 0/8 | 7/8 | 0/8 | 0/8 | 7/8 | 0/8
Unchecked call | 1 | 1/1 | 1/1 | 1/1 | 0/1 | 0/1 | 1/1
Unprotected Ether withdrawal | 7 | 3/7 | 4/7 | 0/7 | 0/7 | 3/7 | 1/7
Reentrancy | 2 | 2/2 | 2/2 | 0/2 | 1/2 | 1/2 | 1/2
tx.origin | 1 | 1/1 | 1/1 | 1/1 | 1/1 | 0/1 | 1/1
Improper cryptographic signature | 1 | 0/1 | 0/1 | 0/1 | 0/1 | 0/1 | 0/1
Improper constructor name | 2 | 0/2 | 0/2 | 0/2 | 0/2 | 0/2 | 0/2
Gas griefing | 1 | 0/1 | 0/1 | 0/1 | 1/1 | 0/1 | 0/1
Irrelevant code | 2 | 2/2 | 0/2 | 0/2 | 0/2 | 0/2 | 0/2
Unexpected Ether balance | 1 | 1/1 | 0/1 | 1/1 | 1/1 | 0/1 | 1/1
Function transfer/send or hardcoded gas | 4 | 2/4 | 0/4 | 0/4 | 0/4 | 0/4 | 0/4
Total | 30 | 12 | 15 | 3 | 4 | 11 | 5
Total (%) | 100 | 40 | 50 | 10 | 13 | 37 | 17

4.4 Test Results and Tools' Rating

We acknowledge that our ratings and evaluation of the tools might not be shared by all users, depending on various conditions, e.g. the version of the tool, the type of contract, the PC configuration, etc. Nevertheless, in our assessment we aimed to be as unbiased as possible and to rely on the results and the experience gained when running these tools and generating reports. Table 5 shows the rating of every tool in each category as well as an overall rating that serves as the basis for judging the usefulness of a tool for DeFi users. For most of the tools presented in Table 6, we used the latest Docker image available on Docker Hub. The only two exceptions are DerScanner, which is an online tool, and Slither, which has a Docker image available but which we installed using the Python pip method, as this gave us more flexibility in selecting the compiler version. In the case of Securify, we ran version 2 of the tool but in some cases had to revert to version 1, as some code samples required an older compiler version. The findings for each of the tools are detailed next.

DerScanner. In our tests we used version 3.9.2. To initiate a scan, it was sufficient to indicate the GitHub location of the contracts. The output can be reviewed interactively online or exported as a PDF. An explanation, recommendations and links


to additional resources are also visible. Interestingly, the tool does not recommend using the SafeMath library and instead recommends manual checks where overflow is possible. As for false positives, we did not agree with a few medium-level recommendations stating that the call function used to transfer Ether should also provide additional data (a gas limit). According to the Solidity documentation20 and ConsenSys' recommendations,21 simple Ether calls do not require any data; the reentrancy risk should be mitigated by other means.

Slither is an actively maintained tool, with the last updates to the code in May 2021. The installation was simple, and we did not encounter any problems when running the scans. Multiple additional options are available, e.g. a high-level summary or inheritance graphs. It is also possible to scan a contract by providing its address on the chain. The tool provides valuable recommendations with links to additional resources. It does not detect overflow problems. There were no critical or high-severity findings that we disagreed with.

Mythril. The last commits to the code were recorded in November 2020. We did not encounter any problems using the latest Docker container. Usage is also simple, and the results demonstrate the high effectiveness of the tool. The output can also be formatted as well-structured Markdown files that are easy to read and provide sufficient information. In the false positive category, we did not fully agree with all the high-severity issues related to possible overflow in all arithmetic operations.

Securify. The last changes to the code on GitHub were made in May 2020. Using the Docker image was simple. Unfortunately, we encountered some issues when running the scans; on some occasions Securify v1 had to be used instead to verify our samples, as the tool was crashing. The output is simple and provides only a short description. In the false positive category, we noticed several critical highlights indicating "Unrestricted write to storage", which was not correct for our samples.

Oyente. The latest updates to the code were made in November 2020. Using the tool from the Docker image was uncomplicated, and the output messages were clear. However, the latest Solidity version supported by the tool is 0.4.19, which can be considered obsolete by now. Some of the tested code samples did not produce any valid report, which was very likely caused by the outdated compiler. Our results show that the tool is effective in finding integer overflow problems but not in other categories.

SmartCheck. The latest contributions to the code were made in January 2020, and its development is now discontinued. Installation using the npm framework was uncomplicated. The interface does not provide any additional options, and scanning is simple. The reports are succinct and do not provide detailed descriptions or links, which means the tool is better suited to developers. Among the false positive results, a Solidity call without data was highlighted, which in our sample was used for a simple Ether transfer; this is an acceptable practice,22 provided that the reentrancy vulnerability is also mitigated.

20 https://docs.soliditylang.org/en/v0.6.11/contracts.html.
21 https://consensys.net/blog/developers/solidity-best-practices-for-smart-contract-security/.
22 https://consensys.github.io/smart-contract-best-practices/recommendations/.


In the tests, a portion of the popular open-source analysis tools (maintained by active developer communities) was covered and contrasted with a successful commercial product. As pointed out earlier, one of the most important aspects of these tools is keeping the code updated, since recommendations regarding Solidity smart contracts are continuously evolving and new versions of Solidity keep being released. The aim of the additional criteria regarding installation and simplicity of use was to indicate which products are better suited for non-developers. In our tests, the best overall rating, considering all the evaluation criteria, was achieved by Slither with 86%, followed closely by Mythril with 83%. In our survey, at least 30% of respondents rated both tools as useful or very useful. Both tools are easy to use from a Docker image, and the reports they generate are well formatted, with clear descriptions of the vulnerabilities found and links to related articles. On the other side of the scale are Securify and SmartCheck. Both tools are able to find many vulnerabilities, but their reports are not as clear, and in both cases the code on GitHub has not been updated for more than a year.

5 Conclusion

The main goal of this paper was to assess the usefulness of existing state-of-the-art smart contract analysis tools from the perspective of DeFi users without technical knowledge. We acknowledge that smart contracts operating in the DeFi space are particularly complex. There are multiple dependencies, and many solutions are replicated across different protocols. This dependency and interoperability between different networks bring value to investors, but they also pose a significant risk of financial contagion, as a successful attack on one protocol can affect many other projects that trade the affected tokens or use the same vulnerable code. In total, 21 tools were investigated and 6 were selected for the final evaluation based on well-defined acceptance criteria and the input received from participants of a custom survey sent to smart contract developers, auditors and other members of the blockchain community. We evaluated these tools from a DeFi user perspective and conclude that, based on the existing smart contract analysis tools, we did not find a viable solution that would allow non-technical users to verify the trustworthiness and security of smart contracts in the DeFi space. While using the tested analysis tools was not complicated, the reports do not provide definitive answers and need to be interpreted in context. In addition, after analysing recent attacks on DeFi protocols, we came to the conclusion that smart contract analysis tools would not have detected those problems even if they had been used, e.g. flash loan exploits, minting bugs or private key mismanagement. Based on the above, we conclude that there is a need for future research focused on the security of DeFi-specific smart contracts, including business-logic-related issues. More importantly, an effort needs to be made to research alternative ways of building smart contract analysis tools that can easily be used by non-technical users.


Appendix—Rating Criteria

Installation
1. We were not successful in installing the tool.
2. Installation was cumbersome and had multiple dependencies.
3. Only manual installation is available.
4. Docker image is available.
5. Tool is based online and no installation is required.

Additional Features
1. Features limited to the main scan and no option for extension.
2. Few basic options only.
3. Few additional options, but scan types can be selected.
4. Additional scanners can be added. Arguments allow flexible selection of checks and modification of the Solidity version.
5. Tool has an excellent selection of additional options and offers significant extendibility, which are also easy to use. This may include easy access to additional documentation related to a vulnerability, the option to scan a contract by providing its address or GitHub link, presentation of data in different formats (including graphs), showing function dependencies, etc.

Effectiveness
1. 20% or fewer of the vulnerabilities found.
2. More than 20% of the vulnerabilities found.
3. More than 50% of the vulnerabilities found.
4. More than 70% of the vulnerabilities found.
5. Tool successfully detected more than 90% of the prepared vulnerabilities and correctly indicated the line and type of each vulnerability.

Liveliness
1. Latest commits older than 2 years.
2. Latest commits older than 1 year but newer than 2 years.
3. Latest commits over 6 months but less than 1 year old.
4. Latest commits between 3 and 6 months old.
5. Latest commits to the Git repository within the last 3 months.

Open Source
1. Proprietary tool.
2. Open source with a community below 100 members.
3. Open source with a community of over 100 members.
4. Open source with a community of over 200 members.
5. Tool is open source and the Git community is over 300 members.


Friendly Output
1. General names of the issue category only. No description or links.
2. Names of issues are descriptive but lack a clear explanation.
3. Simple categories of issues or short descriptions.
4. Categories are clear and descriptions of issues are provided, but no links to further information.
5. The tool uses clear ranks to indicate the severity of the problem and gives a clear description and links where additional information can be found.

Accuracy
1. 6 or more false positive issues indicated (high or medium category).
2. 5 or fewer false positive issues indicated (high or medium category).
3. 2 or fewer false positive issues indicated (high or medium category).
4. No false positives with critical or high-severity issues indicated. Some recommendations may not be correct.
5. The tool did not indicate any false positive vulnerabilities. All warnings and recommendations are correct.

References

1. Nakamoto, S.: Bitcoin: a peer-to-peer electronic cash system (2008)
2. Buterin, V.: Ethereum white paper—a next generation smart contract & decentralized application platform. ethereum.org (2015)
3. Atzei, N., Bartoletti, M., Cimoli, T.: A survey of attacks on Ethereum smart contracts (SoK). In: Principles of Security and Trust (2017)
4. Understanding the DAO Hack. http://www.coindesk.com/understanding-dao-hack-journalists
5. Leid, A., Van der Merwe, B., Visser, W.: Testing Ethereum smart contracts: a comparison of symbolic analysis and fuzz testing tools. In: South African Institute of Computer Scientists and Information Technologists (2020)
6. Mense, A., Flatscher, M.: Security vulnerabilities in Ethereum smart contracts. In: 20th International Conference on Information Integration and Web-Based Applications & Services (2018)
7. Dika, A., Nowostawski, M.: Security vulnerabilities in Ethereum smart contracts. In: IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData) (2018)
8. Huang, Y., Brian, Y., Li, R., Zhao, L., Shi, P.: Smart contract security: a software lifecycle perspective. IEEE Access 150184–150202 (2019)
9. Sayeed, S., Marco-Gisbert, H., Caira, T.: Smart contract: attacks and protections. IEEE Access 24416–24427 (2020)
10. Almakhour, M., Sliman, L., Samhat, A.E., Mellouk, A.: Verification of smart contracts: a survey. Pervasive Mobile Comput. (Elsevier) (2020)
11. Petrović, N., Tošić, M.: Semantic approach to smart contract verification. Facta Universitatis, Series: Autom. Control Robot. 19(1), 021–037 (2020)
12. Gudgeon, L., Perez, D., Harz, D., Gervais, A., Livshits, B.: The decentralized financial crisis: attacking DeFi (2020)
13. He, D., Deng, Z., Zhang, Y., Chan, S., Cheng, Y., Guizani, N.: Smart contract vulnerability analysis and security audit. IEEE Netw. 34(5), 276–282 (2020)
14. SWC registry—smart contract weakness classification and test cases [Online]. https://swcregistry.io. Accessed 14 May 2021
15. Yang, Q., Zeng, X., Zhang, Y., Hu, W.: New loan system based on smart contract. In: The 2019 ACM International Symposium on Blockchain and Secure Critical Infrastructure, pp. 121–126 (2019)
16. Oosthoek, K.: Flash crash for cash: cyber threats in decentralized finance. Preprint arXiv:2106.10740 (2021)
17. Schär, F.: Decentralized finance: on blockchain- and smart contract-based financial markets. Federal Reserve Bank of St. Louis Review (2021)
18. Imeri, A., Agoulmine, N., Khadraoui, D.: Smart contract modeling and verification techniques: a survey. In: 8th International Workshop on ADVANCEs in ICT Infrastructures and Services (ADVANCE 2020) (2020)
19. Ghaleb, A., Pattabiraman, K.: How effective are smart contract analysis tools? Evaluating smart contract static analysis tools using bug injection. In: 29th ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 415–427 (2020)
20. Durieux, T., Ferreira, J., Abreu, R., Cruz, P.: Empirical review of automated analysis tools on 47,587 Ethereum smart contracts. In: ACM/IEEE 42nd International Conference on Software Engineering (2020)
21. Liu, Y., Li, Y., Lin, S., Zhao, R.: Towards automated verification of smart contract fairness. In: Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (2020)
22. Brent, L., Grech, N., Lagouvardos, S., Scholz, B., Smaragdakis, Y.: Ethainter: a smart contract security analyzer for composite vulnerabilities. In: 41st ACM SIGPLAN Conference on Programming Language Design and Implementation (2020)
23. Chen, T., Li, X., Luo, X., Zhang, X.: Under-optimized smart contracts devour your money. In: IEEE 24th International Conference on Software Analysis, Evolution and Reengineering (SANER) (2017)
24. Tikhomirov, S., Voskresenskaya, E., Ivanitskiy, I., Takhaviev, R., Marchenko, E., Alexandrov, E.: SmartCheck: static analysis of Ethereum smart contracts. In: 1st International Workshop on Emerging Trends in Software Engineering for Blockchain (2018)
25. Luu, L., Chu, D.-H., Olickel, H., Saxena, P., Hobor, A.: Making smart contracts smarter. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 254–269 (2016)
26. Mueller, B.: GitHub—Mythril [Online]. https://github.com/b-mueller/mythril/. Accessed 16 July 2021
27. Feist, J., Grieco, G., Groce, A.: Slither: a static analysis framework for smart contracts. In: 2019 IEEE/ACM 2nd International Workshop on Emerging Trends in Software Engineering for Blockchain (WETSEB), pp. 8–15 (2019)
28. Tsankov, P., Dan, A., Drachsler-Cohen, D., Gervais, A., Buenzli, F., Vechev, M.: Securify: practical security analysis of smart contracts. In: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pp. 67–82 (2018)
29. Amani, S., Bégel, M., Borti, M., Staples, M.: Towards verifying Ethereum smart contract bytecode in Isabelle/HOL. In: 7th ACM SIGPLAN International Conference on Certified Programs and Proofs (2018)
30. Bhargavan, K., Delignat-Lavaud, A., Fournet, C., Gollamudi, A., Gonthier, G., Kobeissi, N.: Formal verification of smart contracts: short paper. In: 2016 ACM Workshop on Programming Languages and Analysis for Security (PLAS '16) (2016)
31. Mossberg, M., Manzano, F., Hennenfent, E., Groce, A., Grieco, G., Feist, J., Brunson, T., Dinaburg, A.: Manticore: a user-friendly symbolic execution framework for binaries and smart contracts. In: 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), pp. 1186–1189 (2019)