Human Aspects of Information Security and Assurance: 17th IFIP WG 11.12 International Symposium, HAISA 2023, Kent, UK, July 4–6, 2023, Proceedings (IFIP Advances in Information and Communication Technology, 674) [1st ed. 2023] ISBN 3031385292, 9783031385292

This book constitutes the proceedings of the 17th IFIP WG 11.12 International Symposium on Human Aspects of Information Security and Assurance, HAISA 2023, held in Kent, UK, in July 2023.


Language: English | Pages: 496 [488] | Year: 2023


Table of contents:
Preface
Organization
Contents
Education and Training
Combating Digital Exclusion with Cybersecurity Training – An Interview Study with Swedish Seniors
1 Introduction
2 Methodology
3 Results
3.1 Phishing
3.2 Security Awareness
3.3 WebSec Coach
3.4 Contextual Training
4 Conclusions
4.1 Results Discussion
4.2 Contributions
4.3 Limitations and Future Work
References
Another Look at Cybersecurity Awareness Programs
1 Introduction
2 The ‘Fighter/Protector’ Approach
3 The ‘Ownership’ Approach
4 The ‘Workplace’ Approach
5 Evaluating the Fighter, Ownership and Workplace Approaches in Creating a Cyber Risk-Aware Workforce
6 Start Point for Creating the Proposed Cyber-Risk Aware Workforce
7 Summary and Conclusion
References
Cyber Range Exercises: Potentials and Open Challenges for Organizations
1 Introduction
2 Background and Related Work
3 Method
4 Findings
4.1 Status Quo of Onboarding and Reskilling in Cybersecurity
4.2 Potential of CRXs
4.3 Prospects on Implementation and Open Challenges
4.4 Limitations and Future Work
5 Conclusion
References
An Adaptive Plug-and-Play (PnP) Interactive Platform for an E-Voting Based Cybersecurity Curricula
1 Introduction
1.1 Overview of the E-Voting Protocol for Interactive Teaching and Learning
2 E-Voting System
3 The Adaptive Interactive Plug-and-Play Platform
4 The Assignments
4.1 Assignment 1: Preliminary
4.2 Assignment 2: Voter
4.3 Assignment 3: Collector
4.4 Assignment 4: Administrator
5 Relevant Topics from Cybersecurity Guidelines
6 Conclusion
A Appendix
References
Cybersecurity Training Acceptance: A Literature Review
1 Introduction
2 Socio-technical Perspectives on User Acceptance
3 Methodology
4 Results
4.1 Technical Dimensions
4.2 Organizational Dimension
4.3 User-Centered Dimension
5 Conclusions
5.1 Limitations
5.2 Future Work
References
Cyber Security Awareness and Education Support for Home and Hybrid Workers
1 Introduction
2 Cyber Security Issues for Home/hybrid Workers
3 Assessing Current Security Guidance for Hybrid Workers
4 Consolidating the Guidance for Home and Hybrid Workers
4.1 Backup and Recovery
4.2 Device Care
4.3 Guidelines and Policies
4.4 Incident Management
4.5 Network Security
4.6 Passwords and Authentication
4.7 Training and Education
4.8 Updating Software
5 Re-assessing Coverage of the Existing Sources
6 Conclusions
References
On-Campus Hands-On Ethical Hacking Course
1 Introduction
2 Background
2.1 Infrastructure
2.2 Course Setting
3 On-Campus Ethical Hacking Course
3.1 Course Planning and Platform of Choice
3.2 On-Campus Course Implementation
3.3 Adjusting the Scoring System, Passing Conditions, and Grading
3.4 Course Plan and Structure
4 Course Delivery, Feedback and Results
4.1 Student Feedback
4.2 Student Results and Grades
5 Discussion
5.1 Online VS On-Campus
5.2 Proposed Changes
6 Conclusions
References
Planning for Professional Development in Cybersecurity: A New Curriculum Design
1 Introduction
2 Methodology
3 Related Work
4 Curriculum Design
4.1 Audience and Scope
4.2 Learning Objectives
4.3 Learning Content and Pedagogy
5 Evaluation
6 Conclusion
References
A Comprehensive Design Framework for Multi-disciplinary Cyber Security Education
1 Introduction
2 Background
3 COLTRANE Educational Design Framework CEDF
3.1 COLTRANE Design Principles
3.2 Core COLTRANE Process Architecture
3.3 Requirements Collection
3.4 Module Configuration
3.5 Module Delivery
4 Practical Implications of the CEDF
5 Conclusions
References
Key Elements for Cybersafety Education of Primary School Learners in South Africa
1 Introduction
2 Research Methodology
3 Challenges Relating to Cybersafety Education in South Africa
4 Key Elements in Addressing Cybersafety Education in South Africa
4.1 Goals of Cybersafety Education in South Africa
4.2 Overseers of Cybersafety Initiatives in South Africa
4.3 Cybersafety Education Role Players in South Africa
4.4 Constraints of Cybersafety Initiatives in South Africa
4.5 Target Audience of Cybersafety Initiatives in South Africa
4.6 Context of Cybersafety Initiatives in South Africa
4.7 Resources for Cybersafety Initiatives in South Africa
4.8 Topics to be Covered in South Africa
4.9 Delivery Methods in South Africa
5 Conclusion
References
Factors Associated with Cybersecurity Culture: A Quantitative Study of Public E-health Hospitals in South Africa
1 Introduction
2 Research Aim and Question
3 Background
3.1 Cybersecurity Culture and Its Factors
4 Cybersecurity Culture Components
4.1 Model of the Cybersecurity Culture Components
5 Research Method
6 Results
6.1 Demographical Information
6.2 Construct Reliability
6.3 Descriptive Statistics for the Cybersecurity Culture Factors
6.4 Pearson Correlation Coefficients Analysis
7 Discussion and Contribution
8 Limitations and Future Work
9 Conclusion
References
Towards a Framework for the Personalization of Cybersecurity Awareness
1 Introduction
2 Cybersecurity Awareness and Communication
2.1 Pre-emptive Communication Channels
2.2 Responsive Communication Channels
3 Limitations of Current Provision
4 A Personalized Security Awareness Program
5 PSAP Functional Outline
5.1 User Profile
5.2 Security Message
5.3 Assessment and Evaluation
6 Conclusion
References
Management, Policy and Skills
A Qualitative Content Analysis of Actionable Advice in Swedish Public Agencies’ Information Security Policies
1 Introduction
2 Literature Review
3 Research Method
3.1 Collecting ISPs
3.2 Identifying the Keywords
3.3 Extracting Sentences in Each ISP
3.4 Analyzing the Extracted Sentences
4 Result
5 Discussion and Conclusion
References
Business Language for Information Security
1 Introduction
2 Background and Related Research
3 Research Method
3.1 Potential Weaknesses of Study
4 Results
4.1 Definition
4.2 Business and Information Security
4.3 Communication and Soft Skills
4.4 Pedagogy
5 Future Work
6 Conclusion
References
“Check, Check, Check, We Got Those” – Catalogue Use in Information Security Risk Management
1 Introduction
2 Background
2.1 Information Security Risk Management in Air Traffic Management
2.2 Catalogues in Information Security Risk Management
3 Method
4 Results
4.1 Why Do We Need Catalogues?
4.2 How is Catalogue Granularity Perceived?
4.3 How do Catalogues Help Novices?
5 Discussion
6 Conclusions
References
Proposed Guidelines for Website Data Privacy Policies and an Application Thereof
1 Introduction
2 Research Problem
3 Background
3.1 Privacy
3.2 The POPI Act
3.3 Privacy Policies and Consumer Privacy Concerns
4 Research Methodology
4.1 Design of Literature Review
4.2 Databases and Search Method
4.3 Website Privacy Policy Guidelines
4.4 Research Approach
4.5 Research Strategy and Sampling
4.6 Data Analysis
5 Results
6 Discussion and Recommendations
7 Conclusion
Appendix a: Website Privacy Policy Guidelines
References
Towards Roles and Responsibilities in a Cyber Security Awareness Framework for South African Small, Medium, and Micro Enterprises (SMMEs)
1 Introduction
2 Problem Statement and Research Question
3 Background
3.1 Cyber Security Awareness
3.2 SMMEs Cyber Security Needs
3.3 Cyber Security Awareness Requirements Framework for SMMEs
4 Research Methodology
5 Roles and Responsibilities within the Csa4Smmes {RSA} Framework for SMMEs
5.1 Strategic Layer
5.2 Tactical Layer
5.3 Preparation Layer
5.4 Delivery Layer
5.5 Monitoring Layer
6 Limitations and Future Work
7 Conclusion
References
Is Your CISO Burnt Out yet?
1 Introduction
1.1 What is Burnout?
1.2 What Causes Burnout?
2 Literature Review
2.1 The Importance of Demographics: Gender and Job Role
3 Research Method
3.1 Hypotheses
3.2 Participants
3.3 Data Collection
3.4 Materials
3.5 Data Analysis
4 Results
5 Discussion
6 Limitations and Future Research
7 Conclusions
References
An Investigation into the Cybersecurity Skills Gap in South Africa
1 Introduction
2 Background
3 Research Methodology
4 Results and Findings
4.1 Section A - Demographics
4.2 Section B – General Cybersecurity Perceptions
4.3 Section C – Cybersecurity Skills Development
4.4 Section D – Cybersecurity-Related Skills
5 Conclusion
References
Cybersecurity-Related Behavior of Personnel in the Norwegian Industry
1 Introduction
2 Related Work
3 Hypotheses
4 Methodology
4.1 The Survey Instrument
4.2 Sample Selection and Distribution
5 Findings
5.1 Collected Data
5.2 Analysis
5.3 Discussion
6 Conclusion
References
Evolving Threats and Attacks
It's More Than Just Money: The Real-World Harms from Ransomware Attacks
1 Introduction and Background
2 Methodology
2.1 Definition and Scope
2.2 Collection and Analysis of Cases
2.3 Harm Model Design
3 Results
3.1 Hackney Council, UK, 2020 Attack by Pysa Ransomware
3.2 HSE, Ireland, 2021 Attack by Conti Ransomware
3.3 Observations from Case Analysis and Modelling
4 Discussion and Conclusion
4.1 Discussions
4.2 Limitations and Future Work
References
Cyberthreats in Modern Cars: Responsibility and Readiness of Auto Workshops
1 Introduction
1.1 Related Research
2 Study Approach
2.1 Data Collection
2.2 Data Analysis
3 Empirical Insights
4 Discussion and Conclusion
4.1 Connections to Existing Research
4.2 Conclusion
4.3 Future Work
References
Decreasing Physical Access Bottlenecks through Context-Driven Authentication
1 Introduction
2 Cyber-Physical Access Control Environments
3 Context-Driven Physical Access Control Model
3.1 Main Control Unit
3.2 Temporal Profile Module
3.3 GPS Module
3.4 Authentication Flow
4 Model Implementation
4.1 The Physical Access Data Set
4.2 Potentially Missing Data
4.3 Prototype Implementation
5 Evaluation of Model
5.1 Results
5.2 Considerations
6 Conclusion
Appendix A: The Access Control Data Set
References
Blockchain in Oil and Gas Supply Chain: A Literature Review from User Security and Privacy Perspective
1 Introduction
2 Background
2.1 Oil and Gas Supply Chain
2.2 Blockchain Technology
2.3 Advantages of Blockchain in the Oil and Gas Industry
3 Methodology
3.1 Database Search
3.2 Abstract and Full-Text Screening
3.3 Thematic Analysis
4 Results and Discussions
4.1 Application of Blockchain Security and Privacy in Oil and Gas
4.2 Addressing Blockchain Integration Challenges in the Oil and Gas Sector
5 Conclusion and Future Work
References
Are People with Cyber Security Training Worse at Checking Phishing Email Addresses? Testing the Automaticity of Verifying the Sender’s Address
1 Introduction
2 Background
2.1 Testing for Automaticity
3 Method
3.1 Participants
3.2 Materials
3.3 Procedure
4 Results
4.1 Interference Score
5 Discussion
6 Limitations
7 Conclusions
References
Content Analysis of Persuasion Principles in Mobile Instant Message Phishing
1 Introduction
2 Background
3 Method
4 Results
5 Discussion and Limitations
6 Conclusion
References
Six-Year Study of Emails Sent to Unverified Addresses
1 Introduction
2 Overview
3 Study on Unverified Emails
3.1 Categorization and Related Concerns
3.2 Web Applications and Associated Risks
3.3 Future Work
4 Solutions
References
Social-Technical Factors
Evaluating the Risks of Human Factors Associated with Social Media Cybersecurity Threats
1 Introduction
2 Related Work
3 Methods
4 Results
4.1 Social Media Risk Assessment
5 Conclusions
References
Online Security Attack Experience and Worries of Young Adults in the Kingdom of Saudi Arabia
1 Introduction
2 Related Work
3 Method
3.1 Participants
3.2 Online Questionnaire
4 Results
5 Discussion and Conclusions
References
To Catch a Thief: Examining Socio-technical Variables and Developing a Pathway Framework for IP Theft Insider Attacks
1 Introduction
1.1 Related Work
1.2 The Current Study and Theoretical Framework
2 Methodology
2.1 Data
2.2 Selecting a Methodology
2.3 Grounded Theory
2.4 Behavior Sequence Analysis (BSA)
3 Results
3.1 Grounded Theory Analysis Findings
3.2 Behavior Sequence Analysis Findings
4 Discussion
5 Conclusions
References
Analyzing Cybersecurity Definitions for Non-experts
1 Introduction
2 Related Work
2.1 Non-Experts and Cybersecurity
2.2 Cybersecurity Definitions
3 Methods
3.1 Systematic Search
3.2 Analysis
4 Results
4.1 Word Frequencies and Trends
4.2 Source Type Differences
5 Discussion
5.1 RQ1: Terms and Components Commonly Used in Definitions
5.2 RQ2: Differences Based on Source Type
5.3 Future Work
6 Conclusion
References
On using the Task Models for Validation and Evolution of Usable Security Design Patterns
1 Introduction
2 Background
2.1 Task Models in User-centred Design
2.2 Task Model-based Analysis of Usability
2.3 HAMSTERS Tool Supported Task Modelling Notation
3 Task Model-Based Approach for the Validation and Evolution of Usable Security Patterns
4 Illustrative case study of the Approach: The Adaptable Authentication Design Pattern
4.1 Model user tasks with the Authentication Mechanism
4.2 Model user tasks with the Usable Security Pattern for Adaptable Authentication
4.3 Analysis of the Usable Security Pattern for Adaptable Authentication
4.4 Refine the Usable Security Pattern
5 Related Work
6 Conclusion
References
Chatbots: A Framework for Improving Information Security Behaviours using ChatGPT
1 Introduction
2 Theoretical Background
2.1 Theory of Planned Behaviour
2.2 Persuasion Theory
3 Chatbots
4 ChatGPT
5 Information Security
5.1 Security Solutions
6 Information Security Awareness and Training
7 Using Chatbots for Information Security Awareness and Training
7.1 Advantages of Using Chatbots
8 Methods
9 ChatGPT Based Information Security Behavioural Change Framework
9.1 Persuasive Message and Audience
9.2 Heuristic and Systematic Processing
9.3 Persuasion
10 Results
11 Discussion
12 Conclusion
References
Factors Influencing Internet of Medical Things (IoMT) Cybersecurity Protective Behaviours Among Healthcare Workers
1 Introduction
1.1 Background
2 Literature Review
2.1 Internet of Medical Things (IoMT)
2.2 IoMT and Cybersecurity
3 Theoretical Framework
3.1 The Information-Motivation-Behavioural Skills Model
3.2 Hypotheses Development
4 Research Methodology
5 Findings
6 Discussion
7 Conclusion
References
The Influence of Interpersonal Factors on Telecommuting Employees' Cybercrime Preventative Behaviours During the Pandemic
1 Introduction
2 Literature Review
3 Theoretical Framework
4 Research Design
5 Analysis and Results
5.1 Measurement Model
5.2 Structural Model
6 Discussion
7 Conclusion
References
Research Methods
A Review of Constructive Alignment in Information Security Educational Research
1 Introduction
2 Related Literature
2.1 Constructive Alignment
3 Research Methodology
4 Research Process
5 Results
6 Discussion
7 Conclusion
References
What Goes Around Comes Around; Effects of Unclear Questionnaire Items in Information Security Research
1 Introduction
2 Related Research and Protection Motivation Theory
2.1 Related Research
2.2 Perceived Severity
3 Method
3.1 Data Collection
3.2 Method for Analysis
4 Emergent Findings
4.1 Ambiguity (i) – Vagueness
4.2 Ambiguity (ii) – Envisioning Unintended Properties
4.3 Ambiguity (iii) – ‘Misses the Mark’
5 Discussion and Conclusion
References
Author Index


IFIP AICT 674

Steven Furnell Nathan Clarke (Eds.)

Human Aspects of Information Security and Assurance 17th IFIP WG 11.12 International Symposium, HAISA 2023 Kent, UK, July 4–6, 2023 Proceedings

IFIP Advances in Information and Communication Technology, Volume 674

Editor-in-Chief Kai Rannenberg, Goethe University Frankfurt, Germany

Editorial Board Members
TC 1 – Foundations of Computer Science: Luís Soares Barbosa, University of Minho, Braga, Portugal
TC 2 – Software: Theory and Practice: Michael Goedicke, University of Duisburg-Essen, Germany
TC 3 – Education: Arthur Tatnall, Victoria University, Melbourne, Australia
TC 5 – Information Technology Applications: Erich J. Neuhold, University of Vienna, Austria
TC 6 – Communication Systems: Burkhard Stiller, University of Zurich, Zürich, Switzerland
TC 7 – System Modeling and Optimization: Lukasz Stettner, Institute of Mathematics, Polish Academy of Sciences, Warsaw, Poland
TC 8 – Information Systems: Jan Pries-Heje, Roskilde University, Denmark
TC 9 – ICT and Society: David Kreps, National University of Ireland, Galway, Ireland
TC 10 – Computer Systems Technology: Achim Rettberg, Hamm-Lippstadt University of Applied Sciences, Hamm, Germany
TC 11 – Security and Privacy Protection in Information Processing Systems: Steven Furnell, Plymouth University, UK
TC 12 – Artificial Intelligence: Eunika Mercier-Laurent, University of Reims Champagne-Ardenne, Reims, France
TC 13 – Human-Computer Interaction: Marco Winckler, University of Nice Sophia Antipolis, France
TC 14 – Entertainment Computing: Rainer Malaka, University of Bremen, Germany

IFIP Advances in Information and Communication Technology The IFIP AICT series publishes state-of-the-art results in the sciences and technologies of information and communication. The scope of the series includes: foundations of computer science; software theory and practice; education; computer applications in technology; communication systems; systems modeling and optimization; information systems; ICT and society; computer systems technology; security and protection in information processing systems; artificial intelligence; and human-computer interaction. Edited volumes and proceedings of refereed international conferences in computer science and interdisciplinary fields are featured. These results often precede journal publication and represent the most current research. The principal aim of the IFIP AICT series is to encourage education and the dissemination and exchange of information about all aspects of computing. More information about this series at https://link.springer.com/bookseries/6102

Steven Furnell and Nathan Clarke (Editors)

Human Aspects of Information Security and Assurance 17th IFIP WG 11.12 International Symposium, HAISA 2023 Kent, UK, July 4–6, 2023 Proceedings


Editors
Steven Furnell, University of Nottingham, Nottingham, UK
Nathan Clarke, University of Plymouth, Plymouth, UK

ISSN 1868-4238 | ISSN 1868-422X (electronic)
IFIP Advances in Information and Communication Technology
ISBN 978-3-031-38529-2 | ISBN 978-3-031-38530-8 (eBook)
https://doi.org/10.1007/978-3-031-38530-8

© IFIP International Federation for Information Processing 2023

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

It is now widely recognized that technology alone cannot provide the answer to cyber security problems. A significant aspect of protection comes down to the attitudes, awareness, behavior and capabilities of the people involved, and they often need support in order to get it right. Factors such as lack of awareness and understanding, combined with unreasonable demands from security technologies, can dramatically impede their ability to act securely and comply with policies. Ensuring appropriate attention to the needs of users is therefore a vital element of a successful security strategy, and they need to understand how the issues may apply to them and how to use the available technology to protect their systems.

With all of the above in mind, the Human Aspects of Information Security and Assurance (HAISA) symposium series specifically addresses information security issues that relate to people. It concerns the methods that inform and guide users' understanding of security, and the technologies that can benefit and support them in achieving protection.

This book presents the proceedings from the seventeenth event in the series, held at the University of Kent (UK), during July 2023. A total of 37 reviewed papers are included, spanning a range of topics including security management, cyber security education and training, and usable security. All of the papers were subject to double-blind peer review, with each being reviewed by at least two members of the international program committee.

We are grateful to all of the authors for submitting their work and sharing their findings. We are also grateful to Dr Sanjana Mehta, from (ISC)², for being the keynote speaker for this year's event. The HAISA symposium is the official event of IFIP Working Group 11.12 on Human Aspects of Information Security and Assurance, and we would like to thank Kerry-Lynn Thomson for supporting the event as Working Group chair.

We would also like to acknowledge the significant work undertaken by our international program committee, and recognize their efforts in reviewing the submissions and ensuring the quality of the resulting event and proceedings. Finally, we would like to thank Jason Nurse and the local organizing team for making all the necessary arrangements to enable this symposium to take place.

July 2023

Steven Furnell Nathan Clarke

Organization

General Chairs
Nathan Clarke, University of Plymouth, UK
Steven Furnell, University of Nottingham, UK

IFIP TC11.12 Conference Chair
Kerry-Lynn Thomson, Nelson Mandela University, South Africa

Local Organizing Chair
Jason Nurse, University of Kent, UK

Publicity Chair
Fudong Li, Bournemouth University, UK

International Program Committee
Sal Aurigemma, University of Tulsa, USA
Maria Bada, Queen Mary University of London, UK
Peter Bednar, University of Portsmouth, UK
Erik Bergström, Jönköping University, Sweden
Matt Bishop, UC Davis, USA
Patrick Bours, Norwegian University of Science and Technology, Norway
William Buchanan, Edinburgh Napier University, UK
Mauro Cherubini, University of Lausanne, Switzerland
Jeff Crume, IBM, USA
Adele Da Veiga, University of South Africa, South Africa
Dionysios Demetis, University of Hull, UK
Ronald Dodge, Palo Alto Networks, USA
Paul Dowland, Edith Cowan University, Australia
Jan Eloff, University of Pretoria, South Africa
Gabriele Lenzini, University of Luxembourg, Luxembourg
Ana I. Gonzalez-Tablas Ferreres, Universidad Carlos III de Madrid, Spain
Simone Fischer-Hübner, Karlstad University, Sweden
Stephen Flowerday, University of Tulsa, USA
Lynn Futcher, Nelson Mandela University, South Africa
Stefanos Gritzalis, University of Piraeus, Greece
Julie Haney, NIST, USA
Karin Hedström, Örebro University, Sweden
Kiris Helkala, Norwegian Defence University College, Norway
Yuxiang Hong, Hangzhou Dianzi University, China
John Howie, Cloud Security Alliance, USA
Kévin Huguenin, University of Lausanne, Switzerland
William Hutchinson, Edith Cowan University, Australia
Murray Jennex, West Texas A&M University, USA
Andy Jones, University of Suffolk, UK
Christos Kalloniatis, University of the Aegean, Greece
Fredrik Karlsson, Örebro University, Sweden
Vasilios Katos, Bournemouth University, UK
Sokratis Katsikas, Norwegian University of Science and Technology, Norway
Joakim Kavrestad, University of Skövde, Sweden
Stewart Kowalski, Norwegian University of Science and Technology, Norway
Costas Lambrinoudakis, University of Piraeus, Greece
Shujun Li, University of Kent, UK
Javier Lopez, University of Malaga, Spain
George Magklaras, University of Oslo, Norway
Herb Mattord, Kennesaw State University, USA
Abbas Moallem, San José State University, USA
Haris Mouratidis, University of Essex, UK
Marcus Nohlberg, University of Skövde, Sweden
Jason Nurse, University of Kent, UK
Malcolm Pattinson, University of Adelaide, Australia
Helen Petrie, University of York, UK
Jacques Ophoff, Abertay University, UK
Nathalie Rebe, University of Burgundy, France
Karen Renaud, University of Strathclyde, UK
Nader Sohrabi Safa, University of Wolverhampton, UK
Rossouw Von Solms, Nelson Mandela University, South Africa
Theo Tryfonas, University of Bristol, UK
Aggeliki Tsohou, Ionian University, Greece
Jeremy Ward, Security Consultant, UK
Merrill Warkentin, Mississippi State University, USA
Naomi Woods, University of Jyväskylä, Finland
Ibrahim Zincir, Izmir University of Economics, Turkey

Contents

Education and Training

Combating Digital Exclusion with Cybersecurity Training – An Interview Study with Swedish Seniors . . . 3
Joakim Kävrestad, David Lindvall, and Marcus Nohlberg

Another Look at Cybersecurity Awareness Programs . . . 13
S. H. von Solms, Jaco du Toit, and Elmarie Kritzinger

Cyber Range Exercises: Potentials and Open Challenges for Organizations . . . 24
Magdalena Glas, Fabian Böhm, Falko Schönteich, and Günther Pernul

An Adaptive Plug-and-Play (PnP) Interactive Platform for an E-Voting Based Cybersecurity Curricula . . . 36
Muwei Zheng, Nathan Swearingen, William Silva, Matt Bishop, and Xukai Zou

Cybersecurity Training Acceptance: A Literature Review . . . 53
Joakim Kävrestad, Wesam Fallatah, and Steven Furnell

Cyber Security Awareness and Education Support for Home and Hybrid Workers . . . 64
Fayez Alotaibi, Steven Furnell, and Ying He

On-Campus Hands-On Ethical Hacking Course: Design, Deployment and Lessons Learned . . . 76
Leonardo A. Martucci, Jonathan Magnusson, and Mahdi Akil

Planning for Professional Development in Cybersecurity: A New Curriculum Design . . . 91
Eliana Stavrou

A Comprehensive Design Framework for Multi-disciplinary Cyber Security Education . . . 105
Gregor Langner, Steven Furnell, Gerald Quirchmayr, and Florian Skopik

Key Elements for Cybersafety Education of Primary School Learners in South Africa . . . 116
Lynn Futcher, Kerry-Lynn Thomson, Lean Kucherera, and Noluxolo Gcaza

Factors Associated with Cybersecurity Culture: A Quantitative Study of Public E-health Hospitals in South Africa . . . 129
Emilia N. Mwim, Jabu Mtsweni, and Bester Chimbo

Towards a Framework for the Personalization of Cybersecurity Awareness . . . 143
S. Alotaibi, Steven Furnell, and Y. He

Management, Policy and Skills

A Qualitative Content Analysis of Actionable Advice in Swedish Public Agencies’ Information Security Policies . . . 157
Elham Rostami and Fredrik Karlsson

Business Language for Information Security . . . 169
Dinh Uy Tran and Audun Jøsang

“Check, Check, Check, We Got Those” – Catalogue Use in Information Security Risk Management . . . 181
Erik Bergström, Martin Lundgren, Karin Bernsmed, and Guillaume Bour

Proposed Guidelines for Website Data Privacy Policies and an Application Thereof . . . 192
Armand Vorster and Adéle da Veiga

Towards Roles and Responsibilities in a Cyber Security Awareness Framework for South African Small, Medium, and Micro Enterprises (SMMEs) . . . 211
Tebogo Kesetse Lejaka, Adéle da Veiga, and Marianne Loock

Is Your CISO Burnt Out yet?: Examining Demographic Differences in Workplace Burnout Amongst Cyber Security Professionals . . . 225
Andrew Reeves, Malcolm Pattinson, and Marcus Butavicius

An Investigation into the Cybersecurity Skills Gap in South Africa . . . 237
Michael de Jager, Lynn Futcher, and Kerry-Lynn Thomson

Cybersecurity-Related Behavior of Personnel in the Norwegian Industry . . . 249
Kristian Kannelønning and Sokratis Katsikas

Evolving Threats and Attacks

It’s More Than Just Money: The Real-World Harms from Ransomware Attacks . . . 261
Nandita Pattnaik, Jason R. C. Nurse, Sarah Turner, Gareth Mott, Jamie MacColl, Pia Huesch, and James Sullivan

Cyberthreats in Modern Cars: Responsibility and Readiness of Auto Workshops . . . 275
David Hedberg, Martin Lundgren, and Marcus Nohlberg

Decreasing Physical Access Bottlenecks through Context-Driven Authentication . . . 285
Khutso Lebea and Wai Sze Leung

Blockchain in Oil and Gas Supply Chain: A Literature Review from User Security and Privacy Perspective . . . 296
Urvashi Kishnani, Srinidhi Madabhushi, and Sanchari Das

Are People with Cyber Security Training Worse at Checking Phishing Email Addresses? Testing the Automaticity of Verifying the Sender’s Address . . . 310
Daniel Conway, Marcus Butavicius, Kun Yu, and Fang Chen

Content Analysis of Persuasion Principles in Mobile Instant Message Phishing . . . 324
Rufai Ahmad, Sotirios Terzis, and Karen Renaud

Six-Year Study of Emails Sent to Unverified Addresses . . . 337
Alexander Joukov and Nikolai Joukov

Social-Technical Factors

Evaluating the Risks of Human Factors Associated with Social Media Cybersecurity Threats . . . 349
Fai Ben Salamah, Marco A. Palomino, Maria Papadaki, Matthew J. Craven, and Steven Furnell

Online Security Attack Experience and Worries of Young Adults in the Kingdom of Saudi Arabia . . . 364
Najla Aldaraani, Helen Petrie, and Siamak F. Shahandashti

To Catch a Thief: Examining Socio-technical Variables and Developing a Pathway Framework for IP Theft Insider Attacks . . . 377
Monica T. Whitty, Christopher Ruddy, and David A. Keatley

Analyzing Cybersecurity Definitions for Non-experts . . . 391
Lorenzo Neil, Julie M. Haney, Kerrianne Buchanan, and Charlotte Healy

On using the Task Models for Validation and Evolution of Usable Security Design Patterns . . . 405
Célia Martinie and Bilal Naqvi

Chatbots: A Framework for Improving Information Security Behaviours using ChatGPT . . . 418
Tapiwa Gundu

Factors Influencing Internet of Medical Things (IoMT) Cybersecurity Protective Behaviours Among Healthcare Workers . . . 432
Sinazo Brown, Zainab Ruhwanya, and Ayanda Pekane

The Influence of Interpersonal Factors on Telecommuting Employees’ Cybercrime Preventative Behaviours During the Pandemic . . . 445
Tim Wright, Zainab Ruhwanya, and Jacques Ophoff

Research Methods

A Review of Constructive Alignment in Information Security Educational Research . . . 461
Vuyolwethu Mdunyelwa, Lynn Futcher, and Johan van Niekerk

What Goes Around Comes Around; Effects of Unclear Questionnaire Items in Information Security Research . . . 470
Marcus Gerdin, Åke Grönlund, and Ella Kolkowska

Author Index . . . 483

Education and Training

Combating Digital Exclusion with Cybersecurity Training – An Interview Study with Swedish Seniors

Joakim Kävrestad1(B), David Lindvall2, and Marcus Nohlberg1

1 University of Skövde, Skövde, Sweden
{joakim.kavrestad,marcus.nohlberg}@his.se
2 Skövde Municipality, Skövde, Sweden
[email protected]

Abstract. While rapid digitalization is beneficial for a majority of people, some struggle to adopt digital technology. Not only do these persons miss the potential benefits of digitalization, they also suffer from the fact that many services are no longer provided in a non-digital way. Previous research suggests that a lack of security literacy and awareness is one driving factor behind the digital exclusion of senior citizens. To that end, this research focuses on cybersecurity training for seniors, here defined as those aged above 65. Using interviews with eight seniors, this research evaluates the use of contextual training in this user group. The rationale is that contextual training has been found to have positive results in other user groups. The results suggest that contextual cybersecurity training can increase cybersecurity awareness for senior citizens and be appreciated by the users. The participants also confirm previous research describing that cybersecurity concerns are a driving factor behind digital exclusion, and that contextual cybersecurity training can make seniors more comfortable adopting digital services.

Keywords: cybersecurity · awareness · senior · digital · exclusion · contextual · training

© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
S. Furnell and N. Clarke (Eds.): HAISA 2023, IFIP AICT 674, pp. 3–12, 2023. https://doi.org/10.1007/978-3-031-38530-8_1

1 Introduction

Rapid digitalization creates a society where more and more services become available to a wide population [1]. However, some population groups struggle to adopt those digital services, and senior citizens are one such group [2]. Curiously, many services that are increasing their availability through digitalization, such as healthcare and public services, have senior citizens as one of their main target groups [3]. Furthermore, as clearly shown during the COVID-19 pandemic, being able to benefit from digital services means improved access to everything from shopping and communication to entertainment. In contrast, not being able to benefit from digital services leads to digital exclusion, which can be an equality problem [4].


Holgersson and Söderström [3] present fear of using technology as one factor that prohibits seniors from adopting technology. That is further elaborated on by Holgersson, Kävrestad, and Nohlberg [5], who suggest that a central fear is that of being exposed to frauds and other cybercrimes, and that those fears can have a paralyzing effect. That means that the mere fear of what might happen online can prohibit senior citizens from even trying to use online services. To make matters worse, previous research has shown that seniors are a user group that is often specifically targeted by online criminals [6, 7]. Alwanain [8] shows that seniors are a common target group for online criminals and attributes that to a lack of security awareness. Alwanain [8] further shows that an increased level of awareness will lead to lower susceptibility to phishing. A similar view is presented by Holgersson, Kävrestad, and Nohlberg [5], who suggest that cybersecurity training should be an integral part of efforts toward digital inclusion. As demonstrated by Aldawood and Skinner [9] and Hu, Hsu, and Zhou [10], there is a multitude of approaches for cybersecurity training, which can be roughly divided as follows:

• Physical training, where an instructor teaches participants, for instance, classroom training.
• Computer-based training, where participants access training on demand, for instance, online videos or gamified training.
• Contextual training, where users are provided with training when they are about to do something, for instance, phishing training that is provided to users when they open their inboxes.

Previous research suggests that contextual training can be an effective way to train users in cybersecurity [11–13]. An explanation can be that such training puts security on top of the user's mind and that it feels relevant to the user, since it is presented in a situation where it is of direct relevance.

To the best of our knowledge, however, no previous research focuses on contextual cybersecurity training for senior citizens. To that end, this research aims to investigate senior citizens' perceptions of contextual cybersecurity training. Focusing on user perception is important since previous research shows that user perception is an important precursor to user adoption [14, 15]. Furthermore, user satisfaction leads to user stickiness, meaning that positive users will continue to use a given training tool [16]. Consequently, while user perception of cybersecurity training cannot be argued to equal secure behaviour, it is a crucial enabler of such behaviour.

2 Methodology

Semi-structured interviews were selected for this study. The rationale was that a qualitative approach would generate data-rich answers that would allow for a better understanding of the topic than a survey design [17]. Furthermore, the method allowed the participants to experience a contextual cybersecurity training tool during the interviews. The interviews were transcribed and analyzed using thematic coding [18]. To access the intended target group, a purposive sampling technique was used [19]. For this research, participants aged 65 or older, living in the city of Skövde with access
to a personal computer and internet were included. Participants were invited via a local municipality project, and eight participants, three male and five female, agreed to participate. An overview of the participants is presented in Table 1. All participants signed an informed consent form before the interview. Ethical approval of the research was not needed, in accordance with Vetenskapsrådet [20].

Table 1. Participant description. Literacy and usage are self-reported.

Part. | Age | Gender | Computer literacy | Current computer usage
1 | 83 | Female | Passable | Gaming, social media, mail, news
2 | 80 | Female | Bad | Banking
3 | 70 | Male | Pretty good | News, banking, Excel
4 | 65 | Female | Good | Social media
5 | 70 | Male | Not very advanced | Banking
6 | 72 | Male | Overall good | Business and social media
7 | 65 | Female | Very good | Work, banking, social media, googling, shopping
8 | 68 | Female | Good | Banking, news, social media

The interviews were divided into two parts. In the first part, the participants were asked general questions about their understanding and experience of phishing. They were then subjected to contextual training using a tool called WebSec Coach1. WebSec Coach is a training tool that provides users with training on passwords, phishing, fraud and fake news detection, and is developed according to a method called Context-Based MicroTraining (CBMT) [21]. This research only used the phishing training, which appears to users in the upper left corner of the screen when they open an inbox, as shown in Fig. 1. Following the WebSec Coach training, the participants were asked questions about their perception of it. The research process is visualized in Fig. 2.

3 Results

The interviews were recorded and then transcribed. The transcriptions were analyzed using a thematic approach and four pre-established themes:

• Perception of phishing before and after using WebSec Coach.
• Their own security awareness before and after using WebSec Coach.
• Perception of WebSec Coach.
• Perception of contextual training in general.

The remainder of this section will present the results in each theme.

1 https://chrome.google.com/webstore/detail/websec-coach/fppabiaolagdjpchoicgfikcjnilbdkl?hl=sv


Fig. 1. Demonstration of WebSec Coach

Fig. 2. Research process overview


3.1 Phishing

The questions in the phishing theme were intended to reveal if the participants knew what phishing was and if they had encountered it. They further sought to understand the participants' perception of phishing after using WebSec Coach. Table 2 provides a summary of the interview responses.

Table 2. Summary of results for the theme phishing

Sub-theme: Phishing knowledge
Answers: 4/8 participants knew the term phishing. The other four did not know the term but recognized what phishing was when it was described to them.

Sub-theme: Victimization
Answers: All but one participant explicitly stated that they had received phishing mail, and two had been tricked. Furthermore, another four knew friends and family who had been tricked by phishing. In all cases, monetary loss and sadness/despair were the effects of phishing.

Sub-theme: Perception after training
Answers: All participants state that they now know more about phishing and know what to look for. 6/8 describe the training as an eye-opener in the sense that they learned that there is more to think about than they previously knew. 7/8 described that they would pay more attention to links and/or addresses in the future.

3.2 Security Awareness

The intention of the security awareness theme was to analyze the participants' perceived security awareness before and after using WebSec Coach. The questions also covered the possible connection between digital exclusion and cybersecurity awareness. The interview responses are summarized in Table 3.

3.3 WebSec Coach

This theme intended to capture the participants' perception of the WebSec Coach tool itself. The sub-themes and results are presented in Table 4.

3.4 Contextual Training

The final theme considered contextual training in general. It intended to capture the participants' perception of contextual training, both for learning about phishing and for other cybersecurity topics. The sub-themes and results are presented in Table 5.

Table 3. Summary of results for the theme cybersecurity awareness

Sub-theme: Perceived security awareness before training
Answers: 2/8 participants state that they worry about cyber threats, while 6/8 do not. Most (6/8) of the participants state that they exercise caution when using computers, and 4/8 rely on security software to keep them secure. Only 2/8 of participants state that they do not think about security at all.

Sub-theme: Perceived change in security awareness after training
Answers: All participants expressed an increased security awareness after using WebSec Coach. 3/8 participants said that the training could lead to reduced anxiety online. 6/8 participants describe that they learned new things, leading to an increased understanding of what they need to be cautious about.

Sub-theme: Cybersecurity and digital exclusion
Answers: All participants believe that increased cybersecurity knowledge can reduce digital exclusion. 7/8 describe that increased security knowledge can make people feel more comfortable using the internet. 2/8 mention that they know several seniors who are afraid to use computers and therefore avoid doing so.

Table 4. Summary of results for the theme WebSec Coach

Sub-theme: General perception of WebSec Coach
Answers: The participants provided an overall positive view of WebSec Coach. All participants stated that it was user-friendly, easy to understand and added an extra layer of security. The tool contains a text-to-speech function, which 4/8 of participants appreciated. Further, 2/8 participants mentioned that it is important that the tool is kept up to date.

Sub-theme: Attitude towards using WebSec Coach
Answers: All participants were positive towards using WebSec Coach on their own computers.

Sub-theme: Perception of other seniors' willingness to use WebSec Coach
Answers: All participants believe that many seniors would be interested in using the tool. However, they suggest that it is important to find a way to inform about it and provide help with installing it.


Table 5. Summary of results for the theme contextual training

Sub-theme: Increased knowledge of phishing
Answers: All participants described increased knowledge about phishing after using the training. The participants further stated an increased interest in knowing about phishing, and four participants mentioned that it was good to have brief, easy-to-understand information.

Sub-theme: Warning function
Answers: 5/8 participants explicitly stated that it is good to have a warning function that warns you when something bad might happen. Further, all participants described that the tool made them more aware of security issues.

Sub-theme: General perception
Answers: At a general level, all participants were positive towards using contextual training to learn about cybersecurity. In addition to the positive views on the warning functions, all participants described that it is good to receive brief information which is easy to digest. 6/8 explicitly stated that it is good to receive practical tips on how to act in specific situations.

4 Conclusions

The aim of this research was to investigate senior citizens' perceptions of contextual cybersecurity training. The aim was met using an interview study where participants were subjected to contextual cybersecurity training using a tool called WebSec Coach. At a high level, the results suggest that the participants are positive towards using contextual training to receive cybersecurity information. The participants described that they and other seniors would be likely to use the tool on their own computers. The research also shows that the participants' level of security awareness increased after using the tool. After participating in the research, they also described an increased interest in learning more about phishing. In conclusion, this research suggests that contextual training can be a good way to educate senior citizens about cybersecurity. The remainder of this section will, in turn, elaborate on this research's results, contributions, and limitations before outlining suggestions for future research.

4.1 Results Discussion

At its core, contextual training means that cybersecurity training is presented to users when they are in a situation where that training is of direct relevance [11, 12]. The intention is to increase the user's awareness of a specific security risk when that security risk is relevant. While the security benefits can be great, a previously reported risk of this approach is that perceived usability is lowered because the user's workflow is disrupted [22]. Interestingly, none of the participants in this research perceived that aspect of contextual training as negative. Rather, providing a warning was mentioned by 5/8 participants as a positive function of the provided training. The participants further liked that the provided training was short and easy to understand, which they described as seldom the case with security information.

Previous research describes that fear of online threats may prevent senior citizens from using technology and lead to digital exclusion [5]. Most of the participants in this research describe that they, or people they know, had been the victims of phishing attacks. 2/8 participants also mention that they know seniors who chose not to use computers because of fears, and in that way this research supports the results presented by Holgersson, Kävrestad, and Nohlberg [5]. 50% of the participants in this research describe that they are less worried about using the internet now than before the training. All participants also state that they believe that increased knowledge about cybersecurity will make more seniors feel comfortable using digital services. In that regard, this research suggests that cybersecurity training for senior citizens can help reduce digital exclusion.

4.2 Contributions

Previous research evaluating contextual cybersecurity training for senior citizens is scarce, and the present research fills that gap by contributing knowledge about how senior citizens perceive such training. It builds on the research by Holgersson, Kävrestad, and Nohlberg [5] and confirms that anxiety and fear of security threats can be contributing factors to digital exclusion. On that note, this research suggests that contextual training can empower senior citizens to feel comfortable using digital services. Ng et al. [23] argue that using digital services independently is important, since asking for help may imply additional risk: the helper may exploit the person who asks for assistance. Interestingly, previous research shows that senior users are interested in protecting themselves [24]. The present research provides a possible direction on how.

Practically, this research describes a ready-to-use cybersecurity training tool usable for senior citizens. However, the research suggests that an important challenge is to support those seniors in starting to use the tool. A given conundrum is that using digital services to inform users who are in digital exclusion does not work. Participants in this research suggested finding physical venues where seniors can be given assistance with installing the tool, and that such events can take place in cooperation with municipalities or organizations for seniors.

4.3 Limitations and Future Work

This research included senior citizens who all had access to a computer and the internet. They were all users of digital services to some extent. For that reason, they are not a representative sample of senior citizens. However, we argue that previous experience of digital services makes them an information-rich population which is able to discuss phishing and security issues. Nevertheless, future research targeting persons who are not using digital services is needed to fully understand the problem of digital exclusion. It can also be noted that the sample size of eight participants makes it difficult to generalize the results of this research. Nevertheless, the interviews provided information-rich responses and made it possible to demonstrate the cybersecurity training tool during the interviews. A reasonable continuation of this work would be to research similar research questions using a methodology that allows for a bigger sample, such as a survey.

References

1. OECD: How's Life in the Digital Age? (2019)
2. Gulbrandsen, K.S., Sheehan, M.: Social exclusion as human insecurity: a human cybersecurity framework applied to the European high north. In: Salminen, M., Zojer, G., Hossain, K. (eds.) Digitalisation and Human Security, pp. 113–140. Springer, Heidelberg (2020). https://doi.org/10.1007/978-3-030-48070-7_5
3. Holgersson, J., Söderström, E.: Bridging the gap: exploring elderly citizens' perceptions of digital exclusion. In: 27th European Conference on Information Systems (ECIS), Stockholm & Uppsala, Sweden, 8–14 June 2019. Association for Information Systems (2019)
4. Aissaoui, N.: The digital divide: a literature review and some directions for future research in light of COVID-19. Global Knowledge, Memory and Communication (2021)
5. Holgersson, J., Kävrestad, J., Nohlberg, M.: Cybersecurity and digital exclusion of seniors: what do they fear? In: Furnell, S., Clarke, N. (eds.) Human Aspects of Information Security and Assurance. HAISA 2021. IFIP Advances in Information and Communication Technology, vol. 613, pp. 12–21. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81111-2_2
6. Trautman, L.J., Hussein, M., Opara, E.U., Molesky, M.J., Rahman, S.: Posted: no phishing (2020)
7. Zulkipli, N.H.N.: Synthesizing cybersecurity issues and challenges for the elderly. Turk. J. Comput. Math. Educ. (TURCOMAT) 12(5), 1775–1781 (2021)
8. Alwanain, M.I.: Phishing awareness and elderly users in social media. Int. J. Comput. Sci. Netw. Secur. 20(9), 114–119 (2020)
9. Aldawood, H., Skinner, G.: Educating and raising awareness on cyber security social engineering: a literature review. In: Proceedings of 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering, pp. 62–68. IEEE (2018)
10. Hu, S., Hsu, C., Zhou, Z.: Security education, training, and awareness programs: literature review. J. Comput. Inf. Syst. 62(4), 752–764 (2022)
11. Sharma, K., Zhan, X., Nah, F.F.-H., Siau, K., Cheng, M.X.: Impact of digital nudging on information security behavior: an experimental study on framing and priming in cybersecurity. Organ. Cybersecur. J.: Pract. Process People 1(1) (2021)
12. Xiong, A., Proctor, R.W., Yang, W., Li, N.: Embedding training within warnings improves skills of identifying phishing webpages. Hum. Factors 61(4), 577–595 (2019)
13. Kumaraguru, P., Rhee, Y., Acquisti, A., Cranor, L.F., Hong, J., Nunge, E.: Protecting people from phishing: the design and evaluation of an embedded training email system. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems (2007)
14. Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13(3), 319–340 (1989)
15. Jin, G., Tu, M., Kim, T.-H., Heffron, J., White, J.: Game based cybersecurity training for high school students. In: Proceedings of the 49th ACM Technical Symposium on Computer Science Education (2018)
16. Ma, S.F., Zhang, S.X., Li, G., Wu, Y.: Exploring information security education on social media use: perspective of uses and gratifications theory. Aslib J. Inf. Manag. 71(5), 618–636 (2019)
17. Oates, B.J., Griffiths, M., McLean, R.: Researching Information Systems and Computing. Sage (2022)
18. Braun, V., Clarke, V.: Using thematic analysis in psychology. Qual. Res. Psychol. 3(2), 77–101 (2006)
19. Etikan, I., Musa, S.A., Alkassim, R.S.: Comparison of convenience sampling and purposive sampling. Am. J. Theor. Appl. Stat. 5(1), 1–4 (2016)
20. Vetenskapsrådet: Good Research Practice (2017)
21. Kävrestad, J., Hagberg, A., Nohlberg, M., Rambusch, J., Roos, R., Furnell, S.: Evaluation of contextual and game-based training for phishing detection. Future Internet 14(4), 104 (2022)
22. Kävrestad, J., Friman, E., Bohlander, J., Nohlberg, M.: Can Johnny actually like security training? In: Proceedings of the 6th International Workshop on Socio-Technical Perspective in IS Development. CEUR-WS (2020)
23. Ng, I.Y., Sun, L.S., Pang, N., Lim, D., Soh, G.: From digital exclusion to universal digital access in Singapore. Faculty of Arts and Social Sciences, National University of Singapore, Singapore (2021)
24. Morrison, B., Coventry, L., Briggs, P.: How do older adults feel about engaging with cybersecurity? Hum. Behav. Emerg. Technol. 3(5), 1033–1049 (2021)

Another Look at Cybersecurity Awareness Programs

S. H. von Solms1(B), Jaco du Toit1, and Elmarie Kritzinger2

1 University of Johannesburg, Johannesburg, South Africa
{basievs,jacodt}@uj.ac.za
2 University of South Africa, Pretoria, South Africa
[email protected]

Abstract. Cybercrime has become one of the biggest forms of crime in the world today – if not the biggest. Everybody is seeking ways to address this growing cyber risk. Cybersecurity awareness of end users is an important component of helping to prevent cybercrime. However, research indicates that traditional cybersecurity awareness programs are not very successful. Budgets for cyber protection programs keep increasing, but there is no evidence that the levels of cybercrime are decreasing. Companies across the globe are searching for new ways and approaches to make their end users more cyber aware. What has become clear from many efforts and approaches in making end users cyber aware is that an approach emphasizing the technical aspects alone does not work. A complementary human-oriented approach is also needed. This paper advances three new possible approaches which can be considered in the challenge to create more cyber-aware end users. The first, called the 'Fighter' approach, is taken from the area of firefighting, where employees are trained to fight a fire in an emergency. The second, called the 'Ownership' approach, comes from the operational technology (OT) area, where machine operators are trained to take ownership of their machines and operate them safely. The third, called the 'Workplace' approach, is taken from the area of workplace training, where cyber-awareness is seen as part of a secure workplace. All three approaches are primarily based on letting the end user realise that cybersecurity awareness is actually part of their daily job environment.

Keywords: Cybercrime · Awareness

© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
S. Furnell and N. Clarke (Eds.): HAISA 2023, IFIP AICT 674, pp. 13–23, 2023. https://doi.org/10.1007/978-3-031-38530-8_2

1 Introduction

One of the biggest risks, if not the biggest, faced by all companies, and especially small and medium companies, is cybercrime. Cyberattacks have become extremely sophisticated, and Covid-19 has resulted in an unprecedented rise in cybercrime [1, 2]. Furthermore, the so-called 4th Industrial Revolution (4th IR), or Industry 4.0, with its advanced technologies like Artificial Intelligence, Machine Learning, the Internet of


Things and more, will just fuel this rise: 'Industry 4.0 will expose (the) maximum personal information the world has ever seen' [3]. The 4th IR will create massive benefits for humankind, but will also create more, and more serious, risks. According to Dagada (2021), 'Along with the huge advances being made in technology as the Fourth Industrial Revolution hurtles forward is an equally huge increase in cybercrime and politically motivated cyberattacks. This is fertile ground for criminals, terrorists and geopolitical strategists. It's a war out there' [4]. Therefore, protecting against cybercrime has become a massive challenge for companies, and new strategies are needed to address this growing challenge.

In preventing cybercrime, it has become very clear that technology-oriented cybersecurity protection measures on their own are no longer sufficient. The core role played by humans clearly indicates that new strategies for cybersecurity protection must concentrate on employees and end users, as they are core to a well-balanced cybersecurity protection plan. The most widely used cybersecurity awareness approaches for employees and end users today concentrate on classroom presentations with large numbers of slides and lecturing. However, experience clearly shows that cybersecurity awareness programs as presently used are not very effective and are not providing the desired results in increasing employee awareness of cybersecurity risks [5]. A wide range of reasons are given for such lack of effectiveness, but one which needs special attention is the fact that employees, in many cases, do not really understand the need for cybersecurity and do not see how it relates to their everyday work in the company. Cybersecurity awareness programs must be integrated into the essence and culture of the cybersecurity approach within an organisation.
In organisations where there is a silo-based approach to cybersecurity awareness and training, employees see cybersecurity training and awareness as just another tick box to be completed [6]. This results in limited interaction with, and incorporation into, the cyber culture of the organisation, which prevents such training from being sustained in the long term. Employees also often experience cybersecurity awareness programs and courses as 'scary, dull, and confusing' [7]. This underlines that cybersecurity programs are not seen as part of the essence of protecting the company. One can think of this as an add-on that can be changed or left out when needed – optional rather than critical.

The challenges are to bring such cyber-awareness training more directly in line with, and relevant to, the employee's daily job, to make it more interesting and enjoyable, and to ensure it has a long-term effect. One way to achieve that is to ensure such programs and courses are business-relevant, so that employees can evaluate the benefits of such courses in relation to their day-to-day business actions: 'Connect awareness to business benefits' [8]. It is the purpose of this paper to do precisely that – create a clearer understanding by employees of what role they can play in cyber-protecting the company. This understanding should clearly indicate to employees that they are part of defending their company against cyberattacks which may hurt the company. The purpose of cybersecurity awareness efforts should therefore be to create a clear link and bridge between the employee's daily work role, the consequences of successful cyberattacks against the company, and their responsibility as a co-cyber defender of the company against such attacks.


The paper aims to identify specific, clearly understandable goals that end-user employees can strive for to become part of a cyber-aware workforce as co-defenders against cyberattacks. The rest of the paper is structured as follows to explain the three suggested approaches to achieving these goals:

• Section 2: The 'Fighter/Protector' approach – fighter and protector of security
• Section 3: The 'Ownership' approach – ownership of security
• Section 4: The 'Workplace/Collaboration' approach – integrating cybersecurity holistically into the total workplace environment

2 The ‘Fighter/Protector’ Approach This approach is based on the ideal situation that every end user will be a (cybercrime) fighter to prevent cybercrime and to protect the company against any damage from cyberattacks and resulting cybercrime. The analogy in this approach is taken from firefighting courses and training. Let us investigate this area a little more to see how the area can help to change traditional cybersecurity awareness approaches. Firstly, it is interesting that employees are not doing a fire-awareness course, but a firefighting course. It is easy to let the employee understand that a fire can damage and destroy the company’s buildings, which can cause massive losses – financial losses, job losses and even the closing of the company. The employees are therefore clearly made aware that they must help protect the company against damaging fires. Such a firefighting course consists of theoretical and practical parts. Aspects like how to recognise the signs of possible places where fires could erupt, how to identify possible fire hazard and how to act to eliminate such potential sources, are usually part of such a course. The purpose is therefore to make let them understand that they must act as fire fighters when necessary, to protect the company. Every employee is therefore basically a fire fighter, and knows precisely how to act to fight a fire if such a fire is noticed. The employee therefore has a direct link from the benefit of such courses and training to the benefit for company in general, and even to themselves. Furthermore, no firefighting course can ever be successful if it is only done in a classroom or lecturing situation. Such courses do cover classroom work, for example looking out for and recognising potential fire hazards and reporting such hazards. This background is then followed by practical training where the employee is actually given a fire extinguisher to physically put out a fire. 
Basically, a culture is developed in which every employee clearly understands his or her role as a potential firefighter on behalf of the company. The challenge is how to develop this firefighter culture as far as cybersecurity is concerned. This can be done by training users, through theory and practice, to recognise a possible cyberattack situation and to act in such a situation. Users learn about the different modes and methods of cyberattacks and keep their eyes open to recognise such situations in their daily work. Through the training they realise that by recognising a potential attack, and not falling for it, they are actually protecting and defending the company against cyberattacks. As an example, consider the attack method known as Business Email Compromise (BEC). In this scenario, the end user receives an email from

16

S. H. von Solms et al.

a known party [9]. The email contains information about a payment to be made to the sender’s company and provides a specific bank account number. Such transactions happen every day and may be totally legal and in order. However, cybercriminals have been very successful in exploiting such emails. They hack into the sender’s or receiver’s email system and change the bank account information in the email before it arrives at the end user. The receiver accepts the contents of the email and deposits the amount into the account indicated in the email – the cybercriminals’ account! How can the sender in the company now act as a cyber fighter? Suppose the sender has been made aware of the BEC attack method and understands its potential risks. The sender will then advise the receiver, before the email is sent, to contact the sender using a separate means of communication, like a telephone call, to confirm the bank information before payment. By acting in this way, and because of his or her cybersecurity awareness of BEC attacks, the sender’s vigilance would have helped prevent a possible cybercrime incident. The conclusion – cyber risk-aware users can prevent cybercrime! A cyber risk-naïve sender would not have warned the receiver, and a cyber risk-naïve receiver would have trusted the content and transferred the money. Again, if the email had been compromised, the money would have gone to the criminals’ account, resulting in a cybercrime and possible financial loss to various parties. This situation was highlighted in a ruling from the High Court in Johannesburg, South Africa in January 2023. In a property transfer transaction, the relevant firm of attorneys sent an email to a client, who was the purchaser of the property. The email informed the client that the transfer had occurred, and that the client must pay the purchase price balance into the attorneys’ trust account.
The client’s email system was hacked, and the hackers changed the bank information. The client paid into the criminals’ account and lost ZAR 5.5 million (about US$330,000). The case was taken to court and the judge ruled that the firm of attorneys must pay back the money to the client. In his motivation, he stated that the firm was aware of BEC risks and that email systems are not secure. He also criticised the level of cybersecurity awareness of the administrative staff handling such emails, stating that it was “hopelessly inadequate” [10]. If the sender had been aware of the risk to which the company would be exposed by not taking cybersecurity precautions, the sender would have saved the company a lot of money. The sender would actually have acted as a cybercrime fighter by helping to protect the company in cyberspace. Again, the conclusion – cyber-aware users can prevent cybercrime! The main purpose of this approach is to let company end users understand that they are actually working on the front line of cyberattacks, and that in their normal business role they can help fight cybercrime. The ‘Fighter’ approach therefore means that all end users should be trained in such a way that they see themselves as cybercrime fighters who help protect the company against losses resulting from cyberattacks – just as they are firefighters who help protect the company against fire losses. The goal of a cyber-aware workforce is to strive to become cybercrime fighters to help protect the company – a direct link to the business benefit to the company. Figure 1 highlights individuals who strive to become cybercrime fighters. Individuals are empowered to “fight fires” to protect the organisation from cyberattacks.
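The out-of-band confirmation step described above can be expressed as a simple payment-release rule: money moves only when the account number in the email matches the number confirmed over an independent channel. The following Python sketch is purely illustrative; the data model and function names are invented for this example and are not part of the BEC scenario or the court case.

```python
# Illustrative sketch of the out-of-band verification rule described above.
# The data model and names are hypothetical, invented for this example.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PaymentRequest:
    account_in_email: str                     # bank account number stated in the email
    account_confirmed: Optional[str] = None   # number confirmed via a separate
                                              # channel, e.g. a telephone call

def release_payment(request: PaymentRequest) -> bool:
    """Release a payment only if the bank details were confirmed out of band.

    A BEC attacker can alter the account number inside the email, but not the
    number the payee states over an independent channel, so both must exist
    and match before any money moves.
    """
    if request.account_confirmed is None:
        return False  # no independent confirmation yet: hold the payment
    return request.account_in_email == request.account_confirmed

# An email tampered with by an attacker fails the check:
tampered = PaymentRequest(account_in_email="CRIMINAL-ACCT",
                          account_confirmed="TRUST-ACCT")
assert release_payment(tampered) is False
```

The design choice mirrors the court's reasoning: the email alone is never trusted as a source of bank details, so a compromised mailbox cannot, by itself, redirect a payment.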

Another Look at Cybersecurity Awareness Programs


Fig. 1. Fighter and Protector Approach

3 The ‘Ownership’ Approach

This approach is based on the ideal situation that every end user understands that he or she is probably a target for cybercriminals seeking to launch a cyberattack on the company. By understanding this, they will take extra responsibility for their computing devices (workstations) to protect them and to ensure that they are not used as instruments to cyberattack the company. The analogy in this approach is taken from the manufacturing environment and the operators who handle different types of machines. Let us investigate this area a little more to see how it can help to change traditional cybersecurity awareness approaches. The concept of Total Productive Maintenance (TPM) [11] includes the idea of developing a culture of total operator involvement, care and responsibility in and for the equipment they operate. In this culture, operators are trained to develop a sense of ownership of their equipment and become full partners with technicians and management in ensuring the safety and protection of the equipment. The culture develops asset ownership in the sense of ‘this is my equipment, and I must protect it for the benefit of myself and my company’. Such a culture challenges operators to ensure that their machines are always operated in an optimal way so that the products created by them are as perfect as possible, as that will benefit the operator and the company in general. This is precisely what the term ‘operator care’ means in such a manufacturing environment. It is the process of engaging all employees (and specifically the machine operators) towards a common goal of ensuring the best results and outputs for the company [12]. Let us now investigate how this approach can help to create a cyber-aware workforce. Suppose every end user employee can be trained to see his or her computing device


(workstation) as a ‘machine’ used to help the company succeed. In that case, the idea of operator care has direct relevance. This will challenge end users to ensure that their computing devices are always operated in an optimal way so that the company is not negatively impacted. Training in aspects like phishing, fake websites, deepfakes, BEC and more now addresses bad things which can happen on their computing devices, and which can potentially impact the company negatively if not stopped. Such bad things must therefore be detected, avoided and reported. The cyber-aware workforce goal to strive for is to take responsible ownership of their computing devices and be aware of the types of events which can cause problems – a direct link to the business benefit to the company. The ‘Ownership’ approach therefore means that all end users should be trained to see themselves as responsible owners of their computing devices and to use such devices in a way that will help prevent any damage to the company. The cyber-aware workforce’s goal is to take responsibility and ownership of their devices, ensuring that these devices will not pose any risks to the company. Figure 2 demonstrates how individuals take ownership of ICT assets and of their responsibilities in protecting those assets from cyberattacks.

Fig. 2. Ownership approach

4 The ‘Workplace’ Approach

This approach is based on the ideal situation that every end user will feel safe and secure in his or her workplace environment. The analogy in this approach is based on the usual health and safety training that all employees receive in any company – in many cases such training is required by law. In general terms, the purpose of health and safety training


can be seen as providing for the safety and health of persons at work in connection with the use of machinery [13]. Health and safety training is important because it equips workers with the knowledge of how to perform their duties correctly and in the most secure and safe way possible [14]. The purpose of such training is to create a culture of mutual safety and protection in the organisation. Such a culture is an organisational culture that places a high level of importance on security and safety beliefs, values and attitudes – and these are shared by the majority of people within the company or workplace. It can be characterised as ‘the way we do things around here’ [15]. A positive security and safety culture can result in improved workplace health and safety and organisational performance. In a typical health and safety course, employees will be exposed to many aspects, but one of the main goals is to allow the employee to develop a positive health and safety culture, where working in a secure, safe and healthy environment becomes second nature to everyone. Such courses should create a sense of wellbeing in the individual employee [16]. Such a culture will include emphasising to employees that they have a responsibility to collaborate to protect themselves, to ensure that the company assets and broader surroundings are protected, and to ensure that the business is protected against unnecessary insurance claims and/or financial losses that could have been avoided [14]. Some health and safety courses now include online safety and cybersecurity [17]. Including cybersecurity in a health and safety course should indicate to an employee that cybersecurity is as important and relevant as other aspects like environmental safety and security, equipment safety, firefighting and employee health resources in the workplace.
The ‘Workplace’ or ‘Collaborator’ approach therefore means that all end users should be trained in a way that lets them realise that protecting their computing devices against cyberattacks is part of the culture that exists in a good workplace. It is not a standalone or strange type of requirement, but simply re-emphasises the core benefit to the company as a whole. The cyber-aware workforce goal to strive for is to see cybersecurity as an integral part of normal workplace safety, security and operations. Figure 3 demonstrates the responsibility a Collaborator has in protecting ICT assets from attacks, and emphasises collaboration as an important property of this approach.

5 Evaluating the Fighter, Ownership and Workplace Approaches in Creating a Cyber Risk-Aware Workforce

As discussed in the sections above, end users are becoming the main targets for cybercriminals. ‘The human element is the most common threat vector; it was the root cause of 82% of data breaches’ [18]. It therefore seems logical that the most effort should go into creating a comprehensive cyber risk-aware workforce which is acutely aware of its role and responsibility in the fight against cybercrime. It must become second nature to such end users to realise and understand that they are actually the company’s first line of defence against cybercrime. Such an approach should advance a ‘cyberattack prevention first’ culture. This means a critical-thinking mindset and evaluation in every situation where the end user


Fig. 3. Workplace/Collaborator Approach

is confronted with emails and telephone calls. In all such cases, cybercrime prevention must drive any action by the end user. ‘It’s common belief that people are the last line of defence during a cybersecurity attack. Wrong. In many instances people are in fact the first line of defence. If your employees are (1) aware and (2) properly trained, then they will be one of your single strongest assets in fighting a never-ending war against cybercrime’ [19]. The three approaches discussed above all work towards instilling a culture of some sort, and these different cultures all contribute to a culture of ‘cyberattack prevention first’. With this approach, end users realise and accept their important and responsible role, with the overriding intention of preventing cybercrime in their company. ‘Everyone from entry-level to C-suite should know how to identify and report breaches so they can defend the enterprise’ [20, 21]. ‘Security is everyone’s job’ [22]. In the next section, we provide some ideas on how to start creating a cyber risk-aware workforce. Figure 4 shows that a cyber risk-aware workforce contains elements of a cyber fighter, collaborator and owner.

6 Start Point for Creating the Proposed Cyber Risk-Aware Workforce

To create a cyber risk-aware workforce of cybercrime fighters, we have to move away from the established (and unsuccessful) methods we have been using up to now. ‘Training is the most crucial step in this process, and it doesn’t need to include rote messages and endless PowerPoint slides. Learning sessions can be humorous, fun, and—most importantly—educational’ [21].


Fig. 4. Cyber Risk-Aware Workforce

By explaining and elaborating on the three approaches above, cybersecurity awareness training will concentrate much more on the responsibility and ownership role of end users. They must understand that they are actually the frontline protectors against cybercrime. It therefore seems sensible to build cybersecurity awareness and a cyber risk-aware workforce around the idea of cybercrime fighters, owners and workers. That gives end users an identity and a goal within which all training and courses are seen holistically as part of cybercrime fighting, directly linked to the job they perform every day – as end users operating their computing devices. The challenge is to make it as practical as possible. How can you teach someone about phishing attacks if you do not actually confront the person with such an attack? You can explain it in a classroom, but they must also be exposed to realistic simulation exercises where they receive such an email and must decide what to do. Present approaches to awareness very often consist of classroom teaching or a webinar session, without any practical exposure. After all, you cannot train employees to be firefighters if you do not expose them to a real fire, give them a fire extinguisher, and allow them actually to make the right decisions and extinguish the fire. ‘…we are continuously testing and training our employees by sending them simulated phishing emails, getting them more familiar with what to look for and how to spot phishing emails. Even just in this last quarter, we saw more employees spot and report the phishing simulation test than ever before’ [22]. This type of training must be integrated with the daily work environment of end users, and they must learn from experience to create the right cyber risk-aware mindset. A cybercrime fighting and prevention program needs to be integrated into employees’ daily work.
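The report rate cited in the quote above – how many employees spot and report a simulated phishing email – is a simple metric an organisation can track over time. A minimal sketch, with quarterly figures invented purely for illustration:

```python
# Hypothetical sketch: the phishing-simulation report rate mentioned above,
# tracked across quarters. All figures are invented for illustration.

def report_rate(reported: int, sent: int) -> float:
    """Fraction of simulated phishing emails that employees reported."""
    if sent <= 0:
        raise ValueError("no simulated emails were sent")
    return reported / sent

# (reported, sent) per quarter -- invented numbers:
quarters = {"Q1": (120, 400), "Q2": (150, 400), "Q3": (210, 400)}
rates = {q: report_rate(r, s) for q, (r, s) in quarters.items()}

# A rising report rate suggests the simulated exercises are taking hold:
assert rates["Q1"] < rates["Q2"] < rates["Q3"]
```

Tracking such a trend quarter over quarter is one concrete way to tie the simulation exercises back to the business benefit the paper emphasises.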
Researchers and designers need to think of clever ways to increase the cyber risk-awareness of workforces. However, it seems that a cyber risk-aware workforce, supported by a cybersecurity culture, is the basic deliverable of


any cybersecurity training. Such a culture will be more acceptable to employees if it highlights some specific goals to strive for. The paper offers three such goals which employees should strive for: • Be a proud member of your company’s army of cybercrime fighters. • Take responsibility and ownership of your computing device. • Understand that cybercrime fighting is part of the well-established, secure and safe workplace environment where you work. The three goals and ideas presented here can be incorporated into organisations’ bigger cybercrime fighting and awareness programs to develop a cyber risk-aware workforce.

7 Summary and Conclusion

In this paper we suggested three possible considerations which can make cybersecurity awareness programs more goal oriented and bring cybersecurity and cybercrime directly into the daily work of end users. A cybercrime fighting and prevention program needs to be integrated into employees’ daily work. Researchers and designers need to think of clever ways to develop a cyber risk-aware workforce. However, it seems that a cybersecurity culture is the basic deliverable of any cybersecurity course. Such a culture will be more acceptable to employees if it highlights some specific goals to strive for. The paper offers three such goals: • Become cybercrime fighters. • Take responsibility and ownership of your computing device. • Understand that cybercrime fighting is part of a well-established workplace environment. The three goal ideas presented here can be incorporated into organisations’ bigger cybercrime fighting and awareness programs.

References

1. Buil-Gil, D., Miró-Llinares, F., Moneva, A., Kemp, S., Díaz-Castaño, N.: Cybercrime and shifts in opportunities during COVID-19: a preliminary analysis in the UK, vol. 23, pp. S47–S59 (2021). https://doi.org/10.1080/14616696.2020.1804973
2. Radoini, A.: Cyber-crime during the COVID-19 pandemic, vol. 2020, pp. 6–10 (2020). https://doi.org/10.18356/5c95a747-en
3. Onik, M.M.H., Kim, C., Yang, J.: Personal data privacy challenges of the fourth industrial revolution. In: ICACT, pp. 635–638 (2019)
4. Dagda, R.: The escalation in global cyberattacks is an unintended consequence of 4IR technologies (2021). https://www.dailymaverick.co.za/opinionista/2021-09-13-gone-phishing-the-escalation-in-global-cyberattacks-is-an-unintended-consequence-of-4ir-technologies/. Accessed 21 Mar 2023
5. Bada, M., Sasse, A.M., Nurse, J.R.C.: Cyber security awareness campaigns: why do they fail to change behaviour? (2019). https://doi.org/10.48550/arxiv.1901.02672
6. Alshaikh, M.: Developing cybersecurity culture to influence employee behavior: a practice perspective, vol. 98, pp. 102003–102010 (2020). https://doi.org/10.1016/j.cose.2020.102003


7. Haney, J.M., Lutters, W.G.: “It’s scary... it’s confusing... it’s dull”: how cybersecurity advocates overcome negative perceptions of security, pp. 411–425 (2018)
8. Candrick, W.: 3 steps to stop employees from taking cyber bait (2021). https://www.gartner.com/en/doc/3-steps-to-stop-employees-taking-cyber-bait. Accessed 21 Mar 2023
9. Checkpoint Software: Business email compromise (BEC) (2023). https://www.checkpoint.com/cyber-hub/threat-prevention/what-is-email-security/business-email-compromise-bec/
10. Mudau, J.: Hawarden v Edward Nathan Sonnenbergs Inc (13849/2020) [2023] ZAGPJHC 13 (2023)
11. Scheller, E.: TPM and operator asset ownership: this is my equipment! (n.d.). https://www.reliableplant.com/Read/26763/tpm-operator-asset-equipment. Accessed 23 Mar 2023
12. Gehloff, M.: Operator care – 4 elements to enhanced operator inspections (n.d.). https://reliabilityweb.com/tips/article/operator_care_4_elements_to_enhanced_operator_inspections. Accessed 23 Mar 2023
13. Labour Guide: Overview of the OHS act (n.d.). https://labourguide.co.za/health-and-safety/overview-of-the-ohs-act/. Accessed 23 Mar 2023
14. SafetyWallet (Pty) Ltd.: Importance of occupational health and safety training (2023). https://www.safetywallet.co.za/healthandsafetytraining. Accessed 23 Mar 2023
15. The State of Queensland (Department of Justice and Attorney-General): Understanding safety culture, pp. 1–8 (2013)
16. Health and Safety Executive: Why is health and safety training important? (n.d.). https://www.hse.gov.uk/treework/training-is-important.htm. Accessed 23 Mar 2023
17. Bleich, C.: 15 safety training topics for your workplace (and free courses!) (2023). https://www.edgepointlearning.com/blog/employee-safety-training-topics/. Accessed 23 Mar 2023
18. Kerner, S.M.: 34 cybersecurity statistics to lose sleep over in 2023 (2023)
19. Vener, D.: Create an army of employees to fight cybercrime (2020). https://www.tagsolutions.com/create-an-army-of-employees-to-fight-cybercrime/. Accessed 21 Mar 2023
20. Kassner, M.: Cybersecurity pros: are humans really the weakest link? (2020). https://www.techrepublic.com/article/cybersecurity-pros-are-humans-really-the-weakest-link. Accessed 21 Mar 2023
21. Briggs, D.: Smart prevention: how every enterprise can create human firewalls (2019). https://www.darkreading.com/vulnerability-management/smart-prevention-how-every-enterprise-can-create-human-firewalls. Accessed 21 Mar 2023
22. Scimone, J.: Security is everyone’s job in the workplace (n.d.). https://www.technologyreview.com/2021/11/22/1040358/security-is-everyones-job-in-the-workplace. Accessed 21 Mar 2023

Cyber Range Exercises: Potentials and Open Challenges for Organizations

Magdalena Glas1(B), Fabian Böhm2, Falko Schönteich2, and Günther Pernul1

1 University of Regensburg, Regensburg, Germany
{magdalena.glas,guenther.pernul}@ur.de
2 Q PERIOR AG, Munich, Germany
{fabian.boehm,falko.schoenteich}@q-perior.com

Abstract. The shortage of skilled cybersecurity professionals poses a significant challenge for organizations seeking to protect their assets and data. To address this shortage, onboarding and reskilling employees for cybersecurity positions becomes a daunting task for organizations. Cyber ranges mirror digital infrastructures to provide a realistic yet safe environment for cybersecurity training. To date, the potential of cyber ranges has been leveraged primarily in academic education. This paper investigates how cyber range exercises (CRX) can enhance the onboarding and reskilling of cybersecurity professionals in organizations. To this end, we conducted semi-structured interviews with seven cybersecurity professionals from organizations in different industry sectors in Germany and India. Our findings indicate that the main potential of CRXs lies in conveying universal cybersecurity concepts that are transferable to the particular systems, technologies and tools of an organization. Thereby, CRXs represent a promising complement to existing organizational training strategies. Challenges to overcome were identified in establishing an organizational CRX infrastructure, building the necessary competencies to conduct the exercises, and ensuring the comparability of CRXs to validate personal competence development.

1 Introduction

© IFIP International Federation for Information Processing 2023. Published by Springer Nature Switzerland AG 2023. S. Furnell and N. Clarke (Eds.): HAISA 2023, IFIP AICT 674, pp. 24–35, 2023. https://doi.org/10.1007/978-3-031-38530-8_3

Cybersecurity threats continue to grow in complexity, impact, and scale, making it crucial for organizations to have a strong cybersecurity workforce that provides an effective line of defense against these threats [10]. Meanwhile, the severe shortage of skilled professionals in the field poses a significant challenge to organizations seeking to protect their assets and data [14]. To address this shortage and meet the demands of the ever-changing threat landscape, there is a need to equip more people with the competencies to work as cybersecurity professionals. As conventional knowledge transfer methods lack the possibility to convey practical cybersecurity skills, hands-on training in cyber ranges has gained popularity in recent years [23]. Cyber ranges mirror real infrastructures


to provide a realistic yet safe cybersecurity training and testing environment. A cyber range can facilitate a variety of different training or testing scenarios. The execution of a training or testing scenario on a cyber range is referred to as CRX. In a CRX, a trainee can experience cybersecurity attacks against realistic digital infrastructures. Research on cyber ranges has rapidly evolved in the past three years [11]. Although cyber ranges are already being used to some extent in private or public organizations [1,2,13], innovations in the field are predominantly driven by academic research [6,23]. In this context, CRXs are primarily developed for the education of students in cybersecurity curricula. Prominent examples are the cyber range platforms KYPO [5,22], CyTrone [3,20] and THREAT-ARREST [12]. CRXs on these platforms with hundreds of participants provide far-reaching insights into how cyber ranges contribute to acquiring practical cybersecurity skills in academic curricula. While higher education institutions undoubtedly play a crucial role in educating the future cybersecurity workforce, they alone are not sufficient to close the workforce gap in cybersecurity in a timely manner. Therefore, the education of cybersecurity professionals must also become a priority for private and public organizations. Intra-organizational training is frequently identified as one of the most effective measures for organizations to mitigate organizational cybersecurity workforce gaps [14,19]. Creating CRX-based training enables organizations to equip employees with the precise skill set required for a cybersecurity position in the respective organization. Moreover, intra-organizational cybersecurity training enables organizations to grow internal cybersecurity talent by offering reskilling programs to employees outside cybersecurity with an IT or other technical background. 
In this work, we seek to pave the way for organizations to foster the potential of cyber range training for cybersecurity professionals. We focus on CRXs for individual competence development during cybersecurity onboarding or reskilling programs. To explore the potentials and challenges of CRXs in this context, we conducted semi-structured interviews with seven cybersecurity professionals from different industry sectors in Germany and India. To familiarize the interviewees with the concept of CRXs, we deployed two cyber range scenarios by Vielberth et al. [21] and Glas et al. [11] that aim at teaching practical detection and mitigation techniques to future cybersecurity professionals. The interviewees had different levels of expertise in cybersecurity, from less than one year to over 14 years of experience in the field. This allows us to depict different perspectives on the topic – from interviewees in the role of organizational cybersecurity trainees to professionals with years of experience with cybersecurity operations in one or more organizations. The rest of this paper is structured as follows. A brief account of background on cyber ranges and related work is given in Sect. 2. In Sect. 3, we outline the method of our study. We present the findings of the interviews we conducted in Sect. 4, including a description of current training methods in the participants’ organizations and highlighting potentials (Sect. 4.2) and open challenges (Sect. 4.3) for utilizing cyber ranges for organizational cybersecurity training. In Sect. 4.4, we shed light on the limitations of our study and give an outlook on future work before concluding our work in Sect. 5.

2 Background and Related Work

The origin of cyber ranges traces back to the late 2000s when the US Department of Defense developed the first cyber range to prepare soldiers for cyber warfare [9]. The concept adapts the idea of shooting ranges, which provide a safe environment for individuals to practice and test their skills. Cyber ranges have extended their applications beyond the military domain and are now utilized for training individuals in cybersecurity across various domains [6]. The prevailing definition of the concept originates from the National Institute of Standards and Technology (NIST), which refers to cyber ranges as “interactive, simulated platforms and representations of networks, systems, tools, and applications” [18]. The potential of cyber ranges lies in providing a realistic facility for hands-on training and testing that can better prepare individuals and organizations for real-world cyberattacks. Depending on the specific needs of the training, cyber ranges can be entirely virtual or include actual physical hardware [15]. Prior works widely acknowledge that CRXs provide benefits within academic education and for cybersecurity professionals in practice. For instance, Collins et al. [7] emphasize the importance of hands-on training for security operations centers (SOCs), proposing a cyber range concept to train SOC analysts at different levels of expertise. However, this CRX concept is not tailored to the context of a specific organization but represents a generic SOC infrastructure. Similarly, Brilingaitė et al. [4] and Kim et al. [16] report on designing and conducting large-scale CRXs for professionals from different industries. These CRXs were centrally organized by research institutions to enable cybersecurity professionals from various public institutions to test and improve cybersecurity skills. The focus of these CRXs was not to provide professionals with an organization-specific skill set but to strengthen cyber resilience on a national level.
These works show promising results regarding the suitability of hands-on training for cybersecurity professionals; however, they consider CRXs from a national rather than an organizational perspective. The Austrian Institute of Technology (AIT) proposes a “professional training” module as one component of the AIT cyber range [17]. However, the authors focus on the functional description of the component and not on how this form of training can be incorporated into an organization’s cybersecurity strategy. With this paper, we build on these findings and look at the potentials and challenges of CRXs from the perspective of organization-specific competence development.
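To make the idea of “interactive, simulated platforms” more concrete: a CRX scenario typically injects simulated attack events into a monitored log stream and asks the trainee to apply or write a detection rule. The sketch below is our own minimal illustration, not the design of any platform cited above; the event fields and the threshold are invented.

```python
# Minimal illustration of a CRX-style drill: the scenario injects simulated
# events into a log stream, and a detection rule must flag the attack step.
# Event fields and the threshold are invented for this example.

simulated_log = [
    {"event": "login", "user": "alice",    "failed_attempts": 0},
    {"event": "login", "user": "operator", "failed_attempts": 7},  # injected attack step
    {"event": "file_read", "user": "alice", "failed_attempts": 0},
]

def detect_bruteforce(events, threshold=5):
    """Flag login events whose failed-attempt count reaches the threshold."""
    return [e for e in events
            if e["event"] == "login" and e["failed_attempts"] >= threshold]

alerts = detect_bruteforce(simulated_log)
assert [a["user"] for a in alerts] == ["operator"]
```

Because the events are simulated, the trainee can miss the attack, refine the rule, and retry without any risk to production systems – the property that distinguishes a cyber range from exercises on live infrastructure.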

3 Method

To gain insights on the potentials and challenges of CRXs in organizations, we conducted interviews with seven cybersecurity professionals in Germany and India. Interviewees were recruited via snowball sampling: we initially recruited cybersecurity professionals from the researchers’ networks, who then recommended further colleagues. To cover a wide variety of professional contexts, we recruited participants from different industry sectors and with different levels of expertise.


Table 1. Interviewees’ industry sector, country, position and experience in cybersecurity (in years).

No.  Industry sector   Country  Position                              Exp.
I1   Machine Manufact  India    SOC Team Lead                         6
I2   Machine Manufact  Germany  IT Governance Coordinator (Security)  1
I3   Machine Manufact  Germany  Chief Information Security Officer    14
I4   Banking           Germany  Information Security Officer          5
I5   Consulting        Germany  Manager                               10
I6   Consulting        Germany  Consultant (Offensive Security)       1
I7   Consulting        Germany  Consultant (Offensive Security)       1

Table 1 provides details about the recruited interviewees. The planned user study was designed in accordance with the guidelines of the ethics committee of our university and did not raise any ethical concerns. To familiarize the interviewees with the concept of CRXs, we deployed two cyber range scenarios by Vielberth et al. [21] and Glas et al. [11]. These scenarios aim at training participants in responding to cyberattacks on a simulated industrial system. For this purpose, the participants engage with a Security Information and Event Management (SIEM) system for monitoring the industrial system. A Learning Management System (LMS) guides participants through the scenarios by providing information and dividing the scenarios into different tasks. For a more detailed description of the scenarios, we refer to the original works. We deployed the web-based CRXs a few days before the interviews, created user accounts for the interviewees and sent out the credentials to access the CRXs. Since this served only to introduce the interviewees to the idea of CRXs, they were free to choose whether to actively participate in one of the exercises or to receive a brief overview of the CRXs from the interviewer at the beginning of the interview. To capture the interviewees’ assessment of the possibilities of a CRX in a professional context, a semi-structured interview approach following the method of Corbin and Strauss [8] was chosen. To this end, we structured the interviews around three distinct topics. Each topic looked at the use of CRXs in organizations from a different perspective:

1. Status quo of cybersecurity onboarding and reskilling: This topic aims to provide insight into how onboarding/reskilling is currently positioned in the participant’s organization.

28

M. Glas et al.

2. Potential of CRXs: In the second phase of the interview, we asked the interviewees for their opinion and assessment of the potential and possible uses of CRXs in the context of the status quo.
3. Prospects on implementation and open challenges: In this phase, we wanted to understand how the interviewee envisions the implementation of CRXs in the organization to leverage the previously discussed potential. Subsequently, we talked with the participants about the obstacles and challenges of using CRXs for onboarding and reskilling, investigating what could hinder their organization from adopting a CRX.

The specific questions representing our interview guideline are provided in the appendix of this paper. So that the participants could familiarize themselves with the questions, the interview guideline was sent to them in advance. All interviews were conducted virtually over a video conferencing tool. With the participants' consent, the interviews were recorded and transcribed afterward in preparation for the qualitative analysis. The captured raw data comprised 223 min of audio material and 32,103 transcribed words. For data analysis, a structured coding approach was applied: the first round of coding was performed by the first author, who coded the interviews according to the three predefined topics. On this basis, the authors inductively drew conclusions from the interviews in several rounds of discussion. All findings were approved by the interviewees. Details about the individual interviews are given in Table 2.

Table 2. Details about the individual interviews: degree of participation in the CRX, interview language, and duration.

| No. | CRX participation | Interview language | Interview duration (minutes) |
| I1 | Active participation | English | 24 |
| I2 | Active participation | German | 27 |
| I3 | Overview given by interviewer | German | 25 |
| I4 | Overview given by interviewer | German | 42 |
| I5 | Active participation | German | 57 |
| I6 | Active participation | German | 30 |
| I7 | Active participation | German | 17 |

4 Findings

In the following, we describe the findings of the seven interviews in relation to the three predefined topics (Status quo of cybersecurity onboarding and reskilling; Potential of CRXs; Prospects on implementation and open challenges) before outlining the limitations of our study and giving an outlook on future work. A summary of our findings can be found in Table 3.

Cyber Range Exercises: Potentials and Open Challenges for Organizations

4.1 Status Quo of Onboarding and Reskilling in Cybersecurity

At the beginning of the interviews, all participants agreed that the workforce gap in cybersecurity is clearly noticeable in their respective organizations (I1–I7). The apparent symptoms of this problem are the low number of applicants for open positions (I3, I6) and the quality of applications regarding the suitability of candidates for the cybersecurity profiles sought (I1–I4). The status quo of training for onboarding new colleagues and reskilling personnel from within the organization is currently limited to a mixture of on-the-job learning (I1, I3) and external training (I2–I7). In some cases, these two approaches are supplemented by privately conducted training (I5, I6, I7). The interviewees clarified that organizations need training that focuses specifically on the skills and concepts required by the organization (I1, I2, I3, I5). However, the internal resources and knowledge to develop and deliver such training are lacking (I6). Instead, generic, external training is used to cover the onboarding and reskilling of employees. The role of generic certifications was seen ambivalently: while the consultants saw generic certificates as beneficial due to their cross-customer activities (I5, I6, I7), the other participants saw such external training as not specific enough to the organization's needs (I1–I4). Notably, none of the interviewees' organizations has a formal training path for the required roles and profiles in cybersecurity, even though the urgent need for such a formal yet customized training approach was recognized.

4.2 Potential of CRXs

The status quo highlighted the organizations' need for formal defensive cybersecurity skills training tailored to their requirements. All participants stressed the potential of CRXs in this context (I1–I7). CRXs can not only be used in the further training of IT security professionals but could also support the cybersecurity training of IT professionals in general. Positions with high turnover, such as SOC analysts, were named as a key target group for a CRX (I3). When onboarding new employees in this area, the interviewees felt it most essential to provide an understanding of the overarching concepts and processes of the technology and the organization. Teaching general concepts and knowledge, as well as their applicability, is more important to almost all participants than the specific application of a particular technology or tool (I1, I3, I4, I5, I7). This is the core of the potential of CRXs as seen by the participants: the opportunity to train concepts, processes, and procedures in a safe environment (I1–I5). Practically, this means it is more important for employees to understand, e.g., how a SIEM system generally works than to be able to employ all functions of specific SIEM systems like Splunk (https://www.splunk.com/), QRadar (https://www.ibm.com/qradar), or ArcSight (https://www.microfocus.com/en-us/cyberres/secops/arcsight-esm).


The potential content taught via a CRX can be separated into three levels (cf. Fig. 1). At the most general level, a CRX can be used to teach an understanding of overarching concepts (I1–I5, I7), e.g., network technology. At this level, participants might learn central concepts of cybersecurity; specific tools do not matter at this point in a trainee's education.

Fig. 1. Core potential of CRXs in organizations: we identified use cases on three levels: Cybersecurity Foundations (e.g., secure networking concepts), Cybersecurity Processes (e.g., incident detection), and Cybersecurity Tools (e.g., anti-malware tools, SIEM systems), alongside non-technical skills (e.g., stress resilience).

On the next, more concrete level, employees can enroll in a CRX to learn, for example, the analytical procedures for detecting and analyzing security incidents (I1, I2, I3, I5, I6, I7). Here, CRXs support employees in understanding the processes and frameworks applied within the organization. At this level, focusing on core cybersecurity processes, a CRX can teach employees to understand and apply concepts like the Cyber Kill Chain (https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html), the MITRE ATT&CK framework (https://attack.mitre.org/), or related aspects of day-to-day cybersecurity work. Training on this level can range from more knowledge-focused (e.g., understanding the different techniques within MITRE ATT&CK) to very practical (e.g., actually analyzing threats following a specific framework). Even more practical exercises, like onboarding data sources to a SIEM system or defining use cases for incident detection within said SIEM system, might be part of a CRX at this level. Ultimately, a CRX can also support training on specific technologies (I2, I4, I5, I7). However, for the interviewees, the focus lies not on learning how to use a particular tool but on the concepts of, e.g., anti-malware technology (I1, I3, I5). A CRX should enable employees to transfer general knowledge about, e.g., anti-malware technology to the specific tool used in the organization. Still, with regard to specific technologies, it might be feasible to implement a small set of CRXs focusing on them. In practical terms, if an organization uses a particular SIEM system and has no intention of transitioning to a different tool, it is reasonable to integrate this SIEM system into a CRX. In addition to these technical aspects, some interviewees noted that a CRX could also be leveraged to train non-technical skills related to defensive security operations, such as stress resilience (I2) and analytical reasoning (I6), when


investigating an incident. From a practical perspective, this can be achieved through a more realistic simulation environment that better reflects the organization's actual circumstances regarding the number of logs and related data artifacts. Further possible use cases for a CRX, as a test bed (I2) and "safe space" (I6) for process changes and optimizations, were highlighted in the interviews.

4.3 Prospects on Implementation and Open Challenges

Regarding the implementation of a CRX and associated challenges, we talked with the participants about the level of fidelity that a CRX must attain to offer an organization an advantage over existing training options (cf. Sect. 4.1). The interviews revealed that achieving a high degree of fidelity in a CRX is hardly feasible but also not necessarily required. The participants pointed out that, due to the immense complexity of IT infrastructures, exact replicas of these infrastructures cannot be created with a reasonable amount of effort (I3–I6). Interviewee I4 highlighted that IT landscapes are so dynamic that CRXs pursuing a high degree of fidelity will always be subject to limited timeliness. However, this was not seen as an issue limiting the usefulness of CRXs: "You will never achieve a 100% replication of a system [for a CRX]. But I believe this isn't necessary in a security context as you want to generate transferable knowledge." (I3) In the context of onboarding and reskilling, the interviewees described the potential of CRXs as conveying fundamental concepts in a way that trainees can apply them to different contexts in terms of tools and technologies (cf. Sect. 4.2). This sort of generic training does not require a CRX to mirror a particular organization. Instead, the interviewees considered it sufficient for such a CRX to provide a realistic training environment that takes into account industry-specific characteristics and the size of the organization (I3, I4, I5). These findings indicate that achieving a high level of fidelity is not a significant concern for the practical use of CRXs in an organizational context; rather, it is a trade-off between fidelity and a limited budget. This demonstrates a mismatch between the requirements of practice and the current goals of academic research in the field of CRXs, where fidelity is commonly prioritized as a vital objective in CRX design [11].

For use cases beyond onboarding and reskilling, the participants argued that the level of fidelity a CRX provides should align with the complexity of the training content (I2, I4–I7). In addition to technical expertise, higher-level cybersecurity professionals require a nuanced understanding of the organizational context in which they operate. They must be able to communicate effectively with both technical and non-technical stakeholders, e.g., to mediate between the SOC detecting a cyberattack and the business departments whose systems are affected by it (I7). As such, CRXs designed for these roles must, to a certain degree of fidelity, reflect the respective organization's structures and processes. In terms of sourcing, the interviewees considered using CRXs provided by external vendors preferable to internal sourcing (I1–I7). Interviewees I3 and I7 noted that smaller and medium-sized organizations might lack the competence and resources to implement, manage, maintain, and conduct CRXs.


In addition, the potential of external sourcing was seen as a way to ensure that the content of a CRX is up to date (e.g., regarding recent attack vectors) (I4, I5, I6), based on best practices (I2), and enriched with external know-how (I3, I5). One interviewee specifically envisioned a "cyber range ecosystem" (I5) that provides a modular system with reusable learning and scenario components that can be individually orchestrated for an organization or industry.

Table 3. Summary of the interviews' main findings per topic.

Status quo:
• All interviewees noted the cybersecurity workforce gap in their respective organizations, evident in the low number of sufficiently qualified applicants for cybersecurity positions.
• Current training methods involve a mix of on-the-job learning, external training, and occasional private training, but lack customization and a specific focus on organizational needs.

Potential:
• The interviewees acknowledged the potential of CRXs to provide a safe and isolated environment for hands-on cybersecurity skills training.
• The main potential of CRXs was seen in conveying overarching cybersecurity concepts rather than specific technologies or tools.

Implementation and challenges:
• A high degree of fidelity is not seen as a strong requirement for practical application, as providing transferable knowledge is prioritized over exact replicas.
• The advantages of CRXs in comparison to conventional training methods must be clearly differentiated to justify organizational investments in CRXs.

During the interviews, several open challenges for the use of CRXs were discussed. The interviewees highlighted the significant investment of time, effort, and resources required to set up, operate, update, and maintain a cyber range (I2, I6, I7). To justify this investment, the advantages of CRXs over traditional training methods must be clearly differentiated for management stakeholders, as CRXs will compete with those methods for a limited budget (I1, I2, I3, I5, I6, I7). In this context, it was emphasized that CRXs, while undoubtedly beneficial, cannot replace on-the-job training entirely, as they are not able to fully replicate the complexity and unpredictability of real-world cyberattacks (I1). In addition, the interviewees noted that participation in a CRX is not yet widely accepted as proof of competence compared to traditional certifications and, thus, cannot


be compared to them (I5). While participation in a CRX provides valuable experience, it may not carry the same weight as traditional certifications in the eyes of management or potential employers; thus, participation may be less attractive to potential participants (I3, I5). Not only did the interviewees highlight the challenge of allocating budget for implementing a CRX, but they also identified a hurdle in developing the organizational competencies essential for managing one. Specifically, the interviewees stressed the criticality of having trainers conduct a CRX who possess both technical (I4) and didactic (I3, I6) knowledge.

4.4 Limitations and Future Work

While our research has yielded valuable insights, certain limitations should be taken into consideration when interpreting our findings. The interviews were of varying lengths due to the time constraints of the participants, which may have impacted the depth to which the three topics could be addressed in some interviews. However, the open-question approach we followed helped ensure that each interviewee was still able to express the aspects they felt were particularly relevant. As all of our interviewees were from organizations in Germany and India, this sample represents a limited geographical scope. Nevertheless, our interviewees represent a range of industry sectors, positions within cybersecurity, and levels of experience; thus, we are confident that we have captured a broad range of perspectives on the potentials and challenges of CRXs in organizations. To validate our results and provide a more complete picture of the topic, future research should include a larger sample size with experts from further regions and types of organizations (e.g., public and non-governmental institutions).

5 Conclusion

Our study explores the benefits of CRXs in supporting organizations to overcome the cybersecurity workforce gap. Our findings show that from an organizational perspective, the main potential of CRXs lies in complementing existing training methods with strategies to convey fundamental cybersecurity concepts. A CRX serving this purpose should reflect characteristics common to the industry sector and size of an organization but does not need to precisely mirror the organization's IT infrastructure. While research currently focuses on creating exact replicas (high-fidelity environments) for CRXs, our findings indicate that generalizability should be prioritized over fidelity. Although this indication needs further validation, it opens a new perspective on CRXs and a possible path for effectively utilizing CRXs in private and public organizations.

Acknowledgement. We kindly want to thank all interviewees for sharing our enthusiasm for the topic and dedicating their time and effort to participate in our study. Without their valuable insights, this research would not have been possible. This work was performed under the INSIST project, which is supported under contract by the Bavarian Ministry of Economic Affairs, Regional Development and Energy (DIK0338/01).


Appendix: Interview Guideline

Topic 1: Status Quo of onboarding and reskilling in cybersecurity
– Q1.1: Is onboarding and reskilling of cybersecurity professionals a topic relevant to your organization, and what strategies are in place to address this need?
– Q1.2: Have you received any hands-on cybersecurity training throughout your professional career?

Topic 2: Potential of CRXs
– Q2.1: In your opinion, what specific target groups within an organization could benefit from participating in cyber range exercises, and why?
– Q2.2: What specific skills and tools do you believe should be covered in a cyber range exercise?

Topic 3: Prospects on implementation and open challenges
– Q3.1: When it comes to cyber ranges, what level of fidelity (i.e., realism) do you think is necessary to provide an effective training experience?
– Q3.2: In terms of cyber range development, do you see more potential in in-house development or in utilizing cyber range as-a-Service offerings, and why?
– Q3.3: What are challenges that organizations face when implementing a cyber range, and what strategies can be employed to overcome these challenges?

References

1. Accenture: Accenture Security ICS Cyber Range. https://www.accenture.com/us-en/services/security/cyber-resilience
2. Airbus: Airbus CyberRange: an advanced simulation solution. https://www.cyber.airbus.com/cyberrange/
3. Beuran, R., Tang, D., Pham, C., Chinen, K., Tan, Y., Shinoda, Y.: Integrated framework for hands-on cybersecurity training: CyTrONE. Comput. Secur. 78, 43–59 (2018). https://doi.org/10.1016/j.cose.2018.06.001
4. Brilingaitė, A., Bukauskas, L., Kutka, E.: Development of an educational platform for cyber defence training. In: Proceedings of the 2017 European Conference on Cyber Warfare and Security, pp. 73–81. Academic Conferences International Limited (2017)
5. Čeleda, P., Čegan, J., Vykopal, J., Tovarňák, D.: KYPO - a platform for cyber defence exercises. M&S Support to Operational Tasks Including War Gaming, Logistics, Cyber Defence. NATO Science and Technology Organization (2015)
6. Chouliaras, N., Kittes, G., Kantzavelou, I., Maglaras, L., Pantziou, G., Ferrag, M.A.: Cyber ranges and testbeds for education, training, and research. Appl. Sci. 11(4) (2021). https://doi.org/10.3390/app11041809
7. Collins, M., Hussain, A., Schwab, S.: Towards an operations-aware experimentation methodology. In: Proceedings of the 2022 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), pp. 384–393 (2022). https://doi.org/10.1109/EuroSPW55150.2022.00046
8. Corbin, J., Strauss, A.L.: Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, 4th edn. Sage, Thousand Oaks (2015)
9. Davis, J., Magrath, S.: A survey of cyber ranges and testbeds. Technical report, Defence Science and Technology Organisation, Edinburgh (Australia), Cyber and Electronic Warfare Division (2013)
10. Furnell, S., Fischer, P., Finch, A.: Can't get the staff? The growing need for cybersecurity skills. Comput. Fraud Secur. 2017(2), 5–10 (2017)
11. Glas, M., Vielberth, M., Pernul, G.: Train as you fight: evaluating authentic cybersecurity training in cyber ranges. In: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (2023, forthcoming)
12. Hatzivasilis, G., et al.: The THREAT-ARREST cyber range platform. In: Proceedings of the 2021 IEEE International Conference on Cyber Security and Resilience (CSR), pp. 422–427 (2021). https://doi.org/10.1109/CSR51186.2021.9527963
13. IBM: IBM Security X-Force Cyber Range. https://www.ibm.com/services/security-operations-center
14. (ISC)²: (ISC)² Cybersecurity Workforce Study 2022: a critical need for cybersecurity professionals persists amidst a year of cultural and workplace evolution. Technical report (2022)
15. Kavallieratos, G., Katsikas, S.K., Gkioulos, V.: Towards a cyber-physical range. In: Proceedings of the 5th on Cyber-Physical System Security Workshop (CPSS 2019), pp. 25–34. Association for Computing Machinery, New York (2019). https://doi.org/10.1145/3327961.3329532
16. Kim, J., Maeng, Y., Jang, M.: Becoming invisible hands of national live-fire attack-defense cyber exercise. In: Proceedings of the 2019 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), pp. 77–84 (2019). https://doi.org/10.1109/EuroSPW.2019.00015
17. Leitner, M., et al.: AIT cyber range: flexible cyber security environment for exercises, training and research. In: Proceedings of the European Interdisciplinary Cybersecurity Conference (EICC 2020). Association for Computing Machinery, New York (2021). https://doi.org/10.1145/3424954.3424959
18. National Initiative for Cybersecurity Education (NICE): The Cyber Range: A Guide. Technical report (2020)
19. Oltsik, J., Lundell, B.: The life and times of cybersecurity professionals. Technical report, Enterprise Strategy Group (ESG) and Information Systems Security Association International (ISSA) (2021)
20. Pham, C., Tang, D., Chinen, K., Beuran, R.: CyRIS: a cyber range instantiation system for facilitating security training. In: Proceedings of the Seventh Symposium on Information and Communication Technology (SoICT 2016), pp. 251–258. Association for Computing Machinery, New York (2016). https://doi.org/10.1145/3011077.3011087
21. Vielberth, M., Glas, M., Dietz, M., Karagiannis, S., Magkos, E., Pernul, G.: A digital twin-based cyber range for SOC analysts. In: Barker, K., Ghazinour, K. (eds.) DBSec 2021. LNCS, vol. 12840, pp. 293–311. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81242-3_17
22. Vykopal, J., Vizvary, M., Oslejsek, R., Celeda, P., Tovarnak, D.: Lessons learned from complex hands-on defence exercises in a cyber range. In: 2017 IEEE Frontiers in Education Conference (FIE), pp. 1–8 (2017). https://doi.org/10.1109/FIE.2017.8190713
23. Yamin, M.M., Katt, B., Gkioulos, V.: Cyber ranges and security testbeds: scenarios, functions, tools and architecture. Comput. Secur. 88, 101636 (2020)

An Adaptive Plug-and-Play (PnP) Interactive Platform for an E-Voting Based Cybersecurity Curricula

Muwei Zheng1(B), Nathan Swearingen2, William Silva3, Matt Bishop1, and Xukai Zou2

1 University of California, Davis, CA 95616, USA
[email protected]
2 Indiana University Purdue University, Indianapolis, IN 46202, USA
3 University of Connecticut, Storrs, CT 06269, USA

Abstract. An electronic voting (e-voting) based interactive cybersecurity education curriculum has been proposed recently. It is well known that assignments and projects are coherent and important parts of any curriculum. This paper proposes a set of course projects, an assignment design, and a coherent online plug-and-play (PnP) platform implementation. The PnP platform and the proposed exemplary assignments and projects are systematic (derived from the same system), adaptive (smoothly increasing in difficulty), flexible (bound to protocols instead of implementations), and interactive (teacher-student and student-student interactions). They allow students to implement parts of the components of this e-voting system, which they can then plug into the PnP system, to run, test and modify their implementations, and to enhance their knowledge and skills in cryptography, cybersecurity, and software engineering. Keywords: Cybersecurity Education · Electronic Voting (E-Voting) · Interactive Teaching and Learning

1 Introduction

Electronic voting (e-voting) technology has been an effective tool for cybersecurity education [2–4, 6, 10, 11], and recently an e-voting based interactive cybersecurity education curriculum has been proposed [5, 13]. Interactive teaching and learning is advantageous: interactions among students and instructors, as well as inter-relations among course contents, projects, and assignments, are an important part of enhancing students' learning outcomes [7, 8]. In this paper, we propose an interactive course project design, assignment design, and implementation methodology for a plug-and-play online voting platform. Such a PnP platform, plus the proposed exemplary assignments and projects, is systematic, adaptive, and flexible. It allows students to implement parts of the components of this e-voting system, which they can then plug into the PnP system, to run, test and modify their implementations, and to enhance their knowledge of cryptography, cybersecurity, networking, and the architectures of client/server distributed or peer-to-peer systems, as well as their skills in protocol design, internet programming, and system implementation and integration.

© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
S. Furnell and N. Clarke (Eds.): HAISA 2023, IFIP AICT 674, pp. 36–52, 2023. https://doi.org/10.1007/978-3-031-38530-8_4

Contribution and Related Work. To our knowledge, this is the first published set of coding projects dedicated to e-voting education. Admittedly, the field of e-voting may be too specific; however, we also could not find many published coding projects dedicated to cybersecurity more broadly. There are many educational guidelines, such as CSEC 2017 [1], but they do not provide matching coding projects. We hope our work fills this gap and encourages teachers and professionals to share more cybersecurity coding projects with the community.

The paper is organized as follows. Section 1.1 introduces the mathematical background of the online voting protocol used by the PnP project; Sect. 2 presents the e-voting system, and Sect. 3 discusses the PnP platform. Section 4 describes the assignments, Sect. 5 relates this to the CSEC 2017 cybersecurity guidelines, and the last section concludes the paper.

1.1 Overview of the E-Voting Protocol for Interactive Teaching and Learning

As introduced in the last section, the authors of [14, 15] proposed a novel e-voting protocol involving multiple equivalent tallying servers and voting clients. To facilitate reading and understanding of our work, and to make the contents of this paper self-contained and complete, we summarize the protocol here. For its technical details, please see papers [14, 15].

Assumptions: There are n ≥ 3 voters V1, ..., Vn and m ≥ 2 candidates c1, ..., cm running for office. Let L = nm. Two or more "tallying authorities" check the validity of each vote and ballot and count the votes. For simplicity, we assume two tallying authorities here, denoted as C1 and C2.
These tallying authorities have conflicting interests, such as representing different candidates or political parties, so they will not share information with each other, but they will cooperate to perform some tasks when dictated by the protocol. This matches how observers work in current election processes. The tallying authorities are also called "collectors".

Cryptographic Primitives and Cryptosystems. The first building block is a simplified (n, n) secret sharing scheme [12] (denoted S(n, n)-SS). A secret s is split into n shares s_i (1 ≤ i ≤ n) with s = s_1 + s_2 + ... + s_n over the group Z_M, where M ≥ 2^L + 1. Each member receives one share, and all n members need to pool their shares together to recover s. The scheme is additively homomorphic [12]: the sum of two shares s_i + s'_i (corresponding to secrets s and s', respectively) is a share of the secret s + s'. The second building block is an efficient secure two-party multiplication (STPM) protocol [9]. Initially, each authority C_i (i = 1, 2) holds a private input x_i. At the end, each C_i obtains a (random) private output r_i such that x_1 × x_2 = r_1 + r_2. Other cryptographic principles and systems used include the Discrete Logarithm Problem (DLP), RSA, and the Paillier public key cryptosystem.
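The additive (n, n) secret sharing scheme is small enough to sketch directly. The following toy illustration (our own modulus and helper names, not the authors' implementation) also checks the additive homomorphism:

```python
import secrets

# Toy modulus: the protocol only requires M to exceed 2**L.
M = 2**8 + 1

def share(secret, n):
    """Split `secret` into n additive shares over Z_M."""
    parts = [secrets.randbelow(M) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % M)  # last share makes the sum come out right
    return parts

def reconstruct(parts):
    """All n shares are needed; any subset of fewer shares reveals nothing."""
    return sum(parts) % M

a, b = 42, 100
sa, sb = share(a, 5), share(b, 5)
assert reconstruct(sa) == a

# Additive homomorphism: share-wise sums are shares of the summed secrets.
combined = [(x + y) % M for x, y in zip(sa, sb)]
assert reconstruct(combined) == (a + b) % M
```

This homomorphism is what later allows published ballots (sums of shares) to be aggregated into the final tally without revealing individual votes.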

38

M. Zheng et al.

The mutual-restraining e-voting protocol (MR-EV) [14, 15] consists of the following three technical designs.

TD1: Universal Viewable and Verifiable Voting Vector. For n voters and m candidates, the voting vector v_i of voter V_i is a binary vector of L = n × m bits. The vector can be visualized as a table with n rows and m columns, where each candidate corresponds to a column. Via a robust location anonymization scheme at the end of registration, each voter secretly picks a unique row that no one else, including the tallying authorities, knows. Voter V_i puts a 1 in the entry at his/her row and the column corresponding to the candidate he/she votes for (let this position be L_c^i), and puts 0 in all other entries. During tallying, all voting vectors are aggregated, and the final tallied voting vector (denoted V_A) is public and viewable to anyone. From V_A, the votes for the candidates are all viewable one by one and can be incrementally tallied by anyone. Any voter can check his/her vote and visually verify that it is indeed counted in the final tally. Furthermore, anyone can verify the vote totals for each candidate.

TD2: Forward and Backward Mutual Lock Voting. From the voting vector (with a single entry of 1 and all other entries 0), voter V_i derives a forward value υ_i = 2^(L − L_c^i) and a backward value υ'_i = 2^(L_c^i − 1). These two values constitute his/her vote. Importantly, υ_i × υ'_i = 2^(L − 1), regardless of which candidate V_i votes for. During vote-casting, V_i uses the simplified (n, n)-SS scheme twice to cast the vote using both υ_i and υ'_i. Together, υ_i and υ'_i ensure the correctness of the vote-casting process and are used by the collectors to enforce that V_i casts one and only one vote; any deviation, such as multiple voting, will be detected. Denote V_i's own share of his/her vote υ_i as s_i^i and, similarly, s'_i^i for υ'_i. His/her ballot is (p_i, p'_i), where p_i is the sum of s_i^i and n − 1 shares of the n − 1 votes of the other voters, one per voter; p'_i is formed similarly. Rather than casting the vote (υ_i, υ'_i), V_i casts the ballot (p_i, p'_i). To avoid interactions or communications among voters (which of course would not be practical), a voter only contacts the collectors: the collectors generate and send n − 1 random shares to the voter, and the voter computes his/her own share by subtracting the sum of these n − 1 shares from the vote υ_i (similarly for υ'_i). To prevent a collector from holding all n − 1 shares for a voter, each collector creates half of them. This e-voting model deliberately distinguishes between a private vote and a secret ballot: a voter's vote is known only to him/herself, but the corresponding ballot, even though called a secret ballot, is revealed to the public during vote-casting.

TD3: In-process Check and Enforcement. During the voting process, the collectors jointly perform two cryptographic checks on the voting values of each voter (see Sub-Protocol 1 and Sub-Protocol 2 in [14]). The first check uses STPM to prevent a voter from incorrectly generating the shares (s_i^i, s'_i^i) of his/her vote (υ_i, υ'_i). The second check prevents a voter from publishing an incorrect ballot (p_i, p'_i). The ballot is the modular addition of a voter's own share and the share summations that the voter received from other voters (in fact, from the collectors) (Fig. 1).
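The mutual-lock relation in TD2 can be verified concretely. The following sketch (our own function and variable names; positions 1-indexed as in the paper) confirms that the product of the forward and backward values is always 2^(L − 1), no matter which position the 1 occupies:

```python
n, m = 4, 2      # voters and candidates, matching the paper's running example
L = n * m

def vote_values(Lc):
    """Forward and backward values for a 1 placed at position Lc (1..L)."""
    return 2 ** (L - Lc), 2 ** (Lc - 1)

# The exponents sum to L - 1 regardless of Lc, so the product is constant.
for Lc in range(1, L + 1):
    fwd, bwd = vote_values(Lc)
    assert fwd * bwd == 2 ** (L - 1)
```

This constant product is exactly what lets the collectors check (via STPM, without learning the vote) that a voter encoded one and only one candidate.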

An Adaptive Plug-and-Play (PnP) Interactive Platform for an E-Voting

[Figure 1, reconstructed from the original two-part layout]

Top – Real-Time Public Bulletin Board (only append-able; everything, including ballots, is public and viewable). The board lists the "secret" ballots in casting order (52, -5, -7, 62), their incremental aggregation (52, 47, 40, 102), and the incremental R/B vote counts of VA. Notes in the figure:
1. Anyone can dynamically and incrementally aggregate the "secret" ballots in real time as they are cast; the "secret" ballots are in fact public.
2. None of the partial sums 52, 47, and 40 carries any information about (any) votes.
3. The last aggregation, 102 (= 40 + 62 = 32 + 4 + 2 + 64), exposes all votes and is the final tallied voting vector VA. Voters can verify their votes visually.

Bottom – a voting example involving 4 voters and 2 candidates, R and B (numbers in red are kept secret; numbers with red or black underline are computed by voters, those with blue double underline are generated by collector C1, and those with green wavy underline by collector C2):

Voter V_i | Secret location L_i | Secret vote v_i | Secret random shares x_{1,i} (from C1), x_{2,i} (from C2) | "Secret" ballot (published, so in fact public)
V1 | 2 | B (32) | 5, 15 | 52 (= 32 + 5 + 15)
V2 | 3 | R (4) | 1, -10 | -5 (= 4 + 1 + (-10))
V3 | 4 | B (2) | -20, 11 | -7 (= 2 + (-20) + 11)
V4 | 1 | R (64) | 14, -16 | 62 (= 64 + 14 + (-16))

Fig. 1. Bulletin Board (top section) and its corresponding example (bottom) (modified and combined from Fig. 2 and Table 3 in [14])
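The numbers in the Fig. 1 example can be reproduced mechanically. The following Python sketch (our illustration; the per-row column order is inferred from the example's values) recomputes the running sums and decodes the final tallied voting vector VA:

```python
# Re-computing the Fig. 1 bulletin-board example: four published
# "secret" ballots, their incremental aggregation, and the decoding
# of the final tallied voting vector VA.
ballots = [52, -5, -7, 62]        # as they appear on the board

running, total = [], 0
for b in ballots:
    total += b
    running.append(total)
# Partial sums reveal nothing; only the last one is VA:
assert running == [52, 47, 40, 102]

# Decode VA: 102 = 0b01100110; each 2-bit row is one voter's mark.
bits = format(running[-1], '08b')
rows = [bits[i:i + 2] for i in range(0, 8, 2)]
assert all(row.count('1') == 1 for row in rows)   # one vote per row
tally = {'B': sum(r == '10' for r in rows),       # left column = B
         'R': sum(r == '01' for r in rows)}       # right column = R
assert tally == {'B': 2, 'R': 2}
```

This mirrors what any observer of the bulletin board can do: aggregate ballots as they arrive and, once the last one lands, read off every (anonymous) vote and the totals.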

2 E-Voting System To save space, the exact network packet design for each step during communication is attached in the Appendix A. The protocol involves three-way communications among the administrator, collectors, and voters. The whole process is divided into 4 parts: Initialization, Registration, Voting, and Publishing Result. Initialization. To create an election, an administrator first appoints two collectors. The administrator contacts each desired collector, and each collector responds to the administrator with their decision. The administrator can then check whether the desired collectors have accepted or rejected the request. Once the administrator has chosen two collectors, the administrator sends election metadata (including information about the collectors) to each collector. This includes information enabling a collector to connect to the other collector. Registration. The voter must prove their right to vote to the administrator before registering. How this is done is out of scope of this protocol. Once the voter is authorized to vote, they are given an ID and they give the administrator a public key. Both are just for this election.


M. Zheng et al.

The voter now connects to the administrator and registers, asking for each collector's host, port, and public key, and a list of the candidates. Once all voters have registered (e.g., the registration period ends), the administrator sends the list of registered voters to each collector. Voting. Each voter must obtain shares from each collector, then create his/her ballots and commitments and send them to both collectors. Two sub-protocols are used by the collectors to communicate with each other and perform secure two-party multiplication (STPM) to check the votes; they are described in detail in the Appendix. Publishing Result. As the collectors receive and verify ballots from the voters, they send them to the administrator. As the administrator receives ballots from the collectors, it sums them and publishes them on the web-based bulletin board. No information about the vote totals is visible until all ballots are received.

3 The Adaptive Interactive Plug-and-Play Platform

Our project comes with an online e-voting platform for students, which serves as a testbed and example. The platform is equipped with a ready-to-use e-voting system that students can use to gain an understanding of how the platform works. The system consists of one administrator, two collectors, and a minimum of three voters. Students who wish to use the platform create fictitious candidate names and begin the election. The election process runs automatically, and all phases of the election are printed out. This provides a comprehensive overview of how the system operates and what is expected from student implementations. Additionally, the platform can be used to test the students' own implementations: they can replace either the collector or the voter with their own implementations and connect them to the platform to verify that they are working correctly. The unique flexibility of the platform is the key feature of this set of projects. Students can freely connect their collector, their voter, or both to the platform, and the collectors and voters can come from different students. This makes our platform a plug-and-play e-voting system, where students can easily experiment with their own implementations and see whether they work seamlessly with the platform. Figure 2 shows an exemplary plug-and-play testing platform interface. With this interface, a student can set their test candidates, the number of collectors (at least 2), and the number of voters (at least 3), and fill in their implementations of the collectors and voters. Whatever the remaining collectors (if any) or voters (if any) need will be automatically provided by the platform using the default implementations. If students do not provide anything, they can still run the system using the default implementations of collectors and voters. The online platform and Java source code needed for the assignments are available on the web.1

1 http://cs.iupui.edu/~xzou/NSF-EVoting-Project.


Fig. 2. An exemplary plug-and-play testing platform interface

4 The Assignments

In conjunction with the online platform introduced above, we have designed four engaging assignments for students to complete. While our system is written in Java and we use Java to demonstrate ideas in the assignment descriptions, students are not restricted to any particular programming language; the only requirement is that the protocol is implemented correctly. The project entails implementing an administrator-collector-voter trilateral online voting system that serves as the backbone of the platform. Through these assignments, students gain hands-on experience in cryptography, cybersecurity, and software engineering. This immersive approach allows students to apply theoretical knowledge to real-world scenarios and encourages active learning.

4.1 Assignment 1: Preliminary

In the first assignment, students are introduced to the fundamental concepts of cryptography and the communication protocol between servers and clients. Given that the e-voting process relies heavily on encryption techniques, students need practical experience implementing various cryptographic algorithms: hash computation, symmetric encryption, and asymmetric encryption. The implementation involves specific algorithms for each technique, and students are provided with the necessary references and guidance, but not with existing libraries; they are asked to implement the algorithms step by step. For hash computation, students will use the BLAKE2 algorithm. For symmetric encryption,


the Salsa20 algorithm will be employed. For asymmetric encryption, students will implement both the RSA and Paillier algorithms. In addition to implementing cryptographic tools, students are required to practice setting up TCP server and client communication. It is important to note that while TCP ensures reliable delivery of messages, it does not guarantee confidentiality or integrity. Therefore, students are also required to encrypt and decrypt messages using the cryptographic tools they have implemented, and to sign their messages using private keys. These two components of the assignment, implementing cryptographic tools and setting up TCP server and client communication, form the foundational basis for all subsequent assignments. The techniques students learn here give them a solid foundation in secure network communication.

4.2 Assignment 2: Voter

In this assignment, students implement a voter client that enables authorized voters to participate in an online election. Note that the scope of the project does not cover whether a user is authorized to vote; it is assumed that any user attempting to vote is authorized. However, valid users can still make mistakes, which will be addressed in Assignment 4. In the e-voting protocol, a voter is essentially a client in the communication process, and of the three parties (administrator, collector, and voter) it is the easiest to implement. The primary focus of this assignment is to practice closely following a defined protocol (in this case, the e-voting protocol) and to implement a systematic way of sending and receiving encrypted messages according to the protocol while also handling exceptions.
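Since students implement the algorithms themselves, library implementations are useful only as test oracles. The sketch below (illustrative Python, not the Java framework; real RSA keys must be far larger and use proper padding) shows a BLAKE2 oracle from the standard library and a textbook RSA round trip with toy primes:

```python
import hashlib

# A library BLAKE2 (not acceptable as the assignment's solution) makes
# a handy oracle for checking a from-scratch implementation:
digest = hashlib.blake2b(b"e-voting", digest_size=32).hexdigest()
assert len(digest) == 64             # 32 bytes, hex-encoded

# Textbook RSA with toy primes, illustrating the asymmetric step:
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                  # private exponent
msg = 42
cipher = pow(msg, e, n)              # "encrypt" with the public key
assert pow(cipher, d, n) == msg      # decryption recovers the message
```

A student's own implementation can then be compared byte-for-byte against such oracles over many random inputs before it is wired into the networking code.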
During an online e-voting session, a voter's communication process has five parts: registering, obtaining a location, creating ballots, creating commitments, and submitting votes. To register, voters need RSA private and public keys. They then obtain shares from the collectors, which are used to protect the secrecy of their votes. Using the shares, they compute ballots and commitments and send them to both collectors. The voters' votes are not revealed until all ballots are aggregated, at which point each vote is anonymous. To assist in the implementation process, a framework for a voter class and other supporting classes is provided. Students are required to communicate with the administrator and the collectors; at this point, the instructor provides both the administrator and the collectors so that students can test their implementations. Sample voters are also provided, and while their source code is hidden, students can observe their network behaviors. By the end of this assignment, students will have accomplished two objectives. The first is a clear understanding of the step-by-step processes by which voters operate in the e-voting protocol, helping them grasp the intricacies of the system and the mechanisms that drive it. The second is to equip students


with the skills and experience to confidently implement any networking communication protocol when the necessary tools (such as the cryptographic tools in this assignment) are provided.

4.3 Assignment 3: Collector

In this assignment, students create and implement a collector, which differs from the voter. Collectors are responsible for managing communications with the administrator, the other collector, and the voters. This requires collectors to act as both servers and clients, making the implementation considerably more complex than the voter client from the previous assignment. During an online election, collectors play an essential role in a communication process consisting of six stages: accepting collection requests, implementing the location anonymization scheme, generating shares, communicating with voters, executing secure two-party multiplication, and forwarding verified ballots. This assignment allows students to expand their knowledge and skills in implementing a network communication protocol and in using cryptographic tools. Additionally, they learn and practice secure multiparty computation techniques, a valuable skill set in many cybersecurity fields. As usual, a framework for the collector class and supporting classes is provided. These classes allow students to practice the important concept of encapsulation by reusing previously implemented classes. Additionally, the assignment includes an administrator and the other collector for students to test their implementations against. By utilizing pre-existing classes, students gain a deeper understanding of how various software components can work together to create complex systems. This also provides an opportunity to hone their skills in software design and implementation and to improve their ability to work with existing codebases.
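The secure two-party multiplication stage can be sketched with a toy Paillier cryptosystem. The following Python sketch (tiny primes, illustration only; the actual STPM sub-protocols are defined in [14]) shows how two collectors can end up with additive shares of a product a × b without revealing a or b to each other:

```python
import math, random

# Toy Paillier cryptosystem used to sketch STPM: collector C1 holds a,
# collector C2 holds b; they finish with additive shares of a*b.
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def enc(m):
    r = random.choice([x for x in range(2, n) if math.gcd(x, n) == 1])
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    L = lambda x: (x - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)
    return L(pow(c, lam, n2)) * mu % n

a, b = 6, 7                      # C1's and C2's private inputs
c = enc(a)                       # C1 -> C2: Enc(a); b stays with C2
r = random.randrange(n)          # C2's blinding value
# Paillier is additively homomorphic, so Enc(a)^b = Enc(a*b):
c2 = pow(c, b, n2) * enc((n - r) % n) % n2   # Enc(a*b - r mod n)
s1 = dec(c2)                     # C1's share: a*b - r (mod n)
s2 = r                           # C2's share
assert (s1 + s2) % n == (a * b) % n == 42
```

C1 only ever sees a blinded value a·b − r, and C2 only sees a ciphertext, yet jointly their shares reconstruct the product; this is the primitive behind the collectors' in-process checks.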
4.4 Assignment 4: Administrator

In this final project, students implement the administrator class, the last component of the administrator-collector-voter e-voting system. The administrator class has multiple jobs, including creating an election, registering collectors and voters, distributing election and collector information, and publishing voting results. Acting as both server and client, the administrator class is responsible for communicating with the collectors and the voters, and it uses the same cryptographic tools developed in previous assignments. Thus, students who have successfully completed the previous assignments should find the implementation of the administrator class relatively straightforward. However, in this assignment, the focus shifts from simply testing the accuracy of the implementation to assessing the reliability of the system. Unlike in previous projects, where students only had to test their implementations with legitimate votes to ensure the protocol was correctly followed, this project also examines how well the system handles malicious inputs, such as double voting. To address this, students will need to


incorporate the necessary input sanitization and security measures in both the administrator and collector classes to mitigate the effects of malicious inputs. After completing these assignments, students should have a working 2-collector online voting system of their own. While the e-voting platform does offer default collectors and voters, students' systems must be able to operate autonomously. Furthermore, because all students implement the same protocol, the voting systems created by different students should be compatible with one another: administrators, collectors, and voters from various students' projects should be able to seamlessly connect and conduct elections. This ability to interoperate is a crucial aspect of online voting systems (as of many other client/server models) and is a key requirement for the success of the final project. The four assignments are designed to increase progressively in difficulty, with each assignment building upon the skills learned in the previous one while introducing new concepts. From a software engineering standpoint, the first assignment serves as a gentle introduction to coding, with each component working mostly independently. In contrast, the second assignment requires a deeper understanding of class interactions and emphasizes the importance of closely following a defined protocol. The third assignment represents a significant step up in complexity, demanding mastery of the three core skills from the previous two assignments: cryptography, networking, and software engineering. Students must apply these skills in a more sophisticated manner, working with more intricate systems. Finally, the fourth assignment delves into secure coding and introduces the basics of attacks and defenses in e-voting systems.
This assignment will challenge students to apply their knowledge in a practical way, creating secure and robust systems that can withstand a variety of potential threats. Each project results in a 2-collector online voting system. As an open invitation, enthusiastic students are encouraged to further enhance this system and create a more sophisticated N-collector online voting system. This online voting system can be an invaluable resource for future research into security, particularly in areas such as man-in-the-middle attacks, database security for storing a large volume of ballots, and secure transmission and connection.
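One malicious input discussed above, double voting, is caught by the mutual-lock invariant of TD2. The sketch below shows only the algebra (in the real protocol the collectors enforce this cryptographically, via STPM, without ever seeing the values in the clear):

```python
# Why the mutual-lock invariant catches double voting: a well-formed
# vote sets exactly one bit, and only then is forward * backward
# equal to 2^(L-1).
L = 8

def forward_backward(positions):
    """Forward/backward values for 1-bits at the given positions."""
    fwd = sum(2 ** (L - Lc) for Lc in positions)
    bwd = sum(2 ** (Lc - 1) for Lc in positions)
    return fwd, bwd

f, b = forward_backward([3])       # honest single vote
assert f * b == 2 ** (L - 1)

f, b = forward_backward([3, 6])    # double vote
assert f * b != 2 ** (L - 1)       # invariant broken -> detected
```

With two or more set bits the cross terms make the product strictly exceed 2^(L−1), so any multi-vote ballot fails the collectors' check regardless of which positions were chosen.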

5 Relevant Topics from Cybersecurity Guidelines

To assist instructors in gaining a better understanding of the project and making informed decisions about its usefulness in their teaching, we analyzed the relevant topics outlined in the CSEC 2017 guidelines (shown in Table 1) and the e-voting curriculum learning objectives (shown in Table 2). The e-voting curriculum was introduced in [5, 13]. Here, a brief overview of the CSEC 2017 guidelines is given. CSEC 2017 is a set of cybersecurity curricular guidelines developed by a joint task force of the ACM, the IEEE Computer Society, the AIS Special Interest Group on Security, and the International Federation for Information Processing (IFIP) Working Group 11.8, which deals with computer security education. It defines eight Knowledge Areas, each composed of different Knowledge Units broken into topics. We found that the concepts


covered by these topics and learning objectives are either already addressed by the project or necessary to understand and successfully execute the project.

Table 1. Relevant CSEC2017 Topics (Knowledge Area – Knowledge Unit: topics)

Data Security
- Cryptography: basic concepts, advanced concepts, mathematical background, symmetric ciphers, asymmetric ciphers
- Data Integrity and Authentication: authentication strength, data integrity
- Data Privacy: overview

Software Security
- Fundamental Principles: least privilege, fail-safe defaults, separation, economy of mechanism, least astonishment, open design, layering, abstraction, modularity, design for iteration
- Design: derivation of security requirements, specification of security requirements
- Implementation: validating input and checking its representation, using APIs correctly, using security features, handling exceptions and errors properly, programming robustly, encapsulating structures and modules, taking environment into account
- Analysis and Testing: static and dynamic analysis, unit testing, integration testing, software testing
- Documentation: user guides and manuals

Connection Security
- Distributed Systems Architecture: the Internet, protocols and layering
- Network Architecture: general concepts, common architectures
- Network Services: concept of a service, service models, service protocol concepts, service virtualization, vulnerabilities and example exploits

System Security
- System Thinking: security of special-purpose systems


This mapping enables instructors to determine the project's alignment with their classes' requirements. This information will help instructors assess the suitability of the

Table 2. Relevant E-Voting Curriculum Learning Objectives (Module – learning objectives)

Introduction to E-Voting
- 0.2 Understand different parties involved in an E-Voting process
- 0.3 Understand law and policy requirements for E-Voting systems

Authentication
- 1.2 Describe how voters authenticate themselves
- 1.4 Understand software security principles and practices of robust, secure coding

Confidentiality
- 2.1 Understand basic cryptography concepts
- 2.2 Describe public key cryptography and algorithms
- 2.3 Describe how public key cryptography is used in end-to-end encryption protocols

Data integrity and message (sender) authentication
- 3.1 Understand different ways to generate and use hash functions
- 3.2 Describe techniques used to store protected data and to verify or compute them without revealing sensitive information

Cryptographic Key Management
- 4.1 Describe common key exchange protocols
- 4.3 Explain how secret keys are used in proofs of identity, integrity protection mechanisms, and challenges in doing so

Privacy and anonymity
- 5.2 Describe the procedures taken in elections to protect voter privacy
- 5.4 Explain techniques used to help voters verify their votes being recorded correctly without being able to reveal those votes

Secure Group/Multi-Party Interaction and Secret Sharing
- 7.1 Describe schemes for multi-party secret sharing
- 7.2 Describe how these schemes handle insider threats
- 7.3 Explain how these schemes protect transmissions of shares

Secure Multi-Party Computation and Homomorphic Encryption
- 8.1 Describe different schemes for secure multi-party computation
- 8.3 Explain how voters can verify the correctness of the results of the election

(continued)


Table 2. (continued)

Attacks and defenses
- 9.4 Simulate different attacks targeting E-Voting systems

project for their students and make informed decisions about integrating it into their teaching curriculum.

6 Conclusion

Our team has developed an innovative 2-collector online voting system that utilizes advanced cryptographic tools and secure multi-party computation techniques to ensure the security and integrity of online elections. The system is designed with modular components that can be easily disassembled and reassembled, making it an excellent tool for students to test and implement their own online voting systems in a step-by-step manner. By providing students with ample opportunities to practice their skills in cybersecurity and software engineering, we believe they will be able to integrate their knowledge from the classroom with real-world development. We plan to incorporate this system into our future classes and have made it publicly available in the hope that other educators and developers will also find it useful. By sharing this resource, we hope to contribute to the ongoing efforts to promote secure and transparent online elections globally.

Acknowledgements. This material is based upon work supported by the National Science Foundation under Grant Nos. DGE-2011117 and DGE-2011175. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

A Appendix

Initialization - Tables 3, 4 and 5 (packets during the initialization state).

Table 3. Administrator sends to collector (signed)
message_type: TYPE_COLLECT_REQUEST
election_ID: 16 bytes
collector_index: 1 byte
pk_length: 2 bytes
pk: variable length
collector_key_hash: 16 bytes

Table 4. Collector responds (signed)
message_type: TYPE_COLLECT_STATUS
key_hash: 16 bytes
election_ID: 16 bytes
acceptance: 0x00 or 0x01

Table 5. Administrator distributes election metadata to each collector (signed)
message_type: TYPE_METADATA_COLL
key_hash: 16 bytes
election_ID: 16 bytes
other_C_host_length: 2 bytes
other_C_host: var. length
other_C_port: 2 bytes
other_C_pk_length: 2 bytes
other_C_pk: var. length
M: 1 byte

Registration - Tables 6, 7 and 8 (packets during the registration state).


Table 6. Voter sends to administrator for registration (signed)
message_type: TYPE_REGISTER
key_hash: 16 bytes
election_ID: 16 bytes
voter_ID: 4 bytes

Table 7. Administrator responds (signed)
message_type: TYPE_METADATA_VOTER
election_ID: 16 bytes
C1_host_length: 2 bytes
C1_host: var. length
C1_port: 2 bytes
C1_pk_length: 2 bytes
C1_pk: var. length
C2_host_length: 2 bytes
C2_host: var. length
C2_port: 2 bytes
C2_pk_length: 2 bytes
C2_pk: var. length
M: 1 byte
name1_length: 1 byte
name1: var. length
…
Voting - Tables 9, 10 and 11 (packets between voters and collectors in the voting state). Remaining network packet designs: due to the paper's size limitation, we could not present all the packets; however, they can be found on our website. We hope the packets presented here give readers an overall impression of the project.

Table 8. Administrator sends the list of registered voters to each collector (signed)
message_type: TYPE_VOTERS
election_ID: 16 bytes
N: 4 bytes
voter1_ID: 4 bytes
voter1_pk_length: 2 bytes
voter1_pk: var. length
…

Table 9. Voter connects to collector (signed)
message_type: TYPE_SHARES_REQUEST
key_hash: 16 bytes
election_ID: 16 bytes
voter_ID: 4 bytes

Table 10. Collector responds (signed and encrypted)
message_type: TYPE_SHARES
key_hash: 16 bytes
election_ID: 16 bytes
N: 4 bytes
S_{i,Cj}: k bytes
S'_{i,Cj}: k bytes
~S_{i,Cj}: k bytes
~S'_{i,Cj}: k bytes


Table 11. Voter sends ballots and commitments (signed)
message_type: TYPE_BALLOT
key_hash: 16 bytes
election_ID: 16 bytes
voter_ID: 4 bytes
p_i: k bytes
p'_i: k bytes
g^(s_ii): k bytes
g^(s'_ii): k bytes
g^(s_ii · s'_ii): k bytes

References

1. ACM/IEEE-CS/AIS SIGSEC/IFIP WG 11.8 Joint Task Force: Cybersecurity curricula 2017: curriculum guidelines for undergraduate degree programs in cybersecurity. Technical Report Version 1.0. ACM, New York (2017)
2. Bishop, M., Frincke, D.A.: Achieving learning objectives through e-voting case studies. IEEE Secur. Priv. 5(1), 53–56 (2007)
3. Cutts, Q.I., Kennedy, G.E.: Connecting learning environments using electronic voting systems. In: Australasian Computing Education Conference, pp. 181–186 (2005)
4. Halderman, A.J.: Secure digital democracy (2014). https://www.coursera.org/instructor/jhalderm
5. Hosler, R., Zou, X., Bishop, M.: Electronic voting technology inspired interactive teaching and learning pedagogy and curriculum development for cybersecurity education. In: Drevin, L., Miloslavskaya, N., Leung, W.S., von Solms, S. (eds.) WISE 2021. IAICT, vol. 615, pp. 27–43. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-80865-5_3
6. Kennedy, G.E., Cutts, Q.I.: The association between students' use of an electronic voting system and their learning outcomes. J. Comput. Assist. Learn. 21(4), 260–268 (2005)
7. Kolb, D.A.: Experiential Learning: Experience as the Source of Learning and Development. Prentice Hall, Englewood Cliffs (1984)
8. Laurillard, D.: Rethinking University Teaching: A Conversational Framework for the Effective Use of Learning Technology, 2nd edn. RoutledgeFalmer, London (2002)
9. Samet, S., Miri, A.: Privacy preserving ID3 using Gini index over horizontally partitioned data. In: Proceedings of IEEE/ACS AICCSA 2008, pp. 645–651 (2008)
10. Shamos, M.I.: Electronic voting (2014). http://euro.ecom.cmu.edu/program/courses/tcr17-803
11. Stowell, J.R., Nelson, J.M.: Benefits of electronic audience response systems on student participation, learning, and emotion. Teach. Psychol. 34(4), 253–258 (2007)
12. Zhao, X., Li, L., Xue, G., Silva, G.: Efficient anonymous message submission. In: IEEE INFOCOM 2012, pp. 2228–2236 (2012)
13. Zheng, M., Swearingen, N., Mills, S., Gyurek, C., Bishop, M., Zou, X.: Case study: mapping an e-voting based curriculum to CSEC2017. In: Proceedings of ACM SIGCSE TS 2023 (accepted, Best Paper Award) (2023)


14. Zou, X., Li, H., Li, F., Peng, W., Sui, Y.: Transparent, auditable, and stepwise verifiable online e-voting enabling an open and fair election. Cryptography 1(2), 1–29 (2017)
15. Zou, X., Li, H., Sui, Y., Peng, W., Li, F.: Assurable, transparent, and mutual restraining e-voting involving multiple conflicting parties. In: Proceedings of IEEE INFOCOM 2014, pp. 136–144. IEEE, Piscataway (2014)

Cybersecurity Training Acceptance: A Literature Review

Joakim Kävrestad1(B), Wesam Fallatah2, and Steven Furnell2

1 University of Skövde, Skövde, Sweden
[email protected]
2 University of Nottingham, Nottingham, UK
{wesam.fallatah,steven.furnell}@nottingham.ac.uk

Abstract. User behavior is widely acknowledged as a crucial part of cybersecurity, and training is the most commonly suggested way of ensuring secure behavior. However, an open challenge is getting users to engage with such training to a sufficient extent. Consequently, this paper provides research into user acceptance of cybersecurity training. User acceptance can be understood from a socio-technical perspective and depends on the training itself, the organization where it is deployed, and the user expected to engage with it. A structured literature review is conducted of previous research on cybersecurity training acceptance using a socio-technical approach. The paper contributes an overview of how user acceptance has been researched in the three socio-technical dimensions and with what results. The review shows that previous research has mostly focused on how the training method itself affects user acceptance, while research focusing on organizational or user-related dimensions is scarcer. Consequently, the paper calls for further research on the organizational aspects of user acceptance of cybersecurity training and on how user acceptance can differ between user groups.

Keywords: security · training · education · user · acceptance · adoption

1 Introduction

Developing resistance to cyberattacks is now a crucial task for any organization. A core step is establishing an underlying security culture, which involves raising cybersecurity awareness among staff members [1]. This seeks to make staff aware of existing cybersecurity threats and how they can avoid them. The most common threats targeting users include phishing, credential theft, and ransomware, and a common notion of how to protect against these is via cybersecurity training [2]. Such training efforts seek to educate users on how to behave securely, thereby contributing to a better security culture. While the potential benefit of cybersecurity training is undeniable, a common problem is getting users to actively engage with it [3]. If users do not do so, the training cannot provide its intended beneficial effect. Consequently, this paper seeks to present initial research into user acceptance of cybersecurity training. Adopting a socio-technical approach, the project postulates that

© IFIP International Federation for Information Processing 2023. Published by Springer Nature Switzerland AG 2023. S. Furnell and N. Clarke (Eds.): HAISA 2023, IFIP AICT 674, pp. 53–63, 2023. https://doi.org/10.1007/978-3-031-38530-8_5


J. Kävrestad et al.

acceptance is influenced by the proposed training itself, the user, and organizational factors. The effect of the training itself has been demonstrated in several previous studies investigating user perception of various cybersecurity training methods [4]. As for the user, the effects of demographics, such as age and gender, have been studied, but with conflicting results [5]. However, no previous research has taken a holistic approach to researching the acceptance of cybersecurity training through a socio-technical lens. Therefore, this research aims to synthesize existing research about user acceptance in the three socio-technical dimensions. The paper presents the findings from a structured review of prior works on the acceptance of cybersecurity training and education. The next section provides background context in terms of the socio-technical approach being applied, leading to a more specific discussion of the literature review methodology in Sect. 3. The main findings are then presented in Sect. 4, focusing on the technical, organizational, and user-centered dimensions. The discussion then concludes in Sect. 5, drawing together the implications of the work thus far. The results provide an overview of the research landscape; others can use them to gain a holistic understanding of the domain and as a foundation for future research.

2 Socio-technical Perspectives on User Acceptance

Baxter and Sommerville [6] describe how a socio-technical approach to system design leads to higher user acceptance and stakeholder value. Interpreted in the domain of this research, it can be argued to increase user adoption of cybersecurity training and improve the outcomes of such training. In essence, a socio-technical approach assumes that a technology depends on the technology itself, its users, and the organization in which it is used [7]. Prior user acceptance research in those dimensions is outlined below. Technical aspects, relating to the technology itself, have been extensively discussed, and summaries are provided by Venkatesh and Bala [8] and Lee et al. [9]. In the context of this research, technology refers to the cybersecurity training effort itself. Venkatesh and Bala [8] and Lee et al. [9] highlight system quality as a contributor to user acceptance. System quality is further described as how easy and intuitive a technology is to use and how well it supports the user's job performance. Those factors are to be considered in relation to other technologies with similar purposes: users will adopt a technology that is comparatively better to a greater extent. Technical aspects relating to cybersecurity training have been the focus of previous research. Kävrestad et al. [10] demonstrate that the implementation of cybersecurity training impacts users' willingness to adopt the training. A possible reason is described by Bello and Maurushat [11], who argue that users are more prone to use intuitive and easy-to-use cybersecurity training. Similar findings are presented by Dahabiyeh [12], who also emphasizes the quality of the presented content. Aspects relating to the user include demographics, where age, nation of residence, and gender have been discussed as possible mediators of user acceptance [5]. Venkatesh and Bala [8] and Lee et al.
[9] further describe that a user’s computer skills and attitude will also have an impact. Both a user’s predisposition to try new technology and self-efficacy will also impact user acceptance. Organizational aspects first include

Cybersecurity Training Acceptance: A Literature Review

55

availability, where time, support, and systems availability positively impact user acceptance [8, 9]. Organizational culture will also have an impact, where user acceptance is impacted by managerial support and how the organization’s members perceive a technology, i.e., subjective norms [8, 9]. The impact of organizational aspects relating to cybersecurity training acceptance has received less attention from the research community. Nevertheless, Dahabiyeh [12] suggested management support, engagement from colleagues, and dedicated IT staff to impact the acceptance of cybersecurity training positively. Reeves et al. [4] also describe colleagues as important positive or negative mediators of cybersecurity training acceptance. A core component of a socio-technical approach to systems design is the view that performance is reliant on all socio-technical dimensions, which are highly intertwined [6]. A consequent property is that system goals can typically be addressed in more than one way. One can, for instance, assume that a certain technology may work well in one organization but less so in another. As an example, one can assume that users in a military organization may be more prone to adopt cybersecurity training than users in a more relaxed environment because of their predisposition to follow orders and strive for security. A solution with the same goals may have to be designed in a different way in, for instance, a non-hierarchical startup company.

3 Methodology

This research was conducted as a structured literature review following the methodology proposed by Paré and Kitsiou [13]. As suggested by Meline [14] and Jesson et al. [15], an inclusive approach was adopted in the selection of databases and the development of the search query. The search query used in this research was (Cyber OR Information OR IT OR computer) AND security AND (training OR education) AND (adoption OR acceptance OR usage). The intention was to capture all likely variants of how ‘security’ might be labeled within the relevant sources while also seeking works specifically relating to the training and education perspectives. (‘Awareness’ was not used as a keyword alternative on the basis that it potentially refers to a shallower level of engagement with users, and so had the potential to bring in studies that had merely sought to draw attention to security issues rather than more specifically guiding users on how to deal with them.) The last segment of the query used keyword variants that aimed to focus on works specifically concerned with the user acceptance aspect of the training experience. The search string was applied to the following databases and indexes: ACM, IEEE Xplore, DBLP, Science Direct, and Scopus. The searches were restricted to research papers published within the last ten years.

Inclusion criteria were established before the searches [16]. Only papers meeting the following criteria were included in this research: papers should be peer-reviewed and report their own data and conclusions relating to end-user acceptance of cybersecurity training. Reviews of other research, opinion papers, and the like were excluded. The domain of included papers should be the training of users, not the education of security professionals.

Following the database searches, the resulting papers were subject to screening. The screening process is documented in the PRISMA diagram displayed in Fig. 1, which is based on Page et al. [17] and Sarkis-Onofre et al. [18]. Identified papers were analyzed using thematic coding as follows [19]:

1. Each publication was read in its entirety and labeled by which of the three socio-technical dimensions of user acceptance it was related to. One publication could be related to more than one dimension.
2. How each publication contributed to each respective dimension was extracted.
3. The extractions for each dimension were combined into a cohesive text, which is presented in the upcoming results section.

The resulting publications selected for inclusion are cited throughout the results section.
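The boolean structure of the search string can be sketched as a simple matching function. This snippet is not part of the original study; the keyword sets mirror the query above, and the matching logic is an illustrative assumption (note that a naive keyword match on ‘it’ would also hit the English pronoun, one reason real database syntax is preferable):

```python
import re

# Keyword groups from the review's search string:
# (Cyber OR Information OR IT OR computer) AND security
# AND (training OR education) AND (adoption OR acceptance OR usage)
SECURITY_VARIANTS = ("cyber", "information", "it", "computer")
TOPIC = ("training", "education")
OUTCOME = ("adoption", "acceptance", "usage")

def matches_query(text: str) -> bool:
    """Return True if a title/abstract satisfies the boolean query."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return (
        any(w in words for w in SECURITY_VARIANTS)
        and "security" in words
        and any(w in words for w in TOPIC)
        and any(w in words for w in OUTCOME)
    )

print(matches_query("User acceptance of information security training"))  # True
print(matches_query("Network security protocols"))                        # False
```

In practice each database has its own query syntax, so a string like this would be translated per database rather than applied programmatically.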

Fig. 1. PRISMA diagram outlining the searching and screening process


4 Results

A total of 16 papers were selected for inclusion in this study after the filtering process. The papers were first labeled according to which socio-technical dimensions they discussed, revealing that all papers discussed the training itself, the technical dimension. Five papers discussed the organizational dimension, and one paper discussed the user-centered dimension. All included papers and their labeling are documented in Table 1.

Table 1. Categorization of included publications

Paper | (T)echnical | (O)rganizational | (U)ser-Centered
Kävrestad et al. [10] | X | X | X
Dahabiyeh [12] | X | X |
Haney and Lutters [20] | X | X |
Ma et al. [21] | X | X |
Shillair [22] | X | |
Wash and Cooper [23] | X | |
Silic and Lowry [24] | X | |
Shen et al. [25] | X | |
Wen et al. [26] | X | |
Jin et al. [27] | X | |
Kletenik et al. [28] | X | |
Cullinane et al. [29] | X | |
Gokul et al. [30] | X | |
Stockett [31] | X | |
Offor and Tejay [32] | X | |
Bélanger [33] | X | X |
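The categorization above can be restated compactly in code; this small sketch (an illustration, not part of the original paper) transcribes the dimension labels and recomputes the counts reported in the text:

```python
# Dimension labels per included paper, transcribed from Table 1
# (T = Technical, O = Organizational, U = User-Centered).
dimensions = {
    "Kävrestad et al. [10]": "TOU",
    "Dahabiyeh [12]": "TO",
    "Haney and Lutters [20]": "TO",
    "Ma et al. [21]": "TO",
    "Shillair [22]": "T",
    "Wash and Cooper [23]": "T",
    "Silic and Lowry [24]": "T",
    "Shen et al. [25]": "T",
    "Wen et al. [26]": "T",
    "Jin et al. [27]": "T",
    "Kletenik et al. [28]": "T",
    "Cullinane et al. [29]": "T",
    "Gokul et al. [30]": "T",
    "Stockett [31]": "T",
    "Offor and Tejay [32]": "T",
    "Bélanger [33]": "TO",
}

# Recompute the counts stated in the text: 16 technical,
# 5 organizational, 1 user-centered.
for code, name in [("T", "Technical"), ("O", "Organizational"), ("U", "User-Centered")]:
    print(name, sum(code in labels for labels in dimensions.values()))
```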

Next, how the papers contributed to the understanding of the adoption of cybersecurity training in each respective dimension was extracted and stored in an Excel table. An excerpt of the resulting classifications and summaries is shown in Table 2. The following subsections summarize the extracted content for each dimension of cybersecurity training adoption.

Table 2. Excerpt of information extracted from analyzed papers (T = Technical, O = Organizational)

Paper | Dimension | Extraction
Haney and Lutters [20] | T | The paper describes that the message itself must be understandable and usable, meaning that users must be able to understand the message and convert it into their daily life. The research was an interview study with 28 security experts.
Haney and Lutters [20] | O | The paper reports that trust in the source of a security message is imperative for users to listen to that message. Consequently, the security organization must build a reputation within the organization, leading to a higher degree of user adoption. The research was an interview study with 28 security experts.
Ma et al. [21] | T | Reports on a survey with 293 respondents and shows that user satisfaction impacts user stickiness, i.e., how willing users are to use training repeatedly. Satisfaction is impacted both by the perception of the training tool (quality and fun) and by how others around a person think about the training tool (social influence).
Ma et al. [21] | O | Reports on a survey with 293 respondents and shows that user satisfaction impacts user stickiness, i.e., how willing users are to use training repeatedly. Satisfaction is impacted both by the perception of the training tool (quality and fun) and by how others around a person think about the training tool (social influence).
Dahabiyeh [12] | T | The paper reports on research primarily studying organizational decision-making concerning cybersecurity training but argues for user-friendliness as important for user adoption.
Dahabiyeh [12] | O | The paper reports on research primarily studying organizational decision-making regarding cybersecurity training but argues for the engagement of stakeholders throughout the organization as important for user adoption.
Shillair [22] | T | This research aims to investigate user needs regarding cybersecurity training and presents recommendations for such training. The key conclusions are that training adoption can be facilitated by making training available in different formats and with easy-to-understand content. The research method was a survey with 800 respondents.
Wash and Cooper [23] | T | The paper describes an interesting dilemma relating to the technical nature of the training. Training “after the event” is only provided to users who do the event. Phishing training will, for instance, only be provided to users who click a link. The paper demonstrates how the technical implementation of training can facilitate or hinder adoption.

4.1 Technical Dimensions

The technical dimension was the most prominent dimension in the 16 papers included in this review. In fact, the nature of the training itself was, to some extent, discussed in all included papers. Five of those argue that user satisfaction is important for cybersecurity training efforts, developing training interventions and evaluating user satisfaction with them [24–28]. Shen et al. [25] describe that the properties of the training itself will greatly impact the user’s perception of that training, which is further supported by Dahabiyeh [12]. Ma et al. [21] and Cullinane et al. [29] further show that user satisfaction impacts a user’s willingness to use and re-use cybersecurity training tools, and state that the perceived quality and fun of a training tool will influence the perception of it. The papers presented so far discuss gamified training, and it can be concluded that user perception differs between cybersecurity games. Kävrestad et al. [10] further show that user perception differs between different types of security training and suggest that contextual training is preferred over eLearning platforms and cybersecurity games. In contrast, Gokul et al. [30] suggest that games are more engaging than training using mandatory quizzes. To ensure that any cybersecurity training effort is positively perceived, it must be developed with its intended recipients in mind [31]. On that note, Offor and Tejay [32] argue that cybersecurity training for adults must be developed using pedagogical principles for adults. Stockett [31] further suggests that tailoring training to different user groups will facilitate adoption.

Some suggestions pertaining to the content of the training can also be found in the included papers. Haney and Lutters [20] argue that the material must be tailored to the recipient in a way that makes it easy to understand and convert into their daily life. Shillair [22] also stresses the importance of understandable content and suggests that users have diverse preferences, which could be met by providing training in different formats. Making the material appear personal and mandatory is also described as a factor that can improve user adoption rates [31, 33]. A final interesting dilemma is reported by Wash and Cooper [23], who discuss the timing of training. They describe that phishing training is sometimes a part of a phishing exercise and is provided to users who act on the messages in those drills by clicking a link. While that may be effective for the users who click the links, other users would not be trained.

4.2 Organizational Dimension

Organizational dimensions are discussed in five included publications and in two main themes. The first theme is trust in the security organization, which is described by Haney and Lutters [20]. They describe that trust in the source of a security message is imperative for users’ willingness to listen to that message. Consequently, the security
organization must build a reputation within the organization, leading to a higher degree of user adoption. Similarly, Dahabiyeh [12] describes that commitment from various organizational stakeholders is needed to ensure user participation in training efforts. The notion of trust has also been found to impact user adoption in a survey with over 1400 respondents, who ranked trust in the sender as one of the most important factors for willingness to adopt cybersecurity training [10]. The second theme can be described as informal culture, where Ma et al. [21] describe that social influence is important for user stickiness, the degree to which users will continue to use a training effort. Social influence can help make training feel mandatory, which also contributes to user adoption [33].

4.3 User-Centered Dimension

The user-centered dimension covers how user demographics, abilities, and traits can impact user adoption of cybersecurity training. It is only explicitly studied in one of the included papers, Kävrestad et al. [10], who investigate whether worry about cyberthreats impacts users’ willingness to adopt cybersecurity training. While they find that to be the case, they also describe worry as a weak mediator. In addition, several papers included in this research argue that training should be tailored to various user groups, suggesting that the different needs of those groups must be understood. However, none of them describe how different groups should be trained.

5 Conclusions

The aim of this research was to synthesize existing research about user acceptance of cybersecurity training from a socio-technical perspective. It was conducted as a structured literature review, in which 16 papers were included after database searches and screening. The results suggest that the majority of the existing research has focused on the nature of the training interventions themselves. This research reveals a consensus that user perception of cybersecurity training is imperative for the adoption of such training. Furthermore, the included papers describe that easy-to-understand material that users can adopt in their daily routines is paramount for positive user perception. The included papers also demonstrate a great variety in how cybersecurity training can be implemented, with different results in terms of user perception, suggesting that employing practices such as user-centric design can be beneficial.

Several included papers further describe formal and informal culture as important mediators for adoption. Trust in the security organization and social influence were the most prominent themes and show that awareness-raising stretches beyond the delivery or procurement of a measure; rather, it is a matter of organizational culture. Finally, the impact of user demographics, abilities, or traits was implicitly acknowledged in several included papers that described individualization as important for cybersecurity training adoption. However, only a single paper researched how it could impact adoption, finding worry about cyberthreats to have a limited impact.


5.1 Limitations

This research intended to review the current research landscape using a structured literature review methodology. To ensure that relevant publications were included, an inclusive mindset was adopted in developing search strings, selecting databases, and screening papers. Nevertheless, a possible limitation of this research is that relevant research may still have been missed. A detailed explication of the search and inclusion protocol enables others to build on the present review.

5.2 Future Work

This research reveals an uneven distribution of publications across the three socio-technical dimensions of cybersecurity training acceptance. Most notably, several publications seem to imply that different users should be trained in different ways. However, sources detailing what those groups are and how they should be trained are scarce. This gap can be addressed by research focusing on what distinct groups there are and what needs they have. It can, for instance, be imagined that users with different skill sets, cognitive abilities, or roles have different needs and expectations. A second direction for future work concerns organizational culture, which is described as important in several papers included in this research. However, how to develop a culture that fosters high adoption of cybersecurity training remains largely unknown.

References

1. Uchendu, B., Nurse, J.R., Bada, M., Furnell, S.: Developing a cyber security culture: current practices and future needs. Comput. Secur. 109(c) (2021)
2. Joinson, A., van Steen, T.: Human aspects of cyber security: behaviour or culture change? Cyber Secur.: A Peer-Rev. J. 1(4), 351–360 (2018)
3. Bada, M., Sasse, A.M., Nurse, J.R.: Cyber security awareness campaigns: why do they fail to change behaviour? arXiv preprint (2019)
4. Reeves, A., Calic, D., Delfabbro, P.: “Get a red-hot poker and open up my eyes, it’s so boring”: employee perceptions of cybersecurity training. Comput. Secur. 106 (2021)
5. Kävrestad, J., Furnell, S., Nohlberg, M.: What parts of usable security are most important to users? In: Drevin, L., Miloslavskaya, N., Leung, W.S., von Solms, S. (eds.) WISE 2021. IAICT, vol. 615, pp. 126–139. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-80865-5_9
6. Baxter, G., Sommerville, I.: Socio-technical systems: from design methods to systems engineering. Interact. Comput. 23(1), 4–17 (2011)
7. Mumford, E.: The story of socio-technical design: reflections on its successes, failures and potential. Inf. Syst. J. 16(4), 317–342 (2006)
8. Venkatesh, V., Bala, H.: Technology acceptance model 3 and a research agenda on interventions. Decis. Sci. 39(2), 273–315 (2008)
9. Lee, Y., Kozar, K.A., Larsen, K.R.: The technology acceptance model: past, present, and future. Commun. Assoc. Inf. Syst. 12(1) (2003)
10. Kävrestad, J., Gellerstedt, M., Nohlberg, M., Rambusch, J.: Survey of users’ willingness to adopt and pay for cybersecurity training. In: Clarke, N., Furnell, S. (eds.) HAISA 2022, pp. 14–23. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-12172-2_2

11. Bello, A., Maurushat, A.: Technical and behavioural training and awareness solutions for mitigating ransomware attacks. In: Silhavy, R. (ed.) CSOC 2020. AISC, vol. 1226, pp. 164–176. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-51974-2_14
12. Dahabiyeh, L.: Factors affecting organizational adoption and acceptance of computer-based security awareness training tools. Inf. Comput. Secur. 29(5), 836–849 (2021)
13. Paré, G., Kitsiou, S.: Methods for literature reviews. In: Handbook of eHealth Evaluation: An Evidence-Based Approach. https://www.ncbi.nlm.nih.gov/books/NBK481583/. Accessed 12 Apr 2023
14. Meline, T.: Selecting studies for systematic review: inclusion and exclusion criteria. Contemp. Issues Commun. Sci. Disord. 33, 21–27 (2006)
15. Jesson, J., Matheson, L., Lacey, F.M.: Doing Your Literature Review: Traditional and Systematic Techniques. Sage (2011)
16. Wohlin, C., Runeson, P., Höst, M., Ohlsson, M.C., Regnell, B., Wesslén, A.: Experimentation in Software Engineering. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-29044-2
17. Page, M.J., et al.: The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Int. J. Surg. 88, 105906 (2021). https://doi.org/10.1016/j.ijsu.2021.105906
18. Sarkis-Onofre, R., Catalá-López, F., Aromataris, E., Lockwood, C.: How to properly use the PRISMA statement. Syst. Rev. 10(1), 1–3 (2021). https://doi.org/10.1186/s13643-021-01671-z
19. Braun, V., Clarke, V.: Using thematic analysis in psychology. Qual. Res. Psychol. 3(2), 77–101 (2006)
20. Haney, J.M., Lutters, W.G.: “It’s scary... It’s confusing... It’s dull”: how cybersecurity advocates overcome negative perceptions of security. In: Proceedings of the Fourteenth Symposium on Usable Privacy and Security (SOUPS 2018). USENIX (2018)
21. Ma, S.F., Zhang, S.X., Li, G., Wu, Y.: Exploring information security education on social media use: perspective of uses and gratifications theory. Aslib J. Inf. Manag. 71(5), 618–636 (2019)
22. Shillair, R.: Talking about online safety: a qualitative study exploring the cybersecurity learning process of online labor market workers. In: Proceedings of the 34th ACM International Conference on the Design of Communication. ACM (2016)
23. Wash, R., Cooper, M.M.: Who provides phishing training? Facts, stories, and people like me. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM (2018)
24. Silic, M., Lowry, P.B.: Using design-science based gamification to improve organizational security training and compliance. J. Manag. Inf. Syst. 37(1), 129–161 (2020)
25. Shen, L.W., Mammi, H.K., Din, M.M.: Cyber security awareness game (CSAG) for secondary school students. In: Proceedings of the 2021 International Conference on Data Science and Its Applications (ICoDSA). IEEE (2021)
26. Wen, Z.A., Lin, Z.Q., Chen, R., Andersen, E.: What.Hack: engaging anti-phishing training through a role-playing phishing simulation game. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM (2019)
27. Jin, G., Tu, M., Kim, T.-H., Heffron, J., White, J.: Game based cybersecurity training for high school students. In: Proceedings of the 49th ACM Technical Symposium on Computer Science Education. ACM (2018)
28. Kletenik, D., Butbul, A., Chan, D., Kwok, D., LaSpina, M.: Game on: teaching cybersecurity to novices through the use of a serious game. J. Comput. Sci. Coll. 36(8), 11–21 (2021)
29. Cullinane, I., Huang, C., Sharkey, T., Moussavi, S.: Cyber security education through gaming: cybersecurity games can be interactive, fun, educational and engaging. J. Comput. Sci. Coll. 30(6), 75–80 (2015)

30. Gokul, C.J., Pandit, S., Vaddepalli, S., Tupsamudre, H., Banahatti, V., Lodha, S.: PHISHY: a serious game to train enterprise users on phishing awareness. In: Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play Companion Extended Abstracts. ACM (2018)
31. Stockett, J.: Dr. InfoSec: how to teach your community to stop worrying and love 2-factor authentication. In: Proceedings of the 2018 ACM SIGUCCS Annual Conference. ACM (2018)
32. Offor, P., Tejay, G.: Information systems security training in organizations: andragogical perspective. In: Proceedings of the 20th Americas Conference on Information Systems. AIS (2014)
33. Bélanger, F., Maier, J., Maier, M.: A longitudinal study on improving employee information protective knowledge and behaviors. Comput. Secur. 116, 102641 (2022)

Cyber Security Awareness and Education Support for Home and Hybrid Workers

Fayez Alotaibi1,2(B), Steven Furnell1, and Ying He1

1 School of Computer Science, University of Nottingham, Nottingham, UK
{Fayez.Alotaibi,Steven.Furnell,Ying.He}@nottingham.ac.uk
2 College of Science and Humanities, Shaqra University, Al-Dawadmi, Saudi Arabia

Abstract. Home and hybrid working is now increasingly commonplace in many organizations, particularly in the wake of the enforced home working faced by many during the COVID-19 pandemic. However, while many organizations and workers have now embraced the opportunity, questions remain over whether security practices in the home-based and hybrid context are as robust as those within the traditional workplace. The pandemic highlighted that many organizations were unprepared in terms of related security policies and procedures, and today many still lack specific attention toward these aspects even though home working itself has become a business norm. One of the potential challenges facing organizations seeking to understand what they should do is the varied range of advice and related sources that can be found, which can lead to relevant issues being overlooked or omitted if they were not covered in the chosen guidance. This paper seeks to examine and evidence this challenge. It does so by analyzing a series of guidance resources from commercial, governmental, and professional body sources. From these, a set of eight thematic control areas is identified, and the sources themselves are then evaluated against this new, consolidated reference point to see how closely they each match. The findings reveal that all of the consulted sources fall significantly short of being comprehensive in both their coverage of the relevant topics and their level of detail. This in turn highlights the importance of organizations adopting a more holistic view and ensuring that different types of sources are consulted before determining the necessary practices and awareness needs of their home-based and hybrid workers.

Keywords: Employee awareness · Home working · Hybrid working · Remote working

© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
S. Furnell and N. Clarke (Eds.): HAISA 2023, IFIP AICT 674, pp. 64–75, 2023. https://doi.org/10.1007/978-3-031-38530-8_6

1 Introduction

Many organizations are now seeing an increasing proportion of home-based and hybrid working, much of which is due to a continuation of practices adopted during the COVID-19 pandemic [1]. However, not all employees are familiar with the home-working environment, which may cause an increase in cyber security risks [2, 3]. Prior studies have found that many remote workers exhibited aspects of bad security behaviour during the
pandemic, which has made their organizations vulnerable to cyber-attacks. For example, 67% of these employees admit that they have not followed security policies in order to be more productive, by sending work documents to their personal email, sharing passwords, and installing malicious apps [4].

In response to this situation, various sources have published guidelines to support businesses in providing a secure Work from Home (WFH) environment for remote workers. However, there is a lack of agreement on what such guidance should cover, and therefore the nature and level of advice can still be inconsistent. In order to investigate the situation, this paper aims to identify the cyber security issues that relate to remote workers, and then to review and evaluate the extent to which these issues have been covered by existing guidelines and advice published by commercial organizations and official bodies. The findings suggest that even though home/hybrid workers are identified as a relevant audience to address, the nature of the support and guidance they receive can still vary significantly depending upon the source of reference.

2 Cyber Security Issues for Home/Hybrid Workers

Working from home, whether fully remote or in a hybrid combination with working in the office, is now an increasing trend in the post-pandemic world. While the notions of home-based, hybrid, and remote work are not synonymous [5], all involve elements of working outside the office and in the employee’s home environment. It is therefore important to ensure that expected security practices can translate and transfer into this context. However, evidence suggests that it is not unusual for issues to emerge. For example:

• 67% of remote employees admit that they have tried not to follow security policies in order to be more productive, by sending work documents to their email, sharing passwords, and installing malicious apps [4].
• 23% of remote workers reuse their passwords across work and personal accounts, such as social media accounts and online banking [6].
• Around half of US employees believe that public Wi-Fi can be trusted to access their organization’s system [7], whereas 56% of IT security leaders believe that home and public Wi-Fi is one of the biggest security challenges organizations should consider [8].
• 59% of IT decision makers consider user awareness to be one of the biggest security challenges [8], and 58% of organizations said most employees do not follow their information security awareness policy [10].
• 61% of organizations believe that remote workers are more likely to use the same device for work and personal activities [6], and 14% of UK workers do not lock their smart devices after they complete their organization’s tasks [9].

According to a report by Tessian [11], 56% of IT team leaders believed home workers had picked up bad information security practices since they had been working remotely. Moreover, 54% were concerned that some employees would bring infected devices, and the report also found that 40% of remote workers were planning to bring their own devices to the workplace. In addition, 40% of young homeworkers admitted that they had
made information security mistakes while working from home that nobody would know about, and 29% of these employees were afraid to notify the IT team that they had made such mistakes.

Having established that there are problems to be addressed in the home/hybrid working context, it is relevant to consider the extent to which related guidance is available, as well as the degree to which it is comprehensive and consistent in the coverage that it provides. These aspects are examined in the sections that follow.

3 Assessing Current Security Guidance for Hybrid Workers

In order to assess the level of support that has been made available to guide organizations on protecting their home and hybrid workers, this study surveyed a series of 20 distinct resources, drawn from a range of sources. All are offered as relevant guidance but can prove to have differing topic focus and levels of coverage. The resources were broadly categorized as being drawn from two main types of sources – commercial organizations (typically originating from the cyber security area and/or the more general IT domain) and official/professional organizations (encompassing governmental sources from different countries, and professional bodies and membership organizations linked to cyber security). These are more specifically described below, and the sources used in each case are also listed in Tables 1 and 2:

• Commercial organizations: This category refers to companies in the cyber security or wider IT sectors, who sell security products and services and have published related advice in the context of home/hybrid working. Note that the eight organizations sampled here are intended to be a representative set rather than a comprehensive catalogue.

Table 1. Guidance from Commercial Organizations

Source | Guidance Title | Ref.
Cisco | “Five tips to enable a remote workforce securely” | [12]
CMA (Cyber Management Alliance) | “Best Cybersecurity Tips for Remote Workers” | [13]
ESET | “Tips for cybersecurity when working from home” | [14]
HP (Hewlett Packard) | “HP Remote Worker Cybersecurity Best Practices” | [15]
Kaspersky | “Cyber Security Risks: Best Practices for Working from Home and Remotely” | [16]
McAfee | “How to Stay Connected and Protected in a Remote Work Environment” | [17]
Microsoft | “11 security tips to help stay safe in the COVID-19” | [18]
Samsung | “8 tips for securing remote workforces” | [19]
• Official and professional sources: These are organizations that provide security guidance from the perspective of a particular country, region, or professional sector. There are many examples of these resources in many different countries, and as with the commercial organizations the list aims to be illustrative rather than exhaustive.

Table 2. Guidance from Official Sources and Professional Bodies

Source | Guidance Title | Ref.
ACSC (Australian Cyber Security Centre) | “Cyber security tips when working from home” | [20]
CCCS (Canadian Centre for Cyber Security) | “Security tips for organizations with remote workers” | [21]
CIS (Center for Internet Security) | “5 Online Safety Tips for Work” | [22]
CISA (Cybersecurity & Infrastructure Security Agency) | “Telework best practices” | [23]
CPNI (Centre for the Protection of National Infrastructure) | “Remote Working, including Working from Home, during of COVID-19” | [24]
ENISA (European Union Agency for Cybersecurity) | “Tips for cybersecurity when working from home” | [25]
FBI (Federal Bureau of Investigation) | “Cybersecurity Awareness Month: Steps to Protect Yourself” | [26]
IIA (Institute of Internal Auditors) | “Security in a Work-From-Home Environment” | [27]
NCSC (UK National Cyber Security Centre) | “Home working: preparing your organization and staff” | [28]
NIST (US National Institute of Standards and Technology) | “Telework security overview & tip guide” | [29]
NSA (National Security Agency) | “Best Practices for Keeping Your Home Network Secure” | [30]
SANS Institute | “Top 5 Tips for Working from Home Securely” | [31]

It should be noted that the sample does not purport to be an exhaustive survey of available guidance. However, it represents a sufficient volume and diversity of sources from which to form a credible impression of the type of material that is available from key sources to organizations that seek such advice. As anticipated, it became clear even from casual inspection that the sources varied in terms of both topic coverage and depth of treatment. As such, the use of any individual source in isolation would give organizations and their employees only a partial view of the issues that may be relevant to them in a home/hybrid working context. With this in mind, it was considered relevant to determine what a consolidated set of guidance would look like, and then to compare each of the sources against this reference point to see the extent of their coverage.

68

F. Alotaibi et al.

4 Consolidating the Guidance for Home and Hybrid Workers

Having identified a range of relevant sources, their content was then analyzed based on the cyber security issues discussed in Sect. 2 in order to identify the key thematic areas addressed across the range of guidance sources. This thematic analysis led to the identification of the following eight topics, which then serve to represent the core set of areas that home and hybrid workers need to be guided upon or supported with (noting that the abbreviations align with the representation used later in Table 3):

• Backup and Recovery (B&R)
• Device Care (DC)
• Guidelines and Policies (G&P)
• Incident Management (IM)
• Network Security (NS)
• Passwords & Authentication (P&A)
• Training and Education (T&E)
• Updating Software (U)

The topics themselves are outlined in the sub-sections that follow, with each giving an indication of what the topic is, and some examples of what different sources had to say about them.

4.1 Backup and Recovery

Backup and recovery can be introduced as the steps that employees should follow to save their data to, and recover it from, the cloud or their organization's servers. HP [15] recommends that organizations test their backup system and make sure their employees use it to back up their data. However, they have not explained why it is important for employees to back up their data using the organization's backup system. Kaspersky [16] mentions that organizations should provide cloud or server storage for their employees and encourage them to back up their data regularly to avoid losing it. The IIA [27] recommends that organizations should check whether backup and recovery have been established on home devices; they should indicate where employees should back up their data and why this is important. ESET mentions that remote workers should always turn on automatic backup so that their data is saved under any circumstances [14].

4.2 Device Care

This area addresses the steps that employees should follow to protect their devices from unauthorized use. HP [15], CMA [13] and McAfee [17] state that home workers should use a specific device for work only, not for other activities such as watching Netflix, and should not share it with family members. The IIA [27] and NIST [29] recommend that users lock their screen immediately after finishing their work and enable ID features to keep their devices away from unauthorized usage. NCSC [28], CISA [23] and ENISA [25] suggest that organizations encourage staff to activate the screen lock feature, allowing users to protect themselves from unwanted use, and that staff should neither share their work device with family nor use it for personal activities.
Additionally, the CCCS recommends that employees should not leave their devices unlocked while they are away, and should report to their organization if their devices are stolen or lost [21].


4.3 Guidelines and Policies

This section refers to the provision of information regarding the areas of security that employees need to follow. Hewlett Packard (HP) [15] only recommends that organizations create a security policy for saving data. The CMA [13] guide highlights the importance of security policy and suggests seven areas that must be included: compliance requirements, information systems security, data protection, remote access control, backup and media storage, information disposal, and alternative work sites. Kaspersky [16] mentions that organizations should provide cyber security policies for their employees to illustrate which good practices they should follow. They also provide an example of a security policy, but this does not cover some areas, such as network security and different work environments. Moreover, Microsoft [18] suggests that remote workers should follow their organization's security policies, and also presents some extra tips for remote workers, but does not mention the importance of following these policies. The UK NCSC [28] and ENISA [25] suggest that organizations must create a security policy for their employees in terms of app installation and incident response. Also, the Centre for the Protection of National Infrastructure (CPNI) indicates that “organizations should update their security policies based on the new measurement related to information security and deliver them to their employees” [24].

4.4 Incident Management

This area can be introduced as the plan created by the organization for handling security incidents. For instance, ACSC [20] and CMA [13] suggest that organizations should have a plan for incident activity and respond based on these incidents. It is important to have a security incident plan, but employees also need to know about it: organizations should provide the plan, teach users how to respond, and indicate whom they need to contact.
Moreover, NIST [29] and CPNI [24] illustrate the importance of a security incident plan, how users can report an incident, when they should raise issues, and to whom. Furthermore, ENISA [25] recommends that organizations should provide or create a policy for incident response and inform their employees about it. Additionally, NCSC [28] only mentions that organizations should teach their employees how to report security concerns, while CCCS [21] mentions that organizations should teach their employees how to report security concerns and to whom. HP [15] suggests that organizations should have a plan for incident activity; this plan should be updated regularly and should include a contact number for the IT team.

4.5 Network Security

This considers the steps that users should take to protect their communications when working at home or remotely. For this area, Kaspersky [16], HP [15], and ACSC [20] illustrate that remote workers should secure their home Wi-Fi by using a strong password, updating network devices, and choosing a secure protocol. Also, if users decide to use public networks, they should use a VPN for data encryption. Meanwhile, other sources (such as Samsung [19], NIST [29], and Microsoft [18]) mentioned most of the points but omitted


the guidance on public Wi-Fi. CISA [23] and NSA [30] recommend that remote workers should not use any unprotected network to access their organization's systems and should always use trusted Wi-Fi for work purposes. However, they have not mentioned how users should protect their home network.

4.6 Passwords and Authentication

This addresses the steps that remote workers should follow to protect their accounts, when passwords should be changed, and the tools that can be used to ensure password security. In this area, HP [15] recommends that employees use strong passwords for their devices and change them immediately if they suspect that their passwords have become known. However, they have not explained what a strong password looks like or why it is important to change passwords. CMA [13], Kaspersky [16], SANS [31], ACSC [20] and NCSC [28] all explain the importance of using strong passwords, illustrate the differences between strong and weak passwords with examples, and encourage remote workers not to reuse the same password across different devices or accounts. ESET [14] recommends that employees should ensure that their passwords are strong enough, but has not described how a password should be created to satisfy this expectation. NIST [29] also explains that, if users decide to use their own devices, they should consider setting a strong password and enabling ID features to protect against unauthorized access. However, they have not explained how to create a strong password or when users should change their passwords. CCCS [21] recommends that remote workers pay attention when they enter their passwords, to avoid them being observed by an unexpected person.

4.7 Training and Education

Training and education covers any programs provided by the organization in order to increase their employees' security awareness.
Samsung [19] mentions that “organizations should provide cyber security training programs because human mistakes are the biggest risk factor for security: this program should include cyber security threats and security practice”. They also recommend that organizations make sure that their training programs are up to date. However, they have not stated what best practice is, and some topics, such as incident management, are not covered. Kaspersky [16] presents many questions that organizations should consider when preparing to switch their employees to remote working, one of which is “Does the organizations provide cyber security awareness program for their employees?”. However, they do not mention what the organization needs to do after answering the question, or what security topics need to be covered if a training program exists. CMA says that “teaching employees how to detect cyber security threats can be helpful for them to avoid these threats and not become victims” [13]. However, cyber threats are not the only topic employees need to know about; more topics need to be presented to them, such as data storage and secure networking. CPNI also says that “education and training must be provided for employees to understand the situation of home working and practice as it is required”, and suggests some methods that organizations can consider (e.g. web-based and video methods). However, they have


not mentioned which areas should be included in this training program. Finally, ENISA [25] indicates that regular cyber security training and education for employees can help them deal with cyber threats.

4.8 Updating Software

This area relates to installing new updates and patches released by suppliers. CMA [13] says that home/remote workers should update their software, apps, and OS; updating software and antivirus programs can provide a high level of protection for end-users. They also illustrate the importance of updating the OS and suggest that the best way to ensure updating is to enable auto-update. Moreover, ESET [14] illustrates how important it is for employees to make sure that they use the latest operating system version, and gives an example of an operating system that is no longer supported, such as Windows 7. SANS [31] mentions the importance of updating and how it can help to protect employees' devices, illustrating that users should ensure that their devices, programs, and apps are running the latest software version. They have also recommended keeping a list of devices that are connected to the same network. ENISA [25] recommends that organizations should provide their employees with up-to-date software and applications, but has not mentioned when and how employees should keep their devices up to date automatically. Moreover, the Federal Bureau of Investigation (FBI) [26] recommends that remote users should make sure to update their OS and apps as soon as these updates are available and turn on the automatic update feature. ACSC [20] recommends that remote workers should always turn on automatic updates, and also provides a guide that employees can follow in order to set up automatic updates on different operating systems.

5 Re-assessing Coverage of the Existing Sources

In this section, each of the original resources is assessed to determine the extent to which it provides coverage of the eight target topic areas. Beginning with a top-level view of the extent to which each topic was represented across the set of sources, Fig. 1 shows the differing levels of representation in each case. As can be seen, the issue of Network Security appears to be the most well-represented, but even this is not uniformly present in all cases. Meanwhile, backup and recovery receives the most limited degree of attention, and was only emphasized significantly in the guidance materials from the commercial sources. Of course, the fact that a topic area is represented in different sources does not necessarily mean that they afford it the same level of attention. With this in mind, Table 3 presents a deeper level of assessment, looking at the level of coverage of each theme, with each source rated Low (L), Medium (M) or High (H) accordingly. The assessment is made based upon various underlying aspects that should be included for each topic:

• Explanation of the importance of the issue.
• When users and organizations should consider the issue.
• Where users and organizations should consider the issue.
• Security controls that help to address the issue.
• What users should do regarding the issue.

Fig. 1. Coverage of Topics in Different Sources

The source needs to cover most or all of the above aspects for a given topic to score High. If it covers around half of the aspects it gets a Medium rating, and a notably lesser level of coverage is scored Low. The overall result of the assessment is shown in Table 3. When viewed on a source-by-source basis, what clearly emerges is that the sources are extremely variable in terms of both the topics they cover and the level to which they cover them. As illustrated by Fig. 1 and Table 3, organizations must look at multiple sources if they are to determine a comprehensive view of best practice for their employees. It can also be seen that the topics do not receive a uniform level of coverage and none of them is covered in all of the sources. Equally, none of the resources covers all of the topics. Additionally, the breadth of topic coverage across some of the sources is extremely limited (e.g. the Cisco material only covers two of the topics). Also, only six of the 20 sources cover backup and recovery. This issue deserves more attention, especially when it comes to the security of the organization's data: users need to know when and where they should back up their data if it is their responsibility to do so. In addition to providing their employees with the guidance, organizations will also need to consider whether their users will be in a position to understand the advice and act upon it. Telling people what to do is not enough; they need to know how to do it. For example, when asking users to keep their devices up to date, they need to know why and how to do so. Similarly, when the sources recommend using a VPN or MFA, will all users have a sufficient level of knowledge of what these features are and how to use them?
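As an aid to reading the ratings, the five underlying aspects and the High/Medium/Low rule described above can be expressed as a short sketch. The function name, aspect labels, and exact numeric thresholds are illustrative assumptions, since the rule is stated only qualitatively:

```python
# Hypothetical sketch of the Low/Medium/High rating rule used in Table 3.
# A topic is assessed on five aspects; the thresholds below are assumed
# interpretations of "most/all", "around half", and "notably lesser".

ASPECTS = ("importance", "when", "where", "controls", "actions")

def rate_topic(covered):
    """Rate a source's coverage of one topic from the set of aspects it addresses."""
    fraction = len(set(covered) & set(ASPECTS)) / len(ASPECTS)
    if fraction >= 0.8:   # most or all aspects covered
        return "H"
    if fraction >= 0.4:   # around half of the aspects covered
        return "M"
    if fraction > 0:      # notably lesser level of coverage
        return "L"
    return "-"            # topic not addressed at all

print(rate_topic(ASPECTS))                     # H
print(rate_topic({"importance", "controls"}))  # M
print(rate_topic({"actions"}))                 # L
```

For instance, a source explaining only what users should do, without saying why or which controls help, would score Low under this reading.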
Linked to the issue of providing support, another potentially missing element is that the guidelines have not given much focus to the communication between hybrid workers and the IT team. According to Tessian [11], 29% of these employees were afraid to notify the IT team that they had made information security mistakes. Therefore, organizations


Table 3. Comparison of topic coverage in different sources
(L = Low, M = Medium, H = High, - = not covered)

Source            B&R  DC   G&P  IM   NS   P&A  T&E  U
Commercial
Cisco [12]        -    -    -    -    M    -    H    -
CMA [13]          -    H    H    L    L    H    L    H
ESET [14]         H    -    M    -    M    M    -    H
HP [15]           H    M    L    M    H    L    -    L
Kaspersky [16]    H    H    L    -    H    H    M    L
McAfee [17]       -    M    L    -    M    -    -    -
Microsoft [18]    -    -    L    -    M    H    L    M
Samsung [19]      -    L    -    -    M    L    L    L
Official/Professional
ACSC [20]         -    H    -    -    H    H    L    H
CCCS [21]         L    H    L    L    M    -    L    H
CIS [22]          -    L    -    -    L    L    -    -
CISA [23]         L    -    M    L    H    L    H    L
CPNI [24]         -    -    H    H    -    -    M    -
ENISA [25]        -    M    -    M    M    -    -    M
FBI [26]          -    -    -    -    M    H    -    M
IIA [27]          L    H    L    -    M    M    H    -
NCSC [28]         -    H    H    M    L    H    M    M
NIST [29]         -    H    H    H    M    H    -    H
NSA [30]          -    -    -    -    H    H    M    M
SANS [31]         -    -    -    -    H    H    H    H

must find a solution to enable a dialogue between home and hybrid employees and IT support to ensure their work environment is secured.
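For readers who wish to re-check the headline observations against Table 3, the sketch below hard-codes the table's ratings and tallies topic coverage. It is an illustrative aid added here, not part of the original study; the compact string encoding of the ratings is an assumption of convenience:

```python
# Illustrative tally of topic coverage, hard-coding the ratings shown in
# Table 3. Each 8-character string holds one source's ratings in topic
# order; "-" means the topic is not covered by that source.

TOPICS = ["B&R", "DC", "G&P", "IM", "NS", "P&A", "T&E", "U"]
RATINGS = {
    "Cisco":     "----M-H-", "CMA":       "-HHLLHLH",
    "ESET":      "H-M-MM-H", "HP":        "HMLMHL-L",
    "Kaspersky": "HHL-HHML", "McAfee":    "-ML-M---",
    "Microsoft": "--L-MHLM", "Samsung":   "-L--MLLL",
    "ACSC":      "-H--HHLH", "CCCS":      "LHLLM-LH",
    "CIS":       "-L--LL--", "CISA":      "L-MLHLHL",
    "CPNI":      "--HH--M-", "ENISA":     "-M-MM--M",
    "FBI":       "----MH-M", "IIA":       "LHL-MMH-",
    "NCSC":      "-HHMLHMM", "NIST":      "-HHHMH-H",
    "NSA":       "----HHMM", "SANS":      "----HHHH",
}

# Number of sources covering each topic (any rating counts as coverage)
coverage = {t: sum(r[i] != "-" for r in RATINGS.values())
            for i, t in enumerate(TOPICS)}

print(coverage["B&R"])                          # 6 sources cover backup and recovery
print(sum(c != "-" for c in RATINGS["Cisco"]))  # Cisco covers only 2 topics
print(max(coverage, key=coverage.get))          # NS (network security) is best represented
```

Running the tally reproduces the claims made in the text: six sources cover backup and recovery, the Cisco material covers only two topics, and network security is the most widely represented topic.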

6 Conclusions

In conclusion, this paper has identified a series of cyber security issues relevant to remote workers based upon the assessment of a range of guidance sources. It reviewed and evaluated whether these issues are adequately covered by the existing guidelines/advice published by commercial organizations and official bodies. It is clear from the findings that the issues do not receive a uniform level of coverage, and none of them is covered in all of the sources. In view of the findings, it is clear that, as things stand, organizations must look at multiple sources to establish a comprehensive view of best practice for their employees. In practice, there is a good chance that organizations will instead select one source and thereby potentially overlook and omit something of relevance. What would be preferable is for them to have a clearer view of what their home and hybrid workers need to know in


terms of security, and to then have access to appropriate and comprehensive guidance that is suited to their needs. This aspect will be a focus of the authors’ future research, with a view to enabling organizations to profile their home and hybrid operations and then establish suitably tailored guidance accordingly.

References

1. Borkovich, D., Skovira, R.: Working from home: cybersecurity in the age of Covid-19. Issues Inf. Syst. 21(4), 234–246 (2020)
2. Curran, K.: Cyber security and the remote workforce. Comput. Fraud Secur. 2020(6), 11–12 (2020)
3. Furnell, S., Shah, J.N.: Home working and cyber security – an outbreak of unpreparedness? Comput. Fraud Secur. (August 2020), 6–12 (2020)
4. CyberArk, 2021. CyberArk State of Remote Work Study: Poor Security Habits Raise Questions About the Future of Remote Work. CyberArk. https://www.cyberark.com/press/cyberark-state-of-remote-work-study-poor-security-habits-raise-questions-about-the-futureof-remote-work/. Accessed 15 Mar 2023
5. Webex, 2023. “What is hybrid work?” https://www.webex.com/what-is-hybrid-work.html. Accessed 14 Mar 2023
6. Schulze, H.: The State of Remote Work Security. Cybersecurity Insiders (2021). https://www.archtis.com/wp-content/uploads/2021/04/2021-Remote-Workforce-Security-ReportarchTIS_Final.pdf. Accessed 9 Apr 2023
7. O'Driscoll, A.: Statistics and Facts: Human Error in Cybersecurity. Comparitech (2022). https://www.comparitech.com/blog/information-security/human-error-cybersecurity-stats/. Accessed 11 Mar 2023
8. Schulze, H.: Remote Work From Home Cybersecurity Report. Cybersecurity Insiders (2020). https://rs.ivanti.com/ivi/2537/682a478cc54f.pdf. Accessed 22 Mar 2023
9. Proofpoint, 2020. User Risk Report. Proofpoint. https://www.proofpoint.com/sites/default/files/2020-05/gtd-pfpt-us-tr-user-risk-report-2020_0.pdf. Accessed 10 Mar 2023
10. Netwrix, 2020. 2020 Cyber Threats Report. Netwrix. https://www.netwrix.com/download/collaterals/2020_Cyber_Threats_Report.pdf. Accessed 19 Mar 2023
11. Tessian, 2021. Security Behaviors Report. Tessian. https://www.tessian.com/resources/back-to-work-cybersecurity-behaviors-report/. Accessed 9 Apr 2023
12. Cisco, 2020. Five tips to enable a remote workforce securely. Cisco. https://www.cisco.com/c/dam/global/en_uk/products/collateral/security/secure-remote-worker-solution/srw-5tipsen.pdf. Accessed 17 Mar 2023
13. CMA, 2020. Best Cybersecurity Tips for Remote Workers. Cyber Management Alliance. https://www.cm-alliance.com/cybersecurity-blog/best-cybersecurity-tips-for-remote-workers. Accessed 18 Mar 2023
14. ESET, 2020. Working from home: Tips and Advice. ESET. https://www.eset.com/uk/working-fromhome-tips/. Accessed 15 Mar 2023
15. HP, 2020. HP Remote Worker Cybersecurity Best Practices. Hewlett Packard. https://h20195.www2.hp.com/v2/getpdf.aspx/4AA7-7194ENW.pdf. Accessed 24 Feb 2023
16. Kaspersky, 2022. Cyber Security Risks: Best Practices for Working from Home and Remotely. Kaspersky. https://www.kaspersky.co.uk/resource-center/threats/remote-workinghow-to-stay-safe. Accessed 24 Feb 2023
17. McAfee, 2021. How to Stay Connected and Protected in a Remote Work Environment. McAfee Blog. https://www.mcafee.com/blogs/tips-tricks/how-to-stay-connected-and-protected-in-a-remote-work-environment. Accessed 28 Feb 2023
18. Microsoft, 2020. 11 security tips to help stay safe in the COVID-19. Microsoft. https://www.microsoft.com/security/blog/2020/06/09/11-security-tips-stay-safe-covid-19-era/. Accessed 26 Feb 2023
19. Samsung, 2020. 8 tips for securing remote workforces. Samsung Business Insights, 25 Sep 2020. https://insights.samsung.com/2020/09/25/8-tips-for-securing-remote-workforces. Accessed 23 Feb 2023
20. ACSC, 2020. COVID-19: Cyber security tips when working from home. Australian Cyber Security Centre. https://www.cyber.gov.au/acsc/view-all-content/advisories/covid-19-cybersecurity-tips-when-working-home. Accessed 21 Feb 2023
21. CCCS, 2020. Security tips for organizations with remote workers (ITSAP.10.016). Canadian Centre for Cyber Security. https://cyber.gc.ca/en/guidance/telework-security-issues-itsap10016. Accessed 24 Mar 2023
22. CIS, 2020. 5 Online Safety Tips for Work. CIS. https://www.cisecurity.org/insights/blog/5online-safety-tips. Accessed 13 Mar 2023
23. CISA, 2020. Telework Essentials Toolkit. Cybersecurity & Infrastructure Security Agency. https://www.cisa.gov/sites/default/files/publications/20-02019b%2520-%2520Telework_Essentials-08272020-508v2.pdf. Accessed 23 Mar 2023
24. CPNI, 2021. Remote Working, including Working from Home, during COVID-19. Centre for the Protection of National Infrastructure, December 2021. https://www.npsa.gov.uk/resources/remote-working-during-covid-19. Accessed 29 Mar 2023
25. ENISA, 2020. Tips for cybersecurity when working from home. European Union Agency for Cybersecurity, 24 March 2020. https://www.enisa.europa.eu/tips-for-cybersecurity-when-workingfrom-home. Accessed 10 Mar 2023
26. FBI, 2021. Cybersecurity Awareness Month: Steps to Protect Yourself. Federal Bureau of Investigation. https://www.fbi.gov/video-repository/portland-cyber-home-102121.mp4/view. Accessed 4 Mar 2023
27. IIA, 2020. Security in a Work-From-Home Environment. Global Knowledge Brief, Institute of Internal Auditors, October 2020. https://iia.no/wp-content/uploads/2021/01/2020-GKBSecurity-in-a-Work-From-Home-Environment.pdf. Accessed 20 Mar 2023
28. NCSC, 2020. Home working: preparing your organisation and staff. National Cyber Security Centre. https://www.ncsc.gov.uk/guidance/home-working. Accessed 8 Mar 2023
29. NIST, 2020. Telework security overview & tip guide. US National Institute of Standards and Technology. https://www.nist.gov/document/telework-overview-and-tips-pdf. Accessed 19 Mar 2023
30. NSA, 2018. Best Practices for Securing Your Home Network. Cybersecurity Information Sheet, National Security Agency, September 2018. https://www.nsa.gov/portals/75/documents/what-we-do/cybersecurity/professional-resources/csi-best-practices-for-keeping-homenetwork-secure.pdf?v=1. Accessed 31 Mar 2023
31. SANS, 2020. Top 5 Tips for Working from Home Securely. SANS Institute. https://www.sans.org/blog/top-5-tips-for-working-from-home-securely/. Accessed 13 Mar 2023

On-Campus Hands-On Ethical Hacking Course Design, Deployment and Lessons Learned

Leonardo A. Martucci(B), Jonathan Magnusson, and Mahdi Akil

Karlstad University, Karlstad, Sweden
{leonardo.martucci,jonathan.magnusson,mahdi.akil}@kau.se

Abstract. In this paper, we report on designing and deploying an on-campus, highly practical ethical hacking course using the foundation of Kungl. Tekniska Högskolan's (KTH) existing, well-established, distance-based course. We explain our course organization, structure, and delivery and present the students' formative and summative feedback and their results. Moreover, we justify the choice of our platform, a custom GCP-based cyber range with twelve capture the flag exercises designed for an online ethical hacking course, and how our on-campus course was implemented around it. Our ethical hacking course is organized around ten mandatory lectures, seven flag reports and three lectures on ethics, two demonstrations, and four guest lectures. The student evaluation is continuous and based on the flags captured. Our collected data indicates the amount of effort spent on each exercise, the hints used, and for how long most of the students were actively solving the exercises. The students' feedback indicates they were overwhelmingly satisfied with the course elements and teaching staff. Finally, we propose changes to elements of our ethical hacking course. The course was delivered at Karlstad University over nine weeks between January and March 2023 for 24 students.

Keywords: Ethical hacking · education · ethics · cybersecurity · capture the flag

1 Introduction

© IFIP International Federation for Information Processing 2023. Published by Springer Nature Switzerland AG 2023. S. Furnell and N. Clarke (Eds.): HAISA 2023, IFIP AICT 674, pp. 76–90, 2023. https://doi.org/10.1007/978-3-031-38530-8_7

Ethical hacking education trains individuals to acquire the knowledge and skills necessary for testing and evaluating systems while considering ethical issues at all process stages. Modern ethical hacking has distanced itself from its 1980s cyberpunk origins in hacker culture, as described by [1,7], by incorporating a set of conduct rules based on moral philosophy. Ethical hacking education should be built upon two equally important pillars: the practical technical skills required to exploit vulnerabilities and the ethical considerations of practicing these skills. Ethical hacking has been connected to professional training and academic courses for the past twenty years. However, these often pay more attention to


the technical aspects of hacking and either neglect or completely ignore its ethical components, hence failing to provide students with a comprehensive understanding of the ethical questions, potential consequences, and personal responsibilities of hacking. It is essential to differentiate penetration testing and vulnerability analysis from ethical hacking. While the former two are bounded by legal and contractual aspects, primarily local in scope, the latter has universal value. In the past, we have reported our experiences from educating practitioners in vulnerability analysis. We now turn our attention to educating undergraduate students about ethical hacking. In this paper, we report on designing and deploying an on-campus, highly practical ethical hacking course using the foundation of an existing distance-based course from the Kungl. Tekniska Högskolan (KTH). We describe the course structure and organization, introduce the learning outcomes, and discuss the teaching and learning activities implemented around the practical exercises and discussions on ethics in cybersecurity. Moreover, we outline the challenges we faced, the lessons learned, the student feedback, and our plans for the course. Some details of the implementation, especially those related to the infrastructure and exercise content, are left purposely vague to avoid leaking solutions.

This paper is organized as follows. Section 2 outlines the infrastructure and organization of the distance-based ethical hacking course. Section 3 introduces our on-campus ethical hacking course, its design, learning outcomes, and the changes implemented from its original distance-based version. The results from the course are presented in Sect. 4, while Sect. 5 discusses the lessons learned and the proposed changes to the course. Section 6 concludes the paper.

2 Background

The ethical hacking course offered by KTH since 2017 features a hands-on project where students (ca. 500 in 2022) are tasked with independently attacking a corporate computer network to extract data [4,6]. The students are provided access to a cyber range through a VPN connection, which enables them to navigate the network horizontally and vertically. The cyber range also features flags that students must identify at specific locations within the network. The flags follow the competitive Capture The Flag (CTF) community's conventions, where players must identify unpredictable, recognizable strings in security-related challenges to earn points. The students submit them through the Learning Management System (LMS) Canvas, where an API script verifies their correctness. The score of submitted flags determines the student's grade. The Canvas system also manages a hint system, allowing students to request hints by paying with future points. The cost of hints varies based on their helpfulness, and the final hint for each flag costs the entire score but provides a comprehensive tutorial on how to solve the challenge.

2.1 Infrastructure

The cyber range comprises different Windows and Linux virtual machine hosts running various services. The instructors manage these through a custom-built


tool designed to create, build, and reset the instances on the Google Cloud Platform (GCP). The students are expected to scan the network, enumerate the hosts they find, discover vulnerabilities, exploit said vulnerabilities to get a foothold, escalate their privileges, and pivot to new machines in the network. In the context of the course, students are assigned to groups of 10 through a random selection process, and a distinct cyber range instance is provided to each group. As a result, students share resources and may encounter one another during the course. The cyber range machines reset every 24 h and in the event of accidental damage caused by students. If student A discovers a flag on day 1, it will be invalidated if subsequently submitted by student B on day 2. Moreover, the flags are unique to each cyber range instance; hence, a flag found by student A in one instance is useless to student B in another. The course instructors can monitor students' activities on the hosts in the cyber range and identify unusual behavior in the context of hint requests and flag submissions.

2.2 Course Setting

The ethical hacking course is offered in three different variations: an entire course, two halves, and a doctoral student course with an additional assignment. The course is delivered individually and online, supplemented with optional guest lectures on campus. The entire course comprises 17 flags, each carrying 10 points. The half courses have eight and nine flags, also worth 10 points each. Students must submit all flags and achieve a minimum of 30% of the total possible points. Hence, a passing grade requires 51 points for the entire course. An extra assignment for doctoral students entails designing and implementing a challenge for a future cyber range expansion. The Canvas page hosts reading materials and videos on various computer science and hacking topics to supplement the course. The course provides a forum and a support email, which experienced alums and doctoral students moderate. This approach fosters meaningful engagement between students and instructors, allowing for prompt resolution of technical issues and clarification of tasks. The forum captures frequently asked questions, so students need to contact the instructors only about new issues.
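The flag-scoring and hint-cost rules described in this section can be sketched as simple bookkeeping. This is a hypothetical illustration: the function names and example hint prices are assumptions, not the course platform's actual implementation.

```python
# Hypothetical sketch of the flag-scoring and hint-cost bookkeeping
# described above: 17 flags worth 10 points each, hints deduct from a
# flag's future score, and the final (tutorial) hint costs the flag's
# entire value. Passing requires all flags plus 30% of total points.

FLAG_VALUE = 10
NUM_FLAGS = 17
PASS_THRESHOLD = 0.30  # at least 30% of the total possible points

def flag_score(value, hint_costs):
    """Points earned for one captured flag after paying for hints."""
    return max(value - sum(hint_costs), 0)

def passed(scores):
    """All flags must be submitted and the total must reach the threshold."""
    total_possible = NUM_FLAGS * FLAG_VALUE  # 170 points for the entire course
    return (len(scores) == NUM_FLAGS
            and sum(scores) >= PASS_THRESHOLD * total_possible)  # 51 points

# A student who captures every flag, buying the full tutorial hint on two
# flags (cost = entire flag value) and one cheap hint (assumed cost 3):
scores = [flag_score(FLAG_VALUE, [FLAG_VALUE])] * 2        # two flags at 0 points
scores += [flag_score(FLAG_VALUE, [3])]                    # one flag at 7 points
scores += [flag_score(FLAG_VALUE, [])] * (NUM_FLAGS - 3)   # remaining flags at 10
print(sum(scores), passed(scores))  # 147 True
```

The sketch makes the trade-off explicit: even buying the complete tutorial for a few flags still leaves a student comfortably above the 51-point passing threshold, provided the remaining flags are solved unaided.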

3 On-Campus Ethical Hacking Course

Our on-campus ethical hacking course is part of a newly designed 2-year master’s program in Computer Science with an estimated 200 h of student effort (equivalent to 7.5 ects) distributed over nine weeks between January and March. It is a mandatory course in the program, with two prerequisites: an introductory cybersecurity course (7.5 ects), which covers the basics of computer, network, software, and web security, risk analysis, secure design principles, a privacy introduction, and practical laboratory exercises, and a course on operating systems (7.5 ects). Moreover, a mandatory cybersecurity course focused on cryptographic building blocks, and applied cryptography in communication protocols (7.5 ects) precedes the ethical hacking course.

On-Campus Hands-On Ethical Hacking Course

79

The learning outcomes of our course and their target cognitive process dimension following the revised Bloom’s taxonomy [5] are: (1) explain responsible reporting of vulnerabilities (understand), (2) use tools for penetration testing (apply), (3) analyze how tools for penetration testing work (analyze), (4) examine and identify evidence of attacks on log data (analyze), (5) reflect upon and evaluate countermeasures used in case of attacks (evaluate), (6) apply and reflect upon good operational security practices (apply, evaluate), and (7) evaluate the ethical (societal) aspects of using hacking techniques and tools (evaluate).1 An additional learning outcome related to student performance and evaluation in the course is also included in the list: give an account of the results of completed tasks, orally and in writing. This set of learning outcomes reinforces the overall goal of our ethical hacking course: to teach the practical technical skills required to exploit vulnerabilities and the ethical considerations of practicing these skills. We planned to achieve the learning outcomes by structuring the course around the following four complementary teaching activities: lectures on ethics, a set of hands-on exercises, demonstration lectures, and guest lectures. In the remainder of this section, we explain how the ethical hacking course was planned around these four activities, how they are delivered, and how they support achieving the course’s learning outcomes.

3.1 Course Planning and Platform of Choice

Our course prerequisites defined the target group of students we aimed to educate: those already familiar with cybersecurity and with some knowledge about applied cryptography and its limits. They assured us that the course participants would have some experience with security testing tools and platforms. Hence, selecting a suitable platform for delivering the practical element of the course was a critical step in its planning. By designing the course around the hands-on cybersecurity-oriented exercises, we planned the remaining teaching activities to achieve the learning outcomes not achieved by the practical exercises. Delivering such exercises can be done either with standalone ctf challenges (jeopardy style), such as the ones offered by HackTheBox and TryHackMe,2 or with cyber ranges, which are interactive, virtual, cloud-based simulated infrastructures used in cybersecurity exercises, such as Virginia Tech’s us Cyber Range, Jamk University’s Realistic Global Cyber Environment (rgce),3 and University of Regensburg’s ForCyRange [3]. Cyber ranges differ in scope, target groups, scenarios, and nature of their tasks. ForCyRange, for instance, aims at IoT devices, and Jamk’s rgce mimics a cyberattack response-team scenario focused on forensics and security management and communication. The advantage of standalone ctf challenges is that hacking exercises can be handpicked from one or more platforms and packaged for students to cover a desired scope of problems and tools. The advantage of cyber ranges is that they can

1. https://www3.kau.se/kursplaner/en/DVAD25 20231 en.pdf
2. https://www.hackthebox.com/ and https://tryhackme.com/
3. https://www.uscyberrange.org/ and https://jyvsectec.fi/cyber-range/

80

L. A. Martucci et al.

offer a more realistic scenario with many virtual platforms, a simulated network, a narrative, and progressive steps. The platform of our choice is described in Sect. 2. It is a custom cyber range designed and implemented for kth’s online ethical hacking course, where we have established links of academic collaboration in research and postgraduate education. It is integrated with Canvas lms, which we also use. Moreover, it is a well-established course, running for over five years, and popular among students, including some of our doctoral students. The coordinator of our ethical hacking course joined the teaching staff of their course as a guest lecturer to learn about the cyber range hacking exercises, their organization, and course management. With extensive support from their leading cyber range developer, we ported the central infrastructure to our own gcp instance, along with a significant part of the integration with our Canvas instance, with the help of a second developer, who is responsible for the lms interface. This step took approximately three months of intermittent effort and ended just as our ethical hacking course welcomed its first students.

3.2 On-Campus Course Implementation

The chosen cyber range has 17 flags of increasing difficulty, where the earlier flags are easier to capture than later ones. To limit our course to the planned, estimated effort of 7.5 ects and accommodate all learning activities, we included the first 12 individual capture-the-flag exercises in our course, which cover hacking challenges related to: (1) traffic sniffing, (2) web crawling and hacking, (3) password and hash cracking, (4) database hacking, (5) privilege escalation, (6) client-side attacks, (7) binary exploitation, and (8) remote exploitation. An on-campus ethical hacking course offers advantages and challenges over an online version (a discussion about advantages and disadvantages is presented in Sect. 5). A positive aspect is that it allows students to discuss their different solutions, tools used, and alternative paths to capture flags. However, this only works by requiring students to work at a given, pre-established pace. To implement that, we organized the course with weekly deadlines for the students to submit their captured flags. A mandatory “flag report” lecture was scheduled for seven consecutive weeks of the course for the students to discuss their solutions and the tools they used, to clarify how those tools worked, and to present their paths to the solution. Two flags were discussed in each of the first five flag report lectures and one in each of the last two. Attending the three planned lectures on ethics is mandatory in our course. They were organized as follows. An article on applying ethics to information technology issues [9] and the hacker manifesto [1] were given to the students to read before the first lecture on ethics, which aims at explaining the importance of ethics in hacking and giving an overview of moral philosophy, especially normative ethical theories. The remaining two lectures on ethics were planned around discussions of five selected episodes of Darknet Diaries,4 a

4. https://darknetdiaries.com/, episodes 47, 49, 82, 87, and 88


podcast about hackers, computer breaches, and related topics that involve questions about ethics and ethical norms applied to hacking, including coordinated vulnerability disclosure. The two demonstration lectures are practical classroom exercises. The first introduces specialized hardware and hacking gadgets, such as programmable usb keystroke-injection plugs and cables and rogue wi-fi access points, and includes a short practical exercise for the students to complete. The second demonstration lecture uses customized laptops with unpatched operating systems loaded with malware and ransomware installers, such as WannaCry and NotPetya, for the students to run and examine the outcome. These lectures aimed to demonstrate the tools and let the students experience and identify the outcome of malware. An introductory talk preceded the demonstration lectures. The four guest lectures were organized to cover learning outcomes, topics, and points of view not addressed by the other course components: the Chief Security Officer of a large business group in retailing and banking, who provided the point of view of the industry; an expert from the Signals Intelligence Agency; an academic with a research background in psychology, security, and decision-making, who talked about social engineering and dark patterns; and a doctoral student who leads the hacking club of his home university and has years of experience as a professional penetration tester.

3.3 Adjusting the Scoring System, Passing Conditions, and Grading

According to one of our doctoral students who attended the online ethical hacking course, it was possible to capture 30% of the total flag points quite early in the course, as the initial challenges are significantly more straightforward to solve than later ones. This was confirmed as a general problem in the online ethical hacking course by its cyber range lead developer; i.e., it is common for students to give up trying to solve challenges after capturing the minimum number of flags required to pass the course. To encourage active participation in our on-campus ethical hacking course, we introduced the “flag report” lectures with mandatory attendance, even for those who have not successfully solved the challenges, adjusted the scoring system according to the estimated difficulty of the challenges, and increased the passing minimum from 30% to 50% of the total possible points. These adjustments were discussed and agreed upon by one of our doctoral students who took the online version of the course and the coordinator of our ethical hacking course. Grading uses a four-grade scale: “fail” (f), “good” (c), “very good” (b), and “excellent” (a). Students who collected 62.5% or more of the points were eligible to pass with a b, and those with 75% or more with an a, as long as they attended the mandatory lectures on ethics and the flag reports. We redistributed the flag points as shown in Table 1: we halved the points for the first three challenges, which are easier to solve than the later flags. The maximum number of points for the 12 flags is 120 in both the online and on-campus ethical hacking courses. Flag 10 is the most difficult challenge in our set. It is a client-side attack that required careful tinkering and understanding of computer memory


Table 1: Flag points in the online and on-campus courses. The values in bold indicate the earliest stage a student would be able to pass the course, assuming that there are 12 flags to capture.

Course      Flag:  1   2   3   4   5   6   7   8   9  10  11  12
Online            10  10  10  10  10  10  10  10  10  10  10  10
On-Campus          5   5   5  10  10  12  12  12  10  15  12  12

layout and was worth 15 points. The objective was to give proportionally higher rewards to the flags that require the most effort to solve. The effect of the updated flag point values is that the earliest stage at which a student could collect enough points to pass the course is after completing the first eight challenges, compared to six had we kept the points equally distributed among the challenges. We adjusted the cost of all hints proportionally to the updated points. For example, a hint that costs 6 points in the online course, where every flag is worth 10 points, costs 3 points if the flag value is 5 points in the on-campus course. Therefore, the relative cost of the hints remained unchanged between the online ethical hacking course and our on-campus course.
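The scoring rules above (the grade thresholds, the earliest passing stage, and the proportional hint costs) can be summarized in a short sketch; the function names are ours, and the mandatory-attendance conditions are assumed met:

```python
# Sketch of the on-campus scoring rules (illustrative). Point values follow
# Table 1; thresholds follow the text: pass (c) at 50%, b at 62.5%, a at 75%.
from itertools import accumulate

ON_CAMPUS = [5, 5, 5, 10, 10, 12, 12, 12, 10, 15, 12, 12]
TOTAL = sum(ON_CAMPUS)  # 120 points, same as 12 online flags at 10 points

def grade(points, total=TOTAL):
    share = points / total
    if share < 0.50:
        return "F"
    if share < 0.625:
        return "C"
    if share < 0.75:
        return "B"
    return "A"

def earliest_pass(points, threshold=0.50):
    # Smallest number of flags, taken in order, reaching the passing threshold.
    needed = threshold * sum(points)
    for n, running_total in enumerate(accumulate(points), start=1):
        if running_total >= needed:
            return n

def on_campus_hint_cost(online_cost, campus_flag_value, online_flag_value=10):
    # Hint costs scale proportionally to the flag's point value.
    return online_cost * campus_flag_value / online_flag_value
```

With these values, `earliest_pass(ON_CAMPUS)` returns 8 and `earliest_pass([10] * 12)` returns 6, matching the eight-versus-six comparison in the text.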

3.4 Course Plan and Structure

The teaching staff consisted of four people: the course coordinator, who led the flag report lectures and the lectures on ethics; a teacher who prepared and delivered the two demonstration lectures; and two teaching assistants (doctoral students), one of whom had attended the online course. Two lectures were added in the first two weeks of the course: a course introduction, in which we explained the course structure, organization, and its elements, and an introductory flag report lecture delivered by the doctoral students to showcase how those lectures were going to be delivered and our expectations about the students’ attitudes and participation. We warned the students that the cyber range is a shared resource and might break, depending on the actions of their colleagues, and asked them to contact the course coordinator and the teaching assistants with direct messages on Canvas if the cyber range or one of its machines was unavailable. Canvas announcements were used to report service outages. Canvas direct messaging was cumbersome to manage because messages were not consistently sent or replied to all, which made them hard to track. We abandoned the direct messaging approach in the fourth week of the course and set up a forum on Canvas with the heading “Lab Status Notifications” available to all students and teaching staff. It was a better approach regarding transparency, but managing service tickets posted on a forum still needed improvement: there was no explicit indication of whether a problem was being attended to, or whether it had been solved. In the last two weeks of the course, we set up a support email account linked to a git repository running a GitLab Support Bot that opens


maintenance tickets, contacts the coordinator and teaching assistants by email, and allows the tickets to be sorted and categorized according to their status.

4 Course Delivery, Feedback and Results

The course was delivered for the first time between January and March 2023, with 29 students enrolled; five did not attend any lectures (no-shows). The remaining 24 students were from the following programs: 16 from our 5-year Master of Science in Computer Science and Engineering program, 2 from our 2-year International Master of Science in Computer Science program, 2 from our 3-year Bachelor’s in Computer Science program, 1 from our 3-year Bachelor’s in Computer Engineering program, and 3 Erasmus students. Erasmus is a European Union program that provides funding and support for student exchanges between universities in different European countries. Our on-campus ethical hacking course is mandatory for those in the two Master of Science programs (18 students) and elective for the other six students. We set up three cyber range instances, named matrices, with the help of the leading cyber range developer, manually assigned the students to them, and distributed their access credentials using Canvas direct messages. Matrices 1 and 2 had ten students assigned to each, and Matrix 3 had nine.

4.1 Student Feedback

Student feedback was collected continuously during the course at every mandatory lecture (flag reports and lectures on ethics) using Mentimeter,5 an interactive online presentation tool that enables audience participation and feedback, and two weeks after the end of the course through our university’s centralized student feedback and course evaluation system. Formative Feedback. Flag reports are highly interactive weekly sessions and a mandatory element of our course. Their objective is to review the hacking challenges scheduled for that week: students recapitulate the problem, explain their strategy to solve it, what they had tried (and why), the tools they used, how these tools work, and the solution. In addition, for every challenge to be solved, the following questions were presented to the students using Mentimeter:

– How difficult was the challenge in effort? (Multiple choice question)
– Was hint #i (where #i is the hint number) useful? (Multiple choice)
– Which tools did you use to solve the challenge? (Word cloud)
– My way to crack the problem was the following. (Open-ended question)

One answer per respondent was permitted on the multiple choice questions, up to four entries on the word clouds, and multiple submissions for the open-ended questions. The last slide in all flag reports and lectures on ethics is a

5. https://www.mentimeter.com/


Mentimeter muddy card slide. Muddy cards are anonymous exit tickets designed to obtain student feedback, identify students’ needs for additional support, and adjust instruction [10]. They are generally collected at the end of a lecture, analyzed, and revisited at the following lecture. Table 2 summarizes the self-reported effort for solving the ctf exercises and indicates the flag number corresponding to each exercise. Furthermore, it shows the maximum number of points awarded for that flag.

Table 2: Self-reported effort to solve the ctf exercises.

             18–25        26–35        36–45        46–55        >55
             Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.
Never          14    4.4    30   27.2    22   26.8     3   11.0     7    6.4
Always          6    0.3    53   63.7    66   62.9    38   25.9    15   15

I regularly check for antivirus updates (χ² = 66.235, df = 20, p-value < 0.01)

             18–25        26–35        36–45        46–55        >55
             Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.
Never          16    4.7    33   29.3    21   29       4   11.9     8    6.9
Always          3    8.8    46   54.4    55   51.9    36   22.1    12   12.8

I always check the spelling of the URLs… (χ² = 51.654, df = 20, p-value < 0.01)

             18–25        26–35        36–45        46–55        >55
             Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.
Never          12    3.2    18   20      16   19.8     7    8.1     3    4.7
Often           9   10.5    66   64.8    61   63.9    29   26.3    16   15.3

What do you do if you receive this email? (χ² = 28.496, df = 15, p-value < 0.05)

             18–25        26–35        36–45        46–55        >55
             Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.
Click on it     7    2      10   12.2    12   12       2    4.9     3    2.9
Ignore it      25   26.6   167  164.6   159  162.4    65   66.8    43   38.8
Report it       3    6      35   37.2    40   36.7    23   15.1     3    8.8
Contact HR      2    2.4    17   15      15   14.8     3    6.1     5    3.5

Which cybersecurity areas do you struggle with the most? (χ² = 39.289, df = 15, p-value < 0.01) (continued)

Evaluating the Risks of Human Factors

355

Table 1. (continued)

             18–25        26–35        36–45        46–55        >55
             Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.
Phishing        7    4.3    26   26.8    18   26.5    15   10.9     9    6.3
Spam            1    2.6    10   16.1    23   15.9     7    6.5     3    3.8
Hacking        11    4.9    25   30.1    29   29.7    13   12.2     6    7.1
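The "Exp." columns throughout these tables are the standard expected counts of a chi-square test of independence. A minimal sketch of the computation, using a small hypothetical 2×2 table rather than the paper's data (the printed tables omit some answer categories, so their expected values cannot be re-derived from the shown counts alone):

```python
# How the "Exp." (expected) counts and the chi-square statistic are derived,
# shown on a hypothetical 2x2 contingency table (not the paper's data).
observed = [[10, 20],
            [30, 40]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Expected count for cell (i, j) = row_total_i * col_total_j / grand_total
expected = [[r * c / grand for c in col_totals] for r in row_totals]

chi2 = sum((o - e) ** 2 / e
           for obs_row, exp_row in zip(observed, expected)
           for o, e in zip(obs_row, exp_row))
# Degrees of freedom = (rows - 1) * (columns - 1)
df = (len(observed) - 1) * (len(observed[0]) - 1)
```

A cell whose observed count sits far above its expected count (e.g. 14 observed versus 4.4 expected) is what drives a large chi-square statistic and a small p-value.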

Table 2. Job role and phishing. What do you do if you receive this email? (χ² = 36.644, df = 24, p-value < 0.05)

             Education    IT           Healthcare   Leadership   Business     Arts         Admin        Military
             Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.
Click on it    10    8.1     3    4       1    1       6    7.1     3    4.4     0    1.3     9    4.5     0    0.9
Ignore it     111  110      45   53.9    11   12.9    89   95.6    22   18.0    62   61.1    43   37.5    14   11.5
Report it      19   24.9    22   12.2     6    2.9    28   21.6    11   13.5     3    4.1     9   13.8     5    8.5
Contact HR     13   10.0     5    4.9     0    1.2    10    8.7     6    5.4     0    1.6     5    5.6     2    3.4

Table 3. Job role and behaviors.

I use antivirus software to protect my devices (χ² = 62.167, df = 32, p-value < 0.01)

             Education    IT           Healthcare   Leadership   Business     Arts         Admin        Military
             Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.
Never          19   18.2     3    8.9     2    2.1    15   15.8    18    9.9     0    3       9   10.1     6    6.2
Always         45   42.6    32   20.9     3    5      38   37      18   23.1     2    7      22   23.6    12   14.5

I regularly check the antivirus software update on my computer/laptop (χ² = 70.973, df = 32, p-value < 0.001)

             Education    IT           Healthcare   Leadership   Business     Arts         Admin        Military
             Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.
Never          27   19.6     3    9.6     2    2.3    16   17      17   10.6     0    3.2    10   10.9     4    6.7
Always         31   36.3    30   17.8     3    4.3    32   31.6    14   19.7     1    5.9    20   20.2    13   12.4

I always check the spelling of the URLs link before clicking or entering sensitive data (χ² = 51.749, df = 32, p-value < 0.05)

             Education    IT           Healthcare   Leadership   Business     Arts         Admin        Military
             Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.
Never          16   13.4     2    6.6     3    1.6    11   11.6    13    7.3     1    2.2     6    7.4     2    4.6
Always         44   39.2    33   19.2     3    4.6    29   34.1    13   21.3     6    6.4    19   21.8    11   13.3

According to the chi-square test, those who work in the educational, learning and training sector struggle the most to protect their passwords and safeguard themselves

356

F. B. Salamah et al.

from phishing-related issues—see Table 4. At the same time, employees working in healthcare support and IT struggle largely with spam messages and emails. Those working in military establishments were the most challenged by hackers, followed by those who work in arts, entertainment, and sports.

Table 4. Job roles and struggling areas. Where do you struggle the most? (χ² = 127.477, df = 64, p-value < 0.001)

             Education    IT           Healthcare   Leadership   Business     Arts         Admin        Military
             Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.
Privacy        59   59.8    21   29.3     8    7      49   52      34   32.4    15    9.8    43   33.2    13   20.3
Password       20   13.4     0    6.6     2    1.6    12   11.6     4    7.3     1    2.2     8    7.4     7    4.6
Phishing       19   13.4     7    6.6     2    1.6     8   11.6     7    7.3     0    2.2    10    7.4     3    4.6
Spam           12   10.8     8    5.3     4    1.3    11    9.4     5    5.8     0    1.8     5    6.0     0    3.7
Hacking        13   12.2     0    6       2    1.4    12   10.6     7    6.6     5    2.0     3    6.8     8    4.1

The chi-square test reveals three points of correlation between the employees’ academic qualifications and their cybersecurity awareness. Phishing was less understood

Table 5. Academic qualifications and cybersecurity awareness.

I use a combination of letters… when choosing a password (χ² = 48.458, df = 20, p-value < 0.001)

             Below Sec.   Secondary    College      Bachelor’s   Postgraduate
             Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.
Rarely          1    0.5     3    0.9     8    4.1    29   28       2   10.3
Often           0    2.2     3    4.1    19   19.1   124  130      58   47.8

Which of the following best describes ‘phishing’? (χ² = 40.684, df = 20, p-value < 0.001)

                  Below Sec.   Secondary    College      Bachelor’s   Postgraduate
                  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.
A type of attack     2    0.9     3    1.6     9    7.5    52   51      13   18.8
Gathering data       1    3.9     5    7.3    23   33.8   232  229      98   84.4
Get followers        0    0.3     0    0.5     7    2.4    14   16.6     5    6.1
Managing ships       0    0       0    0       1    0.1     0    0.6     0    0.2
I do not know        4    1.9     5    3.5    20   16.2   110  110.3    34   40.5

Which of the following do you consider a more secure link? (χ² = 24.217, df = 5, p-value < 0.01)

             Below Sec.   Secondary    College      Bachelor’s   Postgraduate
             Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.
HTTP            4    2       6    3.6    30   16.8   107  114.1    32   42
HTTPS           3    5       7    9.4    30   43.2   301  293.9   118  108.8


by employees with secondary or college education. Also, they ignored password security policies and were unable to identify legitimate websites. In short, the higher the employees’ educational qualifications, the lower the risk they pose—see Table 5. The chi-square test shows that employees with less than two years of experience are at great risk, as they were eager to click on a link in a phishing email—see Table 6. Employees with more than 25 years of experience also seem to need more training, because most of them prefer to ignore phishing emails. Employees with 10–20 years of experience struggle mostly with the protection of their privacy, and those with 2–5 years of experience struggle mostly with spam. Phishing poses a challenge to those with less than 2 years of experience—see Table 6.

Table 6. Years of experience and cybersecurity awareness.

What do you do if you received this email? (χ² = 34.406, df = 21, p-value < 0.05)

             …            …            …            …            15–20        20–25        >25
             Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.
Click on it     4    1.2     7    5.1     5    6.3     9    6.2     6    6.3     3    4.2     3    3.7
Ignore it      13   16.5    67   69      91   85.5    82   83.4    83   85.5    50   56.8    57   50.3
Report it       3    3.7    15   15.6    17   19.3    18   18.8    28   19.3    18   12.8     5   11.4
Contact HR      3    1.5     7    6.3     6    7.8     7    7.6     6    7.8     8    5.2     5    4.6

Where do you struggle the most? (χ² = 104.958, df = 56, p-value < 0.01)

According to the chi-square test, the time spent on social media and the understanding of cybersecurity concepts have a positive correlation—see Table 7 below. For instance, employees who spend between two and three hours per day on social media are more familiar with the concept of phishing than those who spend less than two hours per day. A positive correlation was also discovered between the time spent by employees on social media and their understanding of a secure link. Those who spend less than 30 min, and those who exceeded 3 h a day on social media, are less knowledgeable than others.

Table 7. Time spent on social media and cybersecurity awareness.

Which of the following options best describes ‘phishing’? (χ² = 38.866, df = 16, p-value < 0.01)

                  <30 min      …            …            2–3 h        >3 h
                  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.
A type of attack     8    3.3     6    7.6    22   20.9    26   21.5    18   26.8
Gathering data      12   14.6    37   34.3    87   93.9   102   96.8   122  120.4
Get followers        0    1.1     8    2.5     6    6.8     1    7      11    8.7
Managing ships       0    0       0    0.1     1    0.3     0    0.3     0    0.3
I do not know        6    7      10   16.5    51   45.1    43   46.5    63   57.8

Which of the following is considered a more secure link? (χ² = 38.866, df = 16, p-value < 0.01)

             <30 min      …            …            2–3 h        >3 h
             Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.  Count  Exp.
HTTP           12    7.3    10   17.1    45   46.7    42   48.1    70   59.9
HTTPS          14   18.7    51   43.9   122  120.3   130  123.9   144  154.1

Through our findings, we can confirm that age and job role are the two most significant factors associated with risk on social media. Younger employees are less informed than their older peers, and employees whose age is over 55 need more training than employees in other age groups. Business and finance employees tend to require more training, because they are the most attacked on social media—which is an issue also found by Pedley et al. [21]—followed by those who work in administrative positions and the educational sector. Factors such as work experience and time spent on social media are also correlated with cybersecurity awareness: those with more than 25 years of experience, and those with little experience, need more training than others, and people spending between 2 and 3 h a day on social media are more familiar with cybersecurity than those who spend less than 30 min a day or exceed 3 h a day on social media.

4.1 Social Media Risk Assessment

Three methods are commonly used to evaluate cybersecurity in an organization: risk assessment, vulnerability assessment, and penetration testing [10]. We pursued the first method, identifying resources, threats, and weaknesses to estimate risk. Following Chapple et al. [10], the total risk is the amount of risk an organization would face if no safeguards were implemented. The traditional formula for total risk involves threats and vulnerabilities:

Risk = Threat × Vulnerability

That is, risk can be understood as the possibility that a threat exploits a vulnerability to harm assets—a risk rises as a threat becomes more likely to occur. To reduce risks to an acceptable level, organizations should identify the elements that could harm assets and then implement strategies to prevent such harm. Given that employees are commonly considered the weakest link within an organization [10], we consider them our assets; the harm posed by employees’ attitudes and behaviors on social media constitutes the vulnerabilities. To quantify the risk, we propose establishing how much each human factor contributes to the total risk, using the percentages stated in Table 8. These percentages are only estimates based on the data we gathered; policymakers and trainers can take them as a starting point and amend them based on their own experience and the conditions of the organizations they work for.

Table 8. Suggested factor percentages for risk assessment.

Sub-factors                  Weight
Job Role (JR)                40%
Age (A)                      30%
Education Status (ES)        10%
Years of Experience (YE)     10%
Time Spent (TS)              10%

We propose an equation to help policymakers, those developing training materials, and cybersecurity trainers estimate the level of risk that a particular employee poses while using social media. Our risk weighting equation is stated below (Eq. 1) and uses the percentages listed in Table 8. We expect this equation to help stakeholders create applicable social media cybersecurity training and prioritize the training of individuals who are at ‘high risk’.

Risk = (JR × 0.4) + (A × 0.3) + (ES × 0.1) + (YE × 0.1) + (TS × 0.1)    (1)
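A sketch of how Eq. 1 and the risk bands described in the text (at most 0.35 low, below 0.65 medium, otherwise high) could be applied; the assumption that each factor score is normalized to [0, 1] is ours, as the paper does not fully specify the scale:

```python
# Sketch of Eq. 1 with the risk bands from the text. Factor scores are
# ASSUMED normalized to [0, 1]; the weights come from Table 8.
WEIGHTS = {"JR": 0.4, "A": 0.3, "ES": 0.1, "YE": 0.1, "TS": 0.1}

def risk_score(factors):
    # Eq. 1: weighted sum of the per-factor scores.
    return sum(w * factors[name] for name, w in WEIGHTS.items())

def risk_level(score):
    # Cut-offs proposed in the text: <= 0.35 low, < 0.65 medium, else high.
    if score <= 0.35:
        return "low"
    if score < 0.65:
        return "medium"
    return "high"
```

For example, a young business-sector employee with hypothetical scores JR = 0.9, A = 0.8 and 0.5 elsewhere gets 0.75 and would be prioritized as high risk.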


Job role is the most significant factor related to cybersecurity training preferences, perceptions, and needs [14]. Hence, we assign a higher weight to job role in Table 8 than to any other factor. That is also the reason why employees whose job role is to manage the organization’s social media accounts need to be trained extensively. The output derived from Eq. 1 indicates three different risk scenarios: low, moderate, and high risk. A value of at most 0.35 indicates low risk; a value greater than 0.35 and smaller than 0.65 indicates medium risk; and a value greater than 0.65 indicates high risk. These values are meant to estimate the risk associated with the employees in the organization while they are using social media, and we expect this to help prioritize training. Table 9 shows all the risks associated with the human factors considered in our study.

Table 9. Risk parameters.

Risk(s)                        Impact   Probability
Job Role(s): 40%
  Education and training         0.4       0.3
  Computer and technology        0.4       0.0
  Healthcare                     0.4       0.2
  Leadership and management      0.4       0.1
  Business and financial         0.4       0.4
  Art, sport, entertainment      0.4       0.1
  Office and administrative      0.4       0.4
  Military                       0.4       0.2
Age: 30%
  18–25                          0.3       0.3
  26–35                          0.3       0.3
  36–45                          0.3       0.1
  46–55                          0.3       0.1
  >55                            0.3       0.3
Education Status: 10%
  …                              0.1       0.1

Enable                   >>
Disable (deactivation)   [>
Concurrent               |||
Suspend-resume           |>
Choice                   []
Order Independent        |=|

objects) are depicted using labels preceded by the abbreviation of the type of data. A specific value for data can be a pre-condition to the execution of a task. Section 4 presents task models built with the HAMSTERS notation, along with their textual description to facilitate the reader’s understanding. The interested reader can find further information about the benefits of using task models, as well as instructions on how to read and generate task models in the Handbook of Human-Computer Interaction [11].
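As an illustration of the notation just described, a task tree with typed tasks and a temporal ordering operator can be sketched as a small data structure; the classes and the PIN alternative below are hypothetical and do not reflect the HAMSTERS tool's internal API:

```python
# Hypothetical sketch of a task tree using HAMSTERS temporal ordering symbols
# ("[]" choice, ">>" enable, "|||" concurrent, ...). The classes and the PIN
# alternative are illustrative only, not the tool's actual data model.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    kind: str = "abstract"     # e.g. abstract, user, interactive, system
    operator: str = ""         # temporal operator combining the subtasks
    subtasks: list = field(default_factory=list)

authenticate = Task(
    "Authenticate",
    operator="[]",  # choice between alternative authentication sub-goals
    subtasks=[
        Task("Authenticate with fingerprint"),
        Task("Authenticate with PIN"),
    ],
)
```

Representing models this way makes the later analysis steps (counting or comparing user actions across alternative designs) straightforward to automate.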

3 Task Model-Based Approach for the Validation and Evolution of Usable Security Patterns

Applying the right design patterns in the right contexts is a critical task in the management of security and usability conflicts. Therefore, to help designers and developers validate design patterns before use, we propose a task model-based approach for modelling the user tasks in line with the solution proposed by the design pattern. Based on the analysis of the task models thus generated, if a design pattern needs to be evolved, a proposal for modification is prepared for updating the design pattern accordingly. The approach is presented in Fig. 3 and has four steps:

– The first step involves generating the task model of the user tasks while interacting with the security mechanism according to the user requirements. This step identifies the usability issues arising with the security mechanism as it stands, based on the requirement specifications. It is important to note that the first step models the users’ tasks before applying the design pattern.
– The second step involves modelling user tasks with a usable security pattern that matches the security requirements. Once the user tasks from the requirement specification are known, the relevant design pattern is applied to generate new task models and to analyse how effectively it enables the usability of the security mechanism. This step produces task models of the user tasks after applying the usable security design pattern.
– The third step involves analysing the predicted effectiveness and efficiency criteria of usability from the task models. For this analysis, the outcomes of the first and second steps are compared to evaluate the effectiveness of the design pattern in the context under consideration. From this analysis, if the usable security design pattern is up to date, it is implemented in the product being developed; otherwise, it is subjected to evolution.
410

C. Martinie and B. Naqvi

– The fourth step involves the evolution of usable security design patterns that require modification based on the outcomes of the third step. The developers and designers involved in generating the task models formulate a list of modifications for the usable security design patterns requiring updates. The updates are then verified by the usable security patterns library manager, and the pattern is refined. Finally, the refined version of the design pattern is added to the catalogue.

Fig. 3. Four-step task model-based approach for the validation and evolution of usable security design patterns

The approach assists system designers and developers in managing such conflicts, allowing them to make reasonably accurate choices when using certain design patterns, and specifically enables them to: 1. validate the efficacy of the design patterns before applying them; 2. identify limitations in the solution proposed in the (existing version of the) usable security design patterns and prepare a proposal for evolution.
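The decision logic of steps 3 and 4 can be sketched in code. The following is a purely illustrative Python sketch, not part of the HAMSTERS tooling; the usability proxies used here, namely the number of interactive user actions and the number of available user choices, are simplifying assumptions introduced for the example:

```python
# Illustrative sketch: a task model is reduced to the set of user-visible
# actions and the number of alternative paths available to the user.
from dataclasses import dataclass, field

@dataclass
class TaskModel:
    name: str
    user_actions: list = field(default_factory=list)   # interactive input tasks
    user_choices: int = 0                              # paths the user may pick

def compare(before: TaskModel, after: TaskModel) -> str:
    """Step 3: compare the models from steps 1 and 2 on simple usability proxies."""
    if after.user_choices >= before.user_choices and \
       len(after.user_actions) <= len(before.user_actions):
        return "implement pattern"        # pattern judged effective as-is
    return "evolve pattern"               # step 4: prepare a modification proposal

before = TaskModel("authenticate (spec only)", ["push button", "place finger"], 1)
after = TaskModel("authenticate (with pattern)", ["push button", "place finger"], 1)
print(compare(before, after))  # → implement pattern
```

In a real analysis the comparison would be done by inspecting the generated HAMSTERS models, not by counting fields; the sketch only makes the accept/evolve decision of steps 3 and 4 concrete.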

4 Illustrative Case Study of the Approach: The Adaptable Authentication Design Pattern

Authentication, especially on personal devices such as smartphones, is a task that many people perform several times a day, every day of the year. The authentication mechanism therefore has a strong impact on the time users spend on the authentication task. The outcome of applying the proposed approach includes several task models.

4.1 Model User Tasks with the Authentication Mechanism

An extract of the task model describing user authentication is presented in Fig. 4. The main user goal is to authenticate (top-level task labelled "Authenticate"). The tasks below the main goal are performed using a smartphone (system element labelled "Sys: Smartphone" connected to the main goal in Fig. 4). The main goal decomposes into a choice (temporal ordering operator "[]") between several sub-goals: to authenticate with fingerprint (abstract task labelled "Authenticate with fingerprint"), to

On using the Task Models for Validation and Evolution

411

authenticate with password (abstract task labelled "Authenticate with password"), and to authenticate with voice (abstract task labelled "Authenticate with voice"). Depending on the configured authentication mechanism, the sub-goals are available or not for the task to be performed. If the configured authentication mechanism in the smartphone is a fingerprint, then the user will authenticate using the fingerprint (represented with a test arc from the software object labelled "Configured authentication mechanism" to the task "Authenticate with fingerprint"; the test arc indicates that the software object must have the value "fingerprint"). In the same way, if the configured authentication mechanism in the smartphone is a password (resp. voice), then the user will authenticate using the password (resp. voice).

The user achieves the sub-goal "Authenticate with fingerprint" using the tactile screen of the smartphone (represented using a line from the input/output device labelled "i/o D: Tactile screen") and through a sequence of the following tasks. First, the user pushes the power-up button (represented with the interactive input task labelled "Push power up button" connected with a line to the input device labelled "in D: Power up button"). Then, the tactile screen displays the screen saver (represented with the interactive output task labelled "Display screen saver") and concurrently (represented with the concurrency temporal ordering operator "|||") displays the fingerprint placeholder (represented with the interactive output task labelled "Display fingerprint place holder"). Then the user places a finger on the placeholder (represented with the interactive input task labelled "Place finger on place holder"). Then, the smartphone checks the fingerprint (represented by the system task labelled "Check fingerprint"), and the tactile screen displays the home screen (represented by the interactive output task labelled "Display home screen").
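The guarded choice described above, the "[]" operator with a test arc on the configured mechanism, can be paraphrased as a small Python sketch. This is illustrative only; the names mirror the labels in Fig. 4, but it is not an executable HAMSTERS model:

```python
# Sketch of the choice operator "[]" guarded by a test arc on the software
# object "Configured authentication mechanism": only the sub-goal whose
# precondition matches the object's value is enabled for the user.
def authenticate(configured_mechanism: str) -> str:
    sub_goals = {
        "fingerprint": "Authenticate with fingerprint",
        "password": "Authenticate with password",
        "voice": "Authenticate with voice",
    }
    # Once the mechanism is configured, the user has no alternative path.
    return sub_goals[configured_mechanism]

print(authenticate("fingerprint"))  # → Authenticate with fingerprint
```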
Once all tasks are performed, the user has reached the sub-goal "Authenticate with fingerprint", as well as the main goal "Authenticate". Sub-goals "Authenticate with password" and "Authenticate with voice" appear as pruned in the task model (represented with a '+' symbol at the bottom right of the abstract task). They also decompose into sub-tasks, but in this article we present in detail only the sub-goal "Authenticate with fingerprint" to instantiate the proposed approach. From this task model, we can infer that, at any time, there is only one way for the user to authenticate: once the authentication mechanism is configured, the user has no choice, even in case of a change of context (extreme lighting conditions, hands unavailable due to disability or gloves, noisy environment). A usable security pattern matching this context is the Adaptable authentication pattern [14] (page 17, Fig. 9 in [14]).

4.2 Model User Tasks with the Usable Security Pattern for Adaptable Authentication

An extract of the task model of the user tasks with the adaptable authentication pattern is presented in Fig. 5. The main user goal is to authenticate (top-level task labelled "Authenticate"), and all the tasks needed to reach this goal require the use of a smartphone (represented by the line connecting the main goal with the system labelled "Sys: Smartphone"). This main goal decomposes into a sequence of tasks. First, the user pushes the power-up button (interactive input task labelled "Push power-up button"). This task is performed using the power-up button (represented by the line connecting the


Fig. 4. Extract of the task model describing the user authentication with a mobile tactile device

interactive input task with the input device labelled "in D: Power-up button"). Then, the system infers the context (system task labelled "Infer context") using the sensed information (represented by an arc from the software object labelled "SW Obj: sensed information") and producing an inferred context (represented by an arc from the system task to the software object labelled "inferred context"). Then, the system selects the appropriate authentication mechanism using the inferred context (represented by an arc from the software object labelled "inferred context" to the system task labelled "Select appropriate authentication mechanism"), producing the software object "authentication mechanism". Then, depending on the authentication mechanism, the user is guided towards one of the authentication types (represented by the choice temporal ordering operator "[]" associated with the precondition test arcs from the software object labelled "Authentication mechanisms" to the abstract tasks "Authenticate with fingerprint", "Authenticate with password" and "Authenticate with voice"). For example, if the value of the software object labelled "Authentication mechanism" is "fingerprint", then the user is guided toward the abstract task "Authenticate with fingerprint". Sub-goals "Authenticate with fingerprint", "Authenticate with password" and "Authenticate with voice" appear as pruned in the task model (represented with a '+' symbol at the bottom right of the abstract task). They also decompose into sub-tasks, and the abstract task "Authenticate with fingerprint" is described in Fig. 4.

The adaptable authentication pattern aims to provide a balance between security and usability by proposing to the user the most suitable authentication method according to the current usage context [14]. From the model in Fig. 5, we analyse that once the


Fig. 5. Extract of the task model describing the user tasks with the adaptable authentication

user pushes the power-up button, the smartphone infers the user context from the sensed information. From the inferred user context, it selects the appropriate authentication mechanism, which in turn conditions the authentication method (fingerprint, password, or voice).

4.3 Analysis of the Usable Security Pattern for Adaptable Authentication

From the task model presented in Fig. 5, we can infer the following issues:
– The smartphone decides the appropriate way for the user to log in.
– If several authentication mechanisms are adapted to the context, only one mechanism is proposed. Thus the user does not decide how to log in, even though several mechanisms may be suitable for the user context.

4.4 Refine the Usable Security Pattern

In order to increase usability, a new version of the adaptable authentication pattern is proposed, along with the model of the corresponding user tasks (an extract of the model is presented in Fig. 6). Compared to the task model presented in Fig. 5, two new sub-trees have been added between the system task "Select appropriate authentication mechanism" and the choice temporal ordering operator "[]". The first refinement is that the system concurrently (represented with the temporal ordering operator "|||") displays the selected authentication mechanism (interactive output task labelled "Display selected authentication mechanism") and informs the user about the possibility to fall back to their preferred authentication mechanism (interactive output task labelled "Inform user about the possibility to fall back to your preferred authentication mechanism"). The interactive output task "Display selected authentication mechanism" produces information for the user (represented by an arc from the task to the information labelled "Inf: Selected authentication mechanism").


Fig. 6. Extract of the task model of the refined adaptable authentication pattern

The second refinement is the fallback to the user-preferred authentication mechanism (abstract task labelled "Fall back to configured authentication mechanism"). This abstract task is optional (optional symbol on the top right of the abstract task) and decomposes into a sequence of the following tasks: the user decides to fall back to the preferred authentication mechanism (cognitive decision task labelled "Decide to fall back to preferred authentication mechanism"), the user confirms the request to fall back to the preferred authentication mechanism (interactive input task labelled "Fall back to preferred authentication mechanism"), and the system sets the authentication mechanism (abstract task labelled "Set authentication mechanism"). The cognitive decision task requires information: the selected authentication mechanism and the preferred authentication mechanism (represented by the arcs from the information "Selected authentication mechanism" and from the information "Preferred authentication mechanism").

The analysis of the task model confirms that the new possible user tasks solve the issues that were raised during the previous step:
– The user is informed about the selected authentication mechanism according to the context (represented with the interactive output task labelled "Display selected authentication mechanism", which produces the information "Selected authentication mechanism"; information is data in the head of the user).
– The user is offered the possibility to fall back to the preferred authentication mechanism (represented with the interactive output task labelled "Inform user about the possibility to…").
– The user may decide to fall back to the preferred authentication mechanism, represented with the sub-goal "Fall back to configured authentication mechanism" (this


sub-goal is optional, as indicated by the blue arrow symbols on the top right of the sub-goal). These new actions available to the user should increase the effectiveness dimension of usability, because the user now has additional possibilities to reach the goal "Authenticate".
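The behaviour of the refined pattern can be paraphrased in a short Python sketch. This is illustrative only; the context-inference rules and thresholds below are invented for the example and are not part of the pattern in [14]:

```python
# Sketch of the refined pattern (Fig. 6): the system proposes a
# context-adapted mechanism but lets the user fall back to a preferred one.
def infer_context(sensed: dict) -> str:
    # Invented inference rules purely for illustration.
    if sensed.get("noise_db", 0) > 70:
        return "noisy"
    if sensed.get("gloves", False):
        return "hands_unavailable"
    return "normal"

def select_mechanism(context: str) -> str:
    return {"noisy": "fingerprint",
            "hands_unavailable": "voice",
            "normal": "fingerprint"}[context]

def authenticate(sensed: dict, preferred: str, fall_back: bool) -> str:
    selected = select_mechanism(infer_context(sensed))
    # First refinement: display the selection and inform about the fallback.
    print(f"Selected: {selected}; you may fall back to {preferred}")
    # Second refinement: optional sub-goal "Fall back to configured
    # authentication mechanism", taken only if the user decides so.
    return preferred if fall_back else selected

print(authenticate({"gloves": True}, preferred="password", fall_back=True))  # → password
```

The added return path is exactly what restores user choice, which is the effectiveness gain argued above.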

5 Related Work

The use of task models to take the usability of security mechanisms into account during the design phase has been proposed recently. The first contribution [1] focuses on user tasks and potential threats to these user tasks. Task models can be used along with attack trees to ensure that every potential threat to user tasks has been identified [1]. Task models have also been used to increase the number of potential cybersecurity threats detected with risk assessment methods [10]. In particular, the authors of [10] target the cybersecurity of the maritime supply chain and integrated task modelling into the MITIGATE risk assessment method, adding the capability to identify threats to user tasks, whether they stem from malicious attacks or human errors. Task models have also proven useful during the design phase to analyse the complexity of attacker tasks. The identification of possible attacks is not enough to analyse the security of an authentication mechanism, because the complexity of an attacker's tasks cannot be systematically foreseen from the high-level description of possible attacks [17]. Such an approach enables one to build pieces of evidence to argue for design choices. For example, if a security mechanism decreases usability but does not make the attacker's tasks more complex, it cannot be an acceptable design choice. These contributions highlight the benefits of using task models for the design of usable security mechanisms. Other types of model-based approaches can also help in analysing security properties. For example, process modelling supports the identification of potential errors and vulnerabilities in coordination activities [19]. This type of model-based approach is complementary to the proposed task model-based approach, which focuses on the interactions between the users and the interactive systems.

6 Conclusion

There is a collective consensus among industry and the academic research community that usable security issues must be considered in the development of systems and services. In this regard, design patterns can prove helpful in enabling designers and developers to evaluate the usability consequences of their security options and vice versa. Therefore, to help designers and developers apply the right design patterns in the right contexts, this paper proposed a four-step methodology based on generating task models of (1) the users' security interactions and tasks in line with the specified requirements, and (2) the users' security interactions and tasks after applying the usable security design patterns. The designers and developers can then evaluate the generated task models to see whether the design pattern contributes effectively towards enabling the usability of security. If it does, the design pattern is implemented; otherwise,


the limitations thus identified are used to update the solution proposed by the pattern. This approach is in line with patterns' ability to evolve over time and with the fundamental engineering principle of not reinventing the wheel, as the concept of reuse is integral to design patterns.

References

1. Broders, N., Martinie, C., Palanque, P., Winckler, M., Halunen, K.: A generic multimodels-based approach for the analysis of usability and security of authentication mechanisms. In: Bernhaupt, R., Ardito, C., Sauer, S. (eds.) HCSE 2020. LNCS, vol. 12481, pp. 61–83. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-64266-2_4
2. Cockton, G., Woolrych, A.: Understanding inspection methods: lessons from an assessment of heuristic evaluation. In: Blandford, A., Vanderdonckt, J., Gray, P. (eds.) People and Computers XV—Interaction without Frontiers, pp. 171–191. Springer, London (2001). https://doi.org/10.1007/978-1-4471-0353-0_11
3. Diaper, D.: Understanding task analysis for human-computer interaction. In: The Handbook of Task Analysis for Human-Computer Interaction. Lawrence Erlbaum Associates (2004)
4. Gould, J.D., Lewis, C.: Designing for usability: key principles and what designers think. Commun. ACM 28(3), 300–311 (1985)
5. Göransson, B., Gulliksen, J., Boivie, I.: The usability design process – integrating user-centered systems design in the software development process. Softw. Process Improv. Pract. 8(2), 111–131 (2003)
6. ISO 9241-210:2019(en), Ergonomics of human-system interaction—Part 210: Human-centred design for interactive systems. International Organization for Standardization (2019)
7. John, B., Kieras, D.E.: The GOMS family of user interface analysis techniques: comparison and contrast. ACM Trans. Comput.-Hum. Interact. 3(4), 320–351 (1996)
8. Johnson, P.: Human-Computer Interaction: Psychology, Task Analysis and Software Engineering. McGraw Hill, Maidenhead (1992)
9. Maguire, M.: Methods to support human-centred design. Int. J. Hum. Comput. Stud. 55(4), 587–634 (2001)
10. Martinie, C., Grigoriadis, C., Kalogeraki, E.M., Kotzanikolaou, P.: Modelling human tasks to enhance threat identification in critical maritime systems. In: PCI, pp. 375–380. ACM (2021)
11. Martinie, C., Palanque, P., Barboni, E.: Principles of task analysis and modeling: understanding activity, modeling tasks, and analyzing models. In: Vanderdonckt, J., Palanque, P., Winckler, M. (eds.) Handbook of Human Computer Interaction. Springer, Cham (2022)
12. Martinie, C., Palanque, P., Bouzekri, E., Cockburn, A., Canny, A., Barboni, E.: Analysing and demonstrating tool-supported customizable task notations. PACM Hum. Comput. Interact. 3(EICS), Article 12, 1–26 (2019)
13. Naqvi, B., Seffah, A., Abran, A.: Framework for examination of software quality characteristics in conflict: a security and usability exemplar. Cogent Eng. 7(1), 1788308 (2020)
14. Naqvi, B.: Towards aligning security and usability during the system development lifecycle. LUT University, Finland (2020). https://urn.fi/URN:ISBN:978-952-335-586-6
15. Naqvi, B.: Dissecting the security and usability alignment in the industry. In: Bernhaupt, R., Ardito, C., Sauer, S. (eds.) Human-Centered Software Engineering: 9th IFIP WG 13.2 International Working Conference, HCSE 2022, Eindhoven, The Netherlands, August 24–26, 2022, Proceedings, pp. 57–69. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-14785-2_4


16. Naqvi, B., Seffah, A.: Interdependencies, conflicts and trade-offs between security and usability: why and how should we engineer them? In: Moallem, A. (ed.) HCI for Cybersecurity, Privacy and Trust: First International Conference, HCI-CPT 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings, pp. 314–324. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-22351-9_21
17. Nikula, S., Martinie, C., Palanque, P., Hekkala, J., Latvala, O., Halunen, K.: Models-based analysis of both user and attacker tasks: application to EEVEHAC. In: Bernhaupt, R., Ardito, C., Sauer, S. (eds.) Human-Centered Software Engineering: 9th IFIP WG 13.2 International Working Conference, HCSE 2022, Eindhoven, The Netherlands, August 24–26, 2022, Proceedings, pp. 70–89. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-14785-2_5
18. O'Donnell, R.D., Eggemeier, F.T.: Workload assessment methodology. In: Handbook of Perception and Human Performance, vol. II: Cognitive Processes and Performance, pp. 42-1–42-49. Wiley (1986)
19. Osterweil, L.J., et al.: Iterative analysis to improve key properties of critical human-intensive processes: an election security example. ACM Trans. Priv. Secur. 20(2), Article 5 (2017)

Chatbots: A Framework for Improving Information Security Behaviours using ChatGPT

Tapiwa Gundu(B)

Nelson Mandela University, Gqeberha, South Africa
[email protected]

Abstract. This paper proposes a framework for improving information security behaviours using ChatGPT, a natural language processing chatbot. The framework leverages the Theory of Planned Behaviour (TPB) and Persuasion Theory to promote secure behaviours through targeted interventions such as education, training, gamification, security tips, reminders, and nudges. ChatGPT can provide personalized and interactive training on information security best practices, deliver relevant threat alerts and tips, and assist with security assessments. Gamification can be used to increase engagement and retention of information security knowledge. The use of nudges and reminders can help sustain secure behaviours over time. Overall, the ChatGPT framework offers a promising approach for organizations looking to enhance their cybersecurity posture by promoting a culture of information security and empowering individuals with the knowledge and tools to protect sensitive information. The framework was validated by prompting ChatGPT to raise awareness; the responses were then reviewed by experts to verify the quality of ChatGPT's content.

Keywords: ChatGPT · Chatbot · Information security awareness · Security training · Cybersecurity

1 Introduction

The increasing dependence on technology and digital platforms has brought about significant advancements in many aspects of our lives, including communication, commerce, and social interaction. However, this reliance on technology has also led to an increase in cybersecurity threats, including hacking, malware, phishing, and identity theft. Despite the implementation of various security measures, such as firewalls, antivirus software, and encryption, the human factor remains a significant vulnerability in cybersecurity. Individuals and employees often lack awareness and understanding of cybersecurity threats, making them easy targets for cybercriminals. In recent years, the number of cyberattacks has increased significantly, resulting in substantial financial losses and reputational damage to organizations. Many of these attacks are preventable, but a lack of awareness and education about information security is a significant contributing factor.

© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
S. Furnell and N. Clarke (Eds.): HAISA 2023, IFIP AICT 674, pp. 418–431, 2023. https://doi.org/10.1007/978-3-031-38530-8_33


Traditional methods of information security education typically involve formal training programs and policies designed to promote information security awareness and best practices among employees [1]. These methods may include classroom-style training, online training modules, and employee handbooks or manuals that outline organizational policies and procedures related to information security [2–4]. In-classroom training sessions may involve presentations, lectures, and demonstrations by information security professionals, and may cover a range of topics, including password management, data encryption, phishing awareness, and social engineering [2]. Online training modules may use interactive and multimedia elements, such as videos, quizzes, and simulations, to engage learners and reinforce key concepts. Employee handbooks or manuals may provide guidance on how to handle sensitive information, as well as outline the consequences of violating organizational policies and procedures [5]. These traditional methods, however, may be limited in engaging learners and sustaining behaviour change over time [5], and by the costs of facilitating training. Newer approaches, such as those that incorporate chatbots and gamification, may offer more personalized and engaging ways to promote information security awareness and behaviour change [6]. Chatbots are computer programs designed to simulate conversation with human users, typically through text or voice interactions [6]. Chatbots can be used for a variety of purposes, such as customer service, sales, and information retrieval. ChatGPT is a type of chatbot that uses natural language processing (NLP) to generate human-like responses to user queries [7]. ChatGPT is based on a deep learning model that can generate coherent and contextually relevant responses to a wide range of queries [7]. The remainder of the paper is organized as follows. Section 2 discusses the theoretical underpinnings of this study.
This will be followed by the introduction of the dimensions of this study in Sects. 3–7. Section 8 discusses the methods followed for this study, and Sect. 9 introduces the proposed framework. The results, discussion, and conclusion then follow in Sects. 10, 11 and 12, respectively.

2 Theoretical Background

2.1 Theory of Planned Behaviour

The Theory of Planned Behaviour (TPB) can be used to explain how ChatGPT can improve information security behaviour. The TPB suggests that behaviour is influenced by three factors: attitudes, subjective norms, and perceived behavioural control [9], as shown in Fig. 1.

2.2 Persuasion Theory

Persuasion theory suggests that effective communication involves understanding the audience, the message, and the context [10]. This is diagrammatically represented in Fig. 2. Persuasion theory explains how people can be persuaded to change their attitudes and behaviours through communication. People are more likely to be persuaded by messages that are tailored to their individual needs and preferences [10]. In the context of

420

T. Gundu

Fig. 1. Theory of Planned Behaviour [9]

Fig. 2. Persuasion Theory [11]

information security, this means designing communication strategies that are personalized to the specific needs and characteristics of different user groups, such as employees with different roles or levels of technical expertise.
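As a toy illustration of how the TPB factors could be operationalised, consider the following sketch. The linear scoring and the weights are assumptions made here for illustration; they are not part of Ajzen's theory or of this paper's framework:

```python
# Toy operationalisation of the TPB: behavioural intention rises with
# attitude, subjective norm, and perceived behavioural control, each
# scored 0..1. Weights and linear form are illustrative assumptions.
def intention(attitude: float, subjective_norm: float, control: float,
              weights=(0.4, 0.3, 0.3)) -> float:
    scores = (attitude, subjective_norm, control)
    return round(sum(w * s for w, s in zip(weights, scores)), 2)

# A chatbot intervention that improves attitude and perceived control
# should raise the predicted intention to behave securely:
print(intention(0.5, 0.5, 0.5))  # before intervention → 0.5
print(intention(0.8, 0.5, 0.7))  # after intervention → 0.68
```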

3 Chatbots

Chatbots can be programmed to perform a wide range of tasks, from answering basic questions and providing customer service support to assisting with online shopping and booking travel arrangements [12]. They can be integrated into various messaging platforms, such as Facebook Messenger, WhatsApp, and Slack, and can also be found on websites and mobile apps. There are two main types of chatbots: rule-based and AI-powered. Rule-based chatbots are programmed to respond to specific keywords or commands, following a predefined set of rules. They are typically used for simple tasks, such as answering frequently asked questions or providing basic customer support. AI-powered chatbots use machine learning algorithms and natural language processing to understand and interpret human language, allowing them to have more natural and sophisticated conversations. They can learn from past interactions and improve over time, providing more personalized and effective responses.
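A rule-based chatbot of the kind described above can be sketched in a few lines of Python; the keywords and answers are invented for illustration:

```python
# Minimal rule-based chatbot: it matches keywords against a fixed rule
# set, with no learning or genuine language understanding.
RULES = {
    "password": "Use a long passphrase and never reuse it across sites.",
    "phishing": "Check the sender address and hover over links before clicking.",
    "wifi": "Avoid sensitive logins on public Wi-Fi; prefer a VPN.",
}

def rule_based_reply(message: str) -> str:
    for keyword, answer in RULES.items():
        if keyword in message.lower():
            return answer
    return "Sorry, I can only answer basic security questions."

print(rule_based_reply("How do I spot a phishing email?"))
```

An AI-powered chatbot replaces the keyword lookup with a learned language model, which is what allows the more natural conversations mentioned above.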

Chatbots: A Framework for Improving Information Security Behaviours

421

4 ChatGPT

ChatGPT is currently the most widely used AI-powered chatbot; it provides natural language processing capabilities and generates human-like responses. ChatGPT is a language model developed by OpenAI that is trained on a massive amount of text data and can generate human-like responses to natural language input [13]. It can be fine-tuned to perform specific tasks, such as answering customer service inquiries or providing personalized recommendations [8]. ChatGPT has a variety of potential applications, including chatbots, virtual assistants, and automated content generation. It has been used to develop chatbots for a range of industries, from healthcare to finance to customer service [8]. Overall, ChatGPT is an impressive advancement in natural language processing and has the potential to significantly improve the efficiency and effectiveness of conversational interfaces.
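For illustration, a security-awareness prompt for a ChatGPT-style chat-completion API might be assembled as follows. The model name and message schema follow OpenAI's chat format at the time of writing; treat them as assumptions and consult the current API documentation before use:

```python
# Sketch: build the request payload an awareness tool might send to a
# chat-completion endpoint. No network call is made here.
def build_awareness_request(topic: str, audience: str) -> dict:
    return {
        "model": "gpt-3.5-turbo",  # assumed model name; check current docs
        "messages": [
            {"role": "system",
             "content": "You are an information security awareness trainer. "
                        f"Tailor advice to {audience}."},
            {"role": "user",
             "content": f"Give three practical tips about {topic}."},
        ],
    }

req = build_awareness_request("phishing", "non-technical staff")
print(req["messages"][1]["content"])  # → Give three practical tips about phishing.
```

Personalisation, in the sense of the Persuasion Theory discussed earlier, is achieved simply by varying the audience and topic parameters per user group.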

5 Information Security

Information security refers to the practice of protecting information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction [14]. It involves a range of measures and technologies designed to ensure the confidentiality, integrity, and availability of information. Information security includes a variety of techniques, such as access controls, encryption, firewalls, intrusion detection and prevention systems, and vulnerability assessments [15]. It is important in ensuring the privacy of personal information, maintaining the integrity of business data, and safeguarding national security information. Some common information security threats include cyberattacks, malware, phishing, and social engineering. Effective information security requires a combination of technical controls, policies and procedures, and employee training and awareness [16].

5.1 Security Solutions

Proactive Security Solutions. Proactive security solutions refer to measures taken to prevent security incidents before they occur, rather than reacting to incidents after they happen [17]. This approach follows the common metaphor: if a ship has a hole, it is better to patch the hole than bail the water. These solutions are designed to identify and address potential vulnerabilities and threats before they can be exploited by attackers. Some examples of proactive security solutions include security awareness training [18], risk assessments [19], penetration testing [20], security monitoring [21], security audits [22], and threat intelligence [23].

Reactive Security Solutions. Reactive security solutions refer to measures taken after a security incident has occurred, with the aim of limiting the damage caused by the incident and preventing similar incidents from happening in the future [17].
Reactive security measures are typically less effective than proactive measures, as they can only address the symptoms of a security problem rather than its root causes. Some examples of reactive security solutions include incident response [24], forensic analysis [19], patch management [19], data backup and recovery [24], and security incident reporting [24].


6 Information Security Awareness and Training

Information security awareness and training refer to the processes of educating individuals and organizations on the importance of protecting sensitive data and systems from unauthorized access, use, disclosure, modification, or destruction [25]. The goal of information security awareness and training is to promote a culture of security that ensures the confidentiality, integrity, and availability of information assets [26]. While both awareness and training are important components of information security education, there are some key differences between them: their scope, depth, delivery, and goals. Awareness generally has a broader scope than training, while training is usually more in-depth than awareness. Awareness is usually delivered through many communication channels, while training is usually conducted in a structured learning environment. The goals of awareness and training also differ [27, 28]. The primary goal of awareness is to create a culture of security within an organization by promoting best practices and behaviours that reduce the risk of security incidents. The primary goal of training is to equip individuals with the skills and knowledge needed to perform specific tasks or functions related to information security.

7 Using Chatbots for Information Security Awareness and Training

Chatbots can be a useful tool for information security awareness and training, as they provide an interactive and engaging way for users to learn about security best practices and stay up to date on the latest threats and vulnerabilities. Table 1 shows the common themes associated with chatbot use in information security awareness and training.

7.1 Advantages of Using Chatbots

Chatbots can provide real-time feedback to individuals, enabling them to identify and rectify security risks promptly [6]. For example, if an individual clicks on a suspicious link, the chatbot can immediately provide feedback on the potential risks and the steps to take to mitigate them. Chatbots are available 24/7 to provide information security education to individuals, eliminating the need for scheduled training sessions [30]. They can provide interactive learning experiences that engage individuals and encourage them to participate in the learning process. Chatbots can use multimedia content such as videos, images, and quizzes to help individuals understand the importance of information security [34]. They can also be a cost-effective way of conducting security simulations [30]. Instead of hiring a dedicated team to conduct security simulations, chatbots can provide a more efficient and cost-effective alternative.
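A minimal sketch of the kind of heuristic check a chatbot could run before giving real-time feedback on a pasted link follows. It is deliberately simplistic and invented for illustration; production tools rely on URL reputation services rather than string matching:

```python
# Illustrative heuristic for real-time link feedback in a security chatbot.
from urllib.parse import urlparse

SUSPICIOUS_SIGNS = ("login", "verify", "update-account")

def link_feedback(url: str) -> str:
    host = urlparse(url).netloc
    if not url.startswith("https://"):
        return "Caution: the link is not using HTTPS."
    if any(sign in url.lower() for sign in SUSPICIOUS_SIGNS):
        return (f"Caution: '{host}' asks you to log in or verify; "
                "double-check the domain.")
    return "No obvious red flags, but stay alert."

print(link_feedback("http://examp1e-bank.com/verify"))  # → Caution: the link is not using HTTPS.
```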

8 Methods

Because ChatGPT was relatively new (less than five months old at the time this paper was submitted), there were no articles specifically on ChatGPT for security awareness and training to review. This paper therefore presents a comprehensive review of the existing literature on chatbots in general. Chatbots used for information security awareness and education are similar in that their main goal is to disseminate knowledge.

Chatbots: A Framework for Improving Information Security Behaviours


Table 1. Literature Review

Theme 1, Personalised Question and Answer: Users can ask chatbots questions about specific topics related to information security, and chatbots can provide accurate and relevant answers. Chatbots can analyse the user's previous interactions and adapt the content. Articles: [12, 13, 29–31]

Theme 2, Simulations: Chatbots can simulate real-world security scenarios, such as a phishing attack or malware infection, and guide users through the steps they should take to respond to the threat. Chatbots can simulate phishing attacks to train individuals on how to identify and respond to such attacks. Articles: [6, 13, 32, 33]

Theme 3, Providing Security Alerts and Tips: Chatbots can be used to provide users with security tips and alerts in a conversational and interactive manner. For example, a chatbot could provide tips on how to create strong passwords, how to avoid phishing scams, or how to secure a home network. Articles: [6, 30, 34, 35]

Theme 4, Real-Time Assistance: Chatbots can provide real-time assistance to users by analysing their interactions and providing feedback on potential security risks. For example, if a user receives a suspicious email, a chatbot can analyse the email and provide guidance. Articles: [13, 29–31, 35]

Theme 6, Gamifying Security Awareness: Chatbots can be used to create interactive games and quizzes that help to reinforce key security concepts and best practices. This can help to make security awareness more engaging and enjoyable for users. Articles: [30, 32, 35]

Theme 7, Conducting Security Assessments: Chatbots can be used to conduct security assessments or quizzes to help users test their knowledge of security best practices. This can help identify areas where users may need additional training or education. Articles: [13, 35, 36]

Seven themes were identified: training on best practices, threat alerts, personalised recommendations, simulations, gamification, security alerts and tips, and real-time responses. Ten experts in the field were approached to review the framework and to add their insights. The themes, the expert insights, and the underlying theoretical background then formed the basis on which the proposed framework discussed in Sect. 9 was formulated. A desktop validation was then conducted to verify whether ChatGPT could indeed be used to conduct information security awareness and training. The method used in this study is presented graphically in Fig. 3.

Fig. 3. Method followed to conceptualise and validate the framework (literature review and theories informing a draft framework, refined through expert review and validation)

9 ChatGPT Based Information Security Behavioural Change Framework

The framework in this paper is based on a statement by Sun Tzu (Sunzi, in [37]): "if you know both the enemy and yourself, you will fight a hundred battles without danger of defeat; if you are ignorant of the enemy but only know yourself, your chances of winning and losing are equal; if you know neither the enemy nor yourself, you will certainly be defeated". This study links this statement back to its underlying theories (the Theory of Planned Behaviour and Persuasion Theory). In terms of the Theory of Planned Behaviour, the statement speaks to how both an understanding of the potential threats or "enemies" and a knowledge of one's own vulnerabilities and strengths
are crucial to making informed decisions about how to protect oneself or one's organization. This knowledge can inform attitudes towards security measures and perceived behavioural control, both key factors in the theory of planned behaviour. For example, if an individual understands the potential threats they face and their own vulnerabilities, they may have a more positive attitude towards implementing strong security measures and feel more confident in their ability to do so. Similarly, if an individual perceives that their peers or organization place a high value on security, they may feel more compelled to prioritize security measures. On the other hand, if an individual is ignorant of the potential threats or their own vulnerabilities, their chances of effectively defending against attacks may be diminished. In this case, attitudes towards security measures and perceived behavioural control may be lower, leading to less effective security practices.

In terms of Persuasion Theory, the statement emphasizes the importance of understanding the target audience and tailoring persuasive messages to their specific needs and motivations. By understanding the threats and vulnerabilities that users face, as well as their attitudes and behaviours towards information security, messaging and communication strategies can be developed to effectively persuade them to adopt security practices and policies. Additionally, these persuasive messages are processed heuristically and systematically by the users and should persuade them to take action to protect themselves and their information (Fig. 4).

Fig. 4. ChatGPT-based information security behavioural change framework


9.1 Persuasive Message and Audience

Personalised Training on Best Practices: ChatGPT can be used to provide training on information security best practices, including password management, phishing prevention, safe browsing, and other aspects of information security. ChatGPT can be programmed to deliver this information in an engaging and interactive way, making it easier for users to learn and retain it. ChatGPT can also provide personalized recommendations based on a user's behaviour. For example, if a user tends to use weak passwords, ChatGPT can suggest stronger password options; if a user frequently opens suspicious emails, ChatGPT can recommend ways to identify and avoid phishing scams.

Threat Alerts and Tips: ChatGPT can be programmed to provide real-time threat alerts to users. For example, if there is a new phishing scam or malware attack, ChatGPT can notify users and provide guidance on how to avoid it. ChatGPT can also provide users with tips and best practices for staying safe online, such as advice on creating strong passwords, using two-factor authentication, avoiding phishing scams, and keeping software up to date. These tips can be delivered as daily or weekly reminders, or users can ask ChatGPT for advice whenever they have a security-related question.

Conducting Security Assessments: ChatGPT can conduct security assessments by leveraging its natural language processing capabilities to interact with individuals or groups and gather information about their information security practices, knowledge, and awareness. ChatGPT could ask the user a series of questions about their practices, such as what kind of sensitive information they handle at work, whether they use encryption to protect their sensitive data, and whether they have received any phishing or social engineering attempts in the past. These questions can help gauge the user's level of information security knowledge and awareness.

Gamification: ChatGPT can use gamification for cybersecurity awareness. For example, it can create cybersecurity challenges that require users to demonstrate their knowledge and skills in areas such as password security, phishing prevention, and data protection. Users can earn points and badges for completing challenges, and their progress can be tracked over time.

Reminders and Nudges: ChatGPT can use reminders and nudges to promote cybersecurity awareness and encourage users to adopt secure behaviours. Examples include password reminders, two-factor authentication (2FA) nudges, phishing awareness reminders, software update reminders, and social engineering awareness nudges. By using reminders and nudges, ChatGPT can help reinforce cybersecurity best practices, promote a culture of security awareness, and help reduce the risk of cyber-attacks.
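As a concrete illustration of the assessment and gamification ideas above, a minimal quiz-style security assessment can be sketched in a few lines of Python. This is an illustrative sketch only, not the paper's implementation: the questions, the scoring rule, and the `assess` function are invented for the example, and a real deployment would generate the questions and feedback through ChatGPT itself.

```python
# Illustrative sketch of a quiz-style security assessment "chatbot" turn.
# The questions and scoring rule are invented for this example.

QUESTIONS = [
    ("Should you reuse the same password across work accounts?", "no"),
    ("Is it safe to click links in unexpected emails asking for credentials?", "no"),
    ("Should software security updates be installed promptly?", "yes"),
]

def assess(answers):
    """Score the user's yes/no answers and collect per-question feedback."""
    score, feedback = 0, []
    for (question, correct), given in zip(QUESTIONS, answers):
        if given.strip().lower() == correct:
            score += 1
            feedback.append(f"Correct: {question}")
        else:
            feedback.append(f"Review needed: {question}")
    return score, feedback

score, feedback = assess(["no", "yes", "yes"])
print(f"Security awareness score: {score}/{len(QUESTIONS)}")
# Questions flagged "Review needed" indicate areas for further training.
```

In the framework's terms, the score supports gamification (points over time), while the per-question feedback drives the personalised training and nudges described above.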


9.2 Heuristic and Systematic Processing

Attitudes: Attitudes refer to a person's positive or negative evaluation of a behaviour. ChatGPT can help improve attitudes towards information security behaviour by providing engaging and interactive training that makes users feel empowered and knowledgeable about how to protect their sensitive information [38]. By providing personalized recommendations and real-time alerts, ChatGPT can also help users see the benefits of following best practices.

Subjective norms: Subjective norms refer to the influence of social norms and expectations on behaviour [39]. ChatGPT can help create a culture of information security by providing consistent messaging and reminders to users. By gamifying the experience and providing rewards for following best practices, ChatGPT can also make it more socially desirable to prioritize information security.

Perceived behavioural control: Perceived behavioural control refers to a person's belief in their ability to perform a behaviour [39]. ChatGPT can help improve perceived behavioural control by providing easy-to-understand instructions and guidance on how to follow best practices. By providing reminders and nudges, ChatGPT can also help users feel more confident in their ability to remember and follow through on information security behaviours.

9.3 Persuasion

Intention and behaviour are two related concepts in psychology that are often used to explain human actions [11]. Intention refers to a person's conscious decision or plan to engage in a particular behaviour, whereas behaviour refers to the actual execution of that behaviour [39]. Intention plays a crucial role in determining whether individuals adopt secure behaviours. For example, if an individual believes that using strong passwords is important for protecting their personal information and they feel confident in their ability to create and remember strong passwords, they are more likely to intend to use strong passwords and actually do so. TPB posits that intention is influenced by attitudes, subjective norms, and perceived behavioural control, and that a strong intention to perform a behaviour is more likely to result in its actual execution [39]. Therefore, interventions that target these three factors can help promote intention and encourage individuals to adopt secure behaviours.

10 Results

The review of the literature and the expert interviews revealed seven prominent themes, which were consolidated into five and used to formulate the framework proposed in this study. These themes were then formulated into prompts to test whether ChatGPT could provide useful information for each construct. The ten experts rated the results of the ChatGPT prompts and gave their satisfaction level per construct; the averaged ratings are shown in Table 2.

Table 2. Results

Construct from framework | ChatGPT prompt | Result satisfactory? | Satisfaction level
Personalised Training on Best Practices | "I am an organisational employee, train me on information best practice" | Yes – 8; No – 2; In-between – 0 | 80%
Threat Alerts & Tips | "I am an organisational employee, provide me with the latest threat alerts & tips" | Yes – 8; No – 1; In-between – 1 | 85%
Conducting Security Assessments | "Provide 10 multiple-choice questions to assess information security awareness" | Yes – 9; No – 0; In-between – 1 | 95%
Gamification | "Provide a game for information security that I can play against ChatGPT" | Yes – 9; No – 0; In-between – 1 | 95%
Reminders and Nudges | "Can you provide nudges and reminders for information security weekly" | Yes – 8; No – 0; In-between – 2 | 90%
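The satisfaction levels in Table 2 are consistent with a simple scoring rule in which each "Yes" counts as fully satisfactory and each "In-between" as half. This rule is inferred from the reported numbers, not stated in the paper; the sketch below reproduces the percentages under that assumption.

```python
# Inferred scoring rule (an assumption, not stated in the paper):
# satisfaction = (yes + 0.5 * in_between) / number of experts.

def satisfaction(yes, no, in_between):
    total = yes + no + in_between
    return round(100 * (yes + 0.5 * in_between) / total)

ratings = {  # (Yes, No, In-between) counts from Table 2
    "Personalised Training on Best Practices": (8, 2, 0),
    "Threat Alerts & Tips": (8, 1, 1),
    "Conducting Security Assessments": (9, 0, 1),
    "Gamification": (9, 0, 1),
    "Reminders and Nudges": (8, 0, 2),
}
for construct, counts in ratings.items():
    print(f"{construct}: {satisfaction(*counts)}%")
# → 80%, 85%, 95%, 95%, 90%, matching the reported satisfaction levels
```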

11 Discussion

The findings of this study suggest that ChatGPT can play a significant role in enhancing information security awareness and training. The study found that ChatGPT can provide a more interactive and engaging learning experience, tailored to the specific needs and preferences of individuals, and that it can provide real-time feedback, enabling individuals to identify and rectify security risks promptly. While the potential benefits of using ChatGPT for information security awareness and education are significant, there are also limitations to consider. Firstly, ChatGPT relies on the quality of the data used to train it; if that data is biased or incomplete, it may provide inaccurate or incomplete information. Secondly, ChatGPT may not be able to understand the context of specific information security scenarios. For example, it may not be able to distinguish between a genuine email and a phishing email. Therefore, ChatGPT should be used in conjunction with other information security tools and practices to provide a comprehensive approach to information security. The biggest advantage of using ChatGPT is that it can provide information security awareness and education to a large number of users simultaneously.


12 Conclusion

This paper provides insights into the role of chatbots in enhancing information security awareness and education. The study findings suggest that chatbots can provide a more engaging and interactive learning experience, tailored to the specific needs and preferences of individuals. The study also identified the potential benefits and limitations of using chatbots for information security awareness and education, and it concludes with recommendations for the implementation of ChatGPT in this area, including the need for careful design and customization, integration with existing security measures, and ongoing monitoring and evaluation.

ChatGPT has the potential to be a valuable tool for information security awareness and education. Its natural language processing capabilities make it well suited to delivering personalized learning experiences, real-time assistance, and relevant threat alerts and tips. Further research is needed to evaluate the effectiveness of using ChatGPT for information security education in real-world scenarios.

In conclusion, the proposed framework for improving information security behaviours can be an effective approach for organizations looking to enhance their cybersecurity posture on a restricted budget. Nudges and reminders can help sustain a culture of information security, encouraging individuals to continue practising secure behaviours over time. Additionally, gamification can make information security training more engaging and enjoyable, increasing the likelihood of retention and behaviour change. Overall, ChatGPT offers a promising avenue for improving information security behaviours by combining the convenience and accessibility of chatbots with the sophistication of natural language processing.
By promoting a culture of information security and empowering individuals with the knowledge and tools to protect sensitive information, organizations can mitigate the risk of security breaches and protect their valuable assets.

References

1. Gundu, T., Flowerday, S.V.: Ignorance to awareness: towards an information security awareness process. SAIEE Afr. Res. J. 104, 69–79 (2013)
2. Bauer, S., Bernroider, E.W.N., Chudzikowski, K.: Prevention is better than cure! Designing information security awareness programs to overcome users' non-compliance with information security policies in banks. Comput. Secur. 68, 145–159 (2017). https://doi.org/10.1016/j.cose.2017.04.009
3. Gundu, T., Flowerday, S.V.: The enemy within: a behavioural intention model and an information security awareness process. In: Information Security for South Africa (ISSA), pp. 1–8. IEEE (2012)
4. de Bruijn, H., Janssen, M.: Building cybersecurity awareness: the need for evidence-based framing strategies. Gov. Inf. Q. 34, 1–7 (2017). https://doi.org/10.1016/j.giq.2017.02.007
5. Bada, M., Sasse, A.M., Nurse, J.R.: Cyber security awareness campaigns: why do they fail to change behaviour? arXiv preprint arXiv:1901.02672 (2019)
6. Kowalski, S., Walentowicz, S., Mozuraite Araby, R.: Using chatbots for security training of ICT users (2008)
7. ChatGPT: Generative artificial intelligence (AI) (2022)
8. Ajzen, I.: From intentions to actions: a theory of planned behavior. In: Kuhl, J., Beckmann, J. (eds.) Action Control. SSSSP, pp. 11–39. Springer, Heidelberg (1985). https://doi.org/10.1007/978-3-642-69746-3_2
9. O'Keefe, D.J.: Persuasion: Theory and Research. Sage Publications, Newbury Park (2015)
10. Eagly, A.H., Chaiken, S.: Cognitive theories of persuasion. In: Advances in Experimental Social Psychology, pp. 267–359. Elsevier (1984)
11. Kleebayoon, A., Wiwanitkit, V.: Artificial intelligence, chatbots, plagiarism and basic honesty: comment. Cel. Mol. Bioeng. 16, 173–174 (2023). https://doi.org/10.1007/s12195-023-00759-x
12. Baidoo-Anu, D., Owusu Ansah, L.: Education in the era of generative artificial intelligence (AI): understanding the potential benefits of ChatGPT in promoting teaching and learning (2023). https://papers.ssrn.com/abstract=4337484. https://doi.org/10.2139/ssrn.4337484
13. Kasneci, E., et al.: ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 103, 102274 (2023). https://doi.org/10.1016/j.lindif.2023.102274
14. Mallaboyev, N.M., Sharifjanovna, Q.M., Muxammadjon, Q., Shukurullo, C.: Information security issues. In: Conference Zone, pp. 241–245 (2022)
15. Flowerday, S.V., Tuyikeze, T.: Information security policy development and implementation: the what, how and who. Comput. Secur. 61, 169–183 (2016). https://doi.org/10.1016/j.cose.2016.06.002
16. Gundu, T., Maronga, V.: IoT security and privacy: turning on the human firewall in smart farming. In: Kalpa Publications in Computing, pp. 95–104. EasyChair (2019). https://doi.org/10.29007/j2z7
17. Choi, Y.-H.: A framework for making decision on optimal security investment to the proactive and reactive security solutions management. J. Internet Comput. Serv. 15, 91–100 (2014)
18. Bogale, M., Lessa, L., Negash, S.: Building an information security awareness program for a bank: case from Ethiopia (2019)
19. Chen, J., Zhu, Q.: Interdependent strategic security risk management with bounded rationality in the internet of things. IEEE Trans. Inf. Forensics Secur. 14, 2958–2971 (2019). https://doi.org/10.1109/TIFS.2019.2911112
20. Bacudio, A.G., Yuan, X., Chu, B.-T.B., Jones, M.: An overview of penetration testing. Int. J. Netw. Secur. Appl. 3, 19 (2011)
21. Zakariyya, I., Al-Kadri, M.O., Kalutarage, H.: Resource efficient boosting method for IoT security monitoring. In: 2021 IEEE 18th Annual Consumer Communications & Networking Conference (CCNC), pp. 1–6 (2021). https://doi.org/10.1109/CCNC49032.2021.9369620
22. Stafford, T., Deitz, G., Li, Y.: The role of internal audit and user training in information security policy compliance. Manag. Audit. J. 33, 410–424 (2018). https://doi.org/10.1108/MAJ-07-2017-1596
23. Jang-Jaccard, J., Nepal, S.: A survey of emerging threats in cybersecurity. J. Comput. Syst. Sci. 80, 973–993 (2014). https://doi.org/10.1016/j.jcss.2014.02.005
24. Schlette, D., Caselli, M., Pernul, G.: A comparative study on cyber threat intelligence: the security incident response perspective. IEEE Commun. Surv. Tutor. 23, 2525–2556 (2021)
25. Mamonov, S., Benbunan-Fich, R.: The impact of information security threat awareness on privacy-protective behaviours. Comput. Human Behav. 83, 32–44 (2018). https://doi.org/10.1016/j.chb.2018.01.028
26. Da Veiga, A.: Comparing the information security culture of employees who had read the information security policy and those who had not: illustrated through an empirical study. Inf. Comput. Secur. 24, 139–151 (2016). https://doi.org/10.1108/ICS-12-2015-0048
27. Gundu, T.: Towards an information security awareness process for engineering SMEs in emerging economies (2013)
28. Gundu, T., Modiba, N.: Building competitive advantage from Ubuntu: an African information security awareness model. In: ICISSP, pp. 569–576 (2020)
29. Choi, J.H., Hickman, K.E., Monahan, A., Schwarcz, D.: ChatGPT goes to law school (2023). https://papers.ssrn.com/abstract=4335905. https://doi.org/10.2139/ssrn.4335905
30. Gupta, A., Hathwar, D., Vijayakumar, A.: Introduction to AI chatbots. Int. J. Eng. Res. Technol. 9, 255–258 (2020)
31. Cameron, G., et al.: Towards a chatbot for digital counselling. In: Proceedings of the 31st International BCS Human Computer Interaction Conference (HCI 2017), vol. 31, pp. 1–7 (2017)
32. Duha, M.S.U.: ChatGPT in education: an opportunity or a challenge for the future? TechTrends 67, 402–403 (2023). https://doi.org/10.1007/s11528-023-00844-y
33. Yoo, J., Cho, Y.: ICSA: intelligent chatbot security assistant using Text-CNN and multi-phase real-time defense against SNS phishing attacks. Expert Syst. Appl. 207, 117893 (2022)
34. Gulenko, I.: Chatbot for IT security training: using motivational interviewing to improve security behaviour. In: AIST (supplement), pp. 7–16 (2014)
35. Hamad, S., Yeferny, T.: A chatbot for information security. arXiv preprint arXiv:2012.00826 (2020)
36. Cotton, D.R.E., Cotton, P.A., Shipway, J.R.: Chatting and cheating: ensuring academic integrity in the era of ChatGPT (2023). https://edarxiv.org/mrz8h/. https://doi.org/10.35542/osf.io/mrz8h
37. Lo, P.: Warfare ethics in Sunzi's art of war? Historical controversies and contemporary perspectives. J. Mil. Ethics 11, 114–135 (2012)
38. Shropshire, J., Warkentin, M., Sharma, S.: Personality, attitudes, and intentions: predicting initial adoption of information security behaviour. Comput. Secur. 49, 177–191 (2015). https://doi.org/10.1016/j.cose.2015.01.002
39. Ajzen, I.: The theory of planned behaviour: reactions and reflections. Taylor & Francis (2011)

Factors Influencing Internet of Medical Things (IoMT) Cybersecurity Protective Behaviours Among Healthcare Workers

Sinazo Brown, Zainab Ruhwanya, and Ayanda Pekane
University of Cape Town, Cape Town, South Africa
{zainab.ruhwanya,ayanda.pekane}@uct.ac.za

Abstract. The healthcare sector is increasingly adopting technologies like the Internet of Things (IoT) to improve healthcare service delivery and outcomes. As adoption rates increase, so do attack rates, and the Internet of Medical Things (IoMT) has increasingly become a target of cyberattacks. There is an awareness that human factors significantly contribute to IoMT vulnerabilities, but minimal research on the phenomenon. This research explores the factors influencing the behaviours of South African healthcare workers as they relate to IoMT cybersecurity. The theoretical model for the research was developed based on the literature and the Information-Motivation-Behavioural (IMB) Skills model. Partial Least Squares Structural Equation Modeling (PLS-SEM) was used to assess the model. The results suggest that Information and Behavioural Skills positively influence behaviour, indicating that knowledge and perceived ability are required to enact cybersecurity protective behaviour.

Keywords: Internet of Medical Things · IoMT · IoMT Cybersecurity · Information Motivation Behavioural Skills Model · IMB · Healthcare Workers

1 Introduction

1.1 Background

The healthcare sector has been significantly integrating technology, notably the Internet of Things (IoT), into its services, enhancing healthcare delivery [1]. Within healthcare, this application of IoT is referred to as the Internet of Medical Things (IoMT). By 2019, IoMT accounted for a third of IoT usage, illustrating its rising importance [2, 3]. This adoption has improved healthcare outcomes and helped to reduce costs [3]. However, along with these benefits, the rise in IoMT usage has introduced substantial cybersecurity challenges. These systems have become frequent cyberattack targets, posing serious data security threats and potentially endangering patient safety [4, 5]. Despite the mounting evidence of the role of human factors in these security vulnerabilities, there is a notable lack of research in this area, particularly within the South African context [6, 7].

© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
S. Furnell and N. Clarke (Eds.): HAISA 2023, IFIP AICT 674, pp. 432–444, 2023. https://doi.org/10.1007/978-3-031-38530-8_34


Research in South Africa has focused on the implications of technology adoption in healthcare, often neglecting the pivotal security issues. For example, studies on wearable devices have shown a tendency for users to prioritise performance over security [8], and there is a reported lack of understanding among users regarding the security risks associated with these devices [6]. This is alarming considering that the literature suggests a strong link between human error and successful cyberattacks on IoMT. As South Africa continues to invest in advanced medical technology, the need for stringent security measures and the protection of patient data has become increasingly urgent. It is therefore imperative to understand the factors that influence secure behaviours in order to address these emerging risks and ensure the safe use of IoMT [9]. The primary objective of this study is to explore the factors influencing cybersecurity-protective behaviour among healthcare workers in South Africa within the context of IoMT. We base our investigation on a theoretical model rooted in the Information-Motivation-Behavioural (IMB) Skills framework [10] and enriched by the existing information security literature [11]. By examining these factors, we aim to provide valuable insights into the cybersecurity-protective behaviour of healthcare workers. Ultimately, our research seeks to contribute to developing targeted interventions that can enhance healthcare workers' security behaviours when using IoMT, ensuring the protection of patient data and, most importantly, human lives.

2 Literature Review

2.1 Internet of Medical Things (IoMT)

The Internet of Things (IoT) is "a dynamic, self-configuring network of physical and virtual things powered with interoperable communication protocols, media, and standards. These things have attributes capable of connecting to information networks and can perform sensing, data processing, networking, and communication" [12, p. 645]. When integrated with healthcare, this is known as the Internet of Medical Things (IoMT) [3]. By connecting medical-grade devices with healthcare infrastructures, IoMT enables remote communication between patients and healthcare providers [13]. Two environments for IoMT applications have been identified: healthcare facilities such as hospitals, and the homes of patients [14]. IoMT includes devices that work with body sensors to monitor patients' health, facilitating remote monitoring and diagnosis, and reducing the need for regular hospital visits. The advantages of IoMT include cost reductions due to early intervention [3, 15]. IoMT solutions comprise wearable, implantable, and in-hospital devices. Wearable devices collect patient vitals such as physical activity metrics, heart rates, and blood pressure levels for real-time monitoring and diagnostics [15]. These devices include fitness trackers, smart health watches, and belts [3, 15]. Used for remote health monitoring, these devices transmit readings of patient vitals to healthcare providers, enabling remote monitoring and diagnosis [15]. Patients may also utilise them for self-monitoring [16]. Implantable devices, in contrast, are placed inside the patient's body. They monitor patient vitals like glucose levels and transmit these results via radio frequency to other devices [17]. Examples include cardiac defibrillators, pacemakers, and insulin pumps
[18, 19]. Lastly, in-hospital devices are utilised in hospitals to assess patients and administer medical treatments [20]. Examples include Magnetic Resonance Imaging (MRI) machines, surgical robots, electrocardiogram machines, and anaesthesia machines [3, 18].

2.2 IoMT and Cybersecurity

The Internet of Medical Things (IoMT) has significantly advanced healthcare, but it also attracts hackers with varying motivations [5]. As of 2019, nearly 90% of healthcare organisations using IoMT reported at least one security breach [2]. Poor cybersecurity can endanger lives [3, 4], and human error, often due to factors like workload, personality traits, and lack of awareness, is a major threat [7, 15, 21–24]. IoMT cybersecurity should address both technical and human factors through a socio-technical approach [25, 26], involving technical solutions like encryption and blockchain integration [14, 27–29] and non-technical measures such as shared responsibility guidelines [30]. Cybersecurity must protect user and organisational assets while maintaining data confidentiality, integrity, and availability [4, 14]. Unmet IoMT cybersecurity objectives endanger human lives [3, 4]. The inability to access accurate patient data due to cybersecurity breaches may result in incorrect treatment decisions, and any compromise in the integrity and availability of this data can have severe repercussions for patient care and outcomes. For instance, if an attacker gains control over a device worn by or implanted in a patient, they could manipulate the device to cause harm to the patient [19]. A security company demonstrated this risk by exploiting vulnerabilities in device functionality, intercepting an insulin pump, and administering insulin without the patient's knowledge [4, 20]. Consequently, cybersecurity priorities must be adapted based on the type of IoMT and its use, distinguishing between threats to data and threats to life [31].

3 Theoretical Framework

3.1 The Information-Motivation-Behavioural Skills Model

Various empirical studies have employed the Information-Motivation-Behavioural Skills (IMB) framework to design and evaluate interventions targeting diverse health behaviours [32]. This framework posits that three primary factors influence behaviour: information, motivation, and behavioural skills. Information encompasses knowledge and awareness about a specific behaviour, while motivation consists of attitudes, beliefs, and perceived social norms. Behavioural skills refer to the ability to perform the behaviour. In the information security context, the IMB model has been utilised to examine privacy and cybersecurity behaviours [11, 33–37]. The model has been applied to enhance the effectiveness of information security awareness campaigns [34], to explore the knowledge-behaviour gap in users' security practices [35], and to examine factors influencing the use of privacy settings on smartphones [11].


3.2 Hypotheses Development

The IMB model consists of three components: information, motivation, and behavioural skills, as depicted in Fig. 1. This research model has been adapted from the existing literature to gain insights into the factors that influence IoMT cybersecurity protective behaviour. The model suggests that the constructs of information, motivation, and behavioural skills play a crucial role in shaping behaviour. Figure 1 provides an overview of the relationships between these constructs and presents the derived hypotheses, which are explained in detail below.

Fig. 1. IMB Skills Model for IoMT Cybersecurity Behaviour

Information. This refers to an individual's understanding of concepts related to a particular behaviour. In their implementation of the model, [11] operationalised information as knowledge of two types: privacy knowledge and technical knowledge. In the context of IoMT cybersecurity behaviour, privacy knowledge pertains to users' awareness of IoMT privacy and security settings, while technology knowledge involves proper IoMT usage. Research using the IMB Skills model has operationalised information as awareness of risks and knowledge about behavioural action [11, 33]. As such, the following hypotheses have been derived:

Hypothesis 1a: IoMT cybersecurity risk awareness positively influences IoMT cybersecurity protective behaviour.
Hypothesis 1b: IoMT cybersecurity knowledge positively influences IoMT cybersecurity protective behaviour.
Hypothesis 1c: IoMT technology knowledge positively influences IoMT cybersecurity protective behaviour.

Research suggests that perceived abilities, often operationalised as self-efficacy, are required to enact a behaviour. Self-efficacy is an individual's ability, competence, and confidence in performing a behaviour [38]. Studies have shown that self-efficacy significantly impacts one's ability to perform and cope with a task and that lack of
S. Brown et al.

knowledge negatively affects one's self-efficacy. The following hypotheses therefore arise:

Hypothesis 2: IoMT cybersecurity risk awareness positively affects IoMT cybersecurity self-efficacy.
Hypothesis 3: IoMT cybersecurity knowledge has a positive effect on IoMT cybersecurity self-efficacy.
Hypothesis 4: IoMT technology knowledge has a positive effect on IoMT technology self-efficacy.

Motivation. This component consists of personal motivation and social motivation. Personal motivation pertains to individual attitudes, while social motivation is influenced by an individual's social support system. It is argued that subjective norms can influence users' perceptions of their self-efficacy for the technology and its privacy settings [11]. Furthermore, research shows that lower levels of subjective norms lead to less privacy-protective behaviour [11]. [34] emphasises the application of the IMB model, particularly its social motivation component, to enhance the effectiveness of information security awareness campaigns, highlighting the significance of considering normative knowledge and social norms in shaping information security behaviour. [11] argue that individuals' willingness to share information negatively influences their intention to use protective settings.

Hypothesis 5a: IoMT subjective norms have a positive influence on IoMT cybersecurity self-efficacy.
Hypothesis 5b: IoMT subjective norms positively influence IoMT technology self-efficacy.
Hypothesis 6a: IoMT information sharing preferences positively influence IoMT cybersecurity self-efficacy.
Hypothesis 6b: IoMT information sharing preferences positively influence IoMT technology self-efficacy.
Hypothesis 7a: IoMT subjective norms positively influence IoMT cybersecurity protective behaviour.
Hypothesis 7b: IoMT information sharing preferences negatively influence IoMT cybersecurity protective behaviour.

Behavioural Skills. This component encompasses actual and perceived skills.
In order to perform a behaviour, an individual must believe they can and have the necessary skills to do so. Self-efficacy, a measure of an individual’s belief in their knowledge of the relevant technology and ability to use it, has been found to influence behaviours positively [37]. Consequently, the following hypotheses are formed: Hypothesis 8a: IoMT Cybersecurity Self-Efficacy positively influences IoMT cybersecurity protective behaviour. Hypothesis 8b: IoMT Technology Self-Efficacy positively influences IoMT cybersecurity protective behaviour.

4 Research Methodology

Quantitative methods were employed in this research through the distribution of questionnaire surveys to healthcare workers at an academic hospital serving the South African public. Healthcare workers include a range of professionals, such as doctors, nurses, and
laboratory technicians, all involved in delivering patient care within healthcare facilities [39]. For the population of 5,300 healthcare workers at a South African university, the study used a probability sampling method to derive a sample of 357, allowing for precise inferences with a 5% margin of error. Data collection involved a web-based, self-completed survey questionnaire created and disseminated using Qualtrics. Inferential statistical analysis was subsequently applied, with the chosen statistical method aiding in hypothesis testing [40]. Prior to the survey's official distribution, the instrument was pretested by graduate students at the same South African public university.

Instrument Development. In this research, constructs were drawn from previously validated measures and tailored specifically to the context of IoMT cybersecurity. These included Risk Awareness [41], Cybersecurity Knowledge [36], Technology Knowledge [42], Subjective Norms [43], Sharing Preferences [11], Cybersecurity Self-Efficacy [11], Technology Self-Efficacy [44], and Behaviour [36]. A 5-point Likert scale (1 = "Strongly Disagree" to 5 = "Strongly Agree") gauged respondents' agreement with the measurement items. The Qualtrics software platform was used for survey development and distribution. The questionnaire was first pilot tested with 12 postgraduate students, a subset of the target population. Following the pretest, items were refined to ensure grammatical accuracy, usability, and clarity, including the addition of a comprehensive explanation of IoMT for the participants' benefit.

Participants. At the conclusion of the survey period, a total of 312 responses were recorded. However, only 68 responses (a 19% response rate) were deemed complete and valid for subsequent data analysis. This falls within the common range of 15–20% for web-based, self-completed surveys [40, 45].
It is important to note that the study was conducted in 2021 during the pandemic, which may have contributed to the relatively low response rate. Healthcare workers were experiencing unprecedented challenges and increased workloads during this time, potentially impacting their willingness and availability to participate in the survey.
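The paper does not state which formula produced the target sample of 357. The widely used Krejcie and Morgan (1970) finite-population calculation reproduces the reported figure to within about one respondent; the sketch below is illustrative, and the function name and defaults are mine:

```python
def krejcie_morgan_sample(population, chi2=3.841, p=0.5, margin=0.05):
    """Required sample size for a finite population (Krejcie & Morgan, 1970).
    chi2 is the chi-square value for 1 df at 95% confidence; p = 0.5
    maximises variability, giving the most conservative sample size."""
    numerator = chi2 * population * p * (1 - p)
    denominator = margin ** 2 * (population - 1) + chi2 * p * (1 - p)
    return numerator / denominator

# Population of 5,300 healthcare workers at a 5% margin of error
n = krejcie_morgan_sample(5300)  # ≈ 358, close to the study's reported 357
```

Small differences from 357 come down to rounding conventions and the exact chi-square value used.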

5 Findings

Demographic Profile. The demographics of the participants who completed the survey are as follows: a total of 68 healthcare workers participated; the majority were medical students aged 21–24 (82.35%), with women making up 63.24% and men 35.29%. Participants' clinical careers varied, with 60.29% having 12 months or less of experience, 14.71% having 12–24 months, and 25% having 24 months or more. Occupations included physicians (27.94%), audiologists (5.88%), occupational therapists (8.82%), physiotherapists (16.18%), speech language pathologists (1.47%), and others (39.71%). The high proportion of 'other' responses might have been due to ambiguity in the question, as it could have included nurses and dentists. However, 'other' may also represent medical students in their respective fields, as most respondents had a high school education as their highest level (60.29%), consistent with their student status. Regarding IoMT use, 57.35% had less than one year of experience, 20.59% had 1–2 years, 16.18% had 2–5 years, and
5.88% had five or more years. The lack of diversity among the participants was an added limitation: most were in the early years of their careers, with minimal use of IoMT. As such, we would not argue that this sample is highly generalisable to the overall healthcare worker population, although it could reasonably be generalised to healthcare workers in a medical school setting.

Descriptive Statistics. When determining the level of awareness South African healthcare workers have about IoMT cybersecurity, responses were filtered based on the number of years of IoMT use to ascertain self-reported IoMT cybersecurity awareness. The majority of respondents (57.6%) had less than one year of experience with IoMT use, and 76.9% of them indicated not being aware of the potential IoMT cybersecurity threats and their negative consequences. As the years of experience increased, so did the participants' awareness of IoMT cybersecurity and threats. For instance, among participants with 1–2 years of experience, 23.1% were not at all aware, and 50% were moderately aware of IoMT cybersecurity threats. The majority of participants with 2–5 years of experience were more aware of IoMT cybersecurity and threats. Finally, participants with five or more years of experience demonstrated the highest awareness levels, with none being not at all aware and 12.5% being very much aware of IoMT cybersecurity threats. A similar trend can be observed in the participants' perception of IoMT cybersecurity's relevance to their practice. Among those with less than one year of experience, 76.2% considered it not at all true, while 33.3% of those with five or more years of experience considered it very much true. Lastly, when asked about the consequences of IoMT cybersecurity, 75% of participants with less than one year of experience considered them untrue, while 57.1% of those with five or more years of experience considered them very much true.
Overall, an increase in experience correlates with increased awareness of IoMT cybersecurity.

Assessment of Measurement Model. Partial Least Squares Structural Equation Modeling (PLS-SEM) was employed to assess the research model and test the hypotheses using the SmartPLS software package. PLS-SEM was chosen due to the complexity of the path model, which features multiple latent constructs and indicators [46]. The assessment followed a two-step approach, first evaluating the measurement model's reliability and validity and then assessing the structural model [47]. The measurement model was evaluated for reliability, discriminant validity, and convergent validity. Reliable indicators with loadings above 0.60 were retained [46], and those below were removed; these included IoMTB3, IoMTB4, IoMTB6, IoMTB10, IoMTB11, and IoMTB12. All other indicators met the reliability standard, resulting in a valid and reliable model. Internal consistency was established through Cronbach's alpha and composite reliability measures, with values above 0.70 considered acceptable and composite reliability between 0.70 and 0.90 considered satisfactory [48]. However, composite reliability above 0.95 can suggest indicator redundancy and reduced construct validity [48]; as seen in Table 1, the value of 0.964 for the IoMT Cybersecurity Knowledge construct raises this concern.
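For reference, composite reliability and AVE can be computed directly from standardised indicator loadings. The sketch below uses hypothetical loadings, not the study's data:

```python
def composite_reliability(loadings):
    """CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) for standardised loadings λ."""
    squared_sum = sum(loadings) ** 2
    error = sum(1 - l ** 2 for l in loadings)
    return squared_sum / (squared_sum + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardised loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical loadings for a four-indicator construct
loadings = [0.72, 0.78, 0.81, 0.69]
cr = composite_reliability(loadings)        # ≈ 0.84: satisfactory (0.70–0.90)
ave = average_variance_extracted(loadings)  # ≈ 0.56: above the 0.50 floor
assert 0.70 < cr < 0.95 and ave >= 0.50     # cr > 0.95 would flag redundancy
```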


Convergent validity was assessed through the average variance extracted (AVE) of each construct, which measures whether the latent construct explains more than half of its indicators' variance. AVE values of 0.50 or more indicate sufficient convergent validity [46], and all AVE values in this study were 0.50 or above, indicating good convergent validity, as seen in Table 1. Discriminant validity, which indicates the uniqueness and distinctiveness of each latent construct, was assessed using the heterotrait-monotrait (HTMT) ratio of correlations and the cross-loadings of indicators [48, 49]. The HTMT method, with a threshold of 0.90, and the cross-loadings examination showed that discriminant validity was met for all latent constructs, as seen in Table 2, providing a solid foundation for the hypothesis testing in the next section.
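The HTMT ratio divides the average correlation between items of different constructs by the geometric mean of the average within-construct item correlations. A toy two-construct sketch (all item names and correlation values are hypothetical, not from the study):

```python
import itertools
import math

def htmt(corr, items_a, items_b):
    """Heterotrait-monotrait ratio [49]: mean between-construct item
    correlation divided by the geometric mean of the two constructs'
    mean within-construct item correlations. corr maps frozenset item
    pairs to absolute correlations."""
    hetero = [corr[frozenset((i, j))] for i in items_a for j in items_b]
    mono_a = [corr[frozenset(p)] for p in itertools.combinations(items_a, 2)]
    mono_b = [corr[frozenset(p)] for p in itertools.combinations(items_b, 2)]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(hetero) / math.sqrt(mean(mono_a) * mean(mono_b))

corr = {frozenset(("a1", "a2")): 0.80, frozenset(("b1", "b2")): 0.70,
        frozenset(("a1", "b1")): 0.40, frozenset(("a1", "b2")): 0.35,
        frozenset(("a2", "b1")): 0.45, frozenset(("a2", "b2")): 0.40}
ratio = htmt(corr, ["a1", "a2"], ["b1", "b2"])
assert ratio < 0.90  # below the threshold: discriminant validity holds
```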

Table 1. Reliability and Validity Tests.

Construct                                      Cronbach's Alpha   Composite Reliability   AVE
BEHAVIOUR                                      0.794              0.853                   0.50
IoMT Cybersecurity Knowledge (IoMTCK)          0.960              0.964                   0.57
IoMT Cybersecurity Risk Awareness (IoMTCRA)    0.898              0.936                   0.83
IoMT Cybersecurity Self-Efficacy (IoMTCSE)     0.899              0.937                   0.83
IoMT Sharing Preference (IoMTSP)               0.848              0.904                   0.76
IoMT Subjective Norms (IoMTSN)                 0.897              0.935                   0.83
IoMT Technology Knowledge (IoMTTK)             0.850              0.894                   0.63
IoMT Technology Self-Efficacy (IoMTTSE)        0.846              0.908                   0.77

Assessment of the Structural Model. The structural model assessment includes the coefficient of determination (R²), the blindfolding-based cross-validated redundancy measure (Q²), and the path coefficients. R² measures in-sample predictive ability, with values of 0.75, 0.50, and 0.25 considered substantial, moderate, and weak, respectively [50]. However, the study context should be considered; in consumer behaviour disciplines, a measure of 0.20 is considered high [50]. The construct Behaviour, as seen in Table 3, has an R² of 0.671, which is high in the study context, explaining 67.1% of the variance. Similarly, IoMT Cybersecurity Self-Efficacy (45.5%) and IoMT Technology Self-Efficacy (47.3%) demonstrate predictive ability. Since R² only measures in-sample

Table 2. Heterotrait-Monotrait (HTMT) Ratio of Correlations.

            BEHAVIOUR   IoMTCK   IoMTCRA   IoMTCSE   IoMTSP   IoMTSN   IoMTTK
BEHAVIOUR
IoMTCK      0.710
IoMTCRA     0.531       0.588
IoMTCSE     0.851       0.641    0.575
IoMTSP      0.463       0.253    0.195     0.396
IoMTSN      0.486       0.239    0.177     0.412     0.440
IoMTTK      0.681       0.562    0.649     0.664     0.416    0.397
IoMTTSE     0.593       0.425    0.453     0.697     0.307    0.300    0.787

predictive ability, Q² is used to assess out-of-sample predictive power. With all Q² values above 0, Behaviour, IoMTCSE, and IoMTTSE all have predictive relevance.

Hypotheses Testing. To test the hypothesised relationships, path coefficients, t-values, and p-values are assessed [40]. The p-value represents the probability that a statistical result occurs by chance [40]. A p-value lower than the significance level (5%) indicates a statistically significant relationship [40]. T-values are also assessed for significance testing [51], with a path coefficient considered significant if the t-value is greater than 1.96 (using a two-tailed t-test at the 5% significance level) [51]. Table 4 shows that nine of the 14 hypotheses tested were not supported. IoMT cybersecurity knowledge positively influences IoMT cybersecurity protective behaviour (H1b). On the other hand, we found that IoMT cybersecurity risk awareness (H1a) and IoMT technology knowledge (H1c) do not significantly influence IoMT cybersecurity protective behaviour. Our findings show that IoMT cybersecurity risk awareness and cybersecurity knowledge positively affect IoMT cybersecurity self-efficacy (H2 and H3), and that IoMT technology knowledge positively affects IoMT technology self-efficacy (H4). Additionally, we found that IoMT subjective norms and information-sharing preferences do not significantly influence IoMT cybersecurity self-efficacy or technology self-efficacy, nor do they have a significant effect on IoMT cybersecurity protective behaviour (H5a, H5b, H6a, H6b, H7a, and H7b), with p-values higher than the 0.05 significance level and t-values lower than 1.96.
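The decision rule applied in Table 4 can be stated compactly. The function below is a restatement of that rule for clarity, not part of the original analysis:

```python
def hypothesis_supported(t_value, p_value, alpha=0.05, t_critical=1.96):
    """Two-tailed rule used in the paper: a path is significant when
    t > 1.96 and p < 0.05. (A directional hypothesis would additionally
    require the path coefficient's sign to match the prediction.)"""
    return t_value > t_critical and p_value < alpha

# Checked against two rows of Table 4
assert hypothesis_supported(3.812, 0.000)      # H1b: IoMTCK → BEHAVIOUR
assert not hypothesis_supported(1.820, 0.069)  # H5a: IoMTSN → IoMTCSE
```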

Table 3. Predictive Measures.

Construct                                     R²      Q²
BEHAVIOUR                                     0.671   0.287
IoMT Cybersecurity Self-Efficacy (IoMTCSE)    0.455   0.345
IoMT Technology Self-Efficacy (IoMTTSE)       0.473   0.334
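The R² values in Table 3 follow the standard definition of in-sample variance explained; Q², by contrast, requires the blindfolding procedure and is omitted here. A minimal sketch with made-up predictions (the below-0.25 label is my placeholder, as [50] names only the three upper bands):

```python
def r_squared(actual, predicted):
    """R² = 1 − SS_res / SS_tot: in-sample share of variance explained."""
    mean = sum(actual) / len(actual)
    ss_tot = sum((y - mean) ** 2 for y in actual)
    ss_res = sum((y - yh) ** 2 for y, yh in zip(actual, predicted))
    return 1 - ss_res / ss_tot

def interpret_r2(r2):
    """Rule-of-thumb bands from [50]: 0.75 / 0.50 / 0.25 thresholds."""
    if r2 >= 0.75:
        return "substantial"
    if r2 >= 0.50:
        return "moderate"
    if r2 >= 0.25:
        return "weak"
    return "negligible"  # no common label below 0.25; placeholder

# Toy Likert-scale outcome and model predictions
r2 = r_squared([3, 4, 2, 5, 4], [3.2, 3.8, 2.4, 4.6, 3.9])  # ≈ 0.92
assert interpret_r2(0.671) == "moderate"  # BEHAVIOUR in Table 3
```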


Table 4. Path Coefficient.

Hypothesis   Path                    Path Coefficient   t-value   p-value   Supported
H1a          IoMTCRA → BEHAVIOUR     −0.063             0.628     0.53      No
H1b          IoMTCK → BEHAVIOUR      0.343              3.812     0         Yes
H1c          IoMTTK → BEHAVIOUR      0.121              0.85      0.395     No
H2           IoMTCRA → IoMTCSE       0.306              2.365     0.018     Yes
H3           IoMTCK → IoMTCSE        0.281              2.204     0.028     Yes
H4           IoMTTK → IoMTTSE        0.672              7.542     0         Yes
H5a          IoMTSN → IoMTCSE        0.193              1.82      0.069     No
H5b          IoMTSN → IoMTTSE        −0.005             0.05      0.96      No
H6a          IoMTSP → IoMTCSE        0.178              1.635     0.102     No
H6b          IoMTSP → IoMTTSE        0.041              0.331     0.741     No
H7a          IoMTSN → BEHAVIOUR      0.096              1.111     0.267     No
H7b          IoMTSP → BEHAVIOUR      0.107              1.145     0.252     No
H8a          IoMTCSE → BEHAVIOUR     0.447              3.736     0         Yes
H8b          IoMTTSE → BEHAVIOUR     −0.013             0.11      0.913     No

6 Discussion

The empirical results reveal several insights. A major one is the pronounced influence of IoMT cybersecurity knowledge (IoMTCK) on protective behaviour, aligning with earlier studies [5, 11]. Conversely, IoMT cybersecurity risk awareness (IoMTCRA) did not significantly affect protective behaviour. This divergence might be attributed to the novelty and complexity of IoMT technologies, suggesting that mere awareness of potential threats might not be sufficient for healthcare professionals who may not fully understand these complex systems. Our study found that IoMTCRA, IoMTCK, and IoMT technology knowledge (IoMTTK) positively impact participants' self-efficacy, indicating the importance of enhancing healthcare workers' self-efficacy in this context. This shows that healthcare professionals need specific IoMT cybersecurity knowledge rather than just general technological familiarity. Moreover, the study shows that both Information and Behavioural Skills influence South African healthcare workers' attitudes towards IoMT cybersecurity. Self-efficacy positively influenced protective behaviour, suggesting that perceived competence can foster protective behaviour in healthcare cybersecurity. This is consistent with previous studies [11, 32, 52]. Interestingly, no significant relationship was found between motivation and behaviour: neither IoMT subjective norms (IoMTSN) nor information sharing preferences (IoMTSP) significantly influenced self-efficacy or protective behaviour, corroborating [11]. This is contrary to previous research that highlighted the role of subjective
norms in shaping cybersecurity behaviour [52]. This suggests that professional practices and guidelines in healthcare may influence cybersecurity protective behaviour more than perceived social pressure does.

7 Conclusion

This study uses the IMB Skills model to explore the factors behind IoMT cybersecurity-protective behaviour among South African healthcare workers. The findings underline the importance of enhancing healthcare workers' IoMT cybersecurity knowledge and self-efficacy in managing cybersecurity risks. However, subjective norms, information-sharing preferences, and technological knowledge did not significantly affect protective behaviour, necessitating further research. Our study contributes to research on healthcare cybersecurity protective behaviour and extends the IMB Skills model's application to IoMT cybersecurity. The findings emphasise enhancing cybersecurity knowledge and self-efficacy beliefs among healthcare workers. Targeted training, addressing knowledge gaps, and focusing on individual behaviours can strengthen cybersecurity measures in IoMT usage by healthcare workers.

References

1. Kotronis, C., et al.: Evaluating internet of medical things (IoMT)-based systems from a human-centric perspective. Internet of Things 8, 100125 (2019)
2. Alsubaei, F., Abuhussein, A., Shandilya, V., Shiva, S.: IoMT-SAF: internet of medical things security assessment framework. Internet of Things 8, 100123 (2019)
3. Yaacoub, J.P.A., et al.: Securing internet of medical things systems: limitations, issues and recommendations. Futur. Gener. Comput. Syst. 105, 581–606 (2020)
4. Alexander, B., Haseeb, S., Baranchuk, A.: Are implanted electronic devices hackable? Trends Cardiovasc. Med. 29(8), 476–480 (2019)
5. Baranchuk, A., et al.: Cybersecurity for cardiac implantable electronic devices: what should you know? J. Am. Coll. Cardiol. 71(11), 1284–1288 (2018)
6. Cilliers, L.: Wearable devices in healthcare: privacy and information security issues. Health Inf. Manag. J. 49(2–3), 150–156 (2020)
7. Evans, M., He, Y., Maglaras, L., Janicke, H.: HEART-IS: a novel technique for evaluating human error-related information security incidents. Comput. Secur. 80, 74–89 (2019)
8. Rubin, A., Ophoff, J.: Investigating adoption factors of wearable technology in health and fitness. In: 2018 Open Innovations Conference (OI), pp. 176–186. IEEE (2018)
9. Agwa-Ejon, J., Pradhan, A.: The impact of technology on the health care services in Gauteng province, South Africa. In: International Association for Management of Technology (IAMOT) Annual Conference (2014)
10. Fisher, W.A., Fisher, J.D., Shuper, P.A.: Social psychology and the fight against AIDS: an information–motivation–behavioral skills model for the prediction and promotion of health behavior change. In: Advances in Experimental Social Psychology, vol. 50, pp. 105–193. Elsevier (2014)
11. Crossler, R.E., Bélanger, F.: Why would I use location-protective settings on my smartphone? Motivating protective behaviors and the existence of the privacy knowledge–belief gap. Inf. Syst. Res. 30(3), 995–1006 (2019)


12. Al-Turjman, F., Nawaz, M.H., Ulusar, U.D.: Intelligence in the internet of medical things era: a systematic review of current and future trends. Comput. Commun. 150, 644–660 (2020)
13. Gatouillat, A., Badr, Y., Massot, B., Sejdić, E.: Internet of medical things: a review of recent contributions dealing with cyber-physical systems in medicine. IEEE Internet of Things J. 5(5), 3810–3822 (2018)
14. Hatzivasilis, G., Soultatos, O., Ioannidis, S., Verikoukis, C., Demetriou, G., Tsatsoulis, C.: Review of security and privacy for the internet of medical things (IoMT). In: 2019 15th International Conference on Distributed Computing in Sensor Systems (DCOSS), pp. 457–464. IEEE (2019)
15. Putta, S.R., Abuhussein, A., Alsubaei, F., Shiva, S., Atiewi, S.: Security benchmarks for wearable medical things: stakeholders-centric approach. In: Yang, X.-S., Sherratt, S., Dey, N., Joshi, A. (eds.) Fourth International Congress on Information and Communication Technology. AISC, vol. 1027, pp. 405–418. Springer, Singapore (2020). https://doi.org/10.1007/978-981-32-9343-4_32
16. Abraham, M.: Wearable technology: a health-and-care actuary's perspective. Institute and Faculty of Actuaries (2016)
17. Wazid, M., Das, A.K., Rodrigues, J.J., Shetty, S., Park, Y.: IoMT malware detection approaches: analysis and research challenges. IEEE Access 7, 182459–182476 (2019)
18. Alsubaei, F., Abuhussein, A., Shiva, S.: Security and privacy in the internet of medical things: taxonomy and risk assessment. In: 2017 IEEE 42nd Conference on Local Computer Networks Workshops (LCN Workshops), pp. 112–120. IEEE (2017)
19. Williams, P.A., Woodward, A.J.: Cybersecurity vulnerabilities in medical devices: a complex environment and multifaceted problem. Med. Dev. (Auckland, NZ) 8, 305 (2015)
20. Koutras, D., Stergiopoulos, G., Dasaklis, T., Kotzanikolaou, P., Glynos, D., Douligeris, C.: Security in IoMT communications: a survey. Sensors 20(17), 4828 (2020)
21. Arend, I., Shabtai, A., Idan, T., Keinan, R., Bereby-Meyer, Y.: Passive- and not-active-risk tendencies predict cyber security behavior. Comput. Secur. 97, 101964 (2020)
22. Chowdhury, N.H., Adam, M.T., Teubner, T.: Time pressure in human cybersecurity behavior: theoretical framework and countermeasures. Comput. Secur. 97, 101931 (2020)
23. Gratian, M., Bandi, S., Cukier, M., Dykstra, J., Ginther, A.: Correlating human traits and cyber security behavior intentions. Comput. Secur. 73, 345–358 (2018)
24. Puat, H.A.M., Abd Rahman, N.A.: IoMT: a review of pacemaker vulnerabilities and security strategy. In: Journal of Physics: Conference Series, vol. 1712, p. 012009. IOP Publishing (2020)
25. McEvoy, T.R., Kowalski, S.J.: Deriving cyber security risks from human and organizational factors – a socio-technical approach. Complex Syst. Inf. Model. Q. (18), 47–64 (2019)
26. De Bruijn, H., Janssen, M.: Building cybersecurity awareness: the need for evidence-based framing strategies. Gov. Inf. Q. 34(1), 1–7 (2017)
27. Rizk, D., Rizk, R., Hsu, S.: Applied layered-security model to IoMT. In: 2019 IEEE International Conference on Intelligence and Security Informatics (ISI), p. 227. IEEE (2019)
28. Papaioannou, M., et al.: A survey on security threats and countermeasures in internet of medical things (IoMT). Trans. Emerg. Telecommun. Technol. 33, e4049 (2020)
29. Dai, H.N., Imran, M., Haider, N.: Blockchain-enabled internet of medical things to combat COVID-19. IEEE Internet of Things Mag. 3(3), 52–57 (2020)
30. Webb, T., Dayal, S.: Building the wall: addressing cybersecurity risks in medical devices in the USA and Australia. Comput. Law Secur. Rev. 33(4), 559–563 (2017)
31. Jones, R.W., Katzis, K.: Cybersecurity and the medical device product development lifecycle. In: ICIMTH, pp. 76–79 (2017)


32. Fisher, W.A., Fisher, J.D., Harman, J.: The information-motivation-behavioral skills model: a general social psychological approach to understanding and promoting health behavior. Soc. Psychol. Found. Health Illness 22(4), 82–106 (2003)
33. Crossler, R.E., Bélanger, F.: The mobile privacy-security knowledge gap model: understanding behaviors. In: Hawaii International Conference on System Sciences (2017)
34. Khan, B., Alghathbar, K.S., Khan, M.K.: Information security awareness campaign: an alternate approach. In: Kim, T.-h., Adeli, H., Robles, R.J., Balitanas, M. (eds.) ISA 2011. CCIS, vol. 200, pp. 1–10. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-23141-4_1
35. Scott, J., Ophoff, J.: Investigating the knowledge-behaviour gap in mitigating personal information compromise. In: HAISA, pp. 236–245 (2018)
36. Farooq, A., Jeske, D., Isoaho, J.: Predicting students' security behavior using information-motivation-behavioral skills model. In: Dhillon, G., Karlsson, F., Hedström, K., Zúquete, A. (eds.) ICT Systems Security and Privacy Protection. IFIP Advances in Information and Communication Technology, vol. 562, pp. 238–252. Springer, Heidelberg (2019). https://doi.org/10.1007/978-3-030-22312-0_17
37. Iqbal, J., Soroya, S.H., Mahmood, K.: Financial information security behavior in online banking. Inf. Dev. 02666669221149346 (2023)
38. Bandura, A.: Self-efficacy: toward a unifying theory of behavioral change. Psychol. Rev. 84(2), 191–215 (1977). https://doi.org/10.1037/0033-295X.84.2.191
39. Joseph, B., Joseph, M.: The health of the healthcare workers. Ind. J. Occup. Environ. Med. 20(2), 71 (2016)
40. Bhattacherjee, A.: Social Science Research: Principles, Methods, and Practices (2012)
41. Bulgurcu, B., Cavusoglu, H., Benbasat, I.: Information security policy compliance: an empirical study of rationality-based beliefs and information security awareness. MIS Q. 34(3), 523–548 (2010)
42. Büchi, M., Just, N., Latzer, M.: Caring is not enough: the importance of internet skills for online privacy protection. Inf. Commun. Soc. 20(8), 1261–1278 (2017)
43. Hoque, R., Sorwar, G.: Understanding factors influencing the adoption of mHealth by the elderly: an extension of the UTAUT model. Int. J. Med. Informatics 101, 75–84 (2017)
44. Hajiheydari, N., Delgosha, M.S., Olya, H.: Scepticism and resistance to IoMT in healthcare: application of behavioural reasoning theory with configurational perspective. Technol. Forecast. Soc. Chang. 169, 120807 (2021)
45. Wall, J.D., Palvia, P., Lowry, P.B.: Control-related motivations and information security policy compliance: the role of autonomy and efficacy. J. Inf. Priv. Secur. 9(4), 52–79 (2013). https://doi.org/10.1080/15536548.2013.10845690
46. Hair, J.F., Jr., Howard, M.C., Nitzl, C.: Assessing measurement model quality in PLS-SEM using confirmatory composite analysis. J. Bus. Res. 109, 101–110 (2020)
47. Hair, J.F., Ringle, C.M., Sarstedt, M.: PLS-SEM: indeed a silver bullet. J. Mark. Theory Pract. 19(2), 139–152 (2011). https://doi.org/10.2753/MTP1069-6679190202
48. Hair, J.F., Jr., Sarstedt, M., Ringle, C.M., Gudergan, S.P.: Advanced Issues in Partial Least Squares Structural Equation Modeling. Sage Publications, Newbury Park (2017)
49. Hamid, M.R.A., Sami, W., Sidek, M.H.M.: Discriminant validity assessment: use of Fornell & Larcker criterion versus HTMT criterion. J. Phys. Conf. Ser. 890, 012163 (2017). https://doi.org/10.1088/1742-6596/890/1/012163
50. Hair, J.F., Risher, J.J., Sarstedt, M., Ringle, C.M.: When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. 31, 2–24 (2019)
51. Wong, K.K.K.: Partial least squares structural equation modeling (PLS-SEM) techniques using SmartPLS. Mark. Bull. 24(1), 1–32 (2013)
52. Ifinedo, P.: Understanding information systems security policy compliance: an integration of the theory of planned behavior and the protection motivation theory. Comput. Secur. 31(1), 83–95 (2012). https://doi.org/10.1016/j.cose.2011.10.007

The Influence of Interpersonal Factors on Telecommuting Employees' Cybercrime Preventative Behaviours During the Pandemic

Tim Wright1, Zainab Ruhwanya1(B), and Jacques Ophoff1,2

1 University of Cape Town, Cape Town, South Africa
[email protected], [email protected]
2 Abertay University, Dundee, UK
[email protected]

Abstract. The pandemic forced a major shift in the way employees completed their work duties, creating an increased dependence on cyberspace. Although this had its benefits, it also came with challenges. Cybercrime, in particular, increased dramatically as more people used cyberspace: cybercriminals evolved their attacks, and many more people fell victim to them. The sudden shift to telecommuting meant that people lacked the cybercrime preventative behaviours that could help them resist these criminals. Telecommuting also gave employees a different work environment, so the interpersonal factors affecting them differed from those in traditional workspaces. There are limited studies addressing this problem, and certainly very few in the South African context. Therefore, this empirical report presents the influence of interpersonal factors on telecommuting employees' cybercrime preventative behaviours. An adapted framework is proposed and evaluated using data collected from 209 South African employees. Descriptive statistics and data analysis were conducted through IBM SPSS and PLS-SEM, respectively. The results uphold the suitability of the adapted Theory of Interpersonal Behaviour model. They show that the intention to perform cybercrime preventative behaviours has a strong impact on an employee performing those behaviours, and that an employee's habit with regard to cybercrime preventative behaviours likewise has a strong impact on performing them.

Keywords: Information security · Cybercrime · Telecommute · COVID-19

1 Introduction

© IFIP International Federation for Information Processing 2023. Published by Springer Nature Switzerland AG 2023. S. Furnell and N. Clarke (Eds.): HAISA 2023, IFIP AICT 674, pp. 445–458, 2023. https://doi.org/10.1007/978-3-031-38530-8_35

The pandemic and nationwide lockdowns forced a significant shift in social situations such as work and learning. For organisations, this resulted in the widespread adoption of telecommuting, which in turn established a new normal for employees. In this study, telecommuting refers to the practice of working from home while utilising information and telecommunication technologies, such as the Internet, to access and engage with organisational resources as though the employee were physically present at the corporate offices [30]. Social situations during the pandemic, such as telecommuting and online learning, contributed to increased reliance on the Internet. In South Africa, reports indicate that internet use increased by 14% overall, with an even greater increase of 20% among full-time employees [1]. Unfortunately, this shift also resulted in a rise in cybercrime. Studies report a 667% increase in COVID-19-related phishing messages [33], highlighting the growing cybersecurity risks. Additionally, the FBI's Internet Crime Complaint Centre experienced a substantial increase in daily complaints, from around 1,000 before the pandemic to 3,000–4,000 during the pandemic [10].

During the pandemic, people experienced various affects, that is, emotional states such as stress, trauma, loneliness, depression, and anxiety [28]. This created an opportunity for cybercriminals to exploit these psychological vulnerabilities. Victims of cybercrime often face negative consequences, including feelings of outrage, anxiety, stress, loss of confidence in technology, and reduced interest in adopting new technologies [5]. Bada & Nurse [5] argue that individuals are more likely to change their behaviour in response to cybercrime when they perceive a high risk and exposure to a threat that could cause harm. However, those who believe they have sufficient knowledge of cybercrime are less likely to change their behaviour.
Therefore, understanding the influence of affect on cybercrime prevention behaviours is crucial to developing effective strategies that promote cybersecurity awareness and resilience in a telecommuting and stressful environment. This study aims to examine the influence of interpersonal factors, namely social factors (norms, roles, social situation, and a person's self-concept), affect (positive or negative emotions), and perceived consequences (the subjective probability that a particular consequence will follow the behaviour and the affect attached to that consequence), on telecommuting employees' intention to perform cybercrime preventative behaviours. This study uses the theory of interpersonal behaviour [40], which helps to explain how the strength of habit and behavioural intention can affect a person's behaviour [34]. The remainder of this paper is organised as follows. First, the theoretical framework is discussed, including the hypotheses and conceptual model. This is followed by a description of the research design. Next, the data analysis is presented, followed by a discussion of the results. Lastly, the conclusion summarises the research contributions and opportunities for future work.

2 Literature Review

During the COVID-19 pandemic, the rapid shift towards telecommuting exposed employees to increased cyber threats due to inadequate safeguards. Existing literature discusses various aspects of cybercrime during the pandemic, such as the
increase in cybercrime [33], cybercriminal targets [6,31], a rise in cyberattacks [9,29], effects of cybercrime on individuals [5], and cybercrime behaviour [5,13]. Cybercriminals primarily targeted older individuals and those with psychological vulnerabilities [6,31], exploiting their inexperience with technology and reliance on it for daily tasks during the pandemic. The effects of cybercrime on individuals include anxiety, anger, loss of confidence in technology, and in severe cases, symptoms of Acute Stress Disorder [5]. Unfortunately, individuals who believe they are well informed about cyber risks may feel less vulnerable and not adopt countermeasures [24], potentially putting them at risk [13]. People’s motivation to change their behaviour depends on the perceived severity of the threat, susceptibility to it, and the cost and effort of prevention [2,8,12,23]. While the existing literature provides valuable insights into the effects of cybercrime [5], it does not sufficiently address the impact of pandemic-induced stress, anxiety, and isolation on employees’ cybercrime preventative behaviours. Previous research on employees’ cybercrime preventative behaviours has largely focused on factors such as deterrence theory [2,11,12] and compliance with information security policies (ISPs) [8,23]. These studies have significantly contributed to our understanding of how employees can safeguard information and technology resources and how organisations can devise mechanisms to improve compliance with ISPs. However, there is a noticeable gap in the literature concerning interpersonal factors influencing employees’ cybercrime preventative behaviours, specifically in the context of telecommuting. The majority of existing literature assumes that employees are working in traditional office settings, where face-to-face interactions and well-established social norms contribute to shaping employee behaviours. 
Although some studies have considered the role of virtual status (telecommuting) and ISP compliance [11], the literature largely overlooks the emotional factors that might influence telecommuting employees’ cybercrime preventative behaviours. The stress, anxiety, and isolation experienced by employees working from home may have a considerable impact on their adherence to ISPs, and this area warrants further investigation. Furthermore, the shift in situational factors and the lack of physical interaction with colleagues and organisational support may lead to a decreased sense of responsibility towards protecting sensitive information and adhering to cybersecurity best practices. This gap in the literature highlights the need to explore the interpersonal factors influencing employees’ cybercrime preventative behaviours during the COVID-19 pandemic.

3 Theoretical Framework

This study's conceptual model is built upon the Theory of Interpersonal Behaviour (TIB) [40]. The TIB is a comprehensive cognitive model that expands upon Ajzen and Fishbein's Theory of Reasoned Action [3] and Ajzen's Theory of Planned Behaviour [4], both of which posit that an individual's intention to perform a specific act is the primary determinant of their behaviour. The TIB accounts for additional factors such as habits, facilitating conditions, and affect,
which contribute to its predictive power over other models [27,42]. The TIB emphasises the importance of even small amounts of variance in behaviour, as they can be socially significant [37,42]. The TIB considers interpersonal behaviour a multifaceted phenomenon shaped by attitudes, habits, social factors, affects, and situational constraints. The tri-level model, as seen in Fig. 1, explores how these factors influence the formation of intentions and ultimately predict the performance of specific behaviours. Though less frequently used than other models, the TIB has demonstrated its value in understanding complex human behaviours across various contexts. For instance, [42] used the TIB to study internet abuse in the workplace, while [32,34] applied the model to explain non-work-related computing in the workplace and personal use of the Internet at work. [37] used the TIB to explain software piracy behaviour, and [7] investigated cyberloafing using the TIB in combination with the theory of organisational justice. Other applications include understanding telemedicine adoption by physicians [18] and examining the relationship between computer self-efficacy, anxiety, experience, support, and usage [16]. These diverse applications demonstrate the TIB's usefulness in explaining and understanding complex human behaviours influenced by social and physical environments.

Fig. 1. The theory of interpersonal behaviour, adapted from [34]

Affect. Affect refers to the emotions an individual feels when thinking about a particular behaviour [40, p. 9]. These emotions can be positive or negative, and they can vary in intensity [34,37]. During the pandemic, people experienced various emotional states such as stress, trauma, loneliness, depression, and anxiety [28], which created opportunities for cybercriminals to exploit these psychological vulnerabilities. Victims of cybercrime often face negative consequences, including outrage, anxiety, stress, loss of confidence in technology, and reduced interest in adopting new technologies [5]. Bada and Nurse [5] argue that individuals are more likely to change their behaviour in response to cybercrime when they perceive a high risk and exposure to a threat that could cause harm, but those with sufficient knowledge of cybercrime are less likely to change their behaviour. Understanding the influence of affect on cybercrime prevention behaviours is crucial for developing effective strategies that promote cybersecurity awareness and resilience in telecommuting and stressful environments. Based on the above evaluation, we can formulate the following hypothesis H1:


H1: The affect of an employee towards cybercrimes positively influences their intention to perform cybercrime preventative behaviours.

Social Factors. Social factors comprise the norms, roles, and self-concepts that form through the interactions individuals have with the people around them [37, p. 8–9]. If the people around an individual have been victims of cybercrime, the individual may want to enhance their own protection so that they do not fall victim as well. Research suggests that employees are more likely to engage in secure behaviours if they perceive support from their organisation and colleagues [11]. As employees adjust to remote work, they may look to their colleagues and supervisors for guidance on cybersecurity best practices. Based on the above evaluation, we can formulate the following hypothesis H2:

H2: The social factors that form due to the people around an employee have a positive influence on their intention to perform cybercrime preventative behaviours.

Norms. Norms are beliefs that guide an individual's understanding of which behaviours are acceptable and which are not [40, p. 9]. Norms can influence people to change their behaviour by increasing their desire and pressure to conform to the group's expected behaviour [32]. The presence of norms can create a cue or pressure that increases the likelihood that the individual will behave in accordance with the given norm [25,32]. The pandemic disrupted many aspects of daily life, including the norms that guide employees' understanding of acceptable and unacceptable behaviour [25]. Based on the above evaluation, we can formulate the following hypothesis H3:

H3: The norms related to cybercrime preventative behaviours have a positive relationship with social factors.

Roles. Roles can be understood as the expected behaviour of an individual based on their position within a group. Roles must be considered when seeking to understand how an individual behaves, as individuals with different roles within a group will likely behave differently based on those roles [32]. Therefore, employees' roles within their organisation can also influence their cybercrime preventative behaviours. For instance, individuals in managerial or leadership positions may be more likely to promote secure behaviours among their team members. Based on the above evaluation, we can formulate the following hypothesis H4:

H4: The roles related to cybercrime preventative behaviours have a positive relationship with social factors.

Self-Concept. An individual's self-concept towards a behaviour is likely to affect the amount of social influence perceived by that individual [32]. This is proposed to impact social factors due to the ability of significant and known others to observe behaviours [32]. If an individual believes that it is important to
protect themselves from cybercrimes, they will likely experience pressure from known others to engage in this behaviour. Furthermore, studies have shown that employees who perceive a higher level of control over their cybersecurity behaviours are more likely to engage in such behaviours [2]. Based on the above evaluation, we can formulate the following hypothesis H5:

H5: The self-concepts related to cybercrime preventative behaviours have a positive relationship with social factors.

Perceived Consequences. Perceived consequences refer to the outcomes that a person believes will occur due to a certain behaviour; these outcomes can be either positive or negative [37]. Employees who perceive the potential consequences of cybercrime as severe are more likely to engage in preventative behaviours [2,11]. Based on the above evaluation, we can formulate the following hypothesis H6:

H6: An employee's perceived consequences of cybercrimes have a positive influence on their intention to perform cybercrime preventative behaviours.

Habit. Habit refers to situation-behaviour sequences that have become automatic (occurring without self-instruction or deliberation) in response to something that has happened in the environment [34]. A habit is a form of routine behaviour; the behaviour has become habitual because it is easy, comfortable, or rewarding [15]. In the context of cybersecurity, habitual behaviours such as updating software, using strong passwords, and avoiding phishing emails are crucial for preventing cybercrime. Research has shown that habits are essential predictors of behaviour [40]; as such, the shift to telecommuting, which disrupted many employees' daily routines, may have affected their cybercrime preventative behaviours. A change in habit can also make people change their behaviour, such as when people alter their behaviour after learning about the effects of cybercrimes. By evaluating the above, we can formulate the following hypotheses H7 and H8:

H7: The habit of an employee with regard to cybercrime preventative behaviours has a positive influence on their affect towards cybercrimes.

H8: The habit of an employee with regard to cybercrime has a positive influence on their cybercrime preventative behaviours.

Intention. Intention can be defined as "an individual's conscious plan or self-instruction to carry out a behaviour" [34, p. 8]. It refers to the amount of effort someone is willing to exert for a particular behaviour [37]. After experiencing or learning about the effects of cybercrime, a person might be more motivated to change their behaviour to better protect themselves. The intention to perform a behaviour often has a strong influence on the individual actually carrying out that behaviour [32]. If an individual has both a habit of being aware of cybercrime and cybersecurity and an intention to improve their behaviour in this regard, they are likely to do so [32]. Based on this evaluation, we can formulate the following hypothesis H9:


H9: An employee's intention to perform cybercrime preventative behaviours has a positive influence on them actually performing those behaviours.

Facilitating Conditions. Facilitating conditions refer to factors in an individual's environment that make a behaviour easier to perform [34]. An individual may have the intention to perform a behaviour, but they might lack the environment (i.e., the facilitating conditions) in which to do so, or their environment may hinder them from performing that behaviour [37]. For instance, people who want to protect themselves from cybercrimes may not have the appropriate environment or skills to do so. The COVID-19 pandemic significantly impacted employees' social and physical environments [25], as they were forced to telecommute during strict lockdowns. Remote work during the pandemic presented unique challenges in terms of facilitating conditions for cybercrime prevention, and employees may have had limited access to information and resources. Based on the above evaluation, we can formulate the following hypothesis H10:

H10: An employee's access to information on cybercrime preventative behaviours while telecommuting during the pandemic had a positive influence on them actually performing those behaviours.
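The ten hypothesised paths form a small directed graph between constructs. As an illustration only (not part of the original study's materials), the structure of the conceptual model can be encoded so that the hypothesised direct predictors of any construct can be queried:

```python
# Hypothesised TIB paths H1-H10, encoded as hypothesis -> (source, target).
# Illustrative sketch of the conceptual model; construct names follow the paper.
HYPOTHESES = {
    "H1": ("Affect", "Intention"),
    "H2": ("Social Factors", "Intention"),
    "H3": ("Norms", "Social Factors"),
    "H4": ("Roles", "Social Factors"),
    "H5": ("Self-Concept", "Social Factors"),
    "H6": ("Perceived Consequences", "Intention"),
    "H7": ("Habit", "Affect"),
    "H8": ("Habit", "Behaviour"),
    "H9": ("Intention", "Behaviour"),
    "H10": ("Facilitating Conditions", "Behaviour"),
}

def predictors_of(construct):
    """Constructs hypothesised to directly influence the given construct."""
    return sorted(src for src, dst in HYPOTHESES.values() if dst == construct)
```

For example, `predictors_of("Behaviour")` yields Facilitating Conditions, Habit, and Intention, matching H8-H10.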

4 Research Design

A deductive approach was used to study the influence of interpersonal factors on telecommuting employees' cybercrime preventative behaviour during COVID-19, allowing the causal relationships hypothesised by the theoretical framework to be tested. The research instrument, accessible at [43], consisted of two parts. The first part collected demographic data such as age, gender, highest level of education, duration of working from home, and experiences with cybercrime, with questions adapted from [26]. The second part assessed the theoretical constructs, including norms, roles, self-concept, social factors, affect, perceived consequences, habit, facilitating conditions, intention, and behaviour, with items adapted from [7,32,34]. These constructs were evaluated using a 5-point Likert scale, with options ranging from "strongly agree" to "strongly disagree". The instrument was pilot tested with 11 participants who were a subset of the population. Based on feedback from the participants, small adjustments were made to the length and layout of the instrument. To address method bias [35], we implemented several procedural guidelines, including using clear and concise language (confirmed by the pilot study), labelling all scale points, and assuring participants that their responses would remain anonymous and confidential. The study was granted ethical clearance prior to data collection commencing. An online survey was set up using the Qualtrics platform and distributed via the Prolific platform. A probability sampling strategy was used to target employees living in South Africa during the COVID-19 pandemic (2020–2022). Prolific filters were used to target the desired sample and restrict participants to those
with a 100% completion rate on the platform. Data was collected in August 2022, and participants were compensated for their time according to Prolific guidelines.
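Before analysis, Likert responses are typically coded numerically. The sketch below is an illustration of such coding; the exact response labels (e.g. the midpoint wording), scoring direction, and any reverse-coded items used in this study are assumptions, not details from the paper:

```python
# Illustrative coding of 5-point Likert labels to numeric scores.
# The "neutral" midpoint label and the 1-5 direction are assumptions.
LIKERT_SCORES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def score_item(responses, reverse=False):
    """Map label responses to 1-5 scores; reverse-code negatively worded items."""
    scores = [LIKERT_SCORES[r.strip().lower()] for r in responses]
    return [6 - s for s in scores] if reverse else scores
```

Reverse-coding (the `6 - s` step) keeps negatively worded items pointing in the same direction as the rest of a construct's indicators before reliability statistics are computed.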

5 Analysis and Results

To analyse the data, IBM SPSS was used for initial data cleaning and descriptive analysis. Following this, SmartPLS was used for hypothesis testing using Partial Least Squares Structural Equation Modelling (PLS-SEM). PLS-SEM is "particularly suited to situations in which constructs are measured by a very large number of indicators and where maximum likelihood covariance-based SEM tools reach their limit" [19, p. 283]. PLS-SEM is frequently used for measuring the relationships between constructs in model-based research using latent variables. Despite its widespread use, it is acknowledged that the method is not without controversy [38,39]. Information systems, the discipline in which this work is based, does not appear to have taken a firm view on the use of the method. Nevertheless, we have endeavoured to apply methodological guidelines to ensure rigour in the following analysis. The survey was completed by 209 respondents. Respondents were aged from 18 to 60+ years, with the majority (62.2%) being between the ages of 21 and 30. By gender, there were 66.5% female, 32.1% male, and 1.4% non-binary/third-gender respondents. Most respondents (58.9%) had a bachelor's degree. Almost half of the respondents (48.3%) had been working from home for 0–12 months, with 34.9% working from home for 13–24 months. Most respondents had some experience with cybercrime, whether as a victim themselves (30.6%) or knowing someone who had been a victim (45.0%).

5.1 Measurement Model

The first part of the PLS-SEM analysis examined the measurement model for reliability and validity. The model was assessed using outer loadings, Cronbach's alpha, composite reliability (CR), and average variance extracted (AVE) to evaluate internal consistency, indicator reliability, and convergent validity. Following PLS-SEM guidelines, indicators with outer loadings below 0.7 were evaluated and removed if this improved CR and AVE. All constructs were retained, with sufficiently reliable indicators to proceed, as shown in Table 1. All Cronbach's alpha values were above the 0.7 threshold. It is argued that CR is a better indicator of reliability in a multi-construct model [36]; all CR values were also above the 0.7 threshold. For completeness, we present both measures. Lastly, the AVE was examined for convergent validity, where a threshold of 0.5 is recommended [21,36]; all AVE values were above the threshold.

Table 1. Construct reliability and convergent validity

Construct                Cronbach's Alpha  Composite Reliability  Average Variance Extracted
Norms                    0.870             0.901                  0.605
Roles                    0.836             0.901                  0.752
Self-Concept             0.800             0.882                  0.714
Social Factors           0.831             0.881                  0.600
Affect                   0.875             0.898                  0.561
Perceived Consequences   0.863             0.892                  0.581
Habit                    0.936             0.945                  0.539
Facilitating Conditions  0.874             0.914                  0.726
Intention                0.947             0.966                  0.904
Behaviour                0.708             0.872                  0.772

Next, discriminant validity was assessed, which is "the degree to which the measures of different constructs differ from one another" [41, p. 19]. Both the Fornell-Larcker and Heterotrait-Monotrait Ratio (HTMT) criteria were satisfied. The HTMT criterion recommends values equal to or less than 0.9 [22]. All values except one (at 0.943) were below this threshold. This does not disprove that discriminant validity has been achieved, because the HTMT measure is not a strict requirement for assessing discriminant validity; the sample size could be a limitation that contributed to the value exceeding the 0.9 threshold. Importantly, the value is not greater than 1, and thus overall adequate discriminant validity was achieved.
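The reliability and validity statistics in Table 1 follow standard closed-form definitions. As a minimal illustration (not the SmartPLS implementation used in the study), Cronbach's alpha can be computed from raw item scores, while CR and AVE can be computed from standardised outer loadings:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars / total_var)

def composite_reliability(loadings):
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    lam = np.asarray(loadings, dtype=float)
    squared_sum = lam.sum() ** 2
    return squared_sum / (squared_sum + (1 - lam ** 2).sum())

def ave(loadings):
    """Average variance extracted: mean of squared standardised loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())
```

For instance, four indicators each loading 0.8 give AVE = 0.64 and CR ≈ 0.88, both above the usual 0.5 and 0.7 thresholds cited in the text.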

5.2 Structural Model

With the reliability and validity of the model established, the structural model was assessed. This reflects the paths (relationships) between the different constructs that have been hypothesised. To evaluate the structural model, the values of the path coefficients were used, as well as the coefficient of determination (R²). The analysis used the recommended 5,000 bootstrap samples and a two-tailed bias-corrected and accelerated bootstrapping procedure, with a significance level of 0.05 [20,21,36]. R² represents the amount of variance in an endogenous construct explained by the exogenous constructs linked to it. R² values range between 0 and 1, with higher values indicating better predictive accuracy. A common threshold for R² is around 0.1, but this depends on the type of study; values for predicting human behaviour are usually lower than normal [17]. Therefore, even though affect has a value less than 0.1, some predictive power is still achieved. R² values were calculated as social factors (0.151), affect (0.066), intention (0.137), and behaviour (0.680), respectively. Hypothesis testing indicated that two hypotheses were not supported (H4, H5), three were significant at the 5% level (H1, H2, H10), and five were significant at the 1% level (H3, H6, H7, H8, H9). These results are summarised in Table 2.
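To make the bootstrapping step concrete, the sketch below shows a plain percentile bootstrap of a single standardised path. This is an illustration only: SmartPLS resamples and re-estimates the entire PLS model and applies bias correction and acceleration, whereas here the "path" is simply the correlation between one predictor and one outcome. A path is judged significant at the 5% level if the 95% interval excludes zero:

```python
import numpy as np

def bootstrap_path(x, y, n_boot=5000, seed=0):
    """Percentile bootstrap of a single standardised path coefficient.

    With one predictor, the standardised coefficient equals the Pearson
    correlation, so we resample respondents with replacement and recompute it.
    """
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    coefs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample respondents
        coefs[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.percentile(coefs, [2.5, 97.5])    # two-tailed, alpha = 0.05
    return coefs.mean(), (lo, hi)
```

If the returned interval `(lo, hi)` does not contain zero, the hypothesised path would be retained; otherwise it would be rejected, as happened for H4 and H5.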

Table 2. Overview of findings

Hypothesis                                Path Coefficient  T-Value  P-Value  Supported?
H1: Affect → Intention                    0.170             2.293    0.022    Yes
H2: Social Factors → Intention            0.171             2.216    0.027    Yes
H3: Norms → Social Factors                0.289             2.611    0.009    Yes
H4: Roles → Social Factors                0.067             0.716    0.474    No
H5: Self-Concept → Social Factors         0.067             0.589    0.556    No
H6: Perceived Consequences → Intention    0.167             3.071    0.002    Yes
H7: Habit → Affect                        0.257             3.992    0.000    Yes
H8: Habit → Behaviour                     0.552             9.695    0.000    Yes
H9: Intention → Behaviour                 0.308             4.396    0.000    Yes
H10: Facilitating Conditions → Behaviour  0.114             2.480    0.013    Yes

From the results, it can be concluded that there is a strong relationship between habit and affect, and a moderate relationship between affect, social factors, and intention. This is consistent with the results of Woon et al. [42]. Consistent with Moody and Siponen [32], there is a strong relationship between norms and social factors. However, contrary to their results, there was a weak relationship between roles and social factors, and this hypothesis was rejected. This could be because the prior literature examined non-work-related personal use of the Internet at work. It appears that cybercrime preventative behaviours in specific roles at work do not help to predict the social factors affecting employees. Self-concept was also shown to have a weak effect on social factors (p = 0.556). Again, this was not consistent with Moody and Siponen [32], perhaps indicating that people's beliefs about cybercrime preventative behaviours do not imply that they will experience pressure from others as to how they should behave. Habit is a strong predictor of behaviour, and there is also a strong relationship between intention and behaviour. This is consistent with Moody and Siponen [32] and Pee et al. [34]. There is a strong relationship between perceived consequences and intention and between facilitating conditions and behaviour, consistent with Pee et al. [34].

6 Discussion

Due to COVID-19, many businesses needed to adopt telecommuting. This coincided with an increase in cybercrime, as more people were now using the Internet and devices to perform their work duties; phishing, for example, increased by 667% during this time [33]. These cybercrimes increased people's anxiety, stress, and anger during COVID-19, and it has been found that people will change their behaviour towards cybercrimes if their perceived risk is high or if they or someone they know falls victim to a cybercrime [5]. To reduce the risk of falling victim to a cybercrime, people can perform
cybercrime preventative behaviours, which reduce the opportunity for cybercrime because it takes cybercriminals more time and effort to obtain the person's information [14]. Assessing the demographic statistics first, one can observe that the majority of respondents either experienced cybercrimes themselves or know of someone who has experienced cybercrimes (75.6% combined). This is consistent with the literature indicating that cybercrime increased during COVID-19 [33]. Respondents mostly agree or strongly agree that they intend to, will, and expect to perform cybercrime preventative behaviours while telecommuting in the future. This is consistent with the finding that people will change their behaviour towards cybercrimes after becoming a victim or through the influence of the people around them [5]. Assessing the structural model, one can first observe that, as predicted by the TIB, the intention to engage in cybercrime preventative behaviour strongly predicts actual behaviour. One can also observe that an individual's habits regarding cybercrime preventative measures strongly predict actual behaviour, showing how an individual's habit can strongly predict whether they will engage in that specific behaviour. These relationships are supported by prior literature [32], though in the context of non-work-related personal use of work computers. The facilitating conditions surrounding an individual appear to predict whether they will perform cybercrime preventative measures. In addition, the affect of a respondent predicted whether they intend to perform cybercrime preventative behaviours in the future. Interestingly, the hypotheses for both roles and self-concept as predictors of social factors were rejected. This is contrary to findings where self-concept strongly predicted the social factors surrounding non-work-related personal use of work computers [32]. This means that both the roles and self-concepts related to cybercrime preventative behaviours do not predict the surrounding social factors well.

7 Conclusion

This study explored the interpersonal factors influencing telecommuting employees' cybercrime preventative behaviours during the COVID-19 pandemic. The TIB was used as a foundation for the research. Intention and habit were found to be significant, positively correlated predictors of behaviour, as predicted. Facilitating conditions were also found to be a positive predictor of behaviour, although not as strong as habit and intention; this was also predicted. Affect, perceived consequences, and social factors were all found to be positive predictors of intention, as was also predicted. These findings indicate that an individual's intention and current habits regarding cybercrime prevention strongly influence their engagement in such behaviours. Access to relevant resources also plays a role in the decision to engage in cybercrime preventative actions. Moreover, an individual's perceived consequences of not engaging
in cybercrime preventative behaviours, their emotions towards cybercrimes, and the influence of their social environment impact their intention to engage in such behaviours. Organisations should therefore prioritise a positive emotional environment around cybersecurity among their telecommuting employees, establish strong cybersecurity social norms within remote work settings, and raise awareness of cybercrime consequences. Additionally, they should promote the development of robust cybersecurity habits tailored to telecommuting and ensure that remote employees have easy access to the necessary resources and tools for preventing cybercrime. This study provides a testable framework that could be enhanced in future work. First, the study adopted self-reported cybercrime preventative behaviours as the dependent variable, which may introduce self-report bias. Future studies could devise measures of actual cybercrime preventative behaviours. Second, the study focused specifically on the cybercrime preventative behaviours of telecommuting employees within South Africa. As a result, its findings should be cautiously generalised to a broader population; this opens opportunities for future studies to investigate cybercrime preventative behaviours within different contexts. Further exploration is required to understand the limited role of roles and self-concept in predicting social factors.

References

1. Africa online: Internet access spreads during the pandemic. https://news.gallup.com/poll/394811/africa-online-internet-access-spreads-during-pandemic.aspx. Accessed 21 Apr 2023
2. Herath, T., Rao, H.R.: Protection motivation and deterrence: a framework for security policy compliance in organisations. Eur. J. Inf. Syst. 18(2), 106–125 (2009)
3. Ajzen, I., Fishbein, M.: Theory of reasoned action in understanding attitudes and predicting social behaviour. J. Soc. Psychol. (1980)
4. Ajzen, I.: From intentions to actions: a theory of planned behavior. In: Kuhl, J., Beckmann, J. (eds.) Action Control. SSSP, pp. 11–39. Springer, Heidelberg (1985). https://doi.org/10.1007/978-3-642-69746-3_2
5. Bada, M., Nurse, J.R.: The social and psychological impact of cyberattacks. In: Emerging Cyber Threats and Cognitive Vulnerabilities, pp. 73–92. Elsevier (2020)
6. Benbow, S.M., Bhattacharyya, S., Kingston, P., Peisah, C.: Invisible and at-risk: older adults during the COVID-19 pandemic. J. Elder Abuse Negl. 34(1), 70–76 (2022)
7. Betts, T.K., Setterstrom, A.J., Pearson, J.M., Totty, S.: Explaining cyberloafing through a theoretical integration of theory of interpersonal behavior and theory of organizational justice. J. Organ. End User Comput. (JOEUC) 26(4), 23–42 (2014)
8. Bulgurcu, B., Cavusoglu, H., Benbasat, I.: Information security policy compliance: an empirical study of rationality-based beliefs and information security awareness. MIS Q. 34, 523–548 (2010)
9. Chigada, J., Madzinga, R.: Cyberattacks and threats during COVID-19: a systematic literature review. S. Afr. J. Inf. Manage. 23(1), 1–11 (2021)
10. Cimpanu, C.: FBI says cybercrime reports quadrupled during COVID-19 pandemic. ZDNet (2020)
11. D'Arcy, J., Herath, T.: A review and analysis of deterrence theory in the IS security literature: making sense of the disparate findings. Eur. J. Inf. Syst. 20(6), 643–658 (2011). https://doi.org/10.1057/ejis.2011.23
12. D'Arcy, J., Hovav, A., Galletta, D.: User awareness of security countermeasures and its impact on information systems misuse: a deterrence approach. Inf. Syst. Res. 20(1), 79–98 (2009). https://doi.org/10.1287/isre.1070.0160
13. De Kimpe, L., Walrave, M., Verdegem, P., Ponnet, K.: What we think we know about cybersecurity: an investigation of the relationship between perceived knowledge, internet trust, and protection motivation in a cybercrime context. Behav. Inf. Technol. 41(8), 1796–1808 (2022)
14. Drew, J.M.: A study of cybercrime victimisation and prevention: exploring the use of online crime prevention behaviours and strategies. J. Criminol. Res. Policy Pract. 6, 17–33 (2020)
15. Egmond, C., Bruel, R.: Nothing is as practical as a good theory. Analysis of theories and a tool for developing interventions to influence energy-related behaviour (2007)
16. Fagan, M.H., Neill, S., Wooldridge, B.R.: An empirical investigation into the relationship between computer self-efficacy, anxiety, experience, support and usage. J. Comput. Inf. Syst. 44(2), 95–104 (2004)
17. Frost, J.: How high does R-squared need to be. Statistics by Jim (2021)
18. Gagnon, M.P., et al.: An adaptation of the theory of interpersonal behaviour to the study of telemedicine adoption by physicians. Int. J. Med. Inform. 71(2–3), 103–115 (2003)
19. Haenlein, M., Kaplan, A.M.: A beginner's guide to partial least squares analysis. Underst. Stat. 3(4), 283–297 (2004). https://doi.org/10.1207/s15328031us0304_4
20. Hair, J.F., Hult, G.T.M., Ringle, C.M., Sarstedt, M.: A primer on partial least squares structural equation modeling (2014)
21. Hair Jr., J.F., Hult, G.T.M., Ringle, C.M., Sarstedt, M., Danks, N.P., Ray, S.: Partial Least Squares Structural Equation Modeling (PLS-SEM) Using R. CCB, Springer, Cham (2021). https://doi.org/10.1007/978-3-030-80519-7
22. Henseler, J., Ringle, C.M., Sarstedt, M.: A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 43, 115–135 (2015)
23. Ifinedo, P.: Understanding information systems security policy compliance: an integration of the theory of planned behavior and the protection motivation theory. Comput. Secur. 31, 83–95 (2012). https://doi.org/10.1016/j.cose.2011.10.007
24. Ifinedo, P.: Effects of security knowledge, self-control, and countermeasures on cybersecurity behaviors. J. Comput. Inf. Syst. 63(2), 380–396 (2023)
25. Kautondokwa, P., Ruhwanya, Z., Ophoff, J.: Environmental uncertainty and end-user security behaviour: a study during the COVID-19 pandemic. In: Drevin, L., Miloslavskaya, N., Leung, W.S., von Solms, S. (eds.) WISE 2021. IAICT, vol. 615, pp. 111–125. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-80865-5_8
26. Li, L., He, W., Xu, L., Ash, I., Anwar, M., Yuan, X.: Investigating the impact of cybersecurity policy awareness on employees' cybersecurity behavior. Int. J. Inf. Manage. 45, 13–24 (2019). https://doi.org/10.1016/j.ijinfomgt.2018.10.017
27. Limayem, M., Khalifa, M., Chin, W.W.: Factors motivating software piracy: a longitudinal study. IEEE Trans. Eng. Manage. 51(4), 414–425 (2004)
28. Ma, K.W.F., McKinnon, T.: COVID-19 and cyber fraud: emerging threats during the pandemic. J. Fin. Crime 29(2), 433–446 (2022)
29. Minnaar, A., Herbig, F.J.: Cyberattacks and the cybercrime threat of ransomware to hospitals and healthcare services during the COVID-19 pandemic. Acta Criminologica: Afr. J. Criminol. Vict. 34(3), 155–185 (2021)
30. Mokhtarian, P.L.: Defining telecommuting (1991)
31. Monteith, S., Bauer, M., Alda, M., Geddes, J., Whybrow, P.C., Glenn, T.: Increasing cybercrime since the pandemic: concerns for psychiatry. Curr. Psychiatry Rep. 23, 1–9 (2021)
32. Moody, G.D., Siponen, M.: Using the theory of interpersonal behavior to explain non-work-related personal use of the internet at work. Inf. Manage. 50(6), 322–335 (2013)
33. Naidoo, R.: A multi-level influence model of COVID-19 themed cybercrime. Eur. J. Inf. Syst. 29(3), 306–321 (2020)
34. Pee, L.G., Woon, I.M., Kankanhalli, A.: Explaining non-work-related computing in the workplace: a comparison of alternative models. Inf. Manage. 45(2), 120–130 (2008)
35. Podsakoff, P.M., MacKenzie, S.B., Podsakoff, N.P.: Sources of method bias in social science research and recommendations on how to control it. Annu. Rev. Psychol. 63(1), 539–569 (2012). https://doi.org/10.1146/annurev-psych-120710-100452
36. Ringle, C.M., Wende, S., Becker, J.M., et al.: SmartPLS 3 (2015)
37. Robinson, J.: Triandis' theory of interpersonal behaviour in understanding software piracy behaviour in the South African context. Ph.D. thesis, University of the Witwatersrand, Johannesburg (2010)
38. Rönkkö, M., Lee, N., Evermann, J., McIntosh, C., Antonakis, J.: Marketing or methodology? Exposing the fallacies of PLS with simple demonstrations. Eur. J. Mark. 57(6), 1597–1617 (2023). https://doi.org/10.1108/EJM-02-2021-0099
39. Rönkkö, M., McIntosh, C.N., Antonakis, J., Edwards, J.R.: Partial least squares path modeling: time for some serious second thoughts. J. Oper. Manag. 47–48(1), 9–27 (2016). https://doi.org/10.1016/j.jom.2016.05.002
40. Triandis, H.: Interpersonal Behavior. Brooks/Cole Publishing Company (1977). https://books.google.co.za/books?id=Sz63AAAAIAAJ
41. Urbach, N., Ahlemann, F.: Structural equation modeling in information systems research using partial least squares. J. Inf. Technol. Theory Appl. (JITTA) 11(2), 2 (2010)
42.
Woon, I.M., Pee, L.G.: Behavioral factors affecting internet abuse in the workplacean empirical investigation. In: SIGHCI 2004 Proceedings, p. 5 (2004) 43. Wright, T., Ruhwanya, Z., Ophoff, J.: Measurement items - the influence of interpersonal factors on telecommuting employees’ cybercrime preventative behaviours (2023). https://doi.org/10.6084/m9.figshare.23465885

Research Methods

A Review of Constructive Alignment in Information Security Educational Research

Vuyolwethu Mdunyelwa1(B), Lynn Futcher1, and Johan van Niekerk1,2

1 Nelson Mandela University, Port Elizabeth, South Africa
{vuyolwethu.mdunyelwa,lynn.futcher,johan.vanniekerk}@mandela.ac.za
2 Noroff University College, Kristiansand, Norway
[email protected]

Abstract. Cybersecurity is a growing concern in today’s digitally connected world. With the increasing reliance on technology, it is essential to develop effective educational interventions that enhance users’ knowledge and behaviour in both information security and cybersecurity. However, there is limited research on whether these interventions adhere to sound pedagogical approaches. This study evaluates the approaches taken when creating information security and cybersecurity educational interventions by analysing past Human Aspects of Information Security and Assurance (HAISA) and World Conference on Information Security Education (WISE) papers from 2018 to 2022. A thematic content analysis was conducted to determine whether the papers from these conferences over the 5-year period considered any of the elements of constructive alignment when designing their interventions. The results reveal that the majority of studies focus primarily on learning activities and assessment tasks, rather than incorporating all elements of constructive alignment. Consequently, positive outcomes on assessment tasks may not necessarily indicate that the intended learning outcomes have been addressed. This research highlights the importance of adopting sound pedagogical approaches in the design of information security and cybersecurity educational interventions to ensure that all intended learning outcomes are adequately addressed and measured before the learning process begins.

Keywords: Information Security · Educational Interventions · Educational Pedagogy · Constructive Alignment

1 Introduction

Information security and cybersecurity play a crucial role in the field of information technology. As a result, securing information has become a significant challenge in today’s world [9]. Efforts to enhance information security involve equipping users with relevant knowledge, which aims to improve their understanding of both information security and cybersecurity [21]. Programs that strive to improve information security and cybersecurity knowledge include education, training, or awareness initiatives [13]. Issues with such programs include a lack of adherence to sound educational pedagogies,

© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
S. Furnell and N. Clarke (Eds.): HAISA 2023, IFIP AICT 674, pp. 461–469, 2023. https://doi.org/10.1007/978-3-031-38530-8_36


which underpin the creation of effective educational experiences for targeted users [1]. This research aimed to review past papers focused on information security and cybersecurity interventions to determine whether they considered any of the elements relating to constructive alignment in the design of these interventions. This was done by means of a thematic content analysis of past Human Aspects of Information Security and Assurance (HAISA) and World Conference on Information Security Education (WISE) conference papers from 2018 to 2022. This paper begins with a review of related literature in Sect. 2, followed by Sect. 3, which outlines the research methodology undertaken for this study. Section 4 describes the research process undertaken, while Sect. 5 presents the results of the thematic content analysis. Section 6 provides a brief discussion, and Sect. 7 concludes this paper.

2 Related Literature

The rise in the use of information technology has also increased the prevalence of cybersecurity attacks among cyberspace users. One way of keeping users safe when engaging with internet services is the introduction of programs that aim to equip cyberspace users with relevant cybersecurity knowledge. Such programs include education, training, and awareness programs [23]. These interventions can be used to address different aspects of information and cybersecurity relevant to the needs of various users, such as programs focusing on fostering a cybersecurity culture for end-users within an organisation through awareness programs, or introducing secure coding clinics to improve secure coding knowledge and behaviour among programmers [4, 19]. Consequently, the foundation of such programs focuses on equipping users with the relevant knowledge, skills and abilities to address their information security behaviour [20]. However, any form of information and cybersecurity educational research approach should align with sound educational pedagogical approaches when developing their educational programs [13]. If the underlying basic educational pedagogy, which underpins the educational programs, is overlooked or ignored, the specific cybersecurity threats that the various programs seek to address will not be effectively mitigated [14]. Educational theories recommend that the intentions for the teaching and learning process, along with the required resources and output, should be aligned during the design of any curriculum. This allows for measuring whether the intended output has been achieved at the end of the educational process [7]. Educational theories exist that enable the design for teaching to provide students with a way to express their learning, with clearly stated objectives before the teaching and learning process takes place [2].
A widely used theory for clearly stating learning objectives in relation to students’ learning before the teaching and learning process begins, allowing for the measurement of such objectives, is constructive alignment [2, 14].

2.1 Constructive Alignment

The creation of academic programs requires the integration of curricular documents with academic content development for teaching and learning. This integration aligns


the curricular statements to determine whether students’ learning was effective [14]. The integration takes an approach that connects theory to policy and practice, with the results of student learning measured at the end of the educational process [8]. Various approaches can be used to design educational programs that allow for measuring students’ learning in relation to the intended learning outcomes; however, a widely recommended approach is constructive alignment [2]. Constructive alignment is a pedagogical framework of teaching and assessment that enables learners to construct their own learning through relevant learning activities [2]. In this pedagogical framework, the teacher’s job is to create a learning environment that supports the learning activities appropriate for achieving the desired learning outcomes [2]. What is essential is that all elements in the pedagogical framework, such as the curriculum, intended learning outcomes, teaching methods used, and assessments, are aligned with each other [2, 14], as seen in Fig. 1.

Fig. 1. Constructive Alignment Elements [10].

Constructive alignment occurs at different levels: course level and unit level. Each of the units in a program should address the course learning outcomes [2]. The intended learning outcomes in a unit should be specific to the content and learning activities of that unit. It is crucial to identify the intended learning outcomes and design assessment tasks to measure the attainment of these learning outcomes [14]. The learning activities should be well-planned to enable students to develop the skills, knowledge, and understanding described in the intended learning outcomes and measured by the assessments [2]. It is essential to choose the content required to support the learning activities for the students [14]. An example can be seen when creating a module focusing on secure coding, which is an aspect of information security and cybersecurity. This example would include course learning outcomes which are specific to the module, and intended learning outcomes which are specific to the various units of the module. These intended learning outcomes should include learning activities which the students should engage with, and the content or resources which assist them in completing the learning activities. The assessment tasks should also be identified to measure whether the intended learning outcomes have been met by the students.
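The alignment idea in this example can be made concrete with a small sketch (Python; the Unit structure, the function, and the secure-coding unit below are our own illustration, not drawn from any reviewed study). It flags intended learning outcomes that no learning activity develops or no assessment task measures:

```python
from dataclasses import dataclass

@dataclass
class Unit:
    """One learning unit with its intended learning outcomes (ILOs)."""
    name: str
    ilos: set            # intended learning outcomes
    activities: dict     # learning activity -> set of ILOs it develops
    assessments: dict    # assessment task -> set of ILOs it measures

def alignment_gaps(unit: Unit) -> dict:
    """Return ILOs not developed by any activity or not measured by any assessment."""
    taught = set().union(*unit.activities.values()) if unit.activities else set()
    measured = set().union(*unit.assessments.values()) if unit.assessments else set()
    return {"untaught": unit.ilos - taught, "unmeasured": unit.ilos - measured}

# A secure-coding unit in which output encoding is assessed but never practised.
unit = Unit(
    name="Secure Coding: Input Handling",
    ilos={"validate input", "encode output"},
    activities={"code clinic": {"validate input"}},
    assessments={"practical test": {"validate input", "encode output"}},
)
print(alignment_gaps(unit))  # {'untaught': {'encode output'}, 'unmeasured': set()}
```

A misalignment of this kind is exactly the concern raised later in the paper: the assessment can be passed without the corresponding learning activity ever taking place.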


3 Research Methodology

This study aimed to review past papers focused on information security and cybersecurity interventions to determine whether they considered any of the elements relating to constructive alignment in the design of these interventions. While numerous conferences and journals focus on information and cybersecurity education research, the researchers chose to focus on the proceedings from two main International Federation for Information Processing (IFIP) Technical Committee (TC) 11 conferences, namely the WISE and HAISA conferences, over a 5-year period from 2018 to 2022. WISE is the official conference of IFIP TC 11 Working Group 8, which focuses on Information Security Education, while HAISA is the official conference of IFIP TC 11 Working Group 12 on Human Aspects of Information Security and Assurance. This focused approach allows for a more in-depth exploration of the themes within the chosen conferences. While analysing content from conferences may limit the generalisability of the findings, it also provides valuable insights into the specific context and content of these conferences.

A thematic content analysis was deemed relevant for this study since it involves counting and categorising the occurrences of specific terms, phrases, or themes in the text [6]. This type of content analysis is useful when the aim is to identify patterns or trends in the data [11]. While various approaches for conducting a thematic content analysis exist, this study follows a six-step process, namely familiarisation, coding, generating themes, reviewing themes, defining and naming themes, and writing up, as described below [11, 17]:

STEP 1 - Familiarisation: The initial stage involves comprehending all the gathered data to establish a general perspective for examining the specific elements.

STEP 2 - Coding: This step refers to highlighting sections of text or phrases, resulting in the creation of labels to describe their codes.

STEP 3 - Generating Themes: At this stage, themes are generated from patterns identified in the codes that have been created.

STEP 4 - Reviewing Themes: This step includes comparing the data set to the generated themes to ensure an accurate representation of the data.

STEP 5 - Defining and Naming Themes: During this phase, defining themes involves the exact formulation of what they mean, while naming themes involves coming up with brief and easily understandable names for each theme.

STEP 6 - Writing Up: This final step includes writing the analysis of the data, covering data collection and how the thematic content analysis was conducted.

The six-step content analysis process was implemented while conducting the thematic content analysis to determine whether the identified studies considered any of the elements relating to constructive alignment in the design of their interventions.
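The counting-and-categorising core of Steps 2 and 3 can be sketched as a miniature coder (Python; the code book and the sample text below are hypothetical illustrations, not the instrument used in this study):

```python
from collections import Counter

# Hypothetical code book: phrases highlighted during coding (Step 2),
# mapped to the theme each one indicates (Step 3).
CODE_BOOK = {
    "learning outcome": "Intended Learning Outcomes",
    "course outcome": "Course Learning Outcomes",
    "quiz": "Assessment Tasks",
    "exercise": "Learning Activities",
    "lecture notes": "Content/Resources",
}

def code_text(text: str) -> Counter:
    """Count how often each theme's indicator phrases occur in a text."""
    text = text.lower()
    themes = Counter()
    for phrase, theme in CODE_BOOK.items():
        themes[theme] += text.count(phrase)
    return themes

paper = ("The intervention consisted of a practical exercise followed by "
         "a quiz; no explicit learning outcome was stated.")
themes = code_text(paper)
print(themes["Assessment Tasks"], themes["Learning Activities"])  # 1 1
```

In the actual study the coding was done manually by the researchers; the sketch only illustrates why a fixed code book makes the occurrence counts reproducible.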

4 Research Process

This research took a phased approach in reviewing literature to identify studies for information and cybersecurity interventions. A total of 75 studies were identified, focusing on education, awareness, or training. However, this was filtered down to those which focused


specifically on educational interventions, resulting in 14 studies. These 14 studies were further filtered down to those that focused on formal information and cybersecurity educational interventions to enhance users’ knowledge and behaviour, resulting in a final data set of 10 studies as listed in Table 1. Each paper is represented by a paper code which is built from the Conference (C), the corresponding year (18 for 2018; 19 for 2019; 20 for 2020; 21 for 2021 and 22 for 2022), the specific conference, i.e., WISE (W) or HAISA (H), and the sequential number in the list presented (1 to 10). Therefore, a HAISA paper from 2018 which is number one in the list has a paper code of C18H1. These 10 studies were used to conduct the thematic content analysis, which focused on the five elements of constructive alignment, namely: Course Learning Outcomes; Intended Learning Outcomes; Assessment Tasks; Learning Activities; and Content/Resources. The results of the content analysis are presented in Sect. 5.

The thematic content analysis followed the six-step process discussed in Sect. 3:

STEP 1 - Familiarisation: This step was achieved by reviewing the 10 identified studies to recognise elements of constructive alignment in each study, or terms that relate to those of constructive alignment.

STEP 2 - Coding: This step involved highlighting different phrases or text in the 10 identified studies that relate to the five elements of constructive alignment.

STEP 3 - Generating Themes: This step was achieved by identifying the elements of constructive alignment which the identified codes should be reviewed against in the following step.

STEP 4 - Reviewing Themes: This step included a comparison between the identified phrases from the studies and the five elements of constructive alignment. Some studies phrased learning outcomes as gaps, and referred to the intervention or educational intervention as the learning activities.
STEP 5 - Defining and Naming Themes: This step involved conducting a review of the identified studies using metrics to indicate whether the studies adhere to some of the elements of constructive alignment. STEP 6 - Writing Up: The final step is the analysis of the data, which forms part of the discussion in Sect. 6. This six-step process was applied to the 10 identified studies to determine whether these studies considered any of the elements of constructive alignment when designing their information and cybersecurity interventions. The findings of this research will contribute to the understanding of the design for cybersecurity educational interventions and will provide insights into the importance of constructive alignment when designing information and cybersecurity interventions.

5 Results The results of the thematic content analysis as described in Sect. 4 are presented in Table 2 which clearly indicates the ten studies analysed and their corresponding mapping to the five elements of constructive alignment. From the ten studies analysed, the majority of them do not adhere to the five elements of constructive alignment in the design of their information and cybersecurity interventions.

Table 1. Thematic content analysis data set.

Paper Code | Paper Title | Year | Reference
C18H1 | An Educational Intervention Towards Safe Smartphone Usage | 2018 | [22]
C18H2 | Social Networking: A Tool for Effective Cybersecurity Education in Cyber-Driven Financial Transactions | 2018 | [15]
C18H3 | Using the IKEA Effect to Improve Information Security Policy Compliance | 2019 | [18]
C20H4 | SherLOCKED: A Detective-Themed Serious Game for Cyber Security Education | 2021 | [12]
C21H5 | A Wolf, Hyena, and Fox Game to Raise Cybersecurity Awareness Among Pre-school Children | 2021 | [21]
C22H6 | Visual Programming in Cyber Range Training to Improve Skill Development | 2022 | [9]
C18W7 | A Design for a Collaborative Make-the-Flag Exercise | 2018 | [3]
C18W8 | ForenCity: A Playground for Self-Motivated Learning in Computer Forensics | 2018 | [5]
C18W9 | A Pilot Study in Cyber Security Education Using CyberAIMs: A Simulation-Based Experiment | 2018 | [24]
C18W10 | An Educational Intervention for Teaching Secure Coding Practices | 2019 | [16]

Studies for improving cybersecurity knowledge focus on the learning activities, which are the actual interventions, and on the assessment tasks, which are designed to test the knowledge acquired from those learning activities. In the results from Table 2, only two studies (C18H1 and C18W10) indicated their intended learning outcomes before the teaching and learning process starts; these were phrased as gaps which were aligned to the activities and assessment tasks. Two studies (C18H1 and C18W7) included Course Learning Outcomes, which were relevant since they provided outcomes for the overarching cybersecurity programme and were broader than the intended learning outcomes, which would apply to a specific learning unit in a course. Seven papers included assessment tasks and various learning activities for their programs; three studies (C18H2, C20H4, and C18W10) did not include assessment tasks to measure the achievement of their intended learning outcomes. Similarly, eight of the ten studies included learning activities, the exceptions being C18H1 and C18W10. Four studies provide various content and additional resources for students to enhance their information security knowledge and behaviour. In general, none of the studies had all the required elements of constructive alignment before conducting their cybersecurity interventions.


Table 2. Results from thematic content analysis.

Paper Code | Course Learning Outcomes | Intended Learning Outcomes | Assessment Tasks | Learning Activities | Content/Resources
C18H1 | x | x | x | | x
C18H2 | | | | x |
C18H3 | | | x | x | x
C20H4 | | | | x |
C21H5 | | | x | x |
C22H6 | | | x | x | x
C18W7 | x | | x | x |
C18W8 | | | x | x |
C18W9 | | | x | x |
C18W10 | | x | | | x
TOTALS | 2 | 2 | 7 | 8 | 4
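The coverage reported in Table 2 and the surrounding text can be tallied as a quick consistency check (Python; the Content/Resources column is omitted because the running text reports only its total of four studies, not the per-study assignment):

```python
# Element coverage per study, as reported in the prose of Sect. 5.
STUDIES = ["C18H1", "C18H2", "C18H3", "C20H4", "C21H5",
           "C22H6", "C18W7", "C18W8", "C18W9", "C18W10"]
COVERAGE = {
    "Course Learning Outcomes": {"C18H1", "C18W7"},
    "Intended Learning Outcomes": {"C18H1", "C18W10"},
    "Assessment Tasks": set(STUDIES) - {"C18H2", "C20H4", "C18W10"},
    "Learning Activities": set(STUDIES) - {"C18H1", "C18W10"},
}

# Column totals match Table 2: 2, 2, 7, 8.
totals = {element: len(codes) for element, codes in COVERAGE.items()}
print(totals)

# No study covers even these four elements, let alone all five.
full_coverage = [s for s in STUDIES
                 if all(s in codes for codes in COVERAGE.values())]
print(full_coverage)  # []
```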

6 Discussion

The majority of the studies analysed primarily concentrate on learning activities (8 of the 10) and assessment tasks (7 of the 10), which aim to gauge the knowledge or behaviour associated with these activities. However, a favourable outcome on assessment tasks does not necessarily mean that learning outcomes have been met; it merely suggests that students have comprehended the provided content. This presents a concern, as tasks may address only one anticipated learning outcome and exclude others. If these outcomes were explicitly stated before designing or initiating the program, it would be simpler to assess whether the intended learning outcomes have been met. This paper argues that using a pedagogical framework like constructive alignment can facilitate the alignment of design, learning activities, and assessment tasks before the process of designing information and cybersecurity educational programs begins. It is crucial for information and cybersecurity educational interventions to consider and adopt such pedagogical frameworks to reinforce their initiatives, ensuring all intended learning outcomes are addressed and measured before the learning process commences. Constructive alignment, as a pedagogical framework, offers this support and enables students and teachers to express their learning and teaching experiences, fostering a learning environment that facilitates student activities tailored to achieving desired learning outcomes.

7 Conclusion

Incorporating various educational foundations is essential when conducting research that encompasses educational processes. In the realm of information and cybersecurity research, there is often insufficient consideration given to sound pedagogical approaches


when designing educational interventions. These approaches should define clear learning outcomes and the methods of measuring them before initiating the teaching and learning process. Constructive alignment is one such approach that aligns these elements effectively. This research has discovered that numerous information and cybersecurity studies aiming to create educational interventions do not adhere to such an approach, thereby evaluating only what has been taught. As mentioned in Sect. 6, a positive assessment result does not necessarily equate to meeting the envisioned learning outcomes; it merely indicates students’ understanding of the taught content. Outcomes in terms of information and cybersecurity knowledge can be significantly improved if the intended learning outcomes are described before the teaching and learning process, demonstrating how the appropriate objectives have been accomplished. This study only focused on reviewing past studies on information security and cybersecurity education interventions to identify whether past HAISA and WISE publications over a 5-year period (2018–2022) considered any elements of constructive alignment in the design of their interventions. Future work will focus on developing a broader educational process model considering how the constructive alignment elements could fit in the educational process model.

References

1. Amankwa, E.: Relevance of cybersecurity education at pedagogy levels in schools. J. Inf. Secur. 12(04), 233–249 (2021). https://doi.org/10.4236/jis.2021.124013
2. Biggs, J.: Enhancing teaching through constructive alignment. High. Educ. 32, 347–364 (1996). https://doi.org/10.1007/BF00138871
3. Bishop, M.: A design for a collaborative make-the-flag exercise. In: Drevin, L., Theocharidou, M. (eds.) WISE 2018. IAICT, vol. 531, pp. 3–14. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99734-6_1
4. Bishop, M., et al.: Cybersecurity curricular guidelines. In: Bishop, M., Futcher, L., Miloslavskaya, N., Theocharidou, M. (eds.) WISE 2017. IAICT, vol. 503, pp. 3–13. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-58553-6_1
5. Blauw, F.F., Leung, W.S.: ForenCity: a playground for self-motivated learning in computer forensics. In: Drevin, L., Theocharidou, M. (eds.) WISE 2018. IAICT, vol. 531, pp. 15–27. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99734-6_2
6. Creswell, J.W.: Qualitative Inquiry & Research Design: Choosing Among Five Approaches, 2nd edn. Sage, Thousand Oaks (2007)
7. Fawcett, G., Juliana, M.: Teaching in the digital age: designing instruction for technology-enhanced learning, pp. 71–82 (2015). http://opentextbc.ca/teachinginadigitalage/; http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/978-1-930708-28-0.ch004
8. Genon, L.J.D., Torres, C.P.B.: Constructive alignment of assessment practices in English language classrooms. Engl. Lang. Teach. Educ. J. 3, 211–228 (2020). https://doi.org/10.12928/eltej.v3i3.2460
9. Glas, M., Vielberth, M., Reittinger, T., Böhm, F., Pernul, G.: Visual programming in cyber range training to improve skill development. In: Clarke, N., Furnell, S. (eds.) Human Aspects of Information Security and Assurance: 16th IFIP WG 11.12 International Symposium, HAISA 2022, Mytilene, Lesbos, Greece, July 6–8, 2022, Proceedings, pp. 3–13. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-12172-2_1
10. Gurney, B., Rundle, N.: Constructive Alignment (2014). https://www.teachinglearning.utas.edu.au/unit-design/constructive-alignment


11. Caulfield, J.: How to do thematic analysis: a step-by-step guide & examples. Scribbr (2020). https://www.scribbr.com/methodology/thematic-analysis/
12. Jaffray, A., Finn, C., Nurse, J.R.C.: SherLOCKED: a detective-themed serious game for cyber security education. In: Furnell, S., Clarke, N. (eds.) HAISA 2021. IAICT, vol. 613, pp. 35–45. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81111-2_4
13. Li, L., Shen, Y., Han, M.: Perceptions of information systems security compliance: an empirical study in a higher education setting. In: Proceedings of the 54th Hawaii International Conference on System Sciences, pp. 6226–6231 (2021). https://doi.org/10.24251/hicss.2021.751
14. Loughlin, C., Lygo-Baker, S., Lindberg-Sand, Å.: Reclaiming constructive alignment. Eur. J. High. Educ. 11, 119–136 (2021). https://doi.org/10.1080/21568235.2020.1816197
15. Maharaj, R., von Solms, R.: Social networking: a tool for effective cybersecurity education in cyber-driven financial transactions. In: HAISA, pp. 91–100 (2018)
16. Mdunyelwa, V., Futcher, L., van Niekerk, J.: An educational intervention for teaching secure coding practices. In: Drevin, L., Theocharidou, M. (eds.) WISE 2019. IAICT, vol. 557, pp. 3–15. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-23451-5_1
17. Nowell, L.S., Norris, J.M., White, D.E., Moules, N.J.: Thematic analysis: striving to meet the trustworthiness criteria. Int. J. Qual. Methods 16(1), 1–13 (2017). https://doi.org/10.1177/1609406917733847
18. Olivos, O.: Using the IKEA effect to improve information security policy compliance. In: HAISA, pp. 12–19 (2019)
19. PayU: SA companies under cyber-attack? (2020). https://www.payu.co.za/pressroom/sa-companies-under-cyberattack
20. Richardson, M.D., Lemoine, P.A., Stephens, W.E., Waller, R.E.: Planning for cyber security in schools: the human factor. Educ. Plan. 27, 23–39 (2020)
21. Snyman, D.P., Drevin, G.R., Kruger, H.A., Drevin, L., Allers, J.: A wolf, hyena, and fox game to raise cybersecurity awareness among pre-school children. In: Furnell, S., Clarke, N. (eds.) HAISA 2021. IAICT, vol. 613, pp. 91–101. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81111-2_8
22. Van Rensburg, W.J., Thomson, K.L., Futcher, L.A.: An educational intervention towards safe smartphone usage. In: HAISA, pp. 123–134 (2018)
23. Venter, I.M., Blignaut, J., Renaud, K., Venter, M.A.: Cyber security education is as essential as “the three R’s”. Heliyon 5, e02855 (2019). https://doi.org/10.1016/j.heliyon.2019.e02855
24. Zoto, E., Kowalski, S., Frantz, C., Lopez-Rojas, E., Katt, B.: A pilot study in cyber security education using CyberAIMs: a simulation-based experiment. In: Drevin, L., Theocharidou, M. (eds.) WISE 2018. IAICT, vol. 531, pp. 40–54. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99734-6_4

What Goes Around Comes Around; Effects of Unclear Questionnaire Items in Information Security Research

Marcus Gerdin(B), Åke Grönlund, and Ella Kolkowska

Örebro University, Örebro, Sweden
[email protected]

Abstract. The credibility of research on information system security is challenged by inconsistent results and there is an ongoing discussion about research methodology and its effect on results within the employee non-/compliance to information security policies literature. We add to this discussion by investigating discrepancies between what we claim to measure (theoretical properties of variables) and what we actually measure (respondents’ interpretations of our operationalized variables). The study asks: (1) How well do respondents’ interpretations of variables correspond to their theoretical definitions? (2) What are the characteristics and causes of any discrepancies between variable definitions and respondent interpretations? We report a pilot study including interviews with seven respondents to understand their interpretations of the variable Perceived severity from the Protection Motivation Theory (PMT). We found that respondents’ interpretations differ substantially from the theoretical definitions, which introduces error in measurement. There were not only individual differences in interpretations but also, and more importantly, systematic ones; when questions are not well specified, or do not cover respondents’ practice, respondents make interpretations based on their practice. Our results indicate three types of ambiguities, namely (i) vagueness in part/s of the measurement item causing inconsistencies in interpretation between respondents, (ii) envisioning/interpreting ‘new’ properties not related to the theory, and (iii) ‘misses the mark’ measurements whereby respondents misinterpret the fundamentals of the item. The qualitative method used proved conducive to understanding respondents’ thinking, which is a key to improving research instruments.

Keywords: Non-/compliance research · Validation of measurement instruments · Interpretation of measurement items

1 Introduction

The credibility of research on information system security compliance is challenged by inconsistent results. As a result, there is an ongoing discussion about research methodology and its effect on research results within the employee non-/compliance

© IFIP International Federation for Information Processing 2023
Published by Springer Nature Switzerland AG 2023
S. Furnell and N. Clarke (Eds.): HAISA 2023, IFIP AICT 674, pp. 470–481, 2023. https://doi.org/10.1007/978-3-031-38530-8_37


to information security policies (ISP) literature. For example, researchers such as Mou et al. (2022) and Cram et al. (2019) have observed significant effects on research results depending on the level of specificity of the content of security policies when operationalized into measurement instruments. Similarly, Johnston et al. (2015) exemplified the importance of contextualizing behavioral variables into non-/compliance settings, while Gerdin et al. (2021) and Sommestad et al. (2014) noted inconsistencies between variable definitions and measurement items within and across studies. Siponen and Vance (2014) illustrated the need to contextualize measurement items for practitioners. The common denominator is that all these authors have highlighted problems with the conceptualization and operationalization of behavioral variables, and several attempts to solve these problems have been made. Despite these, however, the problems with inconsistent results largely remain (Cram et al. 2019).

This paper contributes to this line of research by examining yet another fundamental aspect of validation procedures, namely, the extent to which key variables have been operationalized into questionnaire measurement items in ways that are not open to multiple interpretations by respondents and, if they are, whether respondents' interpretations are in consonance with the variables' stated properties/characteristics (Desimone and Le Floch 2004; MacKenzie et al. 2011; Luft and Shields 2003). To investigate this, we critically analyze respondents' interpretations of questionnaire measurement items commonly used in the field in order to understand how much variation there is, and what causes that variation. Drawing from influential papers discussing variable measurement and validation procedures (e.g., Boudreau et al. 2001; MacKenzie et al.
2011; Luft and Shields 2003), we investigate whether there are any discrepancies between what we claim to measure (a theoretical property of the variable as stated in our variable definition) and what we actually measure (respondents' interpretations of our operationalized variables). If there is a gap between the two, there is noise in the measurement, as there is a high risk that we do not measure what we think we do. Specifically, the study asks:

• How well do respondents' interpretations of variables correspond to their theoretical definitions?
• What are the characteristics and causes of any discrepancies between variable definitions and respondent interpretations?

The full study will include variables from the two most used theories in the field, Protection Motivation Theory and the Theory of Planned Behavior. The present paper reports a pilot study in which we use a small respondent sample (n = 7) to test (1) how useful our method is for investigating respondents' interpretations of questionnaire questions and (2) whether there are any discrepancies to find. The pilot tests only one variable, Perceived severity, under the logic that (1) if we find discrepancies in this variable, it is likely that there are such discrepancies in other variables as well; and (2) the nature of the discrepancies and their causes will give us clues that are useful for adjusting the methodology for the full study. This study, hence, investigates the nature of the problem in order to prepare for a study covering also its extent.


2 Related Research and Protection Motivation Theory

2.1 Related Research

Behavioral research involves very complex phenomena which are complicated to measure empirically. For this reason, issues related to the validation of questionnaire measurement instruments have been discussed in the behavioral management information systems (MIS) literature for over 30 years. Along these lines, for example, Straub (1989) raised concerns about the lack of systematic validation of measurement instruments in MIS research and called for more methodological rigor in the field. Since then, important papers such as those of Boudreau et al. (2001) and MacKenzie et al. (2011) have noted that even though the research field as a whole has improved in important respects, many problems related to measurement instrument validation remain. Accordingly, researchers in non-/compliance disciplines have recognized the importance of good measurement practice and taken vital steps to improve it.

Within the behavioral MIS literature, two alternative perspectives on how to improve methodological rigor in research have been discussed. The first perspective is based on theoretical considerations and emphasizes the need for clear and uniform specifications of variables' theoretical properties/characteristics, which is required to identify consistent cause-and-effect relationships between them (Luft and Shields 2003; MacKenzie et al. 2011). Along these lines, within the non-/compliance field, Johnston et al. (2015) proposed an enhanced version of Protection Motivation Theory (PMT) in which variable properties had been tailored to better fit the ISP non-/compliance context; see Karjalainen et al. (2019) for an example of a similar type of study. Hence, according to the first perspective, researchers first and foremost must clearly specify the theoretical properties/characteristics of variables based on the context of ISP non-/compliance, i.e., make sure that the variable properties are adapted to this 'new' context.
This is vital, since many of the theories within the research area are 'borrowed' from other research disciplines (Johnston et al. 2015; Moody et al. 2018).

In contrast to the first perspective, the second one takes variables' theoretical properties largely for granted and instead focuses on how to successfully operationalize them into questionnaire measurement items. For example, Siponen and Vance (2014) illustrated the importance of developing measurement items tailored to the intended respondents' organizational perspective in order to minimize the risk of measurement errors and to increase the possibility for researchers to draw correct conclusions about cause-and-effect relationships between variables. They also stressed the importance of testing and validating the measurement instrument with the target population before conducting the actual study. If this is not done, a measurement item runs an increased risk of being subject to multiple interpretations, which can imply that it captures only parts of the intended theoretical property, captures content that does not correspond to the intended property, and/or captures several properties, some of which may not be in line with the theoretical definition (Luft and Shields 2003). See Karlsson et al. (2017) and Li et al. (2021) for examples of similar types of studies.

Notwithstanding their important contributions, however, these two perspectives have, to our knowledge, essentially only been studied in isolation. Thus, there is no study which has investigated whether variable definitions in the non-/compliance literature clearly


specify their theoretical properties/characteristics and whether these properties are fully translated into adequate measurement items that are not subject to multiple interpretations (Desimone and Le Floch 2004; MacKenzie et al. 2011; Luft and Shields 2003). We believe that these two perspectives could beneficially be combined, as successful research depends on both. As effectively put by Desimone and Le Floch (2004, p. 4), "An important aspect of validity is that the respondent has a similar understanding of the questions as the survey designer; that the questions do not omit or misinterpret major ideas or miss important aspects of the phenomena being examined".

2.2 Perceived Severity

This pilot study uses the variable Perceived severity from Protection Motivation Theory (PMT). Originating from Rogers (1975, 1983), PMT was initially developed to explain the effects of fear appeals on health attitudes and behaviors (Prentice-Dunn and Rogers 2000). We have two reasons for specifically focusing on the variable Perceived severity. First, it has been a central focus of research on non-/compliance methodology in the past (e.g., Cram et al. 2019; Gerdin et al. 2021; Sommestad et al. 2015; Haag et al. 2021; Johnston et al. 2015). Second, the variable has a high level of specificity in the conceptualization of its theoretical property/characteristic, namely the perceived consequences of an information security threat (Sommestad et al. 2015; Gerdin et al. 2021), but the notion of whom the consequences refer to differs between studies. Some refer to consequences for the organization (e.g., Aigbefo et al. 2020; Hooper and Blunt 2020), others to consequences for the individual (e.g., Aurigemma and Mattson 2019; Vrhovec and Mihelič 2021), and some fail to specify to whom the consequences refer (e.g., Burns et al. 2017; Rajab and Eydgahi 2019).
Perceived severity is thus well suited for this study due to its extensive research history in the context of non-/compliance research and its interesting characteristics in terms of conceptualization and operationalization (Sommestad et al. 2015; Gerdin et al. 2021).

3 Method

3.1 Data Collection

We conducted seven semi-structured interviews with professionals from the Swedish public sector. Of the respondents, five worked with home care within municipalities, one worked at the Swedish Transport Agency, and one at the Agency for Digital Government. All respondents handled classified information and were required to adhere to an information security policy in their respective organizations.

The choice of questionnaire items was based on the above-mentioned inconsistencies in the variable's conceptualization and operationalization, especially regarding to whom the consequences referred (see Sect. 2.2) (Sommestad et al. 2015; Gerdin et al. 2021). We incorporated a mix of items specifically related to whom the consequences of non-compliance were directed, in order to make sure that all differences in the variable conceptualization identified in previous studies were


covered. For example, item 6 referred to consequences in general, item 7 referred to consequences for the organization, and item 8 referred to consequences for the individual. Moreover, following Cram et al. (2019), we also made sure that different levels of specificity of threats were covered. For example, item 1 is considered general as it refers to threats to the security of an organization, while item 2 instead specifies which segment is affected and the impact of this. See the full list of items in Table 1.

The interviewees were asked to think out loud and explain (1) how they interpreted the question, (2) how they would have answered the question using a 5-point Likert scale, and (3) the reason for choosing the specific number. When necessary, probing follow-up questions were asked. Six interviews were conducted online, using Zoom or Microsoft Teams, while one was conducted in person. Two researchers were present at each interview, both of whom were familiar with the interview protocol. While one researcher was responsible for following the interview protocol, the other could focus on unclarities in the answers and ask follow-up questions. Each interview lasted about one hour. We adopted an iterative approach, whereby the interview protocol was modified (if needed) after each interview.

Table 1. List of items and original studies

Item 1: "Threats to the security of my organizations information are harmful" (Ifinedo 2012)
Item 2: "If my computerized data were temporarily not available, serious information security problems would result" (Barlette et al. 2015)
Item 3: "If my work device were infected by malware, it would be severe" (Blythe and Coventry 2018)
Item 4: "In terms of information security violations, attacks on my organization's information and information systems are severe" (Posey et al. 2015)
Item 5: "If my password was stolen, the consequences would be severe" (Johnston et al. 2015)
Item 6: "Threats to the security of my organization's information and information systems are severe" (Ma 2022; Posey et al. 2015)
Item 7: "An information security breach in my organisation would be a serious problem for my organization." (Siponen et al. 2014)
Item 8: "An information security breach in my organization would be a serious problem for me" (Vance et al. 2012)

3.2 Method for Analysis

The coding process comprised three steps. First, we transcribed, compiled, and coded each interview separately. During the transcription and the subsequent readings, we identified numerous interesting words, phrases, terms, and concepts offered by the respondents and coded them (Bazeley 2013).

Second, we re-read all transcripts and focused on combining words, phrases, terms, and concepts from each interview into categories representing similar ideas and/or


issues shared among the respondents. This process consisted of comparisons of transcripts and subsequent analysis to, over time, detect conceptual patterns shared between the transcripts (Bazeley 2013).

Third, we formulated overarching labels that subsumed the previously identified categories on a more abstract level, representing a more aggregated analytical dimension of each category (Bazeley 2013). The focus here was on identifying different types of ambiguity in the operationalization of variables. Six categories were identified. To ensure confidence in our identified categories and subsequent labels, we continuously worked back and forth with the material, the literature, and the emerging table (Table 2), and two authors were involved in the process. Continuous discussion and modification formed the basis for achieving agreement. Note, however, that the interviews, transcripts, and subsequent coding were in Swedish; the emergent findings have thus been translated into English.
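The three-step aggregation described above (codes per interview → shared categories → overarching ambiguity labels) can be thought of as a progressive mapping over the coded material. The sketch below is purely illustrative and not the authors' tooling; all codes, category names, and counts are hypothetical examples constructed for this sketch.

```python
# Illustrative sketch (not the authors' actual analysis software) of the
# three-step coding process: codes -> categories -> overarching labels.
# All codes, categories, and labels below are hypothetical examples.
from collections import defaultdict

# Step 1: codes extracted separately from each interview transcript
codes_per_interview = {
    "respondent_2": ["temporarily is unclear", "depends on duration"],
    "respondent_4": ["depends on duration", "depends on org level"],
    "respondent_7": ["clients could be harmed"],
}

# Step 2: codes shared across interviews are combined into categories
code_to_category = {
    "temporarily is unclear": "Timeframe",
    "depends on duration": "Timeframe",
    "depends on org level": "Level of analysis",
    "clients could be harmed": "Consequences for citizens/clients",
}

# Step 3: categories are subsumed under more abstract ambiguity labels
category_to_label = {
    "Timeframe": "Vagueness",
    "Level of analysis": "Vagueness",
    "Consequences for citizens/clients": "Envisioning unintended properties",
}

# Aggregate how often each overarching label occurs across interviews
label_counts = defaultdict(int)
for codes in codes_per_interview.values():
    for code in codes:
        label_counts[category_to_label[code_to_category[code]]] += 1

print(dict(label_counts))
# -> {'Vagueness': 4, 'Envisioning unintended properties': 1}
```

In practice each mapping emerged iteratively from repeated readings and discussion between the authors, rather than from a fixed lookup table as in this sketch.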

4 Emergent Findings

We set out to investigate how the respondents interpreted questionnaire measurement items, and whether their interpretations were in line with the theoretical conceptualization of the variable as suggested in the ISP non-/compliance research literature (see Sect. 2). Overall, we found three types of ambiguity in the formulation of these items which increased the risk of misinterpretation among the respondents, namely: (i) vagueness in words or concepts used in the measurement items, causing inconsistencies in interpretation among respondents; (ii) envisioning, where respondents are unable to discern the intended meaning of a variable and therefore, based on their understanding of their actual work situation, interpret 'new' properties/characteristics outside the theoretical conceptualization in order to make sense of the measurement item; and (iii) 'misses the mark' measurements, whereby the respondent misinterprets the theoretical property/ies which the item seeks to measure.

4.1 Ambiguity (i) – Vagueness

The first type of ambiguity refers to vagueness in parts of the measurement item causing inconsistencies in interpretation between respondents. Especially in relation to items 1, 2, and 6, we found that the respondents interpreted items differently due to vague and/or imprecise wording. This type of ambiguity is not related to any type of (mis)interpretation of the properties/characteristics of a variable per se, but rather to other, 'external' aspects of the measurement items which cause differences in interpretation between respondents. For example, regarding questions referring to the 'seriousness of security threats' (items 1 and 6 in Table 1), respondents were not confident in how they should understand the level of seriousness of the threat. Consequently, some respondents answered the question from a worst-case-scenario perspective while others had a normal, "run-of-the-mill" scenario in mind.
This made them choose, respectively, very low or very high options on the 5-point Likert-scale. That is, the respondents would grade their responses differently on the Likert scale depending on how they interpreted the level of seriousness.


Note, however, that even if the respondents were to grade similarly on the Likert scale while having different levels of seriousness of threat in mind, the fact that they interpret the level of seriousness differently is still noteworthy, as they did in fact not answer the same question. This type of finding can lead to increased theoretical specificity in future research. That is, we have the possibility not only to validate, but also to make rational improvements to, our measurement instruments in order to draw more fine-tuned conclusions from our work, i.e., to better understand the conceptual range and boundaries of a variable.

Another telling example of vagueness concerns the aspect of time (see the illustrative example from category 1 in Table 2). Not only did the respondents envision different time periods, but they also mentioned that they would change their answers drastically (e.g., going from a 2 to a 5 on the Likert scale) if the time aspect changed. This type of imprecise wording therefore introduces noise into the measurement, whereby the interpretation of a crucial concept (time) may differ not only between respondents but also between the respondents and the researcher, which may cause misinterpretation of results.

4.2 Ambiguity (ii) – Envisioning Unintended Properties

The second type of ambiguity refers to respondents envisioning/interpreting 'new' properties/characteristics not related to the conceptualization of a variable. This type of ambiguity differs from vagueness (ambiguity (i)) in the sense that respondents add new information to the item in question. Vagueness means that respondents interpret important words differently due to imprecise wording, but their interpretations still operate within the conceptualization of the variable. To contrast further, when respondents envision/interpret new content, they add new information to the item in question which is not necessarily in line with the conceptualization of the variable, nor anticipated by the researcher.
For example, when faced with measurement items regarding consequences, respondents working in the municipalities related the consequences to the citizens/clients whom they served in their role as a public organization. As described in Sect. 2.2, the variable Perceived severity has in previous research sometimes been found to refer to consequences for the organization (e.g., Aigbefo et al. 2020; Hooper and Blunt 2020), sometimes to consequences for the individual (e.g., Aurigemma and Mattson 2019; Vrhovec and Mihelič 2021), and sometimes to fail to specify to whom the consequences refer (e.g., Burns et al. 2017; Rajab and Eydgahi 2019). However, consequences for the citizen/client have never been defined as a property/characteristic of the variable.

On the one hand, differences in interpretation regarding whom the consequences refer to may seem arbitrary, as they all refer to consequences of some sort. However, the Sommestad et al. (2015) meta-study observed that the weighted mean correlation was substantially higher when the threat targeted individuals (r = 0.3) than when it targeted the organization as a whole (r = 0.17). Consequently, there is empirical evidence suggesting that the effect of consequences on intention fluctuates according to whom the consequences refer to. On the other hand, even if respondents' grading on the Likert scale were similar regardless of whom the consequences refer to, their actual interpretation of the question (consequences for the citizen/client) is unknown to the researcher and may not correspond to the conceptualization of the variable. In line


with MacKenzie et al. (2011) and Luft and Shields (2003), this type of ambiguity poses a significant risk of drawing invalid conclusions and of not correctly understanding the phenomenon being studied.

4.3 Ambiguity (iii) – 'Misses the Mark'

The third type of ambiguity refers to what we call 'misses the mark' measurement items. This refers to situations where the respondents interpreted the item in terms of content that did not correspond to the conceptual definition of the variable. For example, when facing questions related to the consequences of a threat, some respondents referred to properties such as vulnerability and probability. While these properties share some characteristics with the conceptualization of Perceived severity, they actually belong to another variable from Protection Motivation Theory (PMT), namely Perceived vulnerability (see the example in Table 2). Another category within this ambiguity refers to types of threats. That is, even though the respondents were aware of the context (information security) within which the questionnaire operates, some respondents still referred to other types of threats not related to IS, such as shootings or fire, when interpreting the question. This type of ambiguity can lead to invalid conclusions if not addressed properly, at least if this interpretation is common among the respondents. Similar to the above-mentioned ambiguities, these types of (mis)interpretation by the respondents may result in the item capturing content that does not correspond to the intended property, and/or capturing several properties, some of which may not be in line with the conceptual definition.

Table 2. Types of ambiguity, categories, and representative examples. Note: the representative examples have been modified to be more specific.

Type of ambiguity: Vagueness in part/s of the measurement item
• Category: Timeframe
  "The word 'temporarily' makes it very difficult to interpret this question… What does it mean, 5 days or an hour?" (Respondent 2 – Item 4)
  "For me it depends on how temporary the break is. If it is a couple of hours, then it is not a problem, but if it is longer than that, well, there is definitely a problem then." (Respondent 4 – Item 4)
• Category: Seriousness of the threat; run-of-the-mill or worst-case scenario
  "If they steal all my passwords, they can do anything with my computer and the consequences will be unimaginable." (Respondent 6 – Item 5)
• Category: Level of analysis – organization, department, or unit
  "Well, it depends on what level in the organization the breach happens." (Respondent 4 – Item 8)

Type of ambiguity: Envision/Interpret 'new' properties not related to the conceptualization
• Category: Consequences related to the citizens/clients
  "For us who work within the health care sector, everything is always serious. In the end, the clients can die or be injured." (Respondent 7 – Item 5)
  "Our mission is to always look out for the best interests of the citizens and citizen/clients we care for. So that nothing can happen to them, and that is what we must try to avoid." (Respondent 3 – Item 3)

Type of ambiguity: 'Misses the mark' measurements
• Category: Other types of threats, not information security related
  "It could be a school shooter and create chaos at the school, or perhaps share suicide clips to other students." (Respondent 2 – Item 8)
• Category: Vulnerability and probability
  "These types of threats occur on a daily basis." (Respondent 4 – Item 3)

5 Discussion and Conclusion

This pilot study investigated how well respondents' interpretations of variables correspond to their theoretical definitions, as well as the characteristics and causes of any discrepancies. The short answers to these questions are:

(i) Respondents interpret variables so differently from the theoretical definitions that it is a problem. In some cases, they respond to something other than the question asked, which introduces measurement error.


(ii) There are not only individual differences in interpretation but also, and more importantly, systematic ones: when questions are not well specified, or do not cover respondents' practice, respondents make interpretations based on their practice.

The pilot study used a small respondent sample (n = 7) to test (1) how useful our method is for investigating respondents' interpretations of questionnaire questions and (2) whether there are any discrepancies to find. Moreover, the pilot tested only one variable, Perceived severity, under the logic that (1) if we find discrepancies in this variable, it is likely that there are such discrepancies in other variables as well; and (2) the nature of the discrepancies and their causes will give us clues that are useful for adjusting the methodology for the full study. This study, hence, investigates the nature of the problem in order to prepare for a study covering also its extent.

We found three types of ambiguity, namely (i) vagueness in parts of the measurement item, causing inconsistencies in interpretation between respondents; (ii) envisioning/interpreting 'new' properties not related to the conceptualization; and (iii) 'misses the mark' measurements, whereby the respondent misinterprets the fundamentals of the item. We conducted interviews with seven respondents to see whether their interpretation of the variable was suitable given its established definition, but also to see if and how the interpretation varied among the respondents. Overall, our study sheds light on the challenges of interpreting questionnaire items in non-/compliance research and highlights the importance of improving measurement instruments to achieve more theoretical specificity and avoid invalid conclusions. Moreover, we discovered that respondents from the public sector and from healthcare identified relevant aspects of Perceived severity that have hitherto not been included in research, namely consequences for the clients of the organization.
In healthcare, patient security and privacy are important concerns, and the GDPR has added legal restrictions on information handling not only for the public sector but for every organization. This increasingly important concern for consequences on the part of clients is not only a factor confusing the interpretation of questionnaire items but also an issue that should be considered for inclusion in research.

Even though we investigated only one variable with relatively few respondents, the findings suggest a need for further validation of measurements, as recommended by researchers such as Straub (1989) and MacKenzie et al. (2011). Our study adds to the discussion about validating variable measurements by showing that respondents indeed interpret important content in measurement instruments differently than intended by the researchers, and that there are systematic causes for important parts of this discrepancy. Most importantly, the discrepancies stem from what could be called a theory/practice mismatch: either measurement items do not clearly measure the variables of the theory, or they do not clearly relate to practice, or both. We believe the method we used was conducive to investigating our research questions, as it both revealed a number of ambiguities and proved effective in making respondents explain their reasoning.


References

Aurigemma, S., Mattson, T.: Generally speaking, context matters: making the case for a change from universal to particular ISP research. J. Assoc. Inf. Syst. 20(12), 7 (2019)
Barlette, Y., Gundolf, K., Jaouen, A.: Toward a better understanding of SMB CEOs' information security behavior: insights from threat or coping appraisal. J. Intell. Stud. Bus. 5(1) (2015)
Bazeley, P.: Qualitative Data Analysis: Practical Strategies, 2nd edn. Sage, London (2013)
Blythe, J.M., Coventry, L.: Costly but effective: comparing the factors that influence employee anti-malware behaviours. Comput. Hum. Behav. 87, 87–97 (2018)
Boudreau, M.C., Gefen, D., Straub, D.W.: Validation in information systems research: a state-of-the-art assessment. MIS Q. 1–16 (2001)
Burns, A.J., Posey, C., Roberts, T.L., Lowry, P.B.: Examining the relationship of organizational insiders' psychological capital with information security threat and coping appraisals. Comput. Hum. Behav. 68, 190–209 (2017)
Cram, W.A., D'Arcy, J., Proudfoot, J.G.: Seeing the forest and the trees: a meta-analysis of the antecedents to information security policy compliance. MIS Q. 43(2), 525–554 (2019)
Desimone, L.M., Le Floch, K.C.: Are we asking the right questions? Using cognitive interviews to improve surveys in education research. Educ. Eval. Policy Anal. 26(1), 1–22 (2004)
Gerdin, M., Grönlund, Å., Kolkowska, E.: Use of protection motivation theory in non-compliance research (2021)
Haag, S., Siponen, M., Liu, F.: Protection motivation theory in information systems security research: a review of the past and a road map for the future. ACM SIGMIS Database: DATABASE Adv. Inf. Syst. 52(2), 25–67 (2021)
Hooper, V., Blunt, C.: Factors influencing the information security behaviour of IT employees. Behav. Inf. Technol. 39(8), 862–874 (2020)
Ifinedo, P.: Understanding information systems security policy compliance: an integration of the theory of planned behavior and the protection motivation theory. Comput. Secur. 31(1), 83–95 (2012)
Johnston, A.C., Warkentin, M., Siponen, M.: An enhanced fear appeal rhetorical framework. MIS Q. 39(1), 113–134 (2015)
Karjalainen, M., Sarker, S., Siponen, M.: Toward a theory of information systems security behaviors of organizational employees: a dialectical process perspective. Inf. Syst. Res. 30(2), 687–704 (2019)
Karlsson, F., Karlsson, M., Åström, J.: Measuring employees' compliance – the importance of value pluralism. Inf. Comput. Secur. 25(3), 279–299 (2017)
Li, H., Luo, X.R., Chen, Y.: Understanding information security policy violation from a situational action perspective. J. Assoc. Inf. Syst. 22(3), 7398–7772 (2021)
Luft, J., Shields, M.D.: Mapping management accounting: graphics and guidelines for theory-consistent empirical research. Acc. Organ. Soc. 28(2–3), 169–249 (2003)
Ma, X.: IS professionals' information security behaviors in Chinese IT organizations for information security protection. Inf. Process. Manage. 59(1), 102744 (2022)
MacKenzie, S.B., Podsakoff, P.M., Podsakoff, N.P.: Construct measurement and validation procedures in MIS and behavioral research: integrating new and existing techniques. MIS Q. 35, 293–334 (2011)
Moody, G.D., Siponen, M., Pahnila, S.: Toward a unified model of information security policy compliance. MIS Q. 42(1) (2018)
Mou, J., Cohen, J.F., Bhattacherjee, A., Kim, J.: A test of protection motivation theory in the information security literature: a meta-analytic structural equation modeling approach. J. Assoc. Inf. Syst. 23(1), 196–236 (2022)


Posey, C., Roberts, T.L., Lowry, P.B.: The impact of organizational commitment on insiders' motivation to protect organizational information assets. J. Manag. Inf. Syst. 32(4), 179–214 (2015)
Rajab, M., Eydgahi, A.: Evaluating the explanatory power of theoretical frameworks on intention to comply with information security policies in higher education. Comput. Secur. 80, 211–223 (2019)
Rogers, R.W.: A protection motivation theory of fear appeals and attitude change. J. Psychol. 91(1), 93–114 (1975)
Rogers, R.W.: Cognitive and physiological processes in fear appeals and attitude change: a revised theory of protection motivation. In: Cacioppo, J., Petty, R. (eds.) Social Psychophysiology: A Source Book, pp. 153–176. Guilford, New York (1983)
Siponen, M., Vance, A.: Guidelines for improving the contextual relevance of field surveys: the case of information security policy violations. Eur. J. Inf. Syst. 23(3), 289–305 (2014)
Siponen, M., Mahmood, M.A., Pahnila, S.: Employees' adherence to information security policies: an exploratory field study. Inf. Manag. 51(2), 217–224 (2014)
Sommestad, T., Karlzén, H., Hallberg, J.: The sufficiency of the theory of planned behavior for explaining information security policy compliance. Inf. Comput. Secur. 23(2), 200–217 (2015)
Sommestad, T., Hallberg, J., Lundholm, K., Bengtsson, J.: Variables influencing information security policy compliance: a systematic review of quantitative studies. Inf. Manag. Comput. Secur. 22(1), 42–75 (2014)
Straub, D.W.: Validating instruments in MIS research. MIS Q. 147–169 (1989)
Vance, A., Siponen, M., Pahnila, S.: Motivating IS security compliance: insights from habit and protection motivation theory. Inf. Manag. 49(3–4), 190–198 (2012)
Vrhovec, S., Mihelič, A.: Redefining threat appraisals of organizational insiders and exploring the moderating role of fear in cyberattack protection motivation. Comput. Secur. 106, 102309 (2021)

Author Index

A
Ahmad, Rufai 324
Akil, Mahdi 76
Aldaraani, Najla 364
Alotaibi, Fayez 64
Alotaibi, S. 143

B
Bergström, Erik 181
Bernsmed, Karin 181
Bishop, Matt 36
Böhm, Fabian 24
Bour, Guillaume 181
Brown, Sinazo 432
Buchanan, Kerrianne 391
Butavicius, Marcus 225, 310

C
Chen, Fang 310
Chimbo, Bester 129
Conway, Daniel 310
Craven, Matthew J. 349

D
da Veiga, Adéle 192, 211
Das, Sanchari 296
de Jager, Michael 237
du Toit, Jaco 13

F
Fallatah, Wesam 53
Furnell, Steven 53, 64, 105, 143, 349
Futcher, Lynn 116, 237, 461

G
Gcaza, Noluxolo 116
Gerdin, Marcus 470
Glas, Magdalena 24
Grönlund, Åke 470
Gundu, Tapiwa 418

H
Haney, Julie M. 391
He, Y. 143
He, Ying 64
Healy, Charlotte 391
Hedberg, David 275
Huesch, Pia 261

J
Jøsang, Audun 169
Joukov, Alexander 337
Joukov, Nikolai 337

K
Kannelønning, Kristian 249
Karlsson, Fredrik 157
Katsikas, Sokratis 249
Kävrestad, Joakim 3, 53
Keatley, David A. 377
Kishnani, Urvashi 296
Kolkowska, Ella 470
Kritzinger, Elmarie 13
Kucherera, Lean 116

L
Langner, Gregor 105
Lebea, Khutso 285
Lejaka, Tebogo Kesetse 211
Leung, Wai Sze 285
Lindvall, David 3
Loock, Marianne 211
Lundgren, Martin 181, 275

M
MacColl, Jamie 261
Madabhushi, Srinidhi 296
Magnusson, Jonathan 76
Martinie, Célia 405
Martucci, Leonardo A. 76
Mdunyelwa, Vuyolwethu 461
Mott, Gareth 261
Mtsweni, Jabu 129
Mwim, Emilia N. 129

N
Naqvi, Bilal 405
Neil, Lorenzo 391
Nohlberg, Marcus 3, 275
Nurse, Jason R. C. 261

O
Ophoff, Jacques 445

P
Palomino, Marco A. 349
Papadaki, Maria 349
Pattinson, Malcolm 225
Pattnaik, Nandita 261
Pekane, Ayanda 432
Pernul, Günther 24
Petrie, Helen 364

Q
Quirchmayr, Gerald 105

R
Reeves, Andrew 225
Renaud, Karen 324
Rostami, Elham 157
Ruddy, Christopher 377
Ruhwanya, Zainab 432, 445

S
Salamah, Fai Ben 349
Schönteich, Falko 24
Shahandashti, Siamak F. 364
Silva, William 36
Skopik, Florian 105
Stavrou, Eliana 91
Sullivan, James 261
Swearingen, Nathan 36

T
Terzis, Sotirios 324
Thomson, Kerry-Lynn 116, 237
Tran, Dinh Uy 169
Turner, Sarah 261

V
van Niekerk, Johan 461
von Solms, S. H. 13
Vorster, Armand 192

W
Whitty, Monica T. 377
Wright, Tim 445

Y
Yu, Kun 310

Z
Zheng, Muwei 36
Zou, Xukai 36

© IFIP International Federation for Information Processing 2023 Published by Springer Nature Switzerland AG 2023 S. Furnell and N. Clarke (Eds.): HAISA 2023, IFIP AICT 674, pp. 483–484, 2023. https://doi.org/10.1007/978-3-031-38530-8