Information Security Education for Cyber Resilience: 14th IFIP WG 11.8 World Conference, WISE 2021, Virtual Event, June 22–24, 2021, Proceedings (IFIP ... and Communication Technology, 615) 3030808645, 9783030808648

This book constitutes the refereed proceedings of the 14th IFIP WG 11.8 World Conference on Information Security Education, WISE 14, held virtually during June 22–24, 2021.


English, 161 pages, 2021


Table of contents :
Preface
Organization
Contents
A Roadmap for Building Resilience
A Brief History and Overview of WISE
Innovation in Curricula
Formation of General Professional Competencies in Academic Training of Information Security Professionals
1 Introduction
2 Basic Concepts and Definitions
3 Competency Model “A” and Competencies Structure
4 Structure of Groups of GPCs and PCs
5 Basic GPCs
6 Training Modules for Implementing Basic GPCs
7 Conclusion
References
Electronic Voting Technology Inspired Interactive Teaching and Learning Pedagogy and Curriculum Development for Cybersecurity Education
1 Introduction
2 Related Work
2.1 Mutual-Restraining E-voting
3 Modules Mapped Directly from E-voting
3.1 Relationship to CSEC2017
4 Module Format and Construction
4.1 Interactive Lecturing
4.2 Interactive Class Projects
4.3 Interactive Self-study and Evaluation
4.4 Interactive Topic/Class Evaluation
4.5 Summary of Topics, Projects, and Interactive Modules
5 Conclusions
References
Teaching Methods and Tools
Minimizing Cognitive Overload in Cybersecurity Learning Materials: An Experimental Study Using Eye-Tracking
1 Introduction
2 Theoretical Model
3 Literature Review
3.1 Cognitive Load Theory
3.2 Bloom’s Taxonomy Cognitive Levels
3.3 Design Principles of Segmentation & Interactivity
3.4 Eye-Tracking and Cognitive Load
4 Research Method
4.1 Learning Materials
4.2 Research Design
5 Conclusion and Future Work
References
A Layered Model for Building Cyber Defense Training Capacity
1 Introduction
2 Research Methodology
2.1 Primary Research Methodologies
2.2 Secondary Research
3 Infrastructure
4 Contextual
5 Programmatic
6 Societal
7 Conclusion
References
Measuring Self-efficacy in Secure Programming
1 Introduction
2 Background and Related Work
3 Study 1: Developing and Validating the Secure Programming Self-Efficacy Scale
4 Study 2: Examining the Predictive Validity of the Secure Programming Self-Efficacy Scale
5 Conclusion
5.1 Limitations and Future Work
References
End-User Security
Children's Awareness of Digital Wellness: A Serious Games Approach
1 Introduction
2 Related Literature
2.1 Digital Wellness
2.2 Digital Wellness Awareness for Children
2.3 Serious Games
3 Overview of the Game
4 Expert Review and Validation
5 Conclusion and Future Work
References
Environmental Uncertainty and End-User Security Behaviour: A Study During the COVID-19 Pandemic
1 Introduction
2 Literature Review
2.1 Theories Related to Information Security Behaviour
2.2 Protection Motivation Theory
2.3 Theory of Planned Behaviour
2.4 Intentions and Security Behaviour
3 Hypotheses Development
3.1 Protection Motivation Theory
3.2 Theory of Planned Behaviour
3.3 Security Habit
3.4 Environmental Uncertainty
4 Research Design
4.1 Instrument Development
4.2 Survey Administrations and Participants
5 Data Analysis and Results
5.1 Respondent Demographics
5.2 Internal Consistency Reliability and Convergent Validity
5.3 Discriminant Validity
5.4 Coefficient of Determination
5.5 Hypotheses Test Results
6 Discussion
7 Conclusion
A Appendix
References
What Parts of Usable Security Are Most Important to Users?
1 Introduction
2 Methodology
3 Results and Analysis
4 Discussion
5 Conclusions and Future Work
References
WISE Workshops
Foundations for Collaborative Cyber Security Learning: Exploring Educator and Learner Requirements
Abstract
1 Introduction
2 The COLTRANE Project
3 Workshop Focus
References
Reimagining Inclusive Pedagogy in Cybersecurity Education (A Workshop Proposal)
Abstract
1 Introduction
1.1 Overview
References
Author Index

IFIP AICT 615

Lynette Drevin Natalia Miloslavskaya Wai Sze Leung Suné von Solms (Eds.)

Information Security Education for Cyber Resilience 14th IFIP WG 11.8 World Conference, WISE 2021 Virtual Event, June 22–24, 2021 Proceedings

IFIP Advances in Information and Communication Technology

615

Editor-in-Chief Kai Rannenberg, Goethe University Frankfurt, Germany

Editorial Board Members
TC 1 – Foundations of Computer Science: Luís Soares Barbosa, University of Minho, Braga, Portugal
TC 2 – Software: Theory and Practice: Michael Goedicke, University of Duisburg-Essen, Germany
TC 3 – Education: Arthur Tatnall, Victoria University, Melbourne, Australia
TC 5 – Information Technology Applications: Erich J. Neuhold, University of Vienna, Austria
TC 6 – Communication Systems: Burkhard Stiller, University of Zurich, Zürich, Switzerland
TC 7 – System Modeling and Optimization: Fredi Tröltzsch, TU Berlin, Germany
TC 8 – Information Systems: Jan Pries-Heje, Roskilde University, Denmark
TC 9 – ICT and Society: David Kreps, National University of Ireland, Galway, Ireland
TC 10 – Computer Systems Technology: Ricardo Reis, Federal University of Rio Grande do Sul, Porto Alegre, Brazil
TC 11 – Security and Privacy Protection in Information Processing Systems: Steven Furnell, Plymouth University, UK
TC 12 – Artificial Intelligence: Eunika Mercier-Laurent, University of Reims Champagne-Ardenne, Reims, France
TC 13 – Human-Computer Interaction: Marco Winckler, University of Nice Sophia Antipolis, France
TC 14 – Entertainment Computing: Rainer Malaka, University of Bremen, Germany

IFIP – The International Federation for Information Processing IFIP was founded in 1960 under the auspices of UNESCO, following the first World Computer Congress held in Paris the previous year. A federation for societies working in information processing, IFIP’s aim is two-fold: to support information processing in the countries of its members and to encourage technology transfer to developing nations. As its mission statement clearly states: IFIP is the global non-profit federation of societies of ICT professionals that aims at achieving a worldwide professional and socially responsible development and application of information and communication technologies. IFIP is a non-profit-making organization, run almost solely by 2500 volunteers. It operates through a number of technical committees and working groups, which organize events and publications. IFIP’s events range from large international open conferences to working conferences and local seminars. The flagship event is the IFIP World Computer Congress, at which both invited and contributed papers are presented. Contributed papers are rigorously refereed and the rejection rate is high. As with the Congress, participation in the open conferences is open to all and papers may be invited or submitted. Again, submitted papers are stringently refereed. The working conferences are structured differently. They are usually run by a working group and attendance is generally smaller and occasionally by invitation only. Their purpose is to create an atmosphere conducive to innovation and development. Refereeing is also rigorous and papers are subjected to extensive group discussion. Publications arising from IFIP events vary. The papers presented at the IFIP World Computer Congress and at open conferences are published as conference proceedings, while the results of the working conferences are often published as collections of selected and edited papers. 
IFIP distinguishes three types of institutional membership: Country Representative Members, Members at Large, and Associate Members. The type of organization that can apply for membership is a wide variety and includes national or international societies of individual computer scientists/ICT professionals, associations or federations of such societies, government institutions/government related organizations, national or international research institutes or consortia, universities, academies of sciences, companies, national or international associations or federations of companies. More information about this series at http://www.springer.com/series/6102

Lynette Drevin · Natalia Miloslavskaya · Wai Sze Leung · Suné von Solms (Eds.)





Information Security Education for Cyber Resilience 14th IFIP WG 11.8 World Conference, WISE 2021 Virtual Event, June 22–24, 2021 Proceedings


Editors Lynette Drevin North-West University Potchefstroom, South Africa Wai Sze Leung University of Johannesburg Johannesburg, South Africa

Natalia Miloslavskaya The National Research Nuclear University MEPhI Moscow, Russia Suné von Solms University of Johannesburg Johannesburg, South Africa

ISSN 1868-4238  ISSN 1868-422X (electronic)
IFIP Advances in Information and Communication Technology
ISBN 978-3-030-80864-8  ISBN 978-3-030-80865-5 (eBook)
https://doi.org/10.1007/978-3-030-80865-5

© IFIP International Federation for Information Processing 2021

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

This volume contains the papers presented at the 14th World Conference on Information Security Education (WISE 14), held during June 22–24, 2021. It was held in conjunction with the 36th IFIP TC-11 SEC 2021 International Information Security and Privacy Conference (IFIP SEC 2021).

WISE 14 was organized by IFIP Working Group 11.8, an international group of people from academia, government, and private organizations who volunteer their time and effort to increase knowledge in the very broad field of information security through education. WG 11.8 has worked to increase information security education and awareness for almost two decades. This year, WG 11.8 organized the 14th conference of the successful WISE series under the theme “Information Security Education for Cyber Resilience.” We received 19 submissions from around the world. Each submission was blind-reviewed by at least three international Program Committee members. The committee decided to accept 8 full papers. The acceptance rate for the conference was thus 42%.

This conference took place thanks to the support and commitment of many individuals. First, we would like to thank all TC-11 members for continually giving us the opportunity to serve the working group and organize the WISE conferences. Our sincere appreciation also goes to the members of the Program Committee, to the external reviewers, and to the authors who trusted us with their intellectual work. We are grateful for the support of the WG 11.8 Officers: Erik Moore, Jacques Ophoff, and Matt Bishop. In addition, the hosting of WISE would not have been possible without the efforts of the SEC 2021 organizers, Audun Jøsang and Nils Gruschka, and we thank the IFIP SEC 2021 organizers for their support. Furthermore, we wish to acknowledge the EasyChair conference management system, which was used for managing the submissions and reviews of WISE 14 papers.
As for the preparation of this volume, we sincerely thank Miriam Costales and our publisher Springer for their assistance. June 2021

Lynette Drevin Natalia Miloslavskaya Wai Sze Leung Suné von Solms

The original version of this volume was revised: The last name of an author of the first workshop paper has been corrected to “Scarano” and the title of the second workshop paper has been corrected in the Book Backmatter.

Organization

WISE 14 Conference Chair
Erik Moore, Regis University, USA

WISE 14 Program Chairs
Lynette Drevin, North-West University, South Africa
Natalia Miloslavskaya, National Research Nuclear University MEPhI, Russia

WISE 14 Conference Secretariat
Matt Bishop, University of California, Davis, USA

WISE 14 Publications Chairs
Wai Sze Leung, University of Johannesburg, South Africa
Suné von Solms, University of Johannesburg, South Africa

WISE 14 Logistics Chair
Lynn Futcher, Nelson Mandela University, South Africa

WISE 14 Web Chair
Jacques Ophoff, Abertay University, UK

Program Committee
Maria Bada, University of Cambridge, UK
Reinhardt Botha, Nelson Mandela University, South Africa
Jun Dai, California State University, Sacramento, USA
Ludwig Englbrecht, University of Regensburg, Germany
Lothar Fritsch, Karlstad University, Sweden
Ram Herkanaidu, University of Plymouth, UK
Christos Kalloniatis, University of the Aegean, Greece
Sokratis Katsikas, Open University of Cyprus, Cyprus
Basel Katt, Norwegian University of Science and Technology, Norway
Konstantin Knorr, Trier University of Applied Sciences, Germany
Elmarie Kritzinger, University of South Africa, South Africa
Hennie Kruger, North-West University, South Africa
Costas Lambrinoudakis, University of Piraeus, Greece
Dan Likarish, Regis University, USA
Javier Lopez, University of Malaga, Spain
Emmanouil Magkos, Ionian University, Greece
Annlize Marnewick, University of Johannesburg, South Africa
Herbert Mattord, Kennesaw State University, USA
Stig Mjolsnes, Norwegian University of Science and Technology, Norway
Ruxandra F. Olimid, Norwegian University of Science and Technology, Norway
Günther Pernul, University of Regensburg, Germany
Tobias Pulls, Karlstad University, Sweden
Kai Rannenberg, Goethe University Frankfurt, Germany
Carlos Rieder, isec ag, Switzerland
Leo Robert, Laboratory of Informatics, Modelling and Optimization of the Systems, France
Rudi Serfontein, North-West University, South Africa
Alireza Shojaifar, University of Applied Sciences and Arts Northwestern Switzerland, Switzerland
Kerry-Lynn Thomson, Nelson Mandela University, South Africa
Alexander Tolstoy, National Research Nuclear University MEPhI, Russia
Rossouw von Solms, Nelson Mandela University, South Africa
Edgar Weippl, University of Vienna, Austria
Susanne Wetzel, Stevens Institute of Technology, USA

Contents

A Roadmap for Building Resilience

A Brief History and Overview of WISE . . . 3
Matt Bishop, Lynette Drevin, Lynn Futcher, Wai Sze Leung, Natalia Miloslavskaya, Erik L. Moore, Jacques Ophoff, and Suné von Solms

Innovation in Curricula

Formation of General Professional Competencies in Academic Training of Information Security Professionals . . . 13
Natalia Miloslavskaya and Alexander Tolstoy

Electronic Voting Technology Inspired Interactive Teaching and Learning Pedagogy and Curriculum Development for Cybersecurity Education . . . 27
Ryan Hosler, Xukai Zou, and Matt Bishop

Teaching Methods and Tools

Minimizing Cognitive Overload in Cybersecurity Learning Materials: An Experimental Study Using Eye-Tracking . . . 47
Leon Bernard, Sagar Raina, Blair Taylor, and Siddharth Kaza

A Layered Model for Building Cyber Defense Training Capacity . . . 64
Erik L. Moore, Steven P. Fulton, Roberta A. Mancuso, Tristen K. Amador, and Daniel M. Likarish

Measuring Self-efficacy in Secure Programming . . . 81
Matt Bishop, Ida Ngambeki, Shiven Mian, Jun Dai, and Phillip Nico

End-User Security

Children’s Awareness of Digital Wellness: A Serious Games Approach . . . 95
J. Allers, G. R. Drevin, D. P. Snyman, H. A. Kruger, and L. Drevin

Environmental Uncertainty and End-User Security Behaviour: A Study During the COVID-19 Pandemic . . . 111
Popyeni Kautondokwa, Zainab Ruhwanya, and Jacques Ophoff

What Parts of Usable Security Are Most Important to Users? . . . 126
Joakim Kävrestad, Steven Furnell, and Marcus Nohlberg

WISE Workshops

Foundations for Collaborative Cyber Security Learning: Exploring Educator and Learner Requirements . . . 143
Jerry Andriessen, Steven Furnell, Gregor Langner, Gerald Quirchmayr, Vittorio Scarano, and Teemu Tokola

Reimagining Inclusive Pedagogy in Cybersecurity Education (A Workshop Proposal) . . . 146
Angela G. Jackson-Summers

Author Index . . . 151

A Roadmap for Building Resilience

A Brief History and Overview of WISE

Matt Bishop¹, Lynette Drevin², Lynn Futcher³, Wai Sze Leung⁴, Natalia Miloslavskaya⁵, Erik L. Moore⁶, Jacques Ophoff⁷(B), and Suné von Solms⁴

¹ University of California Davis, Davis, CA 95616, USA, [email protected]
² North-West University, Potchefstroom, South Africa, [email protected]
³ Nelson Mandela University, Port Elizabeth, South Africa, [email protected]
⁴ University of Johannesburg, Johannesburg, South Africa, {wsleung,svonsolms}@uj.ac.za
⁵ The National Research Nuclear University MEPhI (Moscow Engineering Physics Institute), Moscow, Russia, [email protected]
⁶ Regis University, Denver, CO 80221, USA, [email protected]
⁷ Abertay University, Dundee, UK, [email protected]

The World Conference on Information Security Education (WISE) has a long history, with the first official event taking place in 1999 in Sweden, hosted by IFIP WG 11.8. Even before this, information security education had been under discussion at IFIP SEC: prior to 1995 several IFIP SEC tracks included a focus on education, and from 1995 to 1998 four annual workshops focused on the theme “Information Security Education—Current and Future Needs, Problems and Prospects”. These events led to the forming of WISE. In the 14 events to date, WISE has been held in 13 countries and on five continents. Table 1 summarises the history of WISE events from 1999 to 2021.

WG 11.8: Information Security Education is a collection of international professionals from academia, the military, government, and the private sector dedicated to spreading knowledge, understanding, and the practice of computer security and information assurance. WG 11.8 was established in 1991 and aims to “promote information security education and training at the university level and in government and industry”¹. More information can be found on the official working group website at https://www.ifiptc11.org/wg118. While topics related to information security education are now predominantly associated with WISE, the event is frequently co-located with IFIP SEC, benefiting from active participation by a broad range of security researchers.

¹ https://www.ifiptc11.org/wg118-mission.

© IFIP International Federation for Information Processing 2021. Published by Springer Nature Switzerland AG 2021. L. Drevin et al. (Eds.): WISE 2021, IFIP AICT 615, pp. 3–9, 2021. https://doi.org/10.1007/978-3-030-80865-5_1

Table 1. History of WISE events (1999–2021)

No. | Year | Dates | Location | Papers | Proceedings
1 | 1999 | 17–19 June | Stockholm, Sweden | 21 | Published by the Department of Computer & Systems Sciences, Stockholm University/Royal Institute of Technology (ISBN 91-7153-910-7)
2 | 2001 | 12–14 July | Perth, Australia | 23 | Published by the School of Computer & Information Science, Edith Cowan University (ISBN 0-7298-0498-4)
3 | 2003 | 26–28 June | Monterey, USA | 28 | https://doi.org/10.1007/978-0-387-35694-5
4 | 2005 | 18–20 May | Moscow, Russia | 36 | Published by the Moscow Engineering Physics Institute (ISBN 5-7262-0565-0)
5 | 2007 | 19–21 July | West Point, USA | 19 | https://doi.org/10.1007/978-0-387-73269-5
6 | 2009 | 27–31 July | Bento Goncalves, Brazil | 10 | Papers combined with WISE 8
7 | 2011 | 7–9 June | Lucerne, Switzerland | 18 | Papers combined with WISE 8
8 | 2013 | 8–10 July | Auckland, New Zealand | 34 | https://doi.org/10.1007/978-3-642-39377-8
9 | 2015 | 26–28 May | Hamburg, Germany | 13 | https://doi.org/10.1007/978-3-319-18500-2
10 | 2017 | 29–31 May | Rome, Italy | 14 | https://doi.org/10.1007/978-3-319-58553-6
11 | 2018 | 18–20 Sep | Poznan, Poland | 11 | https://doi.org/10.1007/978-3-319-99734-6
12 | 2019 | 25–27 June | Lisbon, Portugal | 12 | https://doi.org/10.1007/978-3-030-23451-5
13 | 2020 | 21–23 Sep | Maribor, Slovenia* | 13 | https://doi.org/10.1007/978-3-030-59291-2
14 | 2021 | 22–24 June | Oslo, Norway* | 8 | In press

* These events were held virtually.

The WISE conferences take pride in presenting and publishing high-quality papers. This is ensured through a double-blind review process and publication of accepted papers in official proceedings. The proceedings of WISE 1, 2, and 4 are inaccessible in practice, as they were published as technical reports. However, starting with WISE 5 in 2007, the proceedings have been published by Springer in the series entitled IFIP Advances in Information and Communication Technology. These proceedings have an ISBN and are indexed in Scopus. The WISE proceedings published in the Springer series over the years present a rich history of the development and growth of the conference.


Across the 14 WISE events held to date, a total of 231 papers have been presented by authors from the 15 countries shown in the map below (Fig. 1), including the USA, UK, South Africa, Russia, Sweden, and Norway. The contribution from each country is shown in Fig. 2.

Fig. 1. Countries represented by WISE authors

A total of 114 papers are currently available in the various proceedings published by Springer. A content analysis based on the titles, abstracts, and keywords of these papers provides some insight into WISE topic areas. The top terms are shown in the word cloud in Fig. 3. The top five terms are: Security, Information, Education, Cybersecurity, and Computer.
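A term-frequency count of this kind can be sketched in a few lines. The snippet below is a minimal illustration of the approach, not the authors' actual analysis pipeline; the sample titles and stopword list are placeholders, and a real study would use the full title/abstract/keyword corpus and a proper stopword set.

```python
from collections import Counter
import re

# Placeholder corpus: a few paper titles from this volume, not the full 114-paper set.
titles = [
    "Measuring Self-efficacy in Secure Programming",
    "Minimizing Cognitive Overload in Cybersecurity Learning Materials",
    "A Layered Model for Building Cyber Defense Training Capacity",
]

# Tiny illustrative stopword list; a real analysis would use a fuller one.
STOPWORDS = {"a", "an", "the", "in", "for", "of", "and", "to"}

def top_terms(texts, n=5):
    """Return the n most frequent non-stopword terms across the given texts."""
    words = re.findall(r"[a-z]+", " ".join(texts).lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)

print(top_terms(titles))
```

The same counts could then be fed to a word-cloud library for a figure like Fig. 3.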

Fig. 2. Contributing countries


Fig. 3. Top terms in published WISE papers

Every year the WISE conference solicits submissions under a specific theme and range of topic areas. The theme, and the top terms from accepted submissions, for every WISE conference since 2007 are shown in Table 2.

Table 2. Conference themes and top terms in WISE papers (2007–2020)

WISE No. | Year | Theme | Top terms (excluding ‘information’, ‘security’, and ‘education’)
5 | 2007 | Fifth World Conference on Information Security Education | Computer, training, students, course, forensic, academic
6 | 2009 | Combined with WISE 8 | Programming, training, learning, engineering, requirements, students
7 | 2011 | Combined with WISE 8 | Computer, students, cyber, technology, curriculum, program
8 | 2013 | Information Assurance and Security Education and Training | Cyber, technology, systems, curriculum, students, infrastructure
9 | 2015 | Information Security Education Across the Curriculum | Cyber, training, design, secure, research, software
10 | 2017 | Information Security Education for a Global Digital Society | Cybersecurity, students, network, awareness, computing, research
11 | 2018 | Information Security Education—Towards a Cybersecure Society | Systems, engineering, students, design, cyber, awareness
12 | 2019 | Information Security Education: Education in Proactive Information Security | Research, secure, feedback, training, systems, methods
13 | 2020 | Information Security Education: Information Security in Action | Cybersecurity, students, cyber, problem, technical, development, learning


The top terms listed here exclude ‘information’, ‘security’, and ‘education’ to provide a more nuanced look at WISE topic areas. For example, the term ‘training’ reflects the practical, hands-on nature of many contributions, which examine the transfer of skills or activity processes. It can also be seen that issues relevant to ‘students’ are a common topic area, often extending beyond the classroom to examinations of security curricula. A further noteworthy term is ‘cybersecurity’, which points to an expanding focus on societal issues beyond information security. Ultimately this reflects the rich and multi-disciplinary topic areas which are relevant to WISE.

As of April 2021, the total number of citations (as reported by Google Scholar), as well as the paper with the highest number of citations, for each proceedings volume is shown in Table 3.

Table 3. WISE bibliometrics (2007–2020)

WISE No. | Year | Proceedings citations | Impact factor | Highest cited paper
5 | 2007 | 165 | 8.68 | “How to Design Computer Security Experiments” (S. Peisert & M. Bishop)
6 | 2009 | 95 | 9.5 | “Two Case Studies in Using Chatbots for Security Training” (S. Kowalski, K. Pavlovska, & M. Goldstein)
7 | 2011 | 45 | 4.5 | “An Approach to Visualising Information Security Knowledge” (C. Armstrong)
8 | 2013 | 70 | 5.83 | “The Power of Hands-On Exercises in SCADA Cyber Security Education” (E. Sitnikova, E. Foo, & R. Vaughn)
9 | 2015 | 59 | 4.54 | “Learn to Spot Phishing URLs with the Android NoPhish App” (G. Canova, M. Volkamer, C. Bergmann, R. Borza, B. Reinheimer, S. Stockhardt, & R. Tenberg)
10 | 2017 | 74 | 5.29 | “A Study into the Cybersecurity Awareness Initiatives for School Learners in South Africa and the UK” (E. Kritzinger, M. Bada, & J. Nurse)
11 | 2018 | 30 | 2.73 | “A National Certification Programme for Academic Degrees in Cyber Security” (S. Furnell, Michael K, F. Piper, Chris E, Catherine H, & C. Ensor)
12 | 2019 | 12 | 1 | “Identifying Information Security Risks in a Social Network Using Self-Organising Maps” (R. Serfontein, H. Kruger, & L. Drevin) and “A Short-Cycle Framework Approach to Integrating Psychometric Feedback and Data Analytics to Rapid Cyber Defense” (E. Moore, S. Fulton, R. Mancuso, T. Amador, & D. Likarish)
13 | 2020 | 6 | 0.46 | “An Analysis and Evaluation of Open Source Capture the Flag Platforms as Cybersecurity e-Learning Tools” (S. Karagiannis, E. Maragkos-Belmpas, & E. Magkos)


Officer positions within WG 11.8 include chair, vice-chair, and secretary. Individuals are usually appointed to these positions for three-year terms. Over the years, a total of 14 individuals from seven countries have acted as committee members for WG 11.8, as shown in Table 4.

Table 4. IFIP WG 11.8 Officers (1991–2021)

Harold Highland (USA): Chair 1991–1995
Louise Yngström (Sweden): Vice-chair 1993–1995; Chair 1995–2001; Vice-chair 2001–2004
Simone Fischer-Hübner (Denmark & Sweden): Secretary 1996–2001
Helen Armstrong (Australia): Vice-chair 1995–2001; Chair 2001–2005
Daniel Ragsdale (USA): Vice-chair 2000–2005; Chair 2005–2008
Lynette Drevin (South Africa): Secretary 2001–2005; Secretary 2005–2008
Ronald Dodge (USA): Vice-chair 2005–2008; Secretary 2008–2011; Chair 2011–2014
Natalia Miloslavskaya (Russia): Vice-chair 2005–2008, 2008–2011, 2014–2017, 2017–2020
Colin Armstrong (Australia): Chair 2008–2011
Lynn Futcher (South Africa): Vice-chair 2008–2011; Secretary 2011–2014; Chair 2014–2017, 2017–2020
Kara Nance (USA): Vice-chair 2011–2014
Matt Bishop (USA): Secretary 2014–2017, 2017–2020, 2020–
Erik Moore (USA): Vice-chair 2014–2017, 2017–2020; Chair 2020–
Jacques Ophoff (UK): Vice-chair 2020–

Fig. 4. Several participants at the WISE 12 conference (Lisbon, Portugal). Back row from the left: Erik Moore, Steven Furnell, Johan van Niekerk, Alexander Tolstoy, Vuyolwethu Mdunyelwa, Audun Jøsang. Front row from the left: Matt Bishop, Natalia Miloslavskaya, Suné von Solms, Lynette Drevin, Lynn Futcher, Rudi Serfontein.


Innovation in Curricula

Formation of General Professional Competencies in Academic Training of Information Security Professionals

Natalia Miloslavskaya(B) and Alexander Tolstoy

The National Research Nuclear University MEPhI (Moscow Engineering Physics Institute), 31 Kashirskoye shosse, Moscow, Russia
{NGMiloslavskaya,AITolstoj}@mephi.ru

Abstract. This paper presents research results on the competencies and competency models of professionals in the Information Security (IS) field. The group structure of competencies, which forms a Competency Model (CM) for the academic training of IS professionals, has been determined, and the fundamental difference between the academic community’s and the professional community’s CMs is shown. The paper demonstrates the urgency of forming a group of general professional competencies (GPCs) during IS academic training, and proposes a group structure model in which specific subgroups are identified. The relevance of studying the first (so-called basic) level of subgroups of general educational competencies is determined; these are versatile across a variety of academic training programmes. Basic GPCs are formulated, and their characteristics are determined. The training modules, with their annotations and labor inputs, are described as parts of academic disciplines. The validity of the results obtained is confirmed by the positive experience in developing competencies and CMs in the framework of training IS professionals on specific educational programmes at the National Research Nuclear University MEPhI (Moscow Engineering Physics Institute) (Moscow, Russian Federation).

Keywords: Information security · Academic training · Competency model · Competence characteristics · Knowledge · Skills · Competence groups · Training module · Academic discipline

Abbreviations

AD – Academic Discipline
BIT – Basic Information Technologies
CM – Competency Model
CS – Cyber Security
CU – Conventional Unit
IP – Information Protection
IPM – Information Protection Management
IS – Information Security
ISEE – Information Security Ensuring Engineering
ISLRF – Information Security Legal and Regulatory Framework
ISM – Information Security Management
IT – Information Technology
FISEM – Fundamentals of Information Security Ensuring Methodology
GPCs – General Professional Competencies
LI – Labor Intensity
NRNU MEPhI – National Research Nuclear University MEPhI (Moscow Engineering Physics Institute)
PA – Professional Activity
PCs – Professional Competencies
PEP – Professional Educational Profile
ST – Socio-Technical
TBISC – Taxonomy Basis of Information Security Concepts
TISO – Typical Information Security Objects

© IFIP International Federation for Information Processing 2021. Published by Springer Nature Switzerland AG 2021. L. Drevin et al. (Eds.): WISE 2021, IFIP AICT 615, pp. 13–26, 2021. https://doi.org/10.1007/978-3-030-80865-5_2

1 Introduction

Research shows [1–6] that competencies and Competency Models (CMs) are important elements of the HR systems of any professional community and of educational systems, including the academic community and professional certification systems. Currently, there are many definitions of the term “competency” [1–6]. For completeness of the paper’s topic, we will rely on the following definitions: competency is the ability of a subject (person) to apply knowledge and skills to achieve the desired results [7]; a CM is a structured set of necessary, identifiable, and measurable competencies [1].

Existing approaches to CM development take into account the scope (areas and types) of professional activity, as well as the application direction of such models. For example, for the cybersecurity sphere, the most interesting is the CM presented in the report of the Apollo Education Group, Inc. and the University of Phoenix [6, 8], based on the CM classification developed by the US Department of Labor. It is divided into nine levels, forming three clusters [9, 10].

An analysis of such a system of competencies shows that there can be many CMs, and problems then arise in their practical use, each associated with a specific context of application. This paper highlights the problem of coordinating the CMs developed by the academic community (competencies of all three clusters and eight classification levels [9, 10]) with the CMs existing in the professional community (third cluster, classification levels 5–8 [9, 10]). Figure 1 shows a structure that explains the links between competencies and CMs related to the different communities.

The professional community formulates the competence requirements for professionals that are in demand in a specific professional field (for example, in the IS field) in

Formation of General Professional Competencies in Academic Training


Fig. 1. The structure of links between competencies and CMs

the form of Professional Competencies (PCs) that form the basis of the corresponding CM (for example, the basis of professional standards, as is done in Russia [11]). Academic training should be aimed primarily at satisfying the labor market's demand for professionals with the PCs formulated by the professional community. Therefore, the PCs included in the CMs related to academic training (CM "A") are directly related to the PCs from the CM "P" related to the professional community (Fig. 1).

To form specific PCs, students must have certain basic competencies, among which a group of universal competencies and a group of general professional competencies (GPCs) can be distinguished. From this follows the conclusion that there are significant differences between CM "A" and CM "P" (Fig. 1).

The purpose of this paper is to investigate the features of the competencies related to the academic training of IS professionals, together with the structure and dynamics of their formation. The structure of the groups of GPCs and PCs is determined, the basic GPCs are formulated, and the training modules designed to implement the basic GPCs are described.

2 Basic Concepts and Definitions

Before formulating GPCs in the academic training of IS professionals, it is necessary to make the following remarks.

1. The concept defined by the term "IS" requires clarification. In the context of this paper, we will use two concepts defined by this term: in its broad and narrow senses. IS in a broad sense is an area of professional activity as a field of science, engineering and technology, covering a set of problems associated with the provision (preservation) of specific properties of an object (called the IS properties). This definition is of a general conceptual nature and has no practical significance. IS in the narrow sense is an entity that has practical significance, for example, for the subject of this paper, the competence requirements for IS professionals. Most often this concept is defined as follows [7]: IS is the preservation of confidentiality, integrity


N. Miloslavskaya and A. Tolstoy

and availability of information. In addition, other properties, such as authenticity, accountability, non-repudiation, and reliability, can also be involved.

2. It should be noted that the above narrow definition of the IS term raises some questions that it does not answer. For example, preserving the properties of information is an information-related process, and this process should involve a specific object (or environment) to which the information belongs; the definition says nothing about this. Thus, the existing definition of the IS term [7] is difficult to interpret unambiguously. The Russian-language standard [12] defines IS with respect to such an object as an "organization": the IS of an organization is the state of protection of the organization's interests in the face of threats in the information sphere. Here, protection is achieved by ensuring a set of IS properties such as the confidentiality, integrity and availability of the information assets and the organization's infrastructure. The priority of the IS properties is determined by the importance of the information assets for the interests (goals) of the organization. This definition uses terms such as "organization" (IS object), "state of protection", "information sphere", "asset", "threat" and "IS properties", which does not contradict the modern approach to ensuring IS outlined in the ISO/IEC 27000 series.

3. As a generalization and development of this interpretation of the IS term, it is proposed to use the following terms and definitions. IS of an object (the object's IS) is the state of protection of the object's assets from threats in the information sphere, in which damage to the object's assets and to the object itself will not exceed the permissible level. The state of protection is achieved by ensuring the set of IS properties of the object's assets, such as availability, integrity, confidentiality, non-repudiation, reliability, authenticity, and accountability. The information sphere is a set of information; objects; information processing objects (information systems, automated systems, information technology (IT) objects, communication networks); IT; subjects whose activities are related to the formation and processing of information, the development and use of these technologies, and IS ensuring; as well as a set of mechanisms for regulating the relevant public relations. Ensuring an object's IS is the implementation of the process (or processes) of ensuring such a state of protection of the object's assets from threats in the information sphere that the possible damage to the object's assets and the object itself from the implementation of threats will not exceed the permissible level.

These definitions describe more clearly the concept to which the IS term refers and allow a well-grounded structuring and description of the competency characteristics of IS professionals.

3 Competency Model "A" and Competencies Structure

In the CMs of the academic community, three groups of competencies are distinguished: universal (UCs), general professional (GPCs), and professional (PCs) (Fig. 1).

The first group (UCs) includes competencies that are weakly dependent on the direction of professional training. They belong to the first cluster and the first three levels


of the classification [9, 10]. An example of the UCs for a graduate of an educational institution who has mastered Bachelor's or Master's degree programmes in IS can be found in [12].

The second group (GPCs) includes competencies that are directly related to the area of professional activity. These competencies belong to the fourth and fifth levels, united in the second cluster of the well-known classification [9, 10]. Moreover, these GPCs have a certain level of universality within a specific field of training: they will be the same for various educational programmes. The importance of GPCs for a specific field of academic training explains the choice of the research topic, the results of which are presented below.

The third group (PCs) includes competencies directly related to the PCs described in the corresponding CM of the professional community (CM "P"). In academic training, the definition of GPCs should take into account the educational levels (Bachelor, Master) and the professional educational profile (PEP). All three groups of competencies are interconnected, taking into account the levels of their formation (Fig. 1).

These facts confirm the earlier statement that the CMs used by the academic community differ significantly from the CMs used by the professional community. These features can be reflected in educational standards for the academic training of IS professionals (Fig. 1), as is done in the Russian Federation [13].

4 Structure of Groups of GPCs and PCs

In the academic training of professionals (including in the IS field), not only PCs but also GPCs are important. The latter are a mandatory prerequisite for the formation of PCs in students. To divide the competencies into GPCs and PCs, it is necessary to determine the separation criterion. Here we can apply the approach that was used to determine the initial data for CM development [1]. Such initial data are the area, objects, types, and tasks of professional activity. In addition to these initial data, the following factors must be taken into account:

1. IS as an area of professional activity (PA) is a set of sub-areas that have their own specifics with respect to PA objects, types, and tasks. Such sub-areas include the area of IS and information protection (IP), the area of cybersecurity (CS) and the area of IS of socio-technical (ST) objects; other sub-areas are also possible. Each PA sub-area defines a PEP.

2. PA objects, types and tasks can be typical for the entire IS area (T1), typical for a certain sub-area (T2), or specific to a particular PEP.

3. Processes and measures for ensuring IS, and IT, can likewise be typical for the entire PA area (T1), typical for a sub-area (T2), or specific to a PEP.

4. The methodology, methods and techniques for ensuring IS can relate to the entire IS area (Methodology), to a specific PA sub-area (Methods 1, 2, 3 for the sub-areas of IP, CS and ST IS), or to a specific PEP taking into account a specific PA sub-area (Techniques 1, 2, 3).


5. The formation of PCs should also be based on GPCs related to the fundamental sciences: mathematics, physics and information theory as the basis for the formation of systems thinking in trainees (F1), and special chapters of mathematics, physics and electronics, necessary in some cases to understand and describe the processes and measures for ensuring IS as well as to describe IT (F2).

The initial data and the factors highlighted above are combined and presented in Table 1.

Table 1. Selection criteria for GPCs and PCs

PA area/sub-area | PA objects, types and tasks | Processes and measures | IT | Methodology, methods and techniques | Fundamental area | GPCs/PCs
IS/    | T1  | T1  | T1  | Methodology | F-1  | GPCs
/IP    | T2  | T2  | T2  | Method 1    | F-21 | GPCs
/CS    | T2  | T2  | T2  | Method 2    | F-22 | GPCs
/ST IS | T2  | T2  | T2  | Method 3    | F-23 | GPCs
/IP    | PEP | PEP | PEP | Technique 1 | -    | PCs
/CS    | PEP | PEP | PEP | Technique 2 | -    | PCs
/ST IS | PEP | PEP | PEP | Technique 3 | -    | PCs

Analysis of the data from Table 1 allows us to divide the competencies into two groups (GPCs and PCs) and to structure each of these groups, with the assumption that the groups should be related to each other (Fig. 2). There are five subgroups in the GPCs group, each of which consists of two parts (two levels of competencies): GPC-A (GPC-A1 and GPC-A2), GPC-T (GPC-T1 and GPC-T2), GPC-P (GPC-P1 and GPC-P2), GPC-M (GPC-M1 and GPC-M2), and GPC-F (GPC-F1 and GPC-F2). The first parts include competencies that relate to the entire PA area, IS. The second parts combine the competencies related to one of the PA sub-areas, which determines the PEP.

Each of these subgroups (except for the GPC-F subgroup) combines competencies that reflect knowledge and skills about typical PA objects, types and tasks (GPC-A), typical IT used by typical PA objects (GPC-T), typical processes and measures to ensure IS (GPC-P), and the methodology (GPC-M1) and methods (GPC-M2) to ensure the IS of typical objects (GPC-M). The GPC-F subgroup contains competencies that are formed in the study of either the fundamental sciences, which include mathematics and physics (GPC-F1), or applied questions of mathematics, physics, electronics and electrical engineering (GPC-F2). The first level forms students' outlook and systemic thinking, and the second is necessary for the formation of PCs (Fig. 2).


Fig. 2. The structure of groups of GPCs and PCs

The structure of the GPCs group may have some peculiarities related to the curriculum for training professionals. Firstly, the competencies related to the first level of the first four subgroups of the GPCs group are universal with respect to the educational levels (Bachelor, Master) and to the PEP in IS. Secondly, the range of second-level competencies of the first four subgroups will differ between Bachelor's and Master's training. Thirdly, in Master's training there will be no first-level competencies of the GPC-F subgroup (GPC-F1), and the second-level competencies of this subgroup may also be absent.

The formation of PCs should take into account the selected level and profile of training of IS professionals and should be associated with specific PA objects, types and tasks, IT, processes, measures, and techniques of ensuring IS. It should also be borne in mind that PCs are directly related to GPCs. From this we can reasonably conclude that the structure of the PCs group should be close to the structure of the GPCs group (Fig. 2). The PCs group consists of four PC subgroups directly related to the similar subgroups of the GPCs group. Each of these subgroups combines competencies that reflect knowledge and skills for specific PA objects, types and tasks (PC-A), specific IT used by specific PA objects (PC-T), specific IS processes and measures (PC-P), and techniques of ensuring the IS of specific PA objects (PC-M).

In the following, only the GPCs related to the first level of the subgroups GPC-A, GPC-T, GPC-P, and GPC-M will be considered. Since these competencies do not depend on the PEP for IS professionals and are common to the curricula for training Bachelors and Masters, they will be called the basic GPCs.
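The group structure just described can be made concrete with a small illustrative data model (a sketch in Python; the class and field names are our own, not notation from the paper):

```python
from dataclasses import dataclass, field

# Level-1 competencies apply to the whole IS professional-activity (PA) area;
# level-2 competencies are specific to one PA sub-area / PEP.
@dataclass
class GPCSubgroup:
    name: str                                        # e.g. "GPC-A"
    level1: list[str] = field(default_factory=list)  # area-wide part
    level2: list[str] = field(default_factory=list)  # sub-area-specific part

@dataclass
class PCSubgroup:
    name: str              # e.g. "PC-A"
    based_on: GPCSubgroup  # PCs are formed on top of the related GPC subgroup

# The five GPC subgroups of Fig. 2, each with its two levels
gpc = {s: GPCSubgroup(s, [f"{s}1"], [f"{s}2"])
       for s in ("GPC-A", "GPC-T", "GPC-P", "GPC-M", "GPC-F")}

# Four PC subgroups mirror the first four GPC subgroups (GPC-F has no PC twin)
pcs = [PCSubgroup(s.replace("GPC", "PC"), gpc[s])
       for s in ("GPC-A", "GPC-T", "GPC-P", "GPC-M")]

# The "basic GPCs" of Sect. 5 are the level-1 parts of those four subgroups
basic_gpcs = [gpc[s].level1[0] for s in ("GPC-A", "GPC-T", "GPC-P", "GPC-M")]
print(basic_gpcs)  # ['GPC-A1', 'GPC-T1', 'GPC-P1', 'GPC-M1']
```

The sketch only encodes the relationships stated in the text: two levels per GPC subgroup, and a PC subgroup linked to each of the first four GPC subgroups.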


5 Basic GPCs

The analysis of the definitions of terms related to the IS concept in the narrow sense, formulated above, explains why subgroups of competencies related to IS objects (GPC-A), IT (GPC-T), IS ensuring processes (GPC-P), and IS ensuring methodology (GPC-M) are identified in the structure of the GPCs group. Let us consider the first-level GPCs included in the subgroups GPC-A1, GPC-T1, GPC-P1, and GPC-M1, related to the PA IS area (Fig. 2). First, the competencies from which the formation of the competency characteristics of the future IS professional begins were identified (subgroup GPC-M1): competencies that reflect knowledge and skills about the modern methodology of ensuring IS (competencies CM1.*). Then the GPCs of the remaining subgroups were determined in the following order: GPC-A1 (CA1.*), GPC-T1 (CT1.*) and GPC-P1 (CP1.*). The complete list of first-level GPCs is given in Table 2.

Table 2. GPCs of the first level (where I1 – index of the subgroup of competencies; I2 – index of the competence)

I1     | I2    | Competence. The graduate must be able to
GPC-M1 | CM1.1 | Use effectively the terminology system related to the PA IS object
       | CM1.2 | Be guided in their practice by the provisions of the legal and regulatory framework for ensuring IS
       | CM1.3 | Base their activities on the modern IS ensuring methodology
GPC-A1 | CA1.1 | Describe typical PA areas
       | CA1.2 | Describe typical IS objects
       | CA1.3 | Describe the typical tasks of IS ensuring for typical IS objects
GPC-T1 | CT1.1 | Describe the typical IT used by typical IS objects
       | CT1.2 | Describe typical network technologies used by typical IS objects
       | CT1.3 | Describe typical technologies for creating and managing databases
GPC-P1 | CP1.1 | Describe typical IS ensuring processes that are applicable to typical PA objects
       | CP1.2 | Describe the standard IS ensuring measures implementing standard IS ensuring processes applicable to typical PA objects

The characteristics of GPCs of the first level are given in Table 3.


Table 3. GPCs characteristics (indicators) (where I2 – competence's index; I3 – characteristic's index)

I2    | I3    | Characteristics. The graduate must know (K1.*.*)/be able (S1.*.*)
CM1.1 | KM1.1 | The basics of systematics (taxonomy) of IS concepts
      | KM1.2 | References of IS-related terms and their definitions
      | KM1.3 | IS-related terms and their definitions adopted by the professional community
      | KM1.4 | Application features of IS-related terms and their definitions
      | SM1.1 | To apply reasonably IS-related terms and their definitions
CM1.2 | KM2.1 | Legal norms of the national legal system related to the IS field
      | KM2.2 | International regulatory framework related to the IS field
      | KM2.3 | National regulatory framework related to the IS field
      | KM2.4 | Application features of legal and organizational norms related to the IS field
      | SM2.1 | To apply reasonably the legal and organizational norms related to the IS field
CM1.3 | KM3.1 | Fundamental security features
      | KM3.2 | The basics of the process approach to IS ensuring
      | KM3.3 | The basics of a risk-based approach to IS ensuring
      | KM3.4 | Typical procedure for ensuring IS
      | SM3.1 | To define the procedure for ensuring IS of a typical PA object
      | SM3.2 | To define the requirements for the IS ensuring stages of a typical PA object
CA1.1 | KA1.1 | Characteristics of PA areas
      | SA1.1 | To choose reasonably the PA area
CA1.2 | KA2.1 | The basics of classification of PA objects
      | KA2.2 | Typical characteristics of typical PA objects
      | SA2.1 | To describe a typical PA object
CA1.3 | KA3.1 | Typical IS ensuring tasks related to typical PA objects
      | SA3.1 | To formulate the requirements for ensuring IS of a typical PA object
CT1.1 | KT1.1 | Basic IT used in the implementation of typical PA objects
      | ST1.1 | To choose reasonably IT for the implementation of a typical PA object
CT1.2 | KT2.1 | Basic network technologies
      | KT2.2 | Basics of classification of open information systems
      | KT2.3 | Internet technology basics
      | ST2.1 | To choose reasonably a network technology for the implementation of a typical PA object
CT1.3 | KT3.1 | Fundamentals of technologies for creating and managing databases
      | ST3.1 | To choose reasonably a technology for creating and managing a database for the implementation of a typical PA object
CP1.1 | KP1.1 | Fundamentals of systems engineering of IS ensuring processes
      | KP1.2 | Features of typical IP processes
      | KP1.3 | Features of typical IS management (ISM) processes
      | SP1.1 | To formulate requirements for typical IP processes
      | SP1.2 | To formulate requirements for typical ISM processes
CP1.2 | KP2.1 | Features of typical measures that implement typical IP processes
      | KP2.2 | Features of typical measures that implement typical ISM processes
      | SP2.1 | To choose reasonably typical measures that implement typical IP processes
      | SP2.2 | To choose reasonably typical measures that implement typical ISM processes

6 Training Modules for Implementing Basic GPCs

The description of the basic GPCs should be presented in the corresponding CM of the graduate being trained on a specific educational programme in IS with its curriculum. Training modules (TMs) are inserted into the curriculum separately or as part of some academic disciplines (ADs). Each TM/AD must have its own syllabus and an assessment of its labor intensity (LI) in conventional units (CUs), an analog of Credits. Since the implementation of a specific TM/AD is aimed at the formation of specific basic GPCs (in this case), the curriculum should establish a connection between the TM/AD and the corresponding competence.

At present, the National Research Nuclear University MEPhI (Moscow Engineering Physics Institute) (NRNU MEPhI) (Russia) has accumulated many years of experience in training professionals in this field on Bachelor's and Master's degree programmes. The corresponding CM can be found on the NRNU MEPhI website http://eis.mephi.ru/AccGateway/index.aspx?report_param_gosn=3&report_param_ismagister=true. The NRNU MEPhI carries out training within the framework of the approved educational standards, CMs of graduates, curricula and syllabi of educational disciplines. Bachelors' training is conducted under the "Automated Systems Security" degree programme. Masters' training is conducted under four degree programmes: "Application of Cryptology Methods in ISMS"; "IS Maintenance for Key Information Infrastructure Systems"; "Business Continuity and IS Maintenance"; and "Information and Analytical Support of Financial Monitoring". The papers [14] and [1] share the NRNU MEPhI's experience in training Bachelors and Masters in IS.

All these programmes are developed in accordance with the Professional standards of the Russian Federation for IS [15] and the Federal Educational Standards of the Russian Federation in the IS Direction [16]. A group of six Professional standards formulates general and special labor functions that specific professionals can implement, as well as requirements for the levels of knowledge and skills corresponding to these labor functions and related to the levels of basic education and experience acquired in a practical field. These professional standards are used by organizations to create and implement their personnel


management systems. A group of seven Federal Educational Standards is linked directly to these IS Professional standards. They formulate general competencies and general PCs and normalize the development of special PCs.

As a result of generalizing this experience, the following TMs (index I4) were developed for the formation of the basic GPCs described above: "Taxonomy Basis of IS Concepts" (TBISC); "IS Legal and Regulatory Framework" (ISLRF); "Typical IS Objects" (TISO); "Fundamentals of Information Security Ensuring Methodology" (FISEM); "Information Security Ensuring Engineering" (ISEE); and "Basic Information Technology" (BIT). The TMs' description, including an abstract (a short listing of the topics of the TM syllabus), is given in Table 4. In the curricula for IS Bachelor's and Master's training at the NRNU MEPhI, the above TMs are included in the following ADs: TBISC, ISLRF, TISO, FISEM and ISEE (LI = 2 CUs) and BIT (LI = 1 CU).

Table 4. Description of TMs intended for the formation of basic GPCs (I4 – TM index; LI in CUs; I2 – competencies formed)

TBISC (LI 0.25; CM1.1, CM1.2). Taxonomy as a tool for the taxonomy of concepts. System of basic IS concepts. References to terms and their definitions. The term system for the IS field. Definitions and relationships of the concepts "security", "IS", "IP", "cybersecurity", "socio-technical security".

ISLRF (LI 0.25; CM1.1, CM1.2). Subject and content of the problem of legal and regulatory IS ensuring. Fundamentals of the legal and regulatory framework of IS ensuring in the Russian Federation. Technical regulation in the IS field. IS ensuring standardization: international standards, national standards, corporate standards. Features of the application of legal and organizational norms related to the IS field.

TISO (LI 0.5; CA1.2, CA1.3). Classification and general description of a typical IS object. Organization as an IS object. Information as an IS object. Classification of information. Assets of a typical IS object and their identification. Typical objects of automated information processing (information systems, automated systems, cybersecurity (CS) objects and socio-technical (ST) objects).

FISEM (LI 0.5; CM1.1, CM1.2, CM1.3, CA1.1). Conceptual framework for IS ensuring. PA objects and tasks in the IS field. Fundamentals of IP methodology. Fundamental security features. Conceptual foundations of the modern IS ensuring methodology (process approach; management approach; risk-oriented approach; role of IS incident management processes; management of control processes and business continuity processes).

ISEE (LI 0.5; CM1.3, CP1.1, CP1.2). Process approach. Cyclic Shewhart-Deming model. IS Maintenance System of a typical object and its typical structure. Typical processes of the IP system. Typical IS Management (ISM) system of a typical object and its structure. Typical processes of the ISM system. Typical processes of the IP Management (IPM) system. Typical measures that implement typical processes of the IP system. Typical measures that implement typical IPM processes. Typical measures that implement typical ISM processes.

BIT (LI 1.0; CT1.1, CT1.2, CT1.3). Basic IT used in the implementation of typical PA objects and IS objects. The choice of IT for the implementation of a typical object. Basic network technologies. Fundamentals of classification of open information systems. Internet technology basics. The choice of network technology for the implementation of a typical object. Fundamentals of technologies for creating and managing databases. Selection of a typical technology for creating and managing databases for the implementation of a typical object.

Total LI: 3.0 CUs.
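The TM-to-competency mapping in Table 4 lends itself to a quick consistency check: the module intensities should sum to the stated total of 3.0 CUs, and every first-level competency from Table 2 should be targeted by at least one module. A short sketch (module and competency identifiers transcribed from Tables 2 and 4):

```python
# (TM, labor intensity in CUs, competencies it forms) — transcribed from Table 4
modules = [
    ("TBISC", 0.25, {"CM1.1", "CM1.2"}),
    ("ISLRF", 0.25, {"CM1.1", "CM1.2"}),
    ("TISO",  0.5,  {"CA1.2", "CA1.3"}),
    ("FISEM", 0.5,  {"CM1.1", "CM1.2", "CM1.3", "CA1.1"}),
    ("ISEE",  0.5,  {"CM1.3", "CP1.1", "CP1.2"}),
    ("BIT",   1.0,  {"CT1.1", "CT1.2", "CT1.3"}),
]

# Total labor intensity across all TMs
total_li = sum(li for _, li, _ in modules)
print(total_li)  # 3.0

# Basic GPCs of the first level (Table 2)
basic = {"CM1.1", "CM1.2", "CM1.3",
         "CA1.1", "CA1.2", "CA1.3",
         "CT1.1", "CT1.2", "CT1.3",
         "CP1.1", "CP1.2"}

# Every basic GPC should be formed by at least one TM
covered = set().union(*(comps for _, _, comps in modules))
print(sorted(basic - covered))  # [] — no uncovered competency
```

Both checks pass against the tabulated data: the LIs sum to 3.0 CUs, and the union of the modules' competency sets covers all eleven first-level GPCs.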

7 Conclusion

Analysis of modern approaches to the design of CMs, in terms of defining competencies of various levels in the IS field, allowed us to obtain the following results:

1. The conceptual base related to the IS field was developed, which made it possible to structure the PA areas, objects, types, and tasks. These data served as the basis for the study of competencies.

2. The structure of the groups of competencies that form the CMs of IS professionals' academic training was determined and compared with the structure of the competencies related to the CMs used by the professional community; their fundamental differences are shown.

3. The relevance of forming the GPCs group for the academic training of IS professionals is shown. A model of the structure of this group is proposed, in which specific subgroups are identified and divided into levels with varying degrees of universality. The relevance of studying the first level of the GPC subgroups, called basic, is determined; it is shown that these competencies are universal for various programmes of academic training.

4. The basic GPCs are formulated and their characteristics are determined. The TMs are described as part of ADs with their definitions, abstracts and LIs.

The validity of the results obtained is confirmed by the positive experience of developing competencies and CMs in the framework of training IS professionals on specific


degree programmes at the NRNU MEPhI. While studying, more than half of the graduates undergo internships in leading Russian organizations well known in the IS field. The level and quality of graduates are controlled by attracting highly qualified and eminent specialists from state and commercial organizations to the State Attestation Commission, which governs the defense of Graduation Qualifying Works. Graduates are employed in organizations of the banking sector and in leading government organizations involved in various aspects of ensuring IS. Over the last four years, an average of 65 students have graduated from the above four Master's degree programmes. These graduates are citizens of Russia and of foreign countries: Belarus, Kazakhstan and Kyrgyzstan.

The features of the results obtained should be noted. Firstly, they are new in terms of the models developed to describe the competency structures. Secondly, these models are versatile in that they do not depend on the field of academic training of professionals. Thirdly, the proposed model of the GPCs structure made it possible to substantiate the formulation of competencies and to describe their characteristics necessary for CM development, using the example of the academic training of IS professionals.

The results obtained can become the basis for further research of competencies for various PA IS sub-areas, types, objects, and tasks. Moreover, methods of controlling the level of formation of a specific competence should be developed in parallel; this is important both in the training and in the certification of professionals.

Acknowledgement. This work was supported by the MEPhI Academic Excellence Project (agreement with the Ministry of Education and Science of the Russian Federation of August 27, 2013, project no. 02.a03.21.0005).

References

1. Vybornov, A., Miloslavskaya, N., Tolstoy, A.: Designing competency models for cybersecurity professionals for the banking sector. In: Drevin, L., Von Solms, S., Theocharidou, M. (eds.) WISE 2020. IAICT, vol. 579, pp. 81–95. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59291-2_6
2. Miloslavskaya, N., Tolstoy, A.: Professional competencies level assessment for training of masters in information security. In: Bishop, M., Miloslavskaya, N., Theocharidou, M. (eds.) WISE 2015. IAICT, vol. 453, pp. 135–145. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-18500-2_12
3. Miloslavskaya, N., Tolstoy, A.: ISO/IEC competence requirements for information security professionals. In: Bishop, M., Futcher, L., Miloslavskaya, N., Theocharidou, M. (eds.) WISE 2017. IAICT, vol. 503, pp. 135–146. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-58553-6_12
4. Alsmadi, I., Burdwell, R., Aleroud, A., Wahbeh, A., Al-Qudah, M., Al-Omari, A.: Practical Information Security: A Competency-Based Education Course, p. 317. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-72119-4
5. Mansfield, R.S.: Practical Questions in Building Competency Models. Workitect Inc. (2005). https://pdfs.semanticscholar.org/91d6/2eceb2b4288bde92b46f4c58c9dc5bcf9827.pdf. Accessed 18 Jan 2021
6. Competency Models for Enterprise Security and Cybersecurity. Research-Based Frameworks for Talent Solutions. University of Phoenix, Apollo Education Group Inc. (2015). http://www.apollo.edu/content/dam/apolloedu/microsite/security_industry/AEG-UOPX%20Security%20Competency%20Models%20report.pdf. Accessed 18 Jan 2021
7. ISO/IEC 27000:2018 Information technology – Security techniques – Information security management systems – Overview and vocabulary
8. Cybersecurity Workforce Competencies: Preparing Tomorrow's Risk-Ready Professionals. Apollo Education Group, University of Phoenix, (ISC)2 and (ISC)2 Foundation (2014, 2015). http://www.apollo.edu/content/dam/apolloedu/microsite/security_industry/AEG-PS264521-CJS-STEM-CYBERSECURITY.pdf. Accessed 18 Jan 2021
9. Introduction to the Tools. Report, U.S. Department of Labor "Competency Model Clearinghouse public toolkit". http://www.careeronestop.org/competencymodel/careerpathway/cpwoverview.aspx. Accessed 18 Jan 2021
10. Competency Model General Instructions. Report, U.S. Department of Labor "Competency Model Clearinghouse public toolkit". http://www.careeronestop.org/competencymodel/careerpathway/CPWGenInstructions.aspx. Accessed 18 Jan 2021
11. Professional standards of the Russian Federation for Information Security Specialists. http://azi.ru/professionalnye-standarty. (in Russian)
12. GOST R 53114-2008 Information Protection. Ensuring Information Security in the Organization. Basic Terms and Definitions. http://docs.cntd.ru/document/gost-r-53114-2008. (in Russian)
13. Federal Educational Standards of the Russian Federation in the "Information Security" Direction. http://azi.ru/obrazovatelnye-standarty. (in Russian)
14. Budzko, V., Miloslavskaya, N., Tolstoy, A.: Forming the abilities of designing information security maintenance systems in the implementation of educational programmes in information security. In: Drevin, L., Theocharidou, M. (eds.) WISE 2018. IAICT, vol. 531, pp. 108–120. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99734-6_9
15. Professional standards of the Russian Federation for information security professionals. http://azi.ru/professionalnye-standarty. Accessed 18 Jan 2021. (in Russian)
16. Federal Educational Standards of the Russian Federation in the Information Security Direction. http://azi.ru/obrazovatelnye-standarty. Accessed 18 Jan 2021. (in Russian)

Electronic Voting Technology Inspired Interactive Teaching and Learning Pedagogy and Curriculum Development for Cybersecurity Education Ryan Hosler1 , Xukai Zou1(B) , and Matt Bishop2(B) 1

2

Indiana University Purdue University Indianapolis, 420 University Blvd, Indianapolis, IN 46202, USA [email protected], [email protected] University of California Davis, 1 Shields Avenue, Davis, CA 95616, USA [email protected] Abstract. Cybersecurity is becoming increasingly important to individuals and society alike. However, due to its theoretical and practical complexity, keeping students interested in the foundations of cybersecurity is a challenge. One way to excite such interest is to tie it to current events, for example elections. Elections are important to both individuals and society, and typically dominate much of the news before and during the election. We are developing a curriculum based on elections and, in particular, an electronic voting protocol. Basing the curriculum on an electronic voting framework allows one to teach critical cybersecurity concepts such as authentication, privacy, secrecy, access control, encryption, and the role of non-technical factors such as policies and laws in cybersecurity, which must include societal and human factors. Student-centered interactions and projects allow them to apply the concepts, thereby reinforcing their learning. Keywords: Electronic voting Curricula

1 Introduction

Cybersecurity defends against attacks that plague individuals and organizations daily; hence, cybersecurity has become integral to society. Cyberattacks and defenses are based on a combination of theoretical and practical knowledge and understanding, which often challenges students studying the foundations of cybersecurity. One way to excite interest is to tie the theory and practice to important events such as elections. Safe and secure elections are imperative for a democracy to function. The uniqueness and ubiquity of elections and the widespread use of E-voting systems emphasize the special role that E-voting technology can play in academic cybersecurity education in both college and high school.

© IFIP International Federation for Information Processing 2021
Published by Springer Nature Switzerland AG 2021
L. Drevin et al. (Eds.): WISE 2021, IFIP AICT 615, pp. 27–43, 2021. https://doi.org/10.1007/978-3-030-80865-5_3


Voting has a unique combination of security and integrity requirements [1]. For example, in secret-ballot voting, the most widely used voting scheme (and the one used exclusively in the United States), a key security requirement is that the voter cannot be associated with a particular ballot, even if the voter wishes to disclose that relationship. This is very different from the security requirements for a banking ATM, for example, where a customer must be able to prove their association with a transaction. Thus, E-voting technology involves many specific and sometimes conflicting requirements, and the topic covers a large knowledge base of cryptography, system security, and network security. Studying a cryptography-based network E-voting system covers many aspects of computer security and information assurance in a way that lets students see both the theoretical and practical benefits and disadvantages of various techniques.

From a pedagogic point of view, interactive teaching and learning methodologies have become increasingly attractive. Interactive learning's impact on student learning outcomes, particularly in cybersecurity education, has been proven effective in both theoretical research [2,3] and practical systems such as Clicker. Moreover, a case study showed that E-voting technology can be used to achieve student learning objectives satisfying ABET (Accreditation Board for Engineering and Technology) requirements [1], as does other research [4–6].

In this paper, we propose an E-voting based, student-centered interactive teaching and learning framework and curriculum for cybersecurity education, built on a recent E-voting technique [7]. Section 2 discusses other work involving cybersecurity education and electronic voting. Section 3 explains the modules derived from the mutually restraining E-voting system and a mapping to the eight knowledge areas (KAs) defined by the Cybersecurity Curricula 2017 (CSEC 2017) [8]. Section 4 describes the interactive components of the modules. Section 5 provides concluding remarks and directions for future research.

2 Related Work

Interactive learning's impact on students, particularly in cybersecurity education, has been proven effective in both theoretical research [2,3] and practical systems such as Clicker. Education tools and programs include general security education [9–11], enhancing security education using games [12–14], and strengthening security education by exploring specific aspects of security such as web browsers [9], software security [15–18], IoT security [19], cyber-physical system security [20,21], and network security [22,23]. Here, the specific aspects are those of elections and E-voting systems.

Elections have many stringent requirements, such as completeness, correctness, security, confidentiality, auditability, accountability, transparency, simplicity, usability, accessibility, and fairness. Moreover, some cybersecurity topics appear to conflict with each other, such as anonymity and verifiability. Using E-voting technology to teach computer security enhances student learning outcomes [4–6]. Typical examples of such efforts are individual E-voting courses [24,25].


Bishop and Frincke [1] identify five aspects of E-voting useful for a computer security course: (1) identifying security-relevant requirements; (2) understanding specification; (3) understanding confidentiality, privacy, and information flow; (4) understanding human elements; and (5) establishing confidence in the final tallies. These lead to 11 learning outcomes required by the Accreditation Board for Engineering and Technology (ABET) [1, p. 54]. For example, ABET outcome A, "[a]n ability to apply knowledge of mathematics, science, and engineering," arises from material throughout the E-voting lessons, especially the sections on establishing confidence and understanding the human element, and outcome K, "[a]n ability to use the techniques, skills, and modern engineering tools necessary for engineering practice," comes from the sections on requirements, specification, and confidence. The E-voting system used for the proposed curriculum is detailed in the next subsection.

[Figure 1 depicts the architecture: voters V1, V2, …, Vn register with a registration server (Step 1: location anonymity, TC1: forming a verifiable voting vector; oblivious transfer; authentication; access control), cast encrypted ballots to two tally servers (Step 2: TC2: mutual lock voting; (n, n) secret sharing; secure multi-party communication; key management; data integrity), and tally and verify votes via a bulletin board visible to voters and authorities (Step 3: TC3: in-process check and enforcement; homomorphic encryption).]

Fig. 1. Mutual restraining E-voting architecture (and mapped topics in pentagons).

2.1 Mutual-Restraining E-voting

The mutual restraining electronic voting and election protocol [7,26] balances multiple parties with conflicting interests based on a few simple cryptographic primitives and assumptions. The protocol consists of three technical components (TC): a universal verifiable voting vector (TC1), forward and backward mutual lock voting (TC2), and in-process checking and enforcement (TC3). Figure 1 shows this architecture. Because its underlying concepts and mechanisms are fundamentally different from those of other E-voting technologies, the mutual restraining E-voting technique is ideal for an interactive pedagogical framework.

Consider an election with N voters and M candidates for an office; each voter can vote for exactly 1 candidate. One can view the votes as an N × M matrix (equivalently, a bit vector of length L = N × M), with each voter associated with a unique row.


The mutually restraining E-voting protocol uses this idea as the basis for ballots. There are N + 2 entities: the N voters and two collectors (e.g., the two tally servers in Fig. 1). The protocol proceeds as follows:

1. The voter registers using a location anonymity scheme (LAS). The LAS gives the voter a unique row number known only to the voter. Voter V_i then sets to 1 the element corresponding to the candidate they wish to vote for (let that position be L_c^i) and sets all other elements of the row, as well as all elements of all other rows, to 0 (Step 1 of Fig. 1). The LAS is robust against a malicious participant who deliberately induces collisions by choosing a location already occupied by another voter [7].

2. The voter casts the ballot by generating two numbers, v_i = 2^{L - L_c^i} and v_i' = 2^{L_c^i - 1}, from their N × M matrix. One collector generates N/2 - 1 shares and the other N/2 shares; using an (N, N) secret sharing scheme, they give each other voter a share and send the sum of those shares to the voter. From v_i and v_i' and the information received from the collectors, the voter generates two secret ballots, p_i and p_i' (Step 2 of Fig. 1). The ballots p_i and p_i' are made public; v_i and v_i' remain secret and are known only to the corresponding voter.

3. The two collectors now compute P = Σ_{i=1}^{N} p_i and P' = Σ_{i=1}^{N} p_i'. If everyone was honest and no errors were made, the binary vectors corresponding to the two sums will be the complement of each other (Step 3 of Fig. 1).

4. Each voter's ballot is checked for validity. The voter sends the outputs of functions of v_i and v_i' to the collectors; the nature of these functions precludes their inversion, but it allows the collectors to check that both numbers are valid and hence that the voter has cast exactly 1 vote. Validity is enforced because v_i × v_i' = 2^{L-1} regardless of which candidate voter V_i voted for. Hence, any deviation from correctly casting a ballot will be detected. When the secret ballots are published, they too can be validated using the outputs of the functions (Step 2.1 of Fig. 1).

5. A web-based bulletin board system posts the aggregate votes as they are computed, and shows the aggregation of the secret ballots once all are posted. Any voter can verify that their vote was counted correctly by examining the ballots, and the aggregate vote by examining their sum, just as the collectors do.

This scheme and its protocols use cryptographic concepts throughout. The scheme also relies on system and human security to ensure the integrity of the votes and the inability to associate a voter with a ballot. As an example, consider Step 2 above. If access to the server can be blocked, or the shares of the secrets associated with the voters computed, then the election will fail because ballots will show up as corrupted. More tellingly, if the web server bulletin board can be corrupted, the voters may believe the election was not run correctly or that the final tallies are wrong, when in fact they were not. In an election, this lack of credibility corrupts the result as thoroughly as if the votes were actually changed.
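The ballot encoding and aggregation above can be simulated in a few lines of Python. This is our own minimal sketch, not the authors' implementation: the (N, N) secret sharing is modeled as plain additive sharing modulo a large prime, both collectors are collapsed into one aggregation step, and all names and sizes are illustrative. The sketch checks the validity invariant v_i × v_i' = 2^(L-1) and recovers the per-candidate tally from the published aggregates.

```python
import random

N, M = 4, 3              # 4 voters, 3 candidates (illustrative sizes)
L = N * M                # length of the voting vector
PRIME = (1 << 61) - 1    # modulus for the simulated additive sharing

def ballot_numbers(row, choice):
    """Voter holding anonymous `row` votes for `choice` (both 0-based)."""
    pos = row * M + choice + 1            # 1-based position L_c^i in the vector
    v = 1 << (L - pos)                    # forward number  v_i  = 2^(L - L_c^i)
    v_rev = 1 << (pos - 1)                # backward number v_i' = 2^(L_c^i - 1)
    assert v * v_rev == 1 << (L - 1)      # validity invariant, any candidate
    return v, v_rev

def additive_shares(secret, n):
    """(n, n) additive sharing mod PRIME: only all n shares reveal the secret."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    return shares + [(secret - sum(shares)) % PRIME]

choices = [random.randrange(M) for _ in range(N)]   # each voter's secret pick
votes = [ballot_numbers(i, c) for i, c in enumerate(choices)]

# Tallying: summing everyone's shares reveals only the aggregates P and P'.
P = sum(sum(additive_shares(v, N)) for v, _ in votes) % PRIME
P_rev = sum(sum(additive_shares(vr, N)) for _, vr in votes) % PRIME

# With honest voters, reading P' back-to-front bit-by-bit recovers P,
# the relation implied by the exponents L - L_c^i and L_c^i - 1.
assert int(format(P_rev, f"0{L}b")[::-1], 2) == P

# Per-candidate tally from the set bits of P'.
tally = [sum((P_rev >> (r * M + j)) & 1 for r in range(N)) for j in range(M)]
assert tally == [choices.count(j) for j in range(M)]
```

Because each voter owns a distinct row, the set bits of P and P' never collide and the sums decode directly; a real deployment would use the paper's secret sharing scheme and two independent collectors.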


Non-technical factors also come into play. In the US, many states prohibit voting systems from being connected to a public network (or any network) while voting is underway. If voters use a smart phone or home system, an attacker can take advantage of the vulnerabilities of those devices to alter the voter's vote. Finally, management issues abound, particularly when one realizes that poll workers are often people with limited to no experience with computing.

3 Modules Mapped Directly from E-voting

Educational modules can be tied to the development and use of the above-mentioned E-voting systems. The following ten example modules form the core of a course in computer security. Each can be mapped to the mutually restraining E-voting system, as Fig. 1 illustrates. In that figure, pentagons indicate how each of the cybersecurity topics relates to different activities and components of the mutual restraining E-voting system.

Module 0: E-voting and Cybersecurity Topic Mapping. This introductory module covers the election process, requirements derivation and validation, and an examination of what security mechanisms are required to protect the system and voters. This includes cryptographic mechanisms and their use in protecting the integrity of data and transmissions. Using voting as a real-world application, the instructor can lead students to discuss its properties and security requirements. Mapping different components of the system leads to the cryptographic primitives and security concepts of the system.

Module 1: User and System Authentication. This module discusses the initialization of authentication information, a topic often overlooked but critical to the correct functioning of systems. Data poisoning can result in compromised systems, in this case compromised election results. As an example, voters must register and prove both their identity and place of residence in order to be able to vote; in some places, they must prove identity when they are given a ballot. If the former is incorrect, then the legitimate voter cannot prove that they are registered to vote, and hence is disenfranchised. This E-voting system allows remote voting, so transmitting trusted authentication information over untrusted channels must also be considered. Zero-knowledge proofs and other, more widely used, methods of authentication [27–30] are covered here.

Module 2: Confidentiality. Confidentiality protects the interaction of the voter with the ballot until it is cast.
This includes the exchange of information to validate the cast ballot as legitimate without exposing any information about the voter beyond their being authorized to vote and having submitted one ballot. This module therefore covers cryptography for secrecy, including secret-key and public-key cryptosystems. In addition, when the voter votes, malware in the E-voting system could transmit the ballot to a third party. With respect to cryptography, an intruder could corrupt the negotiation of the cryptographic protocol to be used or corrupt the cryptographic keys, the former enabling eavesdropping and the latter a denial of service or a masquerading attack. Thus, system security controls must supplement the cryptographic mechanisms.
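To make the public-key half of this module concrete in lecture, an instructor might step through textbook RSA. The sketch below is our classroom illustration, not a component of the E-voting system; the primes are tiny and there is no padding, so it is insecure by design.

```python
from math import gcd

p, q = 61, 53                 # toy primes; real keys use primes of 1024+ bits
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent (modular inverse, Python 3.8+)

def encrypt(m): return pow(m, e, n)   # anyone holding (n, e) can encrypt
def decrypt(c): return pow(c, d, n)   # only the holder of d can decrypt

m = 1234                      # a message encoded as an integer < n
c = encrypt(m)
assert c != m and decrypt(c) == m
```

Students can vary e, d, and the primes to see why parameter choices matter before moving to real key sizes and padding schemes.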


Module 3: Data Integrity and Message (Sender) Authentication. Cryptographic methods such as one-way functions and digital signatures can be used to protect cast ballots and transmissions from tampering. System access controls augment these by protecting the integrity of the systems and software involved, the data (ballots) stored there, and the systems that receive and process the votes. Procedural controls protect the integrity of the overall election and its results; the latter can be analyzed semiformally [31]. These techniques are also used to ensure that the correct certified software is loaded onto the E-voting systems, and that once loaded, the software is not altered or tampered with.

Module 4: Key Management. Central to the use of all cryptosystems is cryptographic key generation and management. This topic covers secret key management, public key management, and group key management. Underlying this is the generation of truly random numbers, which typically requires unpredictable data gathered from sources such as system hardware. It also requires that the systems on which the keys are generated and stored be tamperproof, as otherwise an adversary can substitute their own keys, or corrupt the key generation program to ensure the keys are reproducible. These properties hold in many environments, not just voting.

Module 5: Privacy and Anonymity. How a voter voted must be known only to that voter, and the voter must not be able to prove how they voted to anyone. In addition to privacy and anonymity principles and mechanisms such as Mixnets [32–36], this topic includes repudiation (leading to non-repudiation). Legal considerations also drive mechanisms. For example, in the US, some states forbid any unique markings on ballots until the ballot is cast, for reasons of privacy; this inhibits the use of some protective mechanisms.
Also, if a Mixnet goes outside one jurisdiction, another jurisdiction (nation) can block the transmission of those votes, so this must be balanced with untraceability requirements.

Module 6: Access Control. E-voting systems have access control policies and mechanisms to regulate access for all entities, including voters, authorities, and third parties. These range from the technical, such as the use of role-based access control, to the procedural, such as who can view cast ballots after the election and how long those ballots must be preserved. Conflicts of interest also affect these policies. As noted above, procedure analysis can semiformally analyze the procedures used to enforce those requirements. All these techniques are covered.

Module 7: Secure Group/Multi-party Communication and Secret Sharing. The mutual restraining E-voting technique uses (n, n) secret sharing for n voters to exchange vote shares and to obtain the sum of votes. Thus, this topic covers secret sharing and secure multi-party communication schemes. Protection of the shares and their transmissions is also relevant here, as are commitment schemes that allow one to commit to a chosen value while keeping it hidden from others [37–40].
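A hash-based commitment, one of the simplest instances of the commitment schemes mentioned for Module 7, can be sketched directly. The construction and names below are ours, a classroom example rather than the scheme of [7]; its hiding and binding properties rest on standard assumptions about SHA-256.

```python
import hashlib
import secrets

def commit(value: bytes):
    """Commit to `value`; publish the digest, keep (nonce, value) secret."""
    nonce = secrets.token_bytes(32)          # random nonce makes the commitment hiding
    return hashlib.sha256(nonce + value).digest(), nonce

def open_commitment(commitment: bytes, nonce: bytes, value: bytes) -> bool:
    """Anyone can later check the revealed (nonce, value) pair."""
    return hashlib.sha256(nonce + value).digest() == commitment

c, r = commit(b"my secret share")
assert open_commitment(c, r, b"my secret share")        # honest opening verifies
assert not open_commitment(c, r, b"a different share")  # binding: no equivocation
```

In the E-voting setting, a voter who commits to a share before the tally cannot later claim to have sent a different one.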


Module 8: Secure Multi-party Computation and Homomorphic Encryption. Secure multi-party computation (particularly multiplication) is used among authorities to prevent any voter from casting multiple votes. Secure two-party multiplication is implemented via homomorphic encryption. Threshold cryptography prevents authorities from colluding and guarantees that a certain number of authorities can perform the necessary tasks [41–44]. These advanced types of cryptography are covered in this topic.

Module 9: Attacks and Defenses. Attackers will target all parts of an E-voting system, including the voting and vote-tallying systems, communication channels, and the vote reporting mechanisms. Threat modeling, and the derivation of requirements from it, enables developers and election officials to anticipate these attacks and create appropriate defenses, as well as detection methods for when those defenses fail. This leads to an analysis of potential attacks and how to detect and handle intrusions.

Modules 1, 2, and 3 cover basic cybersecurity topics; modules 4, 5, and 6, intermediate topics; and modules 7, 8, and 9, advanced topics. The instructor can adjust the level of detail, and the specific selection of cybersecurity topics, as they feel appropriate for their class.
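The additively homomorphic encryption behind Module 8 can be demonstrated with a toy Paillier cryptosystem. This is our own classroom-scale sketch, not the construction used in the E-voting system: the primes are far too small for real use, and it shows only the additive homomorphism and plaintext-by-constant multiplication, the building blocks of homomorphic-encryption-based secure two-party multiplication.

```python
from math import gcd
import random

p, q = 293, 433               # toy primes; never use sizes like this in practice
n, n2 = p * q, (p * q) ** 2
g = n + 1                     # standard simple choice of generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)    # precomputed decryption factor

def enc(m):
    """Randomized encryption: same plaintext yields different ciphertexts."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

a, b = 20, 15
# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
assert dec(enc(a) * enc(b) % n2) == a + b
# Raising a ciphertext to a constant multiplies the plaintext by it.
assert dec(pow(enc(a), 3, n2)) == 3 * a
```

These two properties let one party compute on another party's encrypted values without learning them, which is exactly the leverage the collectors need for joint vote checking.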

3.1 Relationship to CSEC2017

The Cybersecurity Curricula 2017 (CSEC 2017) [8] defines 8 knowledge areas (KAs), each of which consists of knowledge units and essential learning outcomes. The modules present an avenue for teaching many of those learning essentials. As an example, elections are governed by both laws and regulations. These vary among jurisdictions, so the instructor can begin by reviewing the local laws and how they constrain the design. In the US, the laws in all jurisdictions require that no voter can be associated with a cast ballot, which means that no one, including the voter, can say which cast ballot is that voter's (Societal and Organizational learning essentials). How this is done varies, and the students can brainstorm about different ways to protect the privacy and secrecy of the ballots (Human and Societal learning essentials). How these rules affect remote voting, and the construction of the systems, are other interesting areas to discuss; in some cases, certain cryptographic mechanisms would violate laws.¹ The instructor can then segue into translating these requirements into software constraints (Software learning essentials) and hardware constraints (Component, Connection, and System learning essentials).

As another example, in the context of elections and E-voting, user authentication covers all of the essential learning objectives of the Data KA; it also includes the Human KA identity management learning essential and the authentication part of the System KA. The types of proof of identity needed at polling stations vary from a voter ID card to a simple verbal statement and recognition by a poll worker, and so relate to the cyberlaw learning essentials of the Societal

¹ For example, the California Election Code states that "it is [to be] impossible to distinguish any one of the ballots from the other ballots of the same sort" [45, §13202].


KA. In all jurisdictions, impersonating a registered voter is a crime, leading to cyberlaw considerations (in the Societal KA).

4 Module Format and Construction

Interactive learning involves students' active participation and engagement. Four interactive learning modules, along with their formats and constructions, are described below. The instructor can use these to cover individual topics or a mixture of the topics above.

4.1 Interactive Lecturing

Interactive activity engages the instructor and students in a controlled dialogue. The instructor, acting as the tallying authority, poses questions on the bulletin board during lecturing. Students, acting as voters, submit their answers via the E-voting system, and the system tallies and shows the results. The tallied answers can guide the instructor, who can tailor the lecture to the learning needs of the students. This module can apply to different topics. With the implemented interface modules, the instructor sets questions, the students respond to them, and the system computes and displays the students' responses. The instructor can ask True/False, Yes/No, or multiple-choice questions. For challenging questions, instructors can present essay prompts requiring students to provide justification and analysis.

In summary, this interactive lecturing module provides user-friendly and flexible options for an instructor to engage students and gauge how well the students are learning. The knowledge gathered by assessing student responses will let the instructor know whether to advance to the next topic or stay on the present one.

4.2 Interactive Class Projects

This set of activities has students interact with one another and with the system, both in and out of class. Interactive class projects are typically related to a cybersecurity topic and in many cases involve multiple topics.

Class Project Design Principles. The overall purpose of interactive class projects is to gain first-hand experience with security concepts and principles. The projects described below have students implement and analyze different components of the E-voting system and, ideally, integrate these components to form a complete E-voting system. Several approaches can be used to design class projects.


– A completely runnable mutual restraining E-voting system is provided at the beginning of the class for use by students during the class. This allows students to actively engage with the material from the beginning. During the course of the class, students will be asked to implement parts of the system to replace the corresponding existing ones in a cut-and-paste manner.
– Attacker and defender class projects have students act as attackers who find and exploit vulnerabilities and as defenders who try to thwart them. This includes both implementation and operation vulnerabilities.
– Class projects such as implementing vote casting, vote tally, and verification allow students to study trade-offs. They also can check that the system with the newly implemented modules complies with election requirements.
– General cybersecurity topic-based class projects cover many security and cryptographic primitives such as user authentication, encryption and decryption, digital signatures, n-party secret sharing, and secure multi-party multiplication. Moreover, different authentication systems, such as username/password authentication or biometrics-based authentication, can be designed and implemented. These class projects are independent of E-voting systems and can be used in any security course.

Students will be assigned projects of different types at different times.

Student-Centered Interactions of Class Projects. These assignments will reflect different student-centered interactions. As students finish their implementations of software modules, they plug them into the system to test their interaction with the rest of the system as well as the integrated system. Here the cut-and-paste method is used: cut the original standard module and put in the implemented one. This requires interaction among students: the group implementing a module must interact with the other groups implementing and testing other modules. This gives students hands-on experience with security principles, protocols, and systems as well as system integration, message passing, and peer-to-peer protocols. Six concrete student-centered interactions, InterA-1 to InterA-6, are discussed in the next subsection.

Systematic Design of Interactive Class Projects. Figure 2 shows possible projects, their corresponding student groups, and the interaction among the students and the E-voting system. There are four primary types of class projects and six interactions. The instructor acts as one tallying server. Students are divided into groups, each playing one of the roles of honest voter, dishonest voter (attacker), and the second tallying server. During the term, student groups will switch roles so each group will implement different projects. The E-voting system connects all projects and controls all interactions.


Fig. 2. Student-centered interactive class projects–students will switch groups.

– Project 1: (honest) vote casting. This type of project implements a voting process, which includes two main modules: voter authentication, and forming and casting ballots. The goal is to have students consolidate their knowledge of authentication, cryptosystems, data integrity, digital signatures, and n-party secret sharing. It can also include determining acceptable (legal) methods of authentication and keeping track of eligible voters. The project can be implemented at different levels of security and protection strength: e.g., with or without authentication, encryption, integrity, digital signatures, or commitments. As an additional benefit, the class project also serves to educate students on the importance and difficulty of robust and bug-free implementation.
– Project 2: (dishonest) vote casting and attacks. This project has students implement various attacks on the E-voting system. Attacks can be designed to bypass authentication, disrupt the n-party secret sharing protocol, modify data in transit, or cause any other type of disruption or compromise. Examples of attacks include sending invalid secret shares, publishing an invalid commitment, and casting invalid votes such as multiple votes or unauthorized votes. The goal here is to have students not only use and consolidate the above knowledge base, but also analyze and exploit system vulnerabilities and design attacks.
– Project 3: vote checking. This project implements one of the two tallying servers. The tallying server resulting from this project and the tallying server controlled by the instructor will jointly perform vote checking. Students will extend their knowledge in areas such as homomorphic encryption and secure multi-party computation.
– Project 4: requirements analysis and verification. This project analyzes aspects of the E-voting system to determine if it meets specific requirements. For example, are there problems with the software that could lead to a denial of service? What assumptions does this scheme make, and are those assumptions viable in the election jurisdiction of the class (or of some other jurisdiction)? If not, what would need to change? Or, what requirements must the jurisdiction change in order to ensure this system is usable, available during the election, and meets the laws and regulations of the jurisdiction?

This leads to six possible interactions.

– InterA-1: Group 1 and Group 2 students interact using n-party secret sharing to cast their ballots and to make the E-voting process run to completion. The former cast their ballots honestly and the latter dishonestly, attempting to invalidate or disrupt the voting process.
– InterA-2: Group 1 and Group 3 students: Group 3 acts as the second tallying server, and Group 1 sends their information to Group 3.
– InterA-3: Group 2 and Group 3 students: similarly, Group 2 sends information to Group 3, but they can send wrong shares, wrong ballots, or wrong commitments, or even send nothing.
– InterA-4: Group 1 students and the instructor: the students send their shares and commitments to the instructor for enforcement.
– InterA-5: Group 2 students and the instructor: as InterA-4, but with Group 2 instead of Group 1.
– InterA-6: Group 3 students and the instructor interact using homomorphic encryption-based secure two-party multiplication to jointly check and constrain a voter's behavior.

Design of Cybersecurity Topic-Based Class Projects. The above projects are E-voting related. However, topics in information security are not limited to E-voting systems.
Therefore, additional projects can use material that is generic and applicable to all courses, without tying it to the E-voting system. For example, one project can have students implement different encryption algorithms such as AES and RSA with varying key lengths, independent of the E-voting system. This will help students understand different cryptosystems, evaluate their strengths and weaknesses, and see the strength and impact of different key lengths. Attacker and defender projects can impose different constraints on the tools and defenses to be used, to mimic different environments of the defender.

4.3 Interactive Self-study and Evaluation

These activities occur among students without the involvement of the instructor. This student-student interaction is feasible because of the unique feature of

Table 1. Projects, their modules, and involved interactions (rows: Projects 1–4; columns: Modules 1–9 and Interactions A-1 to A-6; checkmarks in the original table indicate which modules and interactions each project involves).

Table 2. Four interactive modules and their features

Module format | Topic related? | Each topic? | Next activity | In or outside class
Interactive lecturing | N | Y | Interactive class project | In
Interactive class project | Y | Y | Interactive self-study and evaluation, or interactive evaluation | Outside
Interactive self-study and evaluation | N | Y or N | Interactive evaluation | Both
Interactive evaluation | N | Y | Interactive lecturing | Both

two tallying authorities having conflicting interests. The E-voting system allows students to act as one tallying authority and engage in discussion among themselves. For example, one student, acting as a tallying server, can post questions or quizzes, and other students can cast votes as their answers. The results can guide students into further discussion of the related topics. This kind of student engagement and discussion may be held outside the class and would be supported by the system under study.

Anonymous evaluation by students is also useful when a project team finishes a project. In many situations, one member of the team does not put in effort commensurate with the other team members, but because the team is graded as a whole, everyone gets the same grade. To ensure all team members contribute to the project and earn credit proportional to their respective contributions, the anonymous E-voting system can be used to report when one of the members is not contributing; the instructor can then decide how to proceed. Such an anonymous project evaluation mechanism can positively affect students' involvement in and contribution to team projects.

4.4 Interactive Topic/Class Evaluation

This interactive activity occurs between students and the instructor, normally at or near the end of the class. In fact, it can occur whenever a module is done or the instructor deems it necessary. For example, once a topic and its corresponding projects are finished, an immediate class evaluation of the teaching can be

Electronic Voting Technology Inspired Interactive Teaching and Learning

39

Table 3. Interactive framework, modules, and their application/adaption to different courses Framework/ Modules

Generic/Topic-wise Cryptography Security-oriented Non-security courses courses

Whole framework

Topic-wise cyclic model

Directly use

Directly use

Directly use

Interactive lecturing

General function/ generic question

Directly use

Directly use

Directly use

Topic based questions

Use all

Use some create new ones

Create all new ones

Topic based projects

Use all

Use some create new ones

Create all new ones

Interactive class project

Interactive self study General function and evaluation Topic based evaluation Interactive evaluation

General function/ class evaluation

Directly use

Directly use

Directly use

Use all

Use some create new ones

Create all new ones

Directly use

Directly use

Directly use

Topic based Use all Use some create Create all evaluation new ones new ones The general functions include instructor sets questions, students respond, students set questions, instructor sets topics/class schedules, etc.

done. Two types of evaluations can be conducted: topic based evaluation and general student survey. The former can use quizzes to evaluate and will guide the instructor how to proceed (i.e., advance or repeat) in a timely manner. In addition, a general class survey should be done to give students opportunities to express their opinions, including questions like: how do you feel about the current class pace? Should the class advance to the next topic? How do you feel about the instructor’s knowledge on this topic? How do you feel about the instructor’s enthusiasm and effort on the topic? This module format can be used for all topics. Frequent evaluation (in an appropriate frequency) and timely feedback, as compared to a single end-ofsemester evaluation practice, will improve the instruction quality and enhance the student learning outcomes. 4.5

Summary of Topics, Projects, and Interactive Modules

The modules and interactions that each project covers are summarized in Table 1. As is evident, each of the four projects covers four basic cybersecurity topics: authentication, confidentiality, integrity, and access control/authorization. Together they cover all topics and all interactions, and can do so from different disciplinary points of view. Depending on the types and levels of courses, one or multiple class projects can be assigned to students. Table 2 summarizes the four interactive module formats, their features, and their interconnections and transitions. As can be seen from the table, within the four modules, only interactive class projects are strongly topic dependent. All the others can be adapted easily to other topics and thus to other courses. Even though the interactive framework is designed around E-voting technology, it can be applied independently to other security courses. The overall framework, i.e., the topic based cyclic interactive learning process (interconnected by the four interactive modules), can even be adapted to non-security courses. Table 3 gives a condensed overview of how to adapt the four module formats to other courses. For example, a non-security course would need to create new interactive class projects, whereas a non-electronic-voting security course may still find the E-voting projects relevant.

5 Conclusions

This paper proposed a flexible cybersecurity curriculum development framework based on an E-voting system. It is configurable to different topics, such as cryptography, information security, and network security course curricula, and it includes modules on security-related topics that can be used in non-security-specific courses. Using this material, instructors can spark students' interest in cybersecurity and enable students to learn it in an attractive and engaging manner. Much future work can, and should, be done with the proposed cybersecurity curriculum, including, but not limited to, developing a system to support and facilitate this new teaching and learning methodology, and testing and evaluating the effectiveness and impact of the new curriculum.

Acknowledgement. This material is based upon work supported by the National Science Foundation under Grant Nos. DGE-2011117 and DGE-2011175. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.


Teaching Methods and Tools

Minimizing Cognitive Overload in Cybersecurity Learning Materials: An Experimental Study Using Eye-Tracking

Leon Bernard1(B), Sagar Raina2, Blair Taylor1, and Siddharth Kaza1

1 Department of Computer and Information Sciences, Towson University, Baltimore, USA
[email protected], {btaylor,skaza}@towson.edu
2 Division of Maths and Information Technology, Mount Saint Mary College, Newburgh, NY, USA
[email protected]

Abstract. Cybersecurity education is critical in addressing the global cyber crisis. However, cybersecurity is inherently complex, and teaching it can lead to cognitive overload among students. Cognitive load includes: 1) intrinsic load (IL, due to the inherent difficulty of the topic), 2) extraneous load (EL, due to the presentation of the material), and 3) germane load (GL, due to the extra effort put into learning). The challenge is to minimize IL and EL and maximize GL. We propose a model to develop cybersecurity learning materials that incorporates both the Bloom's taxonomy cognitive framework and the design principles of content segmentation and interactivity. We conducted a randomized control/treatment group study to test the proposed model by measuring cognitive load using two eye-tracking metrics (fixation duration and pupil size) between two cybersecurity learning modalities: 1) segmented and interactive modules, and 2) traditional, without segmentation and interactivity (control). Nineteen computer science majors at a large comprehensive university participated in the study and completed a learning module focused on integer overflow in a popular programming language. Results indicate that students in the treatment group had significantly less IL (p < 0.05), EL (p < 0.05), and GL (p < 0.05) compared to the control group. The results are promising, and we plan to further the work by focusing on increasing the GL. This has interesting potential for designing learning materials in cybersecurity and other computing areas.

Keywords: Bloom's taxonomy · Cognitive overload · Cybersecurity · Eye tracking · Pupillometry · Secure coding · Curriculum

1 Introduction

Demand for effective cybersecurity education has led to the increased development of cybersecurity learning materials [7, 12]. Learning complex topics, including cybersecurity, involves the use of higher cognitive resources within learners' limited working memory [17]. Educational materials that consume too much of this limited working memory can lead to cognitive overload [13]. Therefore, instructional designers need to design and develop learning materials that minimize cognitive overload to enhance learning.

According to cognitive load theory (CLT), learning materials can impose three types of cognitive load: intrinsic, extraneous, and germane. Intrinsic load is induced by the inherent difficulty of the topic, due to the several interconnected concepts that a learner needs to understand simultaneously [22]; extraneous load is induced by the way information or tasks are presented to a learner [22]; germane load is desirable and is induced when extra effort is put into a task to construct new learning [8]. Therefore, in order to prevent learners from reaching a state of cognitive overload, instructional designers need to carefully design learning materials to manage the three types of cognitive load by minimizing intrinsic and extraneous load and maximizing germane load [22].

To address this, we propose a theoretical model for developing effective cybersecurity learning materials that minimize cognitive overload in a learner. We test the effectiveness of the model by conducting an experimental study with 19 computer science majors completing cybersecurity learning modules while measuring their cognitive load using an eye-tracker.

© IFIP International Federation for Information Processing 2021
Published by Springer Nature Switzerland AG 2021
L. Drevin et al. (Eds.): WISE 2021, IFIP AICT 615, pp. 47–63, 2021. https://doi.org/10.1007/978-3-030-80865-5_4

2 Theoretical Model

The proposed theoretical model incorporates Bloom's taxonomy (remember, understand, apply, analyze, evaluate, and create) and the design principles of segmentation and interactivity to develop cybersecurity learning materials. We map these cognitive levels to the cognitive load types as follows: 1) remember and understand are mapped to intrinsic load, because the inherent difficulty of the topic can be addressed at the first two levels of Bloom's taxonomy; 2) apply, analyze, evaluate, and create are mapped to germane load, because long-term learning can be addressed at the last four levels of Bloom's taxonomy [15]. Segmentation is defined as chunking content and displaying one chunk at a time, whereas interactivity is the responsiveness to the learner's actions on the content. We chose the segmentation and interactivity design principles because they are known to reduce the cognitive load caused by content presentation issues. We measure the cognitive load (intrinsic, extraneous, and germane) induced in a learner while going through a learning material by measuring pupil size and fixation duration using an eye-tracker.

Applying the principles of Bloom's taxonomy to create learning material minimizes intrinsic load and maximizes germane load, and applying the design principles of segmentation and interactivity minimizes extraneous load. Assuming all learning materials incorporate some component of Bloom's taxonomy, the proposed theoretical model (see Fig. 1) compares traditional (linear) non-interactive learning materials with segmented and interactive learning materials and shows their impact on the intrinsic, germane, and extraneous cognitive load in the learner's working memory. Bloom's taxonomy based segmented and interactive learning material will have lower intrinsic (--) and extraneous load (--), and higher germane load (++), compared to traditional (linear) non-interactive learning material.
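The mapping the model assumes between Bloom's levels and load types can be written down directly (a sketch; the grouping follows the text above, while the code names are illustrative):

```python
# Bloom's taxonomy levels grouped by the cognitive load type they are
# mapped to in the proposed model.
LEVEL_TO_LOAD = {
    "remember": "intrinsic",
    "understand": "intrinsic",
    "apply": "germane",
    "analyze": "germane",
    "evaluate": "germane",
    "create": "germane",
}
# Extraneous load is not tied to a Bloom level; it is addressed by the
# segmentation and interactivity design principles instead.

def levels_for(load):
    """Return the Bloom levels mapped to a given load type."""
    return [lvl for lvl, ld in LEVEL_TO_LOAD.items() if ld == load]
```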


Fig. 1. Proposed theoretical model for cybersecurity learning materials

Based on the proposed theoretical model, we answer the following research questions in this paper:

RQ1: Does segmented and interactive cybersecurity learning material induce significantly less (intrinsic) cognitive load than traditional (linear) non-interactive learning material at the remember and understand cognitive levels?

RQ2: Does segmented and interactive cybersecurity learning material induce significantly higher (germane) cognitive load than traditional (linear) non-interactive learning material at the apply, analyze, evaluate, and create cognitive levels?

RQ3: Does segmented and interactive cybersecurity learning material induce significantly less (extraneous) cognitive load than traditional (linear) non-interactive learning material at all six cognitive levels?

3 Literature Review

This section discusses literature related to cognitive load theory, Bloom's taxonomy, the design principles of segmentation and interactivity, and eye-tracking and cognitive load.

3.1 Cognitive Load Theory

Cybersecurity involves problem solving that requires knowledge and skills in many areas, including secure programming, network security, operating system security, cryptography, and vulnerability analysis [27]. Such computing-based problem solving induces excessive cognitive load in learners during the learning process [14, 24]. Extensive research has been conducted to assess cognitive load in computer science education, specifically with regard to programming. A study by Asai et al. [2] proposed a model that could detect intrinsic and germane load to help teachers adjust their learning materials. Morrison et al. [20] adapted a previously developed instrument to measure intrinsic, extraneous, and germane load in an introductory programming course. Excessive cognitive load can cause frustration that may discourage further learning activities [11]. Therefore, in recent years, there has been growing interest in designing learning materials that follow the principles of cognitive load theory to manage intrinsic, extraneous, and germane load in working memory [1].

Intrinsic load is imposed when the topic itself is difficult to learn because of several interconnected elements that a learner needs to understand simultaneously [22]. If a learner has a high level of prior knowledge, the intrinsic load imposed will be less than for a learner who has no prior knowledge [6]. For example, in order to learn the concept of integer overflow, a learner needs to learn other interrelated topics, including variables, data types, and programming. A learner with prior knowledge of programming will face less intrinsic load than a learner with no programming knowledge.

Extraneous load (EL) is the load placed on working memory by a presentation of the learning material that does not contribute directly toward learning [22]. For example, if a learning material presents text, a diagram, or a video, and none of these explains integer overflow clearly to a learner, the material will impose extraneous load on the learner. IL and EL are the factors that can be controlled through instructional design [21].

Germane load occurs when a learner exerts effort to learn complex content for long-term storage [8]. For example, using worked examples to learn about integer overflow will impose germane load. Thus, more germane load contributes toward learning.
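As a concrete illustration of the integer overflow concept used as the running example above (the snippet is illustrative, not taken from the learning module), fixed-width wrap-around can be demonstrated even in Python, whose native integers do not overflow, by masking values to 32 bits:

```python
def to_int32(n):
    """Interpret n as a signed 32-bit integer, as C's `int` would."""
    n &= 0xFFFFFFFF                                   # keep the low 32 bits
    return n - 0x100000000 if n >= 0x80000000 else n  # reinterpret the sign bit

# Adding 1 to INT_MAX wraps around to INT_MIN: an integer overflow.
INT_MAX = 2**31 - 1           # 2147483647
print(to_int32(INT_MAX + 1))  # -2147483648
```

Understanding this example already requires the interrelated concepts the text lists (variables, data types, binary representation), which is exactly why the topic carries intrinsic load.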
In order to manage cognitive load during learning, intrinsic and extraneous load must be minimized. Minimizing intrinsic load creates space in working memory to accommodate germane load [22].

3.2 Bloom's Taxonomy Cognitive Levels

The use of Bloom's taxonomy is frequently seen in computing-related disciplines, including computer science and cybersecurity [18, 29]. Bloom's learning taxonomy consists of six levels that increase in complexity as the learner moves up through them [15]: remember, understand, apply, analyze, evaluate, and create. 'Remember' represents the lowest level of learning in the cognitive domain; at this level, the learner is required to recall by rote the terms introduced through the learning material, with no presumption that the learner has understood it. 'Understand' allows the learner to comprehend the material toward the goal of using this understanding in the future for problem solving and decision making. 'Apply' allows the learner to apply the learned material in new tasks with minimal direction. 'Analyze' enables the learner to dissect complex problems into smaller components in order to better understand their structure. 'Evaluate' enables the learner to assess different problem-based scenarios and make a decision using certain criteria and the knowledge acquired at prior levels. 'Create' enables the learner to come up with new ideas based on the knowledge acquired at the prior levels. Bannert [4] asserts that learning taxonomies such as Bloom's can be used to manipulate intrinsic cognitive load for novice learners.

3.3 Design Principles of Segmentation & Interactivity

Segmentation means breaking large content into smaller chunks and presenting one chunk at a time on a single screen. Segmentation makes processing, retaining, and recalling information easier, and is known to minimize cognitive load [19]. Interactivity is the "responsiveness to the learner's actions during learning" [23]. Interactivity can be implemented using dialoguing and controlling. The process of a learner answering a question and receiving feedback on his/her input is referred to as dialoguing; it improves learning because learners can relate the feedback to the current content. Controlling means that the learner can determine the pace of the presentation; it facilitates learning by allowing students to process information at their own pace. Interactivity has been shown to increase engagement and reduce cognitive load [19]. Extraneous load results from how learning material is presented to the learner; therefore, the use of segmentation and interactivity may reduce extraneous cognitive load.

3.4 Eye-Tracking and Cognitive Load

To measure the effectiveness of learning materials in the context of cognitive load, several studies have used survey-based instruments [20, 25]. While survey-based instruments are easier to administer in a classroom setting, their results may not be an accurate measure of cognitive load [26]. Some studies have instead measured learners' physiological behavior using methods including electroencephalography (EEG) and eye-tracking [10, 16]. Borys et al. [5] compared data captured using EEG and eye-tracking to measure cognitive load; the study found that eye-tracking captured the best cognitive load measures as compared to EEG.
While several eye-tracking metrics can be used to measure cognitive load, including fixation duration, saccades, pupil size, and blink rate, fixation duration and pupil size have been found to be the most commonly used [9]. Bafna et al. use performance-related typing scores and eye-tracking metrics such as blink rate and pupil size to measure cognitive load during eye-typing tasks [3]. Fixation refers to a focused state in which the eye remains still over a period of time; fixation duration is the average time of fixations. The level of cognitive processing affects fixation duration, indicating increased strain on working memory. Therefore, the higher the fixation duration, the higher the cognitive load [9]. Pupil size refers to the diameter of the pupil of the human eye. Psychologists have observed that pupil size varies with cognitive processing: as the difficulty of a task and the effort to understand it increase, pupil size increases. Therefore, the larger the pupil size, the higher the cognitive load [9].
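A minimal sketch of how these two metrics could be aggregated over a trial (the record field names are assumptions for illustration, not Tobii's export schema):

```python
def _mean(xs):
    return sum(xs) / len(xs)

def cognitive_load_metrics(fixations):
    """Average fixation duration (ms) and pupil size (mm) over a trial.

    `fixations` is a list of dicts with 'duration_ms' and 'pupil_mm' keys;
    higher values of either metric indicate higher cognitive load.
    """
    return {
        "mean_fixation_ms": _mean([f["duration_ms"] for f in fixations]),
        "mean_pupil_mm": _mean([f["pupil_mm"] for f in fixations]),
    }
```

Comparing these per-participant averages between the control and treatment groups is the basic shape of the analysis described later in the paper.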

4 Research Method

In this section, we present the learning interventions used in the study, followed by the research design.


4.1 Learning Materials

To test the effectiveness of our proposed theoretical model, we used two versions of the Cyber4All@Towson (SI) cybersecurity learning modules in this study (refer to Fig. 2).

Fig. 2. Linear (A) vs segmented and interactive (B)

Both versions are designed using Bloom's taxonomy. Each version has five sections: background, code responsibly, laboratory assignment, security checklists, and discussion questions. Some of these sections also have subsections, which are outlined in Fig. 3. The first version, traditional (linear), is a non-interactive module: the entire learning module (all five sections) is displayed on a single scrollable web page. The second version is segmented and interactive, with only one section displayed on a web page at a time. In both versions, learners start with the background section, followed by code responsibly, the laboratory assignment, the security checklist, and finally the discussion questions. In the segmented and interactive modules, students read content related to the topic in the background and code responsibly sections and answer feedback-based interactive checkpoint questions; in the laboratory assignment section, students complete interactive code checklists and answer interactive text-response questions; and in the discussion section, students answer discussion questions. Students cannot move to the next section until they have answered the checkpoint questions correctly. In the traditional (linear) module, no checkpoint or feedback-based questions are provided. Figure 3 shows the mapping of Bloom's taxonomy cognitive levels to cognitive load types for the traditional (linear) and the segmented and interactive SI cybersecurity learning modules.
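The gating rule described above — a learner may advance only after answering the checkpoint correctly — can be sketched as follows (the class and method names are hypothetical, not the Security Injections implementation):

```python
class SegmentedModule:
    """Show one section at a time; advance only on a correct checkpoint answer."""

    def __init__(self, sections):
        self.sections = sections      # list of (section_name, checkpoint_answer)
        self.index = 0

    def current(self):
        """Name of the section currently displayed."""
        return self.sections[self.index][0]

    def submit(self, answer):
        """Dialoguing: check the answer, give feedback, and gate progression."""
        _, correct = self.sections[self.index]
        if answer == correct:
            if self.index < len(self.sections) - 1:
                self.index += 1       # controlling: learner advances at own pace
            return "correct"
        return "try again"            # feedback; the same section is re-shown
```

A linear module, by contrast, would simply render all sections at once with no `submit` step.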


Fig. 3. Mapping of Bloom’s taxonomy cognitive levels to cognitive load types between traditional (linear) and segmented & interactive learning modules

4.2 Research Design

An experimental study was conducted in the human computer interaction (HCI) laboratory at a large comprehensive university, using a control-group/treatment-group design. A total of 19 computer science undergraduate students (6 female, 13 male) participated in the study. To avoid selection bias, participants were randomly assigned to two groups: control (n = 10) and treatment (n = 9). Randomization was done by drawing paper chits from a box. The box contained an equal number of chits stating which version of the security injection modules a student should use; each chit also included the URL of the module to complete. The control group completed an integer error module in the traditional (linear) format, and the treatment group completed the same module in the segmented and interactive format. Each participant was allocated a separate one-hour time slot because only a single eye-tracking device was available. For each participant, the experiment involved three steps: 1) eye calibration, 2) a demographics survey, and 3) completing the module.

Apparatus. The eye movements of each participant were recorded using a Tobii T60 eye tracker with the Tobii Studio 3.0 software package. The eye-tracker was installed on a Windows 7 system with 64 GB of memory, a 3 GHz processor, and a 1 TB hard drive. The device was mounted on the bottom frame of a 17-inch LCD monitor with a resolution of 1280 × 1020 pixels and a 60 Hz refresh rate. Eye fixations were detected using Tobii's I-VT fixation detection algorithm. A second monitor, connected to the eye-tracking computer and kept at a distance in the same room, was used to monitor participants' eye-tracking status (Fig. 4).

Fig. 4. Experiment setup

Procedure. Each participant showed up in their allocated one-hour time slot in the HCI laboratory. Participants were given brief introductions about the experiment, shown the IRB protocols, followed by eye calibration. Eye Calibration. The eye calibration includes a three-step process - 1) eye detection, 2) calibration, and 3) result acceptance (Refer Fig. 5). In eye-detection, participants were asked to sit on a chair in a comfortable position in front of the eye-tracker and look at the monitor. The participants’ positions were adjusted until eyes were detected at the center of the eye-track status window to be able to capture eye-movements accurately with high precision. The allowable distance of the participants’ position from the monitor

Fig. 5. Eye calibration


was 50–80 cm. In calibration, participants were asked to look at the center of a moving ball on a 9-point calibration view. In result acceptance, the calibration results were presented with the option to accept the calibration or recalibrate. A calibration was accepted only when the green dots fell within each of the nine circles; otherwise, recalibration was performed. After calibration, participants completed the demographics survey, the integer error module, and the usability survey, in that order (Fig. 6).

Fig. 6. Participants' eye gaze

Data Processing. To compare students' fixation duration and pupil size for each section of the content, we processed the raw data from the Tobii T60 eye tracker in the following steps (Fig. 7):

Fig. 7. Data processing

Data Export. The eye movement data from the eye tracker was exported for each participant in tab-separated values (TSV) format using Tobii Studio 3.0.


Data Import. Each participants’ gaze data was extracted from the tsv file and stored in a SQL database using the SSIS package. The SSIS package includes three main tasks; 1) read tsv files from the input folder; 2) insert data in SQL data table and 3) move files to the processed folder. Participants’ start and end recording times for each section of the content were manually taken from the recorded videos. These timings were also stored in SQL data tables. Computing Mean Pupil Size. Tobii T60 eye tracker output pupil size information for each eye together with each gaze point through the Tobii Pro Studio. The pupil size data is provided for the left and the right eye individually and is an estimate of the pupil size in millimeters. We only include pupil size where both left and right validity code is 0 as this means that the eye tracker is certain that it has recorded all relevant data for both left and right eye. We compute mean pupil size per section for each participant for both treatment and control group (Refer Fig. 8).

Fig. 8. Average pupil size code snippet
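As a hedged illustration of this aggregation (the study's own code appears in Fig. 8), a pandas version might look like the following. The column names (`timestamp_ms`, `pupil_left`, `validity_left`, etc.) and the section-timing table are assumptions for illustration only; Tobii Studio's actual export headers vary by version.

```python
import pandas as pd

def mean_pupil_size_per_section(gaze: pd.DataFrame, sections: pd.DataFrame) -> pd.DataFrame:
    """Mean pupil size (mm) per content section for one participant.

    gaze: one row per gaze sample, with columns timestamp_ms, pupil_left,
        pupil_right, validity_left, validity_right.
    sections: one row per content section, with columns section, start_ms,
        end_ms (the timings taken manually from the recorded videos).
    """
    # Keep only samples where the tracker was certain about both eyes
    # (validity code 0 for left and right).
    valid = gaze[(gaze["validity_left"] == 0) & (gaze["validity_right"] == 0)].copy()
    # Average the two monocular estimates into a single pupil size.
    valid["pupil"] = (valid["pupil_left"] + valid["pupil_right"]) / 2.0

    rows = []
    for sec in sections.itertuples(index=False):
        window = valid[(valid["timestamp_ms"] >= sec.start_ms)
                       & (valid["timestamp_ms"] < sec.end_ms)]
        rows.append({"section": sec.section, "mean_pupil_mm": window["pupil"].mean()})
    return pd.DataFrame(rows)
```

Averaging the left and right estimates is one reasonable choice; analyzing a single eye, or reporting both, would be equally defensible.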

Computing Mean Fixation Duration. Fixation duration is the elapsed time between the first and last gaze points in the sequence of gaze points that makes up a fixation. Fixations were classified using Tobii's I-VT fixation filter algorithm. We computed the mean fixation duration per section for each participant in both the treatment and control groups (see Fig. 9).

Fig. 9. Average fixation duration code snippet
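A matching sketch for the fixation-duration aggregation (cf. Fig. 9), under the assumption that the I-VT filter's fixation events are exported with a start timestamp and a duration column (both hypothetical names):

```python
import pandas as pd

def mean_fixation_duration_per_section(fixations: pd.DataFrame,
                                       sections: pd.DataFrame) -> pd.DataFrame:
    """Mean fixation duration (ms) per content section for one participant.

    fixations: one row per I-VT fixation event, with columns start_ms and
        duration_ms (duration = elapsed time between the first and last
        gaze point of the fixation).
    sections: one row per section, with columns section, start_ms, end_ms.
    """
    rows = []
    for sec in sections.itertuples(index=False):
        # Assign each fixation to the section in which it starts.
        in_sec = fixations[(fixations["start_ms"] >= sec.start_ms)
                           & (fixations["start_ms"] < sec.end_ms)]
        rows.append({"section": sec.section,
                     "mean_fixation_ms": in_sec["duration_ms"].mean()})
    return pd.DataFrame(rows)
```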

4.3 Results

RQ1, RQ2, and RQ3 were tested using independent-samples t-tests to compare mean pupil size and mean fixation duration between the control (linear module) and treatment (segmented and interactive module) groups. We chose independent-samples t-tests because 1) the data for both groups were found to be normally distributed using the Kolmogorov-Smirnov and Shapiro-Wilk tests (p > 0.05), and 2) the two groups were independent samples.
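The screening-then-testing procedure can be sketched with SciPy as below. The sample values are illustrative placeholders, not the study's data, and only the Shapiro-Wilk check is shown (a Kolmogorov-Smirnov test against a fitted normal would additionally need the Lilliefors correction):

```python
from scipy import stats

def compare_groups(control, treatment, alpha=0.05):
    """Shapiro-Wilk normality screen (p > alpha: normality not rejected),
    then an independent-samples t-test on the two groups."""
    for name, sample in (("control", control), ("treatment", treatment)):
        if stats.shapiro(sample).pvalue <= alpha:
            raise ValueError(f"{name}: normality rejected; use a non-parametric test")
    t, p = stats.ttest_ind(control, treatment)
    return t, p

# Illustrative mean pupil sizes (mm); placeholders only.
control = [2.75, 2.82, 2.88, 2.90, 2.93, 2.99, 3.06]
treatment = [2.48, 2.55, 2.58, 2.61, 2.64, 2.67, 2.75]
t_stat, p_value = compare_groups(control, treatment)
```

Note that `scipy.stats.ttest_ind` defaults to Student's t-test (equal variances assumed); passing `equal_var=False` gives Welch's variant if that assumption is doubtful.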


Pupil Size as a Function of Time. An example of a participant's pupil size as a function of time for the linear and segmented content types, for each of the sections, is displayed in Fig. 10 below. Participants spent more time in the laboratory assignment section for both content types.

Fig. 10. A participant’s average pupil size by content sections

Pupil Size Changes as a Function of Time. An example of a participant's changes in pupil size as a function of time for linear and segmented content is displayed in Fig. 11 below. The linear content type shows larger changes in pupil size than the segmented type. Changes in pupil size are associated with changes in cognitive state [30].

Fig. 11. A participant's average change in pupil size by content sections

Average Pupil Size as a Function of Average Fixation Duration. Participants' average pupil size as a function of average fixation duration for linear and segmented content is displayed in Fig. 12 below. The linear content type shows longer fixation durations and larger pupil sizes than the segmented content type.

Fig. 12. Average pupil size as a function of average fixation duration by content type

Comparison of Mean Pupil Size and Mean Fixation Duration Between the Control and Treatment Groups for Intrinsic Load. The mean pupil size for the treatment group (2.61 mm) was lower than that for the control group (2.88 mm) at the remember and apply cognitive levels of Bloom's taxonomy, and this difference was statistically significant at the 95% level (p ≤ .05, p = 0.05). The mean fixation duration for the treatment group (213 ms) was lower than that for the control group (288 ms) at the same cognitive levels, and this difference was also statistically significant (p < .05, p = 0.028). This implies that segmented and interactive learning modules induce less average intrinsic load (IL) on students than traditional (linear) non-interactive learning modules, which answers RQ1 (see Figs. 13 and 14).

Fig. 13. IL - Average pupil size in control and treatment groups for intrinsic load

Fig. 14. IL - Average fixation duration in control and treatment groups for intrinsic load

Comparison of Mean Pupil Size and Mean Fixation Duration Between the Control and Treatment Groups for Germane Load. The mean pupil size for the treatment group (2.58 mm) was lower than that for the control group (2.98 mm) at the apply, analyze, and evaluate cognitive levels of Bloom's taxonomy, and this difference was statistically significant at the 95% level (p < .05, p = 0.03). The mean fixation duration for the treatment group (194 ms) was lower than that for the control group (339 ms) at the same levels, and this difference was statistically significant (p < .05, p = 0.001). This implies that segmented and interactive learning modules induce less average germane load (GL) on students than traditional (linear) non-interactive learning modules. This is because the treatment group completed interactive modules and received feedback for each task, requiring less cognitive effort than the traditional (linear) non-interactive learning module, where students did not receive any


hint/feedback when answering questions, which requires more cognitive effort. Van Merriënboer et al. [28] concluded that only limited guidance and feedback should be provided to increase germane load. This answers RQ2 (see Figs. 15 and 16).

Fig. 15. GL - Average pupil size in control and treatment groups

Fig. 16. GL - Average fixation duration in control and treatment groups

Comparison of Mean Pupil Size and Mean Fixation Duration Between the Control and Treatment Groups for Extraneous Load. The mean pupil size for the treatment group (2.62 mm) was lower than that for the control group (2.91 mm) at all cognitive levels of Bloom's taxonomy, and this difference was found to be statistically significant at the 95%


level (p < .05, p = 0.03). The mean fixation duration for the treatment group (214.19 ms) was lower than that for the control group (299.34 ms) at all cognitive levels of Bloom's taxonomy, and this difference was found to be statistically significant at the 95% level (p < .05, p = 0.006). This implies that segmented and interactive learning modules induce less average extraneous load (EL) on students than traditional (linear) non-interactive learning modules. This answers RQ3 (see Figs. 17 and 18).

Fig. 17. EL - Average pupil size in control and treatment groups

Fig. 18. EL - Average fixation duration in control and treatment groups


5 Conclusion and Future Work

We proposed a theoretical model for developing effective cybersecurity learning materials that minimize intrinsic and extraneous cognitive load and maximize germane load among learners. The model incorporates Bloom's taxonomy and the design principles of segmentation and interactivity. We conducted a study to test the effectiveness of the model. The results indicate that intrinsic and extraneous load are significantly reduced by segmented and interactive modules. However, we also found germane load to be significantly lower in the segmented and interactive modules than in the traditional (linear) modules. Consistent with Van Merriënboer et al. [28], this may be because the interactive modules provide feedback, allowing students to progress through the content more quickly with less effort spent on learning. In the future, we plan to expand the study to investigate the three types of cognitive load in novice and expert learners. More broadly, this model and methodology use eye-tracking in a novel way to inform the design of learning materials in cybersecurity and other computing disciplines.

References

1. Andrzejewska, M., Skawińska, A.: Examining students' intrinsic cognitive load during program comprehension – an eye tracking approach. In: Bittencourt, I.I., Cukurova, M., Muldner, K., Luckin, R., Millán, E. (eds.) AIED 2020. LNCS (LNAI), vol. 12164, pp. 25–30. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-52240-7_5
2. Asai, S., et al.: Predicting cognitive load in acquisition of programming abilities. Int. J. Electr. Comput. Eng. 9(4), 3262–3271 (2019). https://doi.org/10.11591/ijece.v9i4.pp3262-3271
3. Bafna, T., et al.: Cognitive load during eye-typing. In: Eye Tracking Research and Applications Symposium (ETRA), New York, NY, USA, February 2020, pp. 1–8 (2020)
4. Bannert, M.: Managing cognitive load – recent trends in cognitive load theory. Learn. Instr. 12(1), 139–146 (2002). https://doi.org/10.1016/S0959-4752(01)00021-4
5. Borys, M., et al.: An analysis of eye-tracking and electroencephalography data for cognitive load measurement during arithmetic tasks. In: 2017 10th International Symposium on Advanced Topics in Electrical Engineering (ATEE 2017), April 2017, pp. 287–292 (2017)
6. Curum, B., Khedo, K.K.: Cognitive load management in mobile learning systems: principles and theories. J. Comput. Educ. 8(1), 109–136 (2020). https://doi.org/10.1007/s40692-020-00173-6
7. Dark, M., Kaza, S., LaFountain, S., Taylor, B.: The cyber cube: a multifaceted approach for a living cybersecurity curriculum library. In: The 22nd Colloquium for Information Systems Security Education (CISSE), New Orleans, LA (2018)
8. Debue, N., van de Leemput, C.: What does germane load mean? An empirical contribution to the cognitive load theory. Front. Psychol. 5 (2014). https://doi.org/10.3389/fpsyg.2014.01099
9. Holmqvist, K., et al.: Eye Tracking: A Comprehensive Guide to Methods and Measures. OUP Oxford (2011)
10. Jerčić, P., Sennersten, C., Lindley, C.: Modeling cognitive load and physiological arousal through pupil diameter and heart rate. Multimedia Tools Appl. 79(5–6), 3145–3159 (2018). https://doi.org/10.1007/s11042-018-6518-z
11. Kalyuga, S.: Cognitive Load Theory: Implications for Affective Computing (2011)
12. Kaza, S.: A cyber security library – the need, the distinctions, and some open questions. In: New Approaches to Cyber Security Education (NACE) Workshop, New Orleans, LA (2018)
13. Kirsh, D.: A Few Thoughts on Cognitive Overload (2000)


14. Knorr, E.M.: Worked examples, cognitive load, and exam assessments in a senior database course. In: Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE), New York, NY, USA, February 2020, pp. 612–618 (2020)
15. Krathwohl, D.R.: A revision of Bloom's taxonomy: an overview. Theory Pract. 41(4), 212–218 (2002)
16. Larmuseau, C., et al.: Multimodal learning analytics to investigate cognitive load during online problem solving. Br. J. Educ. Technol. (2020). https://doi.org/10.1111/bjet.12958
17. Manson, D., Pike, R.: The case for depth in cybersecurity education. ACM Inroads 5(1), 47–52 (2014). https://doi.org/10.1145/2568195.2568212
18. Masapanta-Carrión, S., Velázquez-Iturbide, J.Á.: A systematic review of the use of Bloom's taxonomy in computer science education. In: SIGCSE 2018 - Proceedings of the 49th ACM Technical Symposium on Computer Science Education, pp. 441–446 (2018). https://doi.org/10.1145/3159450.3159491
19. Mayer, R.E., Moreno, R.: Nine ways to reduce cognitive load in multimedia learning. Educ. Psychol. 38(1), 43–52 (2003). https://doi.org/10.1207/S15326985EP3801_6
20. Morrison, B.B., et al.: Measuring cognitive load in introductory CS: adaptation of an instrument (2014). https://doi.org/10.1145/2632320.2632348
21. Paas, F., et al.: Cognitive load theory and instructional design: recent developments. Educ. Psychol. 38(1), 1–4 (2003)
22. Paas, F., van Merriënboer, J.J.G.: Cognitive-load theory: methods to manage working memory load in the learning of complex tasks. Curr. Dir. Psychol. Sci. 29(4), 394–398 (2020). https://doi.org/10.1177/0963721420922183
23. Raina, S., et al.: Security Injections 2.0: increasing ability to apply secure coding knowledge using segmented and interactive modules in CS0 (2016). https://doi.org/10.1145/2839509.2844609
24. Shaffer, D., et al.: Applying Cognitive Load Theory to Computer Science Education (2003)
25. Skulmowski, A., Rey, G.D.: Measuring cognitive load in embodied learning settings. Front. Psychol. 8, 1191 (2017). https://doi.org/10.3389/fpsyg.2017.01191
26. Skulmowski, A., Rey, G.D.: Subjective cognitive load surveys lead to divergent results for interactive learning media. Hum. Behav. Emerg. Technol. 2(2), 149–157 (2020). https://doi.org/10.1002/hbe2.184
27. Speelman, C., et al.: Towards a method for examining the effects of cognitive load on the performance of cyber first responders. In: Proceedings of the International Conference on Security and Management (SAM), January 2019
28. Van Merriënboer, J.J.G., et al.: Teaching complex rather than simple tasks: balancing intrinsic and germane load to enhance transfer of learning. Appl. Cogn. Psychol. 20(3), 343–352 (2006). https://doi.org/10.1002/acp.1250
29. Van Niekerk, J., von Solms, R.: Using Bloom's taxonomy for information security education. In: Dodge, R.C., Futcher, L. (eds.) WISE 2009/2011/2013. IAICT, vol. 406, pp. 280–287. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39377-8_33
30. Venkata, N., et al.: Towards inferring cognitive state changes from pupil size variations in real world conditions (2020). https://doi.org/10.1145/3379155.3391319

A Layered Model for Building Cyber Defense Training Capacity

Erik L. Moore1(B), Steven P. Fulton2, Roberta A. Mancuso1, Tristen K. Amador1, and Daniel M. Likarish1

1 Regis University, Denver, CO 80221, USA {emoore,rmancuso,tamador,dlikaris}@regis.edu 2 USAF Academy, Colorado Springs, CO 80840, USA [email protected]

Abstract. As technology proliferates and becomes indispensable to all functions of society, so does the need to ensure its security and resilience through cyber defense training, education, and professional development. This paper presents a layered model that supports cyber defense training progressively through the development of technology services, digital context, performance assessment, and impact analysis. The methods used were applied to college laboratories associated with cybersecurity classes, defense training exercises, cyber based competitions, and graduate research program designs. The service layer presents methods for developing the technical infrastructure and agile deployment necessary to support cyber defense training. This, then, is layered with conceptual frameworks to guide teams as they immerse into scenarios within cyberspace. To enhance team performance in this space and to enhance the value of the training process itself, psychometric feedback, Agile methods, and quantitative assessments are used to track efficacy and facilitate future development. The final layer represents active incident response and ongoing collaborative efforts between institutions and across disciplines. The work is presented as a progression and illustrates a decade of research from 2010 to 2020. The context has been updated here with the intention that it can be used as a guide for designing a broad range of collaborative cyber defense and cyber range programs. The influence of socio-behavioral factors increasingly illuminates the path forward.

Keywords: Cyber defense · Cybersecurity · Psychometrics · Agile · Cyber range · Collaborative · Training · Education · Behavioral · Teams · Societal · Capacity

1 Introduction

© IFIP International Federation for Information Processing 2021. Published by Springer Nature Switzerland AG 2021. L. Drevin et al. (Eds.): WISE 2021, IFIP AICT 615, pp. 64–80, 2021. https://doi.org/10.1007/978-3-030-80865-5_5

This paper describes a path of increasing cyber defense capability that has demonstrated significant benefits, within and across the layered model, for cybersecurity students, faculty, practitioners, and partnering organizations. The framework describes layers that

A Layered Model for Building Cyber Defense Training Capacity

65

represent both new levels of capability and the underlying technological and organizational foundations.

In the early 2000s, the authors were building laboratory infrastructure for university information technology programs. The pivot to using this space for cyber competitions began by 2005. The authors started publishing the underlying work for this framework by 2010, and the resulting model is still used in their work today. This layered approach supports a robust model of collaborative services among peer institutions working to build cyber defense capabilities, including all levels of government (federal, state, and local), academia, industry, and cyber defense organizations.

The origin of this work was an effort to build laboratory environments, using Agile development techniques, to provide immersive, hands-on experiences to Regis University students anywhere on the Internet. In 2005, Regis University started the process of obtaining authorization as a National Security Agency Center for Academic Excellence (CAE) in Cybersecurity Defense, adapting curriculum and mapping it to federal standards. The Regis team attained this designation in 2007, partially inspired by the cyber competitions occurring in the area. Several related publications describe the general national trend toward cybersecurity competitions and inter-collegiate collaboration at that time [1–4]. While this work influenced the authors, our research focuses on describing, in a way designed to be replicable, the method of building capacity at multiple layers that has led to agile and collaborative communities of cyber defense training. Further, our work and these communities are uniquely informed by socio-behavioral and psychometric based training.

Table 1. CyCap: a layered model for building cyber defense training capacity, illustrating the range of effort from technical infrastructure to societal norms for sustainable cyber defense [5].

By 2007, several universities in Colorado had initiated significant collaborative cyber competition efforts, particularly the United States Air Force Academy, Colorado State University, and the University of Colorado, Boulder. This led to coordinated work in both the Computer and Network Vulnerability Assessment Simulation (CANVAS) competition beginning in 2007 and the Rocky Mountain Collegiate Cyber Defense Competition


(RMCCDC) beginning in 2011. This context provided the motivation for the rapid capacity-building program described in the model presented here. By 2016, collaboration with Regis faculty on behavioral analytics and psychometric instrumentation allowed the team to add capacity to support team behavior during training and actual incident response.

Our framework is built on the experiences of the authors as we navigated the challenge of maturing and adding technology-based services, providing digital context, ensuring assessment, and developing analytical capabilities. In retrospect, this progression mirrors the way the Capability Maturity Model Integration (CMMI) represents progressive development of technical capabilities [6], though it was in fact based on our own progressive effort over time, which allowed for the creation of a robust Collaborative Training and Response Community (CTRC) within an empowering socio-technical environment. The compounding capabilities show how a roadmap of progressive development can be built, offering value as each new layer of capacity is added, as seen in Table 1.

Table 2. Matrix of cyber defense capacity building efforts mapped to the CyCap model.

Row 1 - Focus: cyber range computing platform, infrastructure provisioning, software provisioning, and scenario engine, describing a used build. Differentiating traits: cloud-ready or hardware/software-driven scenario deployment; can be used for exercises, training, and research and development. Overlap with the model presented here: models of infrastructure dependencies designed to describe cybersecurity training for multiple professions to meet scenarios of multi-organizational collaboration. Model examples: AIT Cyber Range [7]; NCR, Michigan, Virginia, IBM, CRATE, Cisco, UD, NATO, DOD, Raytheon, Baltimore, Florida [8]. CyCap layers: L1–L2.

Row 2 - Focus: institutional program-oriented effort for role preparation. Differentiating traits: skill, knowledge, and competence-driven, merging multiple standards. Overlap with the model presented here: layered model, success expectations. Model example: Finnish Cyber Security Degree Program Model [9]. CyCap layers: L3.

Row 3 - Focus: national-scale multi-model collaboration architecture for coordinated response. Differentiating traits: holistic approach to solutioning that includes societal action implemented through multi-sector rapid reaction teams. Overlap with the model presented here: contextualizing/situational models, collaborative modeling at the organizational level. Model example: Cyber Resilient Bulgarian National Model [10]. CyCap layers: L3–L4.


A review of the literature suggests that the CyCap layered model presented in Table 1 is relatively unique in that it illustrates a span of concerted effort: starting at the technical infrastructure level (L1), providing a cyber contextual level (L2) for participants in the immersive experience, adding a layer (L3) for tuning and evaluating the programmatic output, and setting a layer (L4) for establishing organizational and societal structures that are both the result of training and a means of maximizing how cyber resources are leveraged. The representative sampling of work shown in Table 2 suggests that most cyber defense capacity building generally addresses only a single layer: the national (societal, L4) structure, the curricular structure of institutions (programmatic, L3), the changing experience of an immersive digital environment (contextual, L2), or the technical challenges of creating a robust cyber environment (infrastructure, L1).

Table 2 maps several models that illustrate cyber defense training capacity building to the layers of the proposed CyCap model, which comprises four interdependent layers of cyber capability or capacity. The categorized models offer focused value in specific areas such as cyber ranges, curricular models, hands-on defense training scenarios, or inter-organizational cyber defense collaboration modeling. Across a review of US and international efforts to bolster cyber defense, this compartmentalization of efforts into the CyCap layers is a commonality. It is reflected in the use of standardized modeling at each layer, such as facilitating workforce skill, knowledge, and ability development based on standards like the US National Initiative for Cybersecurity Education (NICE) [11]. The cyber range row in Table 2 owes a significant debt to Priyadarshini [8], who provided a broad survey of cyber range activity across the US.

2 Research Methodology

This work uses a case study methodology. This methodology was selected because the quantitative and qualitative research methods in the underlying papers varied as appropriate for the individual challenges of the time, and do not aggregate well under a single research method except through treatment of the entire effort as a case [12]. The work is summarized here as a multi-year effort that led to the development of the CTRC. The analysis draws on a series of published works produced by the authors and their peers over a span of ten years, as well as extant material from the described events. The research question addressed by this case analysis reflects on the decade-long effort, asking: "What components can be used to create a collaborative training and response community, and can they be added incrementally to support an effective cyber defense capability for society?" The methodologies of the underlying published work, and the strategy of their application, are described below, including the rationale for using each methodology and a critique of its appropriateness. This review of the research methodologies of the earlier work is offered as part of the decade-long case analysis presented here, as a reference for those engaged in similar lines of research.

2.1 Primary Research Methodologies

Case study: Because the authors were describing events and projects where they did not have sufficient control to establish quantitative measures, the case study methodology


provided the ability to describe events and projects while analyzing the actions and outcomes of large groups collaborating in less structured efforts. As the work came to represent more structured and finite events, the formality of the methodology increased. For instance, a less formal application was the early use of the SCRUM framework to produce competitions [13], where many participants were temporary and goals fluctuated. The most formal application of case study came later in the body of work, when a more formal effort to develop multi-agency collaboration was facilitated using Agile methods [14]. The level of structure of the events was a limiting factor on the formality of the case study methodology.

Quantitative pre-post testing methodology [15] was used to measure the technical skill of cyber defense training participants, evaluating variance in objective capability and in self-evaluated confidence with the technologies [16]. The tool was also used to determine the efficacy of a 3D virtual world training experience, measuring how much students learned about physical security risks in a datacenter [17]. In both cases, the confidence level of the results was limited somewhat by either the sample size or the variance between sampled groups.

Creswell's interrelating themes methodology [18] was used when different types of data were gathered (live observation of cyber defense events, post-event interviews, etc.) and the authors had less control over the environment. This method was used to analyze how a gathering of institutions formed into a CTRC [19].

Creswell and Clark's explanatory design mixed methods research methodology [20] was used when significant quantitative data had been gathered and analyzed, and qualitative analysis could enhance the value and add context to the work, particularly where psychometric analysis was performed [16].
2.2 Secondary Research

Secondary research [21] was used by the authors to support the primary research and to support the development and application of theoretical models and frameworks in the developing contexts being formed in the virtual environments [22, 23]. While this work was driven by observation of students' needs as they worked in immersive digital contexts, and is applied there to teaching and coaching, the development of the models relied heavily on synthesizing extant research.

3 Infrastructure

The initial needs that drove the development of the service layer in Table 1 included online coursework-associated laboratory space, graduate students' need for research lab facilities, faculty need for infrastructure to support research projects, and the regional need for multi-institutional cybersecurity exercises. The diagram in Fig. 1 illustrates the complexity of the challenge while also presenting one of the analytical techniques developed through whiteboard troubleshooting and brainstorming sessions. The diagram uses the pipeline analysis model, nesting virtualized systems as they present to users and aggregating processing load across a broad range of technologies. The initial goal was to provide an immersive, engaging, and sustainable education experience across


a broad range of client systems and Internet connections while mixing a complex range of technologies such as QuickTime Streaming, Second Life® virtual world immersive experiences, and active, hands-on keyboard challenges on live machines provisioned to students on demand. Other technologies used to create a rich community at that time included TeamSpeak, Citrix, VMware, GNS3, RDP, VNC, and SAN technologies, orchestrated in small clusters so as not to overwhelm students in an individual course, and customized to the delivery of each course topic.

Fig. 1. Service pipeline modeling demonstrating clients both at home and in university labs

One of the greatest challenges of this work at small to medium-sized universities was offering this type of environment while working with limited donated equipment, volunteer graduate students and adjunct faculty, and budget constraints, particularly since much of the early work occurred before universities were making strong pushes into the cybersecurity space. Without the Agile methodology, the longer-term vision of building the program likely would not have been sustainable in this resource-lean environment. Because of this agile approach, Regis University was able to host RMCCDC effectively for a decade, even though it was not one of the larger or more significantly resourced institutions in the region.

Initially, the infrastructure for these virtualized environments was based on donated equipment cobbled together by graduate students and adjunct faculty. Once the capability was demonstrated on-site, the program began to apply for grants to expand capacity. Within two years, the infrastructure was infused with sufficient servers, Internet bandwidth, and SAN storage to sustain a local cloud environment capable of projecting entire competitions over the Internet, or hosting on-site labs with each team in a different classroom.

In addition to the projection of a live "console experience" and collaborative spaces that simulate network environments, Regis also started a Second Life® campus that included a security operations center, complete with computer forensic lab, server room,


conference rooms, and live workstations that included interactive consoles embedded in the virtual world as shown in Fig. 2. Students could enter the world to gain more immersive experiences in Security Operations Center environments. The models developed in this world contributed significantly to the design of the Regis University Denver Technical Center (DTC) campus that was built in 2011 in Greenwood Village, Colorado. In both the physical world and the virtual world, the idea was to offer an immersive professional experience.

Fig. 2. Regis Security Operations Center in Second Life®.

In these virtual worlds, students faced several challenges that were not evident when engaging in a cybersecurity lab on a college campus. To accomplish tasks in the virtual world, students first had to overcome the idea that the environment was only a game, which distracted them from engaging fully with the scenario, and coaching from the faculty was required to keep them focused on the relevant learning tasks. Another barrier to entry was that students needed a computer with sufficient capabilities, so some students had to use the university computer labs when their home computers did not meet the minimum hardware requirements.

4 Contextual

As Regis started providing significant numbers of immersive digital experiences in online labs, cyber competitions, training events, and collaborative events, the authors began observing variance in how participants behaved and related to each other in the space. Of the two models presented here, one is a reference framework for digital identity, and the other is a matrix for analyzing the impact of cyber-influenced reality, also called "Bit Induced Reality." At collegiate cyber competitions, each student takes on the role of a team member in a given competition, working with teammates to secure their systems. Students may further be given a technical role in the environment to gain experience they would not normally obtain in the classroom, for example, a "firewall engineer"


for a fictitious company, becoming responsible for all the tasks a firewall engineer would handle in a real company. This creates new identities, which can cause conflicting behavioral motivations. The goal of introducing contextual models to cyber trainees and challenge participants is to provide an orienting context as they take on nested identities within events and experiences. This enables a participant's behavior to be motivated by the right layer of identity. For example, a person may be a student at the university with the primary intention of learning from an experience. Nested within that identity, he or she is a team member in a scenario-based game trying to win, and within the game scenario a firewall administrator attempting to defend against a cyber attack. Many of the immersive digital scenarios presented in Table 3 were developed initially for CANVAS or RMCCDC and later repurposed for professional development, graduate course experiences, cyber defense training, and research laboratory support. As the authors participated in working groups designing these workshops, the driving principle was that each scenario should be drawn from contemporary events and concerns, so that scenarios resembled challenges the participants were likely to face in their professional careers. Year after year the competitions became more complex in both scenario and technical challenge as teams became better prepared.

Table 3. Immersive digital scenarios and their contemporary sources

Scenario | Contemporary source | First use
Voting Protection | Diebold Hack | CANVAS
Myface.com on ELGG | Facebook Popularity | CANVAS
Smart Electric Grid | US Infrastructure Protection Efforts | CANVAS
Medical Records, Open EMR | Anthem Hack | CANVAS
Medical Device Defense | Donated Medical Devices | RMCCDC
Banking Security on Cyclos/Citadel | Western Union Hack | CANVAS
SCADA Defense | Target Hack | CANVAS
Regis HMO - Medical Device Hack | Team Member Worked in Hospital | RMCCDC
Hotel Management | Team Visit to Casino SOC | RMCCDC
Traffic Signal "Regis City" | City of Denver Recommendation | RMCCDC
Regis Global Financial Services | Equifax Hack | RMCCDC
Online Gaming Companies | 2011 STEAM® Hack | RMCCDC
Electric Dam Tampering | Cyber Attack on Dam in Rye, NY | CTRC
Town Electrical Grid | National Guard | CTRC
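The nested identities just described (a student, nested as a team member, nested as a firewall administrator) can be sketched as a small data model. This is only an illustration of the layering idea; all class, field, and function names here are assumptions of this sketch, not part of the published framework:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class IdentityLayer:
    """One layer of nested identity, outermost first:
    e.g. student -> team member -> firewall administrator."""
    role: str
    engagement: float  # hypothetical 0..1 score: how strongly this frame currently motivates behavior

def dominant_layer(layers: List[IdentityLayer]) -> str:
    """Return the role currently driving the participant's behavior."""
    return max(layers, key=lambda l: l.engagement).role

def misoriented(layers: List[IdentityLayer], intended_role: str) -> bool:
    """True when a nested role has eclipsed the layer that should be
    motivating behavior, signalling a need for re-orienting context."""
    return dominant_layer(layers) != intended_role

nested = [
    IdentityLayer("student", 0.5),
    IdentityLayer("team member", 0.7),
    IdentityLayer("firewall administrator", 0.9),
]
print(dominant_layer(nested))          # firewall administrator
print(misoriented(nested, "student"))  # True
```

In this toy framing, a contextual model's job is to detect the misorientation and prompt the participant back to the appropriate layer.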

In cyber competitions, the digital identity of the participants is created by providing the participant with a role, a goal, and a context that is valid in that virtualized experience. In one such scenario, some of the authors witnessed a student who had taken on the role of a cyber defender and responded to a barrage of exploits from the competition hosts as


part of the scenario. The student became emotionally invested in this identity, similar to a belief structure, and this role affinity drove the student to rapid defensive actions in the digital world. While this was happening, a member of the event host team who was simulating the cyber attack attempted to view the defending team's whiteboard in person through a hallway window. The defending student responded by racing out the door and beginning a physical scuffle with the attack team member. The in-scenario digital identity had become high-stakes: the student felt both a strong affinity for the cyber defender role and allegiance to his team. The exuberance of that moment began to violate basic assumptions of the competition and of orderly conduct on a college campus. This immersive digital identity within the scenario may have had a significant impact on the student's motivating frame of reference.

Table 4. The B/K-A/E model representing belief/knowledge and allegiance/ethics [22].

Explaining the experience through the Belief/Knowledge-Allegiance/Ethics (B/K-A/E) model in Table 4, students can become more self-aware of their investment, or "belief," in a frame of reference and their "allegiance" to the role motivating them. Using the model, they become better able to discern how much that frame of reference is driving their behavior. "Belief" is used in a specific way here to refer to the level of motivational engagement in a particular frame of reference; we can think of it as "How real, in this moment, is it that I am the firewall defender?" If a participant scores high on the Belief/Allegiance (B/A) scale for a nested identity of "firewall defender," this can affect the coherence of their primary role in the competition as a student. To counteract this, the model reminds students to self-assess with critical thinking what knowledge should drive their behavior and what ethical framework should set boundaries on that behavior, the Knowledge/Ethics (K/E) side of the model [22]. A very different example occurred when the competition was held at the United States Air Force Academy; it illustrated the impact of cyber-influenced reality. All arrangements for the conference had been made digitally with the competing universities, primarily using email and telephone communications. When participating teams and


coaches arrived at the US Air Force Academy, they were met by two student greeters dressed in civilian clothes, carrying clipboards and offering guidance to student participants and faculty alike. No introductions were made, but the greeters presented confidently in their role. The opening for deception arose because the trust built through the digital relationships transferred to the greeters, even though no participating team had met them before. This held even though the greeters did not wear military uniforms (unlike other US Air Force Academy cadets) and presented no credentials. Each team trusted the greeters through an extended excursion before arriving at the competition; no students or faculty questioned them. Fortunately, the greeters were operating under the guidance of the sponsoring faculty as part of a study in social engineering [24]. In Fig. 3, the Bit Induction Scale shows the range over which digital systems influence objects in the real world. At level 0, a person in civilian clothes is observed with no digital mediation. At level 2, a computer-printed photo ID might be used for physical identity confirmation. At level 5, email communication would not exist in any meaningful way without the Internet. Figure 3 displays how the dominant mode of establishing trust shifted from the primarily digital context of email to the physical situation of being greeted at an institutional entrance. When the student greeters were encountered in a physical mode, the trust established over email should not have transferred automatically without a digital authentication linked to that email. Because of this disconnect, the physical modes of establishing trust were undercut. If the greeters' contact information had been presented in the email, physical trust could have been established with more assurance, rather than relying on the insufficient trust established via email.

Fig. 3. The bit induction level set against the physical/psychological significance of an object.
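The levels cited in the example can be captured in a small lookup, together with the caution the text draws about trust transferring between modes. The dictionary restates only the examples given in the text; the transfer heuristic is an illustrative assumption of this sketch, not a published rule:

```python
# Illustrative sketch of the Bit Induction Scale examples from the text.
# Only levels 0, 2, and 5 are described in the text; other levels are omitted
# rather than invented.
BIT_INDUCTION_EXAMPLES = {
    0: "a person in civilian clothes (no digital influence)",
    2: "a computer-printed photo ID used for physical identity confirmation",
    5: "email communication, which would not exist without the Internet",
}

def trust_should_transfer(established_at_level: int, confronted_at_level: int) -> bool:
    """Hedged heuristic: trust established in a highly digital mode (high level)
    should not automatically transfer to a physical mode (low level) without a
    digital authentication step linking the two."""
    return abs(established_at_level - confronted_at_level) <= 1

# Trust built over email (level 5) met by unverified greeters in person (level 0):
print(trust_should_transfer(5, 0))  # False
```

The greeter incident is the failing case: level-5 trust was applied at level 0 with no linking authentication.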

These brief examples represent a larger body of observations from digital challenges and stressful training that suggest a need to address cyber identity and the shift in security postures that accompanies immersive cyber experiences. Models of this type are needed to support participants in these experiences, where the instinctual responses and prior interpersonal experience that participants bring to an event may not have prepared them for immersive interaction in


cyberspace. While the frameworks presented here were applied to the immediate situations, the observations suggest that more work is needed on these forms of cognitive dissonance and on disconnects from the most relevant social frames of behavior. The leadership of the CTRC engaged across disciplines at the university, discussing what was happening in the cyber defense exercises. Based on these discussions, in 2015 the CTRC leadership invited socio-behavioral experts to observe these digitally immersive experiences. The scenario at that event was the defense of medical devices in a hospital from a cyber attack. The socio-behavioral experts observed the digital role-based behavior first in a competition, then in a cyber defense exercise, and then in the formal planning group of the CTRC.

5 Programmatic

As cyber competition and training events progressed, questions emerged among the training leadership team, such as "What can we do to extend the training and response experience and make it more effective?" and "What are the needs of the teams we are training?" These questions initiated a collaborative effort to identify individual and team characteristics that could elevate team performance. The goal was to create awareness and build knowledge and skills among individuals and teams with regard to their strengths and limitations, in order to develop more effective and efficient teams during training and in actual incident response, and ultimately to develop individuals and teams prepared to respond rapidly and deliberately to incidents. In the past, the authors trained cyber defenders to focus narrowly on highly technical skills used individually. We shifted toward a more comprehensive and holistic approach that continued to include highly technical skills but applied them through a cohesive and unified team. More recently, this work has included socio-behavioral factors rather than solely technical ones. As the mission of cyber defense teams formed, additional activities that could be facilitated by socio-behavioral support were identified, including cross-institution team training for rapid collaboration, tabletop exercises to work out playbooks, and strategic leadership planning sessions for expected cyber attacks. The authors facilitated performance assessment and advancement through psychometric instruments and a feedback and coaching process.
The authors used specific psychometric instruments: the Myers-Briggs Type Indicator (MBTI) [25] to evaluate individual personality traits, the Parker Team Player Survey (PTPS) [26] to evaluate team player roles, and the Crew Cohesion Scale (Crew) [27] to evaluate the cohesion of team members. Together these instruments provided a comprehensive method for assessing individuals and teams and then advancing their ability to use individual strengths in personality and to leverage role diversity for the benefit of the entire team. The choice of instruments grew out of the authors' realization that one of the most significant barriers for cyber defenders was team members' reluctance to communicate with each other during training exercises. The stereotypical cybersecurity team is composed of IT introverts who prefer interacting with computers over human colleagues. Yet the authors' real-time observations of the most effective cyber defense


teams have consistently shown that open communication between members is linked to greater success [28]. Not only does the MBTI identify the degree to which individuals prefer introversion over extraversion (which often translates into a preference for written over verbal communication), it also recommends strategies introverts can adopt to communicate effectively when the context demands it. The MBTI also identifies individuals who prefer to attend to the smaller building blocks of a problem versus the "big picture"; the ability to tackle an incident from both perspectives is critical to cyber defense. Additionally, a review of research on highly effective teams indicates that role diversity within teams is associated with better performance. The PTPS was selected as a well-known, reliable, and brief assessment of the roles people adopt in team settings. It also allowed the authors to test their assumption about role diversity in the context of cyber defense: teams are most effective when members adopt all four of the roles identified by the PTPS, creating maximum role diversity. A measure was then needed that would provide a more holistic assessment of the team and its members' ability to work as a unified front, so the Crew was chosen as a global measure of team cohesion. The Crew is most commonly used to assess the cohesion of first responders in crisis situations, particularly firefighters, which makes it well suited to teams in the crisis environment that emerges after a cyber attack. As more data were gathered, it became clear that the best course of action was to design an instrument focused on the key variables of interest in the context of cyber defense while leaving extraneous variables behind.
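As an aside, the role-diversity assumption above, that teams are most effective when all four Parker team-player roles are represented, can be expressed as a simple coverage check. The four role names are Parker's published styles; the scoring function itself is an illustrative assumption of this sketch, not part of the PTPS:

```python
# Glenn Parker's four team-player styles, as measured by the PTPS.
PTPS_ROLES = {"contributor", "collaborator", "communicator", "challenger"}

def role_diversity(team_roles):
    """Fraction of the four Parker roles represented on a team.
    1.0 corresponds to maximum role diversity under the assumption in the text."""
    covered = {r.lower() for r in team_roles} & PTPS_ROLES
    return len(covered) / len(PTPS_ROLES)

def missing_roles(team_roles):
    """Roles no team member has adopted."""
    return PTPS_ROLES - {r.lower() for r in team_roles}

team = ["Contributor", "Contributor", "Challenger", "Communicator"]
print(role_diversity(team))         # 0.75
print(sorted(missing_roles(team)))  # ['collaborator']
```

A coaching process could use the missing-role list to suggest which role an available member might deliberately adopt.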
For this reason, the authors have begun to build and test a specialized instrument that will identify the trait preferences, team roles, levels of cohesion, and additional factors most consequential to cyber defense. The ultimate goal is to use the data to create a plug-and-play coaching mechanism so that, at a moment's notice, cyber defense teams can be composed of available technical experts with a range of trait preferences who receive strategic directives that maximize their leadership capabilities, communication effectiveness, and role diversity. This work focused on organizational relationships with and between cyber defenders as team members, and with and between multiple agencies including industry (private sector), government (federal, state, and local), and academia (Regis University and the United States Air Force Academy). This multi-agency collaboration was based on long-standing, cooperative relationships and mutually beneficial partnerships that leveraged the interdisciplinary nature of the teams and enabled a more coordinated and strategic response. This kind of comprehensive development of cyber defense teams within and across organizations for mutual benefit is powerful and can only be achieved through a true CTRC. A CTRC builds bridges within and between cyber defense teams in industry, government, and academia in order to provide training and incident response that remain strong and sustainable in a rapidly growing world of cybercrime. Quantitative pre-post assessment of the efficacy of the cyber defense challenge events was another significant basis for determining how to advance the effectiveness of the CTRC [16]. The pre-post assessment used both objective testing of skills and self-assessment to determine participant readiness. Covered skills included


tools such as the network scanning utility Nmap, the network traffic analyzer Wireshark, and a suite of forensic tools. While the sample sizes in this work were relatively small, limited by the size and participation of the teams being assessed, the results suggested increases in skill level, particularly across multiple events. Some of the increase can be attributed to additional training for certifications required by employers; as discovered in discussion, some individuals' intensive personal study and practice between events also contributed to their overall improvement. The results suggest that across a three-year longitudinal assessment, participants' self-perceived skills did not consistently advance across the technical range, yet their actual capabilities, as demonstrated in a longitudinal pre-test, doubled over the same period. The quantitative results in specific categories are available in previous work by three of the authors [16]. Based on discussions with participants, this variance between self-perception and capability may partly reflect a growing understanding of the related body of knowledge and a changing perception of where they sit within the discipline; participants may also not account for growth achieved through self-study. The previous work also presents the Personalized Education Learning Environment (PELE), which is designed to account for these variances while tracking participants across institutions.
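The pattern reported above, nearly flat self-perception alongside doubled demonstrated capability, can be expressed as a simple pre/post gain computation. The numbers below are hypothetical, chosen only to mirror that pattern, not taken from the study's data:

```python
def prepost_gain(pre, post):
    """Relative gain from pre- to post-event score."""
    return (post - pre) / pre

# Hypothetical participant: the objective test score doubles while the
# self-assessment barely moves, mirroring the variance described in the text.
objective = {"pre": 20, "post": 40}     # e.g. points on a skills test (Nmap, Wireshark, forensics)
self_rated = {"pre": 3.0, "post": 3.2}  # e.g. a 1-5 self-efficacy rating

print(prepost_gain(**objective))             # 1.0  (capability doubled)
print(round(prepost_gain(**self_rated), 2))  # 0.07 (self-perception nearly flat)
```

Comparing the two gains per participant is one way to surface the self-perception gap that the discussions with participants sought to explain.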

6 Societal

Once the first three layers were functioning and the CTRC had formed, new functions needed to be added as the CTRC responded to actual cyber attacks, engaged a range of interdisciplinary subject matter experts, and began to analyze the organizational risks inherent in the cyber defense preparation work itself. The authors began analyzing these new functions to give the CTRC a more fluid team engagement with external entities and to evaluate its role and risks as part of the cyber defense capacity building sector. The Societal Layer (L4) of the CyCap model (Table 1) was developed to address the inter-institutional interest in leveraging the first three layers (L1, L2, L3). The dynamic tension of this challenge is illustrated in Fig. 4 as the interaction between four different types of entities. The first live cyber defense response was in reaction to a highly disruptive attack on the Colorado Department of Transportation (CDOT) in early 2018. The Colorado National Guard responded as an integral member of the CTRC. A primary advantage was that the response team had trained with many responders from other agencies, so command structure, team formation, and strategic assignments could happen more rapidly than in typical incident response. Because many jurisdictional and logistic issues had been resolved in that training, the responders were even able to give a significant number of CDOT defenders an opportunity to rest and recover as they took over defensive operations, resulting in a successful network defense [29]. After the CDOT event, an interdisciplinary model was created to clarify how psychometric support for cyber defense teams could be incorporated into both cyber defense training and response to enhance team effectiveness. One exercise involved pulling the leaders out of each team to determine the team's adaptive capabilities.
Another included assembling response teams across institutions in order to analyze their rapid


team formation and develop strategies and coaching methods. The authors see this work as opening up a range of interdisciplinary opportunities and have therefore created models for forming fluid teams within the CTRC to allow interdisciplinary work to expand [30].

Fig. 4. An executable model (X-model) representing the partners in a multi agency collaborative for rapid response [14].

With trained students, professionals, and cyber defense teams actively engaged in cybersecurity and cyber defense, the authors began focusing on a new need: reducing risks for institutions that perform cybersecurity training and cyber defense. The first aspect of this work addressed the risks associated with the behavior of students and professionals engaged in training, and the potential culpability of the training organization if skills were misused [31]. The initial value of this model is the identification of key programmatic risk controls in teaching, training, or competition. The model is used across public-education cyber challenges, college education, professional development, and cyber defense training. The institutions analyzed as the initial cases were Adams 12 Five Star Schools (representing the public sector) and Regis University (representing higher education, research, professional development, and cyber defense training). The areas of control are: 1) curricular limits; 2) technical controls, such as firewalls; 3) ethical engagement with participants in the curriculum; and 4) institutional behavioral policies and agreements. The particular controls vary widely across the groups considered but hold their value across that range. For example, for minors in public education, teaching the attack skills used in proactive cyber defense should be assessed as potentially posing too much risk for the student, the teacher, and the institution. A second value of the work is a set of models tracking participant behavioral risk across different situations. The risk case analysis extended beyond the direct relationship with the institution and on-campus technical controls to include the potential behavior of participants after the training, at home, at work, and in other locations where the training and motivation might be traced back to the institution.
With a broad set of risk controls, an institution can mitigate off-site risks that may not be anticipated in traditional cybersecurity risk analysis [31].
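The four areas of control can be treated as a per-program checklist. The control-area names come from the text; the program record, field names, and the pairing shown are illustrative assumptions of this sketch:

```python
# The four areas of institutional risk control named in the text.
CONTROL_AREAS = [
    "curricular limits",
    "technical controls (e.g. firewalls)",
    "ethical engagement with participants in curriculum",
    "institutional behavioral policies and agreements",
]

def unmet_controls(program):
    """Return the control areas a training program has not yet addressed."""
    return [area for area in CONTROL_AREAS if area not in program["controls"]]

# Hypothetical K-12 program: curricular limits already exclude proactive
# attack skills for minors, as in the example from the text; the other
# three areas remain open items.
k12_program = {
    "audience": "minors in public education",
    "controls": {"curricular limits"},
}
print(unmet_controls(k12_program))  # the three areas beyond curricular limits
```

A broad checklist of this kind is one way to make the off-site risks mentioned above visible during program review.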


7 Conclusion

The layered model for building cyber defense training capacity is an empirical framework derived from a series of quantitative and qualitative studies of a decade-long effort. This analysis of long-term program development, represented in the layered model, offers an example and some ready methods for those engaged in similar work. The authors continue to use this layered model as a reference when developing cybersecurity programs and creating course-focused lab exercises. As a new cycle of capacity building starts, developing a more systematic and agile approach using this or a similar model could help define the overall challenge more clearly. Our approach is intended to further the incorporation of human-centric psychometric instruments and behavioral coaching tools designed to enhance the participant experience in the potentially disorienting world of immersive technical cyber training and defense. It also adds a significant layer of inter-organizational and interdisciplinary structure to facilitate collaboration. The CDOT incident represents a new level of societal capability in cyber defense: it was the first time in the United States that a National Guard unit came to the cyber defense of a state government, and it did so in conjunction with other participants in the CTRC. The CyCap model lays out how this new level of capability was achieved. The authors publish this work in the hope of establishing a common vocabulary for such efforts that will lead to new forms of collaboration. Detailed analyses of each part of this work are available in papers previously published by the authors [5]. Currently the authors are working to expand these research interests to larger scales of collaboration.
As cyber ranges, cyber defense training programs, and cyber defense collectives grow in popularity and scope across the United States and around the world, the models and details of progressive development presented here are designed to help reduce risk and systematize their work, thereby enhancing the stability of our modern digital society.

The views expressed in this article are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or U.S. Government.

References

1. Carlin, A., Manson, D., Zhu, J.: Developing the cyber defenders of tomorrow with regional collegiate cyber defense competitions (CCDC). Inf. Syst. Educ. J. 8(14), 10 (2010)
2. White, G., Williams, D.: Collegiate cyber defense competitions. ISSA J. (2005)
3. Hoffman, L., Rosenberg, T., Dodge, R., Ragsdale, D.: Exploring a national cybersecurity exercise for universities. In: Donner, M. (ed.) Security and Privacy, IEEE, vol. 3, no. 5, pp. 27–33 (2005)
4. Steven, S., Schweitzer, D., Dressler, J.: What are we teaching in cyber competitions? In: Frontiers in Education Conference Proceedings. IEEE (2012)
5. Moore, E.: Building cyber defense training capacity. Doctoral thesis, University of Plymouth (2020)
6. Paulk, M., Curtis, B., Chrissis, M., Weber, C.: Capability maturity model, version 1.1. IEEE Softw. 10(4), 18–27 (1993)


7. Leitner, M., et al.: AIT cyber range: flexible cyber security environment for exercises, training and research. In: Proceedings of the European Interdisciplinary Cybersecurity Conference, pp. 1–6 (2020)
8. Priyadarshini, I.: Features and architecture of the modern cyber range: a qualitative analysis and survey. Masters thesis, University of Delaware (2018)
9. Saharinen, K., Karjalainen, M., Kokkonen, T.: A design model for a degree programme in cyber security. In: Proceedings of the 11th International Conference on Education Technology and Computers, pp. 3–7 (2019)
10. Sharkov, G.: From cybersecurity to collaborative resiliency. In: Proceedings of the ACM Workshop on Automated Decision Making for Active Cyber Defense, pp. 3–9 (2016)
11. Newhouse, W., Keith, S., Scribner, B., Witte, G.: National initiative for cybersecurity education (NICE) cybersecurity workforce framework. NIST Spec. Publ. 800-181 (2017)
12. Smith, C.: The case study: a useful research method for information management. J. Inf. Technol. 5(3), 123–133 (1990)
13. Novak, H., Likarish, D., Moore, E.: Developing cyber competition infrastructure using the SCRUM framework. In: Dodge, R.C., Futcher, L. (eds.) WISE 2009/2011/2013. IAICT, vol. 406, pp. 20–31. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39377-8_3
14. Moore, E., Likarish, D.: A cyber security multi agency collaboration for rapid response that uses agile methods on an education infrastructure. In: Bishop, M., Miloslavskaya, N., Theocharidou, M. (eds.) WISE 2015. IAICT, vol. 453, pp. 41–50. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-18500-2_4
15. Gall, M., Gall, J., Borg, W.: Educational Research: An Introduction, 8th edn. Pearson, New York (2006)
16. Moore, E., Fulton, S., Likarish, D.: Evaluating a multi agency cyber security training program using pre-post event assessment and longitudinal analysis. In: Bishop, M., Futcher, L., Miloslavskaya, N., Theocharidou, M. (eds.) IFIP World Conference on Information Security Education, vol. 503, pp. 147–156. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-58553-6_13
17. Moore, E., Fulton, S., Likarish, D.: Evaluating a multi agency cyber security training program using pre-post event assessment and longitudinal analysis. In: Bishop, M., Futcher, L., Miloslavskaya, N., Theocharidou, M. (eds.) WISE 2017. IAICT, vol. 503, pp. 147–156. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-58553-6_13
18. Creswell, J.W., Creswell, J.D.: Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Sage Publications, Los Angeles (2017)
19. Moore, E., Fulton, S., Mancuso, R., Amador, T., Likarish, D.: Collaborative training and response communities - an alternative to traditional cyber defense escalation. In: 2019 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (Cyber SA), pp. 1–8. IEEE (2019)
20. Creswell, J., Clark, V.: Designing and Conducting Mixed Methods Research. Sage Publications, Los Angeles (2017)
21. Thorne, S.: Secondary analysis in qualitative research: issues and implications. Crit. Iss. Qual. Res. Methods 1, 263–279 (1994)
22. Moore, E.: Managing the loss of control over cyber identity. In: 2016 Third International Conference on Digital Information Processing, Data Mining, and Wireless Communications (DIPDMWC). IEEE (2016)
23. Moore, E.: A vulnerability model for a bit-induced reality. In: ICIW 2013 - Proceedings of the 8th International Conference on Information Warfare and Security (2013)
24. Kvedar, D., Nettis, M., Fulton, S.P.: The use of formal social engineering techniques to identify weaknesses during a computer vulnerability competition. J. Comput. Sci. Coll. 26(2), 80–87 (2010)


25. Briggs Myers, I., Hammer, A., McCauley, M., Quenk, N.: MBTI Manual: A Guide to the Development and Use of the Myers-Briggs Type Indicator. CPP Incorporated (2003)
26. Parker, G.: Team Players and Teamwork: The New Competitive Business Strategy. Jossey-Bass, San Francisco (1990)
27. Wildland Fire Leadership Development Program: Toolbox: Crew Cohesion Assessment Tool, 16 April 2018. www.fireleadership.gov. Accessed 20 Oct 2018
28. Zarya, V.: These Mormon women are some of the best cyber security hackers in the U.S. Fortune.com, 27 April 2016
29. Moore, E.L., Fulton, S.P., Mancuso, R.A., Amador, T.K., Likarish, D.M.: A short-cycle framework approach to integrating psychometric feedback and data analytics to rapid cyber defense. In: Drevin, L., Theocharidou, M. (eds.) WISE 2019. IAICT, vol. 557, pp. 45–58. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-23451-5_4
30. Amador, T., Mancuso, R., Moore, E., Fulton, S., Likarish, D.: Enhancing cyber defense preparation through interdisciplinary collaboration, training, and incident response. J. Colloq. Inf. Syst. Secur. Educ. 8(1), 6 (2020)
31. Moore, E., Likarish, D., Bastian, B., Brooks, M.: An institutional risk reduction model for teaching cybersecurity. In: Drevin, L., Von Solms, S., Theocharidou, M. (eds.) WISE 2020. IAICT, vol. 579, pp. 18–31. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59291-2_2

Measuring Self-efficacy in Secure Programming

Matt Bishop1, Ida Ngambeki2(B), Shiven Mian1, Jun Dai3, and Phillip Nico4

1 University of California, Davis, USA
{mabishop,smian}@ucdavis.edu
2 Purdue University, West Lafayette, USA
[email protected]
3 California State University, Sacramento, USA
[email protected]
4 California Polytechnic State University, San Luis Obispo, USA
[email protected]

Abstract. Computing students are not receiving enough education and practice in secure programming. A key part of being able to successfully implement secure programming practices is the development of secure programming self-efficacy. This paper examines the development of a scale to measure secure programming self-efficacy among students participating in a secure programming clinic (SPC). The results show that the secure programming self-efficacy scale is a reliable and useful measure that correlates satisfactorily with related measures of programming expertise. This measure can be used in secure programming courses and other learning environments to assess students' secure programming efficacy.

Keywords: Self-efficacy · Secure programming · Secure programming clinic

1 Introduction

Bugs in computer programs are as old as programming. Indeed, the term "bug" is said to have come from Grace Murray Hopper in 1946, when she investigated an error in the Mark II computer at the Harvard Computation Laboratory: a moth trapped in a relay was causing the problem, and it became the first computer bug. As computing became more widespread, concern about the security of information on computing systems grew. The meaning of the term "security" was defined by a (formal or informal) statement, the security policy, which varied by organization. Common to all definitions was the danger of a process escalating its privileges so it could perform functions it was not supposed to, and a common way to do this was to exploit bugs in programs. Programs were, and are, usually written non-robustly. The Common Vulnerabilities and Exposures (CVE) database, which enumerates software vulnerabilities, has over 152,000 entries [1]. Programming errors have been found in electronic voting systems [2] and automobiles [3]. The lack of robustness inspired Weinberg's Second Law, which says

© IFIP International Federation for Information Processing 2021
Published by Springer Nature Switzerland AG 2021
L. Drevin et al. (Eds.): WISE 2021, IFIP AICT 615, pp. 81–92, 2021. https://doi.org/10.1007/978-3-030-80865-5_6


M. Bishop et al.

that if builders built buildings like programmers wrote programs, the first woodpecker to come along would destroy civilization [4].

Programs implicitly trust the environment they execute in, and trust that the input they receive is well-formed. Robust programs do not: they examine the parts of the environment that affect how the program works to ensure they are as required, and they examine input to ensure it is well-formed. The code is written to handle failures by reporting error conditions and either recovering or terminating gracefully. This type of programming is called "robust programming". The term "secure programming" is often used as a synonym for robust programming. Technically, there is a difference: a "secure program" is a robust program that satisfies a given security policy. However, many security problems arise from non-robust programming, so efforts to improve programming focus on improving robustness, on the reasonable assumption that this will eliminate many security vulnerabilities.

Most first programming classes teach robust programming. Students are taught to validate input, check that references to arrays and lists are within bounds, and so forth: all elements of good programming style. As they progress, the focus of classes shifts to the content of the class, such as data structures or networking, and programs are no longer graded on their robustness. Because of this, students' knowledge of and ability to program robustly are not exercised and so grow rusty from lack of practice. Worse, as many introductory courses move to higher-level, friendlier programming languages such as Python, students may never have encountered whole classes of non-robust practices during this formative period and will be unprepared when they encounter less friendly languages like C. This is similar to writing.
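To make the contrast concrete, here is an illustrative sketch (not from the paper) of non-robust versus robust code in Python. The function names and scenario are invented for illustration; the robust version validates its input and fails with clear, recoverable errors instead of trusting the caller.

```python
def average_nonrobust(values):
    # Trusts its input completely: crashes with an unhelpful error
    # on an empty list, and misbehaves on non-numeric items.
    return sum(values) / len(values)

def average_robust(values):
    # Examines its input first and reports precise, recoverable errors.
    if not isinstance(values, (list, tuple)):
        raise TypeError("values must be a list or tuple")
    if len(values) == 0:
        raise ValueError("cannot average an empty sequence")
    if not all(isinstance(v, (int, float)) for v in values):
        raise ValueError("all items must be numbers")
    return sum(values) / len(values)
```

A caller of `average_robust` can catch `ValueError` and recover gracefully, which is exactly the failure-handling behavior described above.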
In many disciplines, essays and written answers to homework and test questions are sloppy: the questions are answered, but jumbled or unclear writing makes the answers difficult to understand. English departments and law schools are well aware of this problem and have responded by developing "writing clinics". These clinics do not judge whether the writing answers the questions in the homework. Instead, they look at the structure and grammar of the essay and suggest how to improve the writing.

This suggests that a "secure programming clinic" (SPC) performing analogous reviews of programs might help improve the quality of programs [5, 6]. The SPC would review programs for robustness; it would not determine whether the program met the requirements of the assignment. This way, students have numerous opportunities to develop good programming style and the practice of robustness throughout their educational career. A key benefit of the SPC is to serve as a place where students can find information about secure and robust programming, as well as tools to help them check their code for vulnerabilities and other coding problems. They can make appointments with a clinician, typically a faculty member or an experienced graduate student, to review programs, and can use the materials at the SPC to learn more or to reinforce what they have learned.

The SPC can take many forms. As described above, one form is that of the traditional "writing clinic". Another is to have the clinic review programs after they are turned in and report problems in robustness to the graders; the final grade would take this into account. A variant is to allow students to fix the problems in robustness and resubmit the program to have the robustness part regraded. Other variants are possible. By appropriately requiring use of the SPC, classes and instructors can continue to

Measuring Self-efficacy in Secure Programming


emphasize the importance of robust, secure programming in a way that does not impact instruction time. This will continue throughout, and with hope after, the student's work at the academic institution.

One of the key goals of the SPC is to increase students' self-efficacy, i.e., their confidence in their knowledge of secure programming and their ability to complete secure programming tasks. Prior work has shown SPCs to be effective in developing students' expertise in secure programming concepts [7]. It was also found that the SPCs' overall efficacy was highly dependent on adjusting the clinic structure to the contextual peculiarities of the sites where they were deployed [8]. These findings suggest that students' prior experience is related to the effectiveness of their clinic experience. We posit that this is due to a connection between students' experiences, the development of expertise, and students' self-efficacy. Demonstrating any such connection requires quantitative measures of each; however, no reliable measure of secure programming self-efficacy currently exists. This paper reports on efforts to develop one.

In Sect. 2, we first discuss self-efficacy and its role in the development of expertise. Section 3 then reports on efforts to develop and validate a secure programming self-efficacy scale. Section 4 explores the validity of the scale by examining its relationship to secure programming knowledge and general programming experience, and Sect. 5 summarizes our contributions and avenues for future work.

2 Background and Related Work

In general, self-efficacy is an individual's confidence in themselves: confidence that the capabilities they possess suffice to accomplish a specific task or thrive in a certain situation. Formally, self-efficacy is defined as an individual's belief about his or her ability to complete a specific task [9]. The theory was first postulated in 1977 by Albert Bandura, a well-known social-cognitive psychologist; it was incorporated into his original Social Learning Theory (SLT), which he revised into Social Cognitive Theory in 1986 [10]. According to Bandura, a person's efficacy beliefs are largely based on four sources:

• Enactive Mastery Experiences, i.e. actual performance of a task, or familiarity with a situation.
• Vicarious Experience, i.e. observation of others performing a task and succeeding.
• Verbal Persuasions, i.e. encouragement or discouragement from peers, verbal or otherwise, which helps individuals overcome self-doubt.
• Physiological and Affective States, i.e. the physical and emotional states from which individuals partly judge their own capabilities.

Among these four, Enactive Mastery Experience is the most significant source of self-efficacy [11]. This source is strongly related to work on the development of expertise as students develop from novices to advanced beginners, to competent, to proficient, to expert. Students progress through these stages as a result of the accumulation of knowledge, time spent immersed in the subject, and repeated practice and application of knowledge in different contexts.


How Does Self-efficacy Impact Learning and Expertise?
While a student's domain-specific knowledge and intellectual abilities play a great role in academic success, self-efficacy is another significant characteristic that should not be overlooked. Several studies have demonstrated that students with higher self-efficacy are more successful academically, as they are self-regulated and believe in their own abilities. According to Bandura's theory, an individual's self-efficacy influences 1) the amount of effort expended, 2) the type of coping strategies adopted, 3) the cognitive strategies used while solving problems, 4) persistence in the face of failure, and 5) performance outcomes [12]. In learning settings, especially in programming, these attributes play a vital role: since programming is a highly cognitive activity, students frequently encounter difficulties, failures, and complicated problem-solving situations. In a study exploring factors affecting pre-service computer science teachers' 'attitude towards computer programming' (ATCP), one of the factors examined was computer programming self-efficacy, measured with a computer programming self-efficacy scale. Gurer et al. [13] found that "there was a positive and significant correlation (r = 0.738, p < 0.01) between students' computer programming self-efficacy and their ATCP. Moreover, it was found that computer programming self-efficacy was a significant variable in predicting ATCP" [13].

How Is Self-efficacy Measured?
Multiple self-efficacy scales exist in the literature that either measure generalized self-efficacy or measure efficacy beliefs specific to domains like reading/writing, mathematics, and using computer software [14]. The Self-Efficacy Scale by Sherer and Adams [15] was developed to assess expectancies of self-efficacy.
This Self-Efficacy Scale has two subscales, both with adequate reliability: the General Self-Efficacy subscale (Cronbach α = 0.86) and the Social Self-Efficacy subscale (Cronbach α = 0.71). The "general self-efficacy subscale predicted past success in vocational, educational, and military areas. The social self-efficacy subscale predicted past vocational success" [15]. There are also self-efficacy scales adapted specifically to programming. For example, Ramalingam and Wiedenbeck [14] developed and established the Computer Programming Self-Efficacy Scale, based on the three dimensions (magnitude, strength, and generality) of self-efficacy in Bandura's theory. The scale involves answering thirty-three items in ten minutes; the items ask students to judge their competence in various programming tasks in object-oriented C++, making the scale domain specific. The items were reviewed by experts in self-efficacy theory and C++, and the reliability of the resulting scores was 0.97 [14]. However, in order to keep the scale short and make it applicable specifically to secure programming, we adapted the general self-efficacy scale by Chen et al. [16] to measure secure programming self-efficacy (see Sect. 3).

What Are the Implications for Secure Programming?
As Bandura stresses, mastery experiences are the most effective source of self-efficacy [11]. This applies to the development of secure programming self-efficacy: more practice increases students' confidence in their ability to write secure programs or learn robust coding practices. Both students' self-efficacy and performance are shaped by their prior experience and expertise before they come to the SPC, and students come to the SPC from a wide variety


of backgrounds ranging from introductory programming students to seniors studying operating systems or computer security. Students develop expertise and efficacy in a variety of ways, which include classroom activities and programming projects. In addition to these, though, many students participate in extracurricular activities that could boost the development of expertise and efficacy. Such activities include programming competitions, participation in programming message boards, technical internships, and online gaming. We developed a way to measure secure programming self-efficacy as described below in Sect. 3.

3 Study 1: Developing and Validating the Secure Programming Self-Efficacy Scale

In order to measure secure programming self-efficacy, we developed a secure programming self-efficacy scale. We based the scale on the General Self-Efficacy (GSE) Scale developed by Chen et al. [16], selected because it is widely cited, strongly validated, and flexible. Since general programming knowledge is an important element of secure programming, we included a programming subscale. We constructed questions about programming self-efficacy and secure programming self-efficacy in the same form as those on the GSE Scale; each scale consisted of eight items. A team of five faculty who teach secure programming and cybersecurity education then examined the scale items to ensure that they were clearly worded, consistent with the GSE definitions, consistent with self-efficacy theory, and free of redundancy. The experts also assured the logical validity of the scales, ensuring that they measured elements of programming self-efficacy and secure programming self-efficacy.

Sample and Procedure
Participants in the scale validation were 101 undergraduates (21% female) enrolled in computer science and computer engineering majors at a large lower-Pacific university. The participants were 2nd to 5th year students enrolled in a secure programming course (2nd yr. = 14%, 3rd yr. = 21%, 4th yr. = 49%, 5th yr. = 15%; the remaining 1% is due to rounding). Participants completed a survey containing the self-efficacy scales towards the end of the semester. The survey asked students to rate the extent to which they agreed or disagreed with each statement on a 1–5 scale. For example, a "5" on the first statement means the student is "very" confident in their ability to program; a "1" means they are "not" confident in their ability to program.
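As an illustration of how Likert-style subscale scores of this kind are computed, the sketch below uses hypothetical items and responses (not the validated instrument): a subscale score is simply the mean of a respondent's 1–5 ratings on that subscale's items.

```python
import random

# Hypothetical item pool: (item text, subscale) pairs, invented for
# illustration; these are not the items of the validated scale.
ITEMS = [
    ("I am confident in my ability to program", "programming"),
    ("I enjoy programming", "programming"),
    ("I am familiar with secure programming", "secure"),
    ("I can recognize security flaws in programs", "secure"),
]

def subscale_scores(responses):
    # responses: {item text: rating on a 1-5 Likert scale}
    # Returns the mean rating per subscale.
    sums, counts = {}, {}
    for text, subscale in ITEMS:
        sums[subscale] = sums.get(subscale, 0) + responses[text]
        counts[subscale] = counts.get(subscale, 0) + 1
    return {s: sums[s] / counts[s] for s in sums}

def presentation_order(seed=None):
    # Randomize item order per respondent (as done to control order effects).
    order = [text for text, _ in ITEMS]
    random.Random(seed).shuffle(order)
    return order

answers = {"I am confident in my ability to program": 4,
           "I enjoy programming": 5,
           "I am familiar with secure programming": 2,
           "I can recognize security flaws in programs": 3}
print(subscale_scores(answers))  # → {'programming': 4.5, 'secure': 2.5}
```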
To control for order effects, the order in which the items appeared in the survey was randomized.

Results and Discussion
The results of the survey indicated that the proposed secure programming scale performed well. An analysis of the internal consistency of the scale yielded a Cronbach's alpha of 0.86. A principal components analysis yielded two factors for the 16 items (Table 1). The two factors loaded exactly along the programming and secure programming items on the scale, and each factor or subscale had high internal consistency (α = 0.91 and α = 0.78, respectively).
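For reference, the internal-consistency statistic reported above (Cronbach's alpha) can be computed from raw item scores as follows. The data here are made up for illustration and are not the study's responses.

```python
def variance(xs):
    # Sample variance (n - 1 denominator).
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    # items: one list of scores per scale item, each holding
    # one score per respondent (all lists the same length).
    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # total score per respondent
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical responses: 3 items, 5 respondents, on a 1-5 Likert scale.
items = [
    [4, 3, 5, 4, 2],
    [4, 2, 5, 3, 2],
    [5, 3, 4, 4, 1],
]
print(round(cronbach_alpha(items), 2))  # → 0.92
```

Values near 1 indicate that the items move together, i.e. that they plausibly measure a single underlying construct.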


The results of the survey demonstrated that students generally scored higher on the programming self-efficacy scale (M = 3.95) than they did on secure programming self-efficacy (M = 3.08). This suggests that students are more confident in their ability to program than in their ability to program securely. This could be explained by the fact that most of these students have extensive programming expertise, having engaged in programming since at least their first year of college, if not earlier. However, for many if not most of them, this was their first course in secure programming. Most computer science and computer engineering programs place a greater emphasis on functionality than on security. Security is therefore approached as a separate topic in a different course taken later in the program rather than integrated into all programming courses.

Table 1. Secure programming self-efficacy scale and reliability

Item                                                                          M     SD
(a) In general I am confident in my ability to program                        3.70  0.87
(a) Compared to other people, I can program fairly well                       3.44  0.79
(a) I believe I am good at programming                                        3.61  0.93
(a) I am confident in my ability to solve programming problems                3.72  0.86
(a) I enjoy programming                                                       4.14  0.90
(a) I like to understand how programs work                                    4.33  0.88
(a) I enjoy my computer science classes                                       4.30  0.73
(a) I am interested in designing new programs                                 4.32  0.85
(b) I am familiar with secure programming                                     2.62  0.85
(b) I think secure programming is important                                   4.23  0.95
(b) I think it is important that my programs are secure/robust                4.15  0.98
(b) I will be able to successfully complete assignments in this class         3.70  0.92
(b) I check my programs specifically for security flaws                       2.54  0.93
(b) I am able to identify security issues in my programs                      2.54  0.99
(b) I am confident that I can produce programs without major security flaws   2.48  0.97
(b) I am confident that I can recognize security flaws in others' programs    2.40  0.91

(a) Programming self-efficacy; (b) secure programming self-efficacy.
Overall α = 0.86; programming self-efficacy α = 0.91; secure programming self-efficacy α = 0.78; N = 101.


4 Study 2: Examining the Predictive Validity of the Secure Programming Self-Efficacy Scale

In Study 2, we examined the predictive validity of the secure programming self-efficacy scale. We conducted this study to examine the relationship of secure programming self-efficacy and programming self-efficacy to related variables, which allows us to make inferences about discriminant, convergent, and predictive validity.

Sample and Procedure
Participants in Study 2 were 65 students (13.8% female) at a large lower-Pacific university, a different university from that in Study 1. Participants were 4th and 5th year students (4th yr. = 69%, 5th yr. = 31%). All were computer science majors enrolled in a secure programming course. Participants completed an electronic survey towards the end of the course. The survey contained demographic questions, questions about student performance, and questions about students' other activities related to programming and expertise. To relate students' understanding of secure coding to their expertise and confidence levels, students answered questions about their prior experiences and expertise. Students' performance was measured using a series of 45 conceptual questions about secure programming. The pool of questions was developed and validated by testing it at four US universities in a previous study [7]. The questions were generated from a concept map that we built to depict the important sets of secure programming objects [17]. The objects were classified into ten categories: Inputs, Assumptions, Bad Code, Programming Development Environment, Software Assurance Tools, Algorithms, Input Validation, Memory Management, Code Design, Authoritative Cryptography. Development of the concept map based on input from subject matter experts is detailed in a related paper [18]. The questions diagnosed students' conceptual understanding of secure programming using a multiple-choice format.
The questions contained carefully crafted distractors and a single correct option, so that students could reach the correct answer only by truly understanding the concept, not by eliminating obviously wrong options.

Results and Discussion
Students were measured on both general programming self-efficacy and secure programming self-efficacy. The expectation was that there would be a strong correlation between the two. The study found a significant correlation between secure programming self-efficacy and programming self-efficacy, r(65) = 0.86, p < 0.01. Since secure programming is a subsidiary skill of programming, students would have to first develop confidence in their ability to program before they could be confident in their ability to program securely. Students expressed significantly less secure programming self-efficacy than general programming self-efficacy, i.e. students felt more capable about their ability to program than about their knowledge of secure programming (Fig. 1). A paired samples t-test showed that students' programming self-efficacy (M = 3.57, SD = 0.51) is significantly higher than their secure programming self-efficacy (M = 2.97, SD = 0.66); t(65) = 0.86, p < 0.001.
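The two analyses used here, a Pearson correlation between the efficacy measures and a paired-samples t statistic for their difference, can be sketched in plain Python. The score lists below are hypothetical, not the study's data.

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation between paired samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def paired_t(xs, ys):
    # t statistic for the mean of the paired differences.
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    md = sum(diffs) / n
    sd = math.sqrt(sum((d - md) ** 2 for d in diffs) / (n - 1))
    return md / (sd / math.sqrt(n))

prog = [3.6, 3.8, 3.2, 4.0, 3.4]  # hypothetical programming SE scores
sec = [3.0, 3.5, 2.6, 3.3, 2.8]   # hypothetical secure programming SE scores
print(round(pearson_r(prog, sec), 2), round(paired_t(prog, sec), 2))  # → 0.91 8.26
```

With a real dataset one would also compute the p-value for the t statistic (e.g. from the t distribution with n - 1 degrees of freedom), which is omitted here for brevity.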


[Figure: bar chart comparing the two subscales; programming self-efficacy M = 3.57, SD = 0.51; secure programming self-efficacy M = 2.97, SD = 0.66.]

Fig. 1. Programming vs. secure programming self-efficacy

We also examined the effect of gender on secure programming self-efficacy and found that gender did not have a significant effect. Male and female students scored similarly on both programming (Male: M = 3.54, SD = 0.50; Female: M = 3.69, SD = 0.56) and secure programming (Male: M = 2.95, SD = 0.67; Female: M = 3.11, SD = 0.68) self-efficacy, and an independent samples t-test found no significant differences by gender. However, this may be due to the number of women in the sample: only nine females, representing 13.8% of the sample.

Students' knowledge of secure programming concepts was measured by the survey. It was expected that secure programming self-efficacy would be strongly related to students' knowledge of secure programming, which was measured using the series of forty-five multiple-choice secure programming questions described above. Students generally scored poorly on the secure programming conceptual questions (M = 46%). This could be explained by the fact that for many students this was their first course in secure programming. However, both programming self-efficacy, r(65) = 0.34, p < 0.01, and secure programming self-efficacy, r(65) = 0.35, p < 0.01, were strongly correlated with performance (Table 2). Students who expressed high programming efficacy scored highly on the secure programming test. These results agree with the theory that as students increase in knowledge, their self-efficacy in secure programming will also increase; increased knowledge, practice, and exposure result in increased confidence. Students score higher in programming self-efficacy because they have more experience with programming than with secure programming. However, programming self-efficacy is strongly related to secure programming self-efficacy because the


Table 2. Correlation between secure programming knowledge, programming self-efficacy, and secure programming self-efficacy

                              Knowledge   Programming efficacy   Secure programming efficacy
Knowledge                     1           0.349**                0.351**
Programming efficacy                      1                      0.858**
Secure programming efficacy                                      1

** Correlation is significant at the 0.01 level

latter is predicated on the former, i.e. students cannot develop knowledge in secure programming without prior knowledge of programming.

Students were also asked to report their level of programming expertise, ranking themselves as beginner, intermediate, competent, or expert. 31% of students reported intermediate expertise, 67% reported competent expertise, and 2% reported expert-level expertise; none reported being beginners. When self-efficacy was broken down by reported expertise, we found that both efficacy measures increased with expertise, though secure programming efficacy started lower and increased more steeply (Fig. 2). The pattern observed between performance and expertise is repeated between efficacy and expertise. This suggests that self-efficacy may in fact be a mediating variable between expertise and performance.

[Figure: bar chart of programming efficacy and secure programming efficacy by reported expertise level (intermediate, competent, expert).]

Fig. 2. Variation in self-efficacy by reported level of expertise


Table 3. Correlation among extracurricular activities, programming self-efficacy, secure programming self-efficacy, and secure programming knowledge

Activity                     Programming self-efficacy   Secure programming efficacy   Knowledge (%)
Coding competitions          3.61*                       3.02*                         30.00
Programming message boards   3.55                        3.07*                         32.14
Programming internship       3.64*                       3.03*                         34.03*
Online gaming                3.50                        2.92                          33.63*

* Correlation is significant at the 0.01 level

Students develop expertise in various ways, bringing prior knowledge from non-classroom activities. The survey asked students to report their engagement with a selection of programming-related extracurricular pursuits: 75% of students reported engaging in online gaming, 72% reported having had a computing-related internship, 42% reported asking questions and interacting with others on programming message boards, and 28% reported engaging in hackathons. These activities were selected because they were reported anecdotally as the most common computing-related activities among students. Coding competitions increased students' confidence in their programming ability but did not improve their performance on secure programming questions. Online gaming increased students' performance on secure programming questions but did not have an impact on self-efficacy. Participating in a programming internship had the greatest impact, increasing students' self-efficacy across the board as well as their performance on secure programming questions (Table 3).

As seen in Table 3, participating in programming competitions increased students' confidence in their programming abilities, both general and secure, but did not correlate with increased performance on the knowledge questions; in fact, those students performed notably worse. Participating with other students and practitioners on programming message boards increased secure programming efficacy, though not general programming efficacy, and did not have a noticeable effect on performance on the secure programming questions. The most significant effect of prior experience was for those who had had programming internships: unsurprisingly, such an internship increased both programming and secure programming self-efficacy as well as performance on the secure programming questions. Perhaps the most surprising outcome was for those who participated in online games.
The gamers reported lower self-efficacy levels for both secure programming and programming in general but performed notably better. The results suggest that the SPC should not focus exclusively on secure programming skills. Secure programming is a subskill of programming; interventions must therefore support students' programming skill and confidence as well. Various extracurricular activities also support secure programming self-efficacy, suggesting that the SPC can use these avenues to increase student skills. Incorporating elements like gamification and message boards into the clinic structure would provide alternate ways for students to improve.


5 Conclusion

This paper described the development of a secure programming self-efficacy scale with a programming self-efficacy subscale, and reported the validation process for the scale. The scale was found to have high internal consistency and to load on two factors corresponding to the secure programming self-efficacy and programming self-efficacy subscales. The paper also examined efforts to assure the convergent and predictive validity of the secure programming self-efficacy scale by comparing it to related measures. We found that secure programming self-efficacy is, as expected, positively correlated with programming self-efficacy, knowledge of secure programming, expressed expertise in programming, and experience with programming-related extracurricular activities. The results show that self-efficacy develops from practice and exposure to secure programming, which is consistent with the theory of the development of self-efficacy.

5.1 Limitations and Future Work

Ideally, the two studies would have followed the participants through their education at the institutions, but this was not possible. The Institutional Review Board approving the studies' protocols required that all participants be anonymous, so each participating student was assigned a random number; in every assignment and interaction, the student's name was replaced with the number. At the end of the term, the file associating student names with random numbers was deleted, so the identity of each participant could not be recovered. The two studies reported in this paper collected data from computer science and computer engineering students at two public universities in the lower-Pacific United States. The educational systems in other states and countries differ, as does education at non-public institutions. Further, computer science students are generally male, reflecting the bias of the field of computer science [19, 20].
Thus, this study should be rerun in other places with a more balanced population to better understand how differences in environment affect self-efficacy. The populations at these institutions were relatively small for a validation of this type, and women and other under-represented groups were significantly under-represented in our sample, consistent with their representation in this population. Further studies would test the scale against other proposed programming self-efficacy scales for stronger validation, expand the population across the US and to other countries, and oversample women and under-represented populations.

References

1. CVE Database. https://cve.mitre.org. Accessed 15 Apr 2021
2. Zetter, K.: Serious Error in Diebold Voting Software Caused Lost Ballots in California County—Update. Wired (2008)
3. Checkoway, S., et al.: Comprehensive experimental analyses of automotive attack surfaces. In: Proceedings of the 20th USENIX Security Symposium. USENIX Association, Berkeley, CA, USA (2011)
4. Weinberg, G.: The Psychology of Computer Programming. Van Nostrand Reinhold, New York (1971)
5. Bishop, M., Orvis, B.: A clinic to teach good programming practices. In: Proceedings of the 10th Colloquium on Information Systems Security Education, pp. 168–174 (2006)
6. Bishop, M.: A clinic for 'secure' programming. IEEE Secur. Priv. 8(2), 54–56 (2010)
7. Dark, M., Stuart, L., Ngambeki, I., Bishop, M.: Effect of the secure programming clinic on learners' secure programming practices. J. Colloq. Inf. Syst. Secur. Educ. 4(1) (2016)
8. Bishop, M., et al.: Learning principles and the secure programming clinic. In: Drevin, L., Theocharidou, M. (eds.) WISE 2019. IFIP AICT, vol. 557, pp. 16–29. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-23451-5_2
9. Bandura, A.: Self-efficacy: toward a unifying theory of behavioral change. Psychol. Rev. 84(2), 191–215 (1977)
10. LaMorte, W.: The Social Cognitive Theory (2019)
11. Bandura, A.: Self-Efficacy: The Exercise of Control. Worth Publishers, New York (1997)
12. Ramalingam, V., Labelle, D., Wiedenbeck, S.: Self-efficacy and mental models in learning to program. ACM SIGCSE Bull. 36(3), 171–175 (2004)
13. Gurer, M., Cetin, I., Top, E.: Factors affecting students' attitudes toward computer programming. Inform. Educ. 18(2), 281–296 (2019)
14. Ramalingam, V., Wiedenbeck, S.: Development and validation of scores on a computer programming self-efficacy scale and group analyses of novice programmer self-efficacy. J. Educ. Comput. Res. 19(4), 367–381 (1998)
15. Sherer, M., Adams, C.: Construct validation of the self-efficacy scale. Psychol. Rep. 53(3), 899–902 (1983)
16. Chen, G., Gully, S., Eden, D.: Validation of a new general self-efficacy scale. Organ. Res. Methods 4(1), 62–83 (2001)
17. Bishop, M., Dai, J., Dark, M., Ngambeki, I., Nico, P., Zhu, M.: Evaluating secure programming knowledge. In: Bishop, M., Futcher, L., Miloslavskaya, N., Theocharidou, M. (eds.) Information Security Education for a Global Digital Society, WISE 2017. IFIP AICT, vol. 503. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-58553-6_5
18. Dark, M., Ngambeki, I., Bishop, M., Belcher, S.: Teach the hands, train the mind... a secure programming clinic! In: Proceedings of the 19th Colloquium for Information Systems Security Education, pp. 119–133 (2015)
19. Frieze, C., Quesenberry, J.: How computer science at CMU is attracting and retaining women. Commun. ACM 62(2), 23–26 (2019)
20. Ganley, C., George, C., Cimpian, J., Makowski, M.: Gender equity in college majors: looking beyond the STEM/non-STEM dichotomy for answers regarding female participation. Am. Educ. Res. J. 55(3), 453–487 (2018)

End-User Security

Children's Awareness of Digital Wellness: A Serious Games Approach

J. Allers, G. R. Drevin(B), D. P. Snyman, H. A. Kruger, and L. Drevin
School of Computer Science and Information Systems, North-West University, Potchefstroom, South Africa
{gunther.drevin,dirk.snyman}@nwu.ac.za

Abstract. Children today are more exposed to cyberspace and cyber threats than any previous generation. Due to the ever-evolving nature of digital technologies, devices like cell phones and tablets are increasingly accessible to both young and old. Although technological advancement creates many opportunities for its users, it also exposes them to many different threats. Young users are especially vulnerable, as they are rarely educated about these threats and how to protect themselves against them. One possible solution to this problem is to employ serious games as an educational tool to introduce concepts relating to cybersecurity and overall digital wellness on a level that is appropriate to a younger audience. This paper therefore presents the development of a mobile serious game to promote the digital wellness and foster the cybersecurity awareness of pre-school children by incorporating existing literature in a new format. To assess its appropriateness as an educational tool, the resulting serious game was subjected to expert review with a focus on its value in conveying security and wellness concepts at the proper level, thereby promoting children's safety in the digital world.

Keywords: Digital wellness · Serious games · Cybersecurity awareness · Cyber safety for children · Digital wellness for pre-school children

1 Introduction

Technological advancement holds many advantages and presents many new opportunities to the younger generations, but with these advantages come many dangers. One issue that arises from this exposure to cyberspace is that young people are much more likely to be exposed to negative online experiences and cyber threats than other groups. These individuals are more likely to have negative experiences because they are exposed to cyberspace without knowledge of how to maintain good digital wellness [1]. Digital wellness can be defined as being healthy in a digital society. This involves being able to distinguish between dangers and opportunities in the digital realm, acting responsibly in online situations and aligning online behaviour with offline values, thereby ensuring digital safety and security [2]. It does not only mean avoiding threats to data, assets and security; it also means maintaining physical and mental well-being.

Individuals especially at risk from the dangers of cyberspace are pre-school children. They are exposed to cyberspace and all of its dangers from a very young age and usually do not (yet) possess the skills and knowledge to protect themselves from these dangers [2]. Although there is a lot of educational material and many awareness strategies that focus on cybersecurity awareness and education, these methods rarely address educating pre-school children about how to protect themselves against cyber threats. Existing resources are not suitable in this context, because there are different and specific requirements for facilitating teaching and learning at this level. One of these requirements is to present information in different modalities to provide for different methods of learning; e.g., most pre-school children are not able to read or write, and content should be tailored accordingly. Another requirement is that the content should be presented in an easily understandable and fun way, to ensure that they will be interested and motivated to participate. Although the increasing number of pre-school children using mobile devices to access cyberspace creates security risks, it also creates an opportunity to educate them about cyber threats and the dangers of cyberspace. Because children learn through playing games [3], using a serious game on a mobile platform could be considered a viable method of promoting the different concepts of digital wellness among pre-school children. A serious game is an application that presents serious aspects with utilitarian functions as a video game [4,5].

© IFIP International Federation for Information Processing 2021. Published by Springer Nature Switzerland AG 2021. L. Drevin et al. (Eds.): WISE 2021, IFIP AICT 615, pp. 95–110, 2021. https://doi.org/10.1007/978-3-030-80865-5_7
By using a serious game which appeals to pre-school children, on mobile devices that they are familiar with, they can be educated about the dangers that exist in cyberspace and how to defend themselves and act against these dangers. Using a mobile serious game is a viable method of promoting digital wellness among pre-school children; however, the success of such a game is not guaranteed [6]. There are certain factors that can be considered and elements that can be implemented that will increase the game's chance of success. By identifying these different factors and critical elements, it becomes possible to create a mobile serious game that contributes to the goal of spreading awareness of digital wellness among pre-school children. Therefore, the primary aim of this study is to develop a mobile serious game to effectively promote digital wellness among pre-school children. To validate the effectiveness of the game, the following secondary objective is identified: use expert reviews to validate the various educational aspects of the mobile serious game. This paper is based on the first author's Master's dissertation [7]. The remainder of the paper is structured as follows: in Sect. 2 a brief overview of concepts relating to digital wellness as well as serious games is provided. The mobile serious game that was developed is presented in Sect. 3. Section 4 presents the findings of the expert review of the game. Finally, the study is concluded in Sect. 5 with a reflection on the findings and a look ahead to possible future work.

2 Related Literature

In this section an overview of the literature regarding digital wellness and serious games is given. Creating awareness in the realm of cybersecurity refers to any activity intended to focus an individual's attention on cybersecurity issues. The aim is to enable an individual to recognise the different cybersecurity threats and concerns and respond to them in the correct way [8]. Although many strategies, protocols and campaigns for improving cybersecurity awareness among users exist, cybersecurity attacks still happen on a daily basis [9]. Evaluations of cybersecurity awareness campaigns suggest that these campaigns often fail [10]. The Information Security Forum [10] identified a number of reasons why cybersecurity awareness activities fail. Using these reasons makes it possible to create and implement more effective strategies. Simplicity has also been identified as an important element. It is important that the user feels in control of the situation and can follow specific behaviours if an awareness campaign is to be successful [11]. Keeping the rules simple and consistent also strengthens the user's perception of control and makes it easier to accept the new behaviour [12]. The objective of this study is to create a game that successfully promotes digital wellness; therefore these elements are to be implemented in the game.

2.1 Digital Wellness

Digital technologies can affect one's personal experiences in daily life with both positive and negative outcomes [13]. This ultimately means that these ever-evolving digital technologies have a direct and growing impact on the well-being of users. This leads to the need for a new evaluation, measure or standard to determine the well-being of users in the digital realm, and that standard is called digital wellness [14]. Several definitions for digital wellness are found in the literature [2,14,15]. From these definitions, the recurring theme of the overall well-being of the individual as (s)he interacts with content in a digital environment is central. Furthermore, digital well-being is not only dependent on how a person uses these digital technologies; it is also affected by the user's ability to identify dangers in the cyber realm and how the user acts on these dangers. Cybersecurity awareness is aimed at the end-users of a system and thus addresses the human element of cybersecurity. Digital wellness can be influenced by digital assets, digital threats and digital communications; thus, to maintain good well-being in the cyber realm, it is necessary to uphold cybersecurity protocols while also maintaining positive mental and physical health. Maintaining positive digital wellness does not mean following a set of rules and instructions, but rather finding the balance in which one is happy, comfortable, healthy and safe in the digital realm: how well balanced one's mental and physical state is when using different digital technologies, and how safe users and their assets are. The elements of digital wellness can be divided into three groups: physiological, behavioural and psychological [14]. To better understand what it means to maintain a digitally healthy lifestyle, these three categories are discussed. The physiological elements of digital wellness refer to the physical well-being of a user, that is, the health and safety of users. There are two main elements in the physiological category, namely screen time and technostress [14]. The behavioural elements of digital wellness are concerned with the behaviour and actions of users: how the use of digital technologies affects one's habits, actions and performance, both in the digital realm and in the real world. Two of the main elements that affect one's overall digital wellness are the problematic use of the internet and media multitasking [14]. The final group of elements of digital wellness concerns the user's mental or psychological well-being.
In addition to the mental well-being of the user, the psychological elements of digital wellness also focus on the emotional state of the user. The two main elements of this group are online security and online disinhibition [14]. Of particular interest in this study is online disinhibition as it relates to the digital wellness of children. The online disinhibition effect is a phenomenon where people behave differently online than they do in the real world [16]. One of the most common forms of online disinhibition, which is of particular concern to this study, is cyberbullying. Cyberbullying can be defined as an aggressive, intentional act carried out by an individual or a group of people, using electronic forms of contact, repeatedly and over time, against a victim who cannot easily defend him- or herself. It stands to reason that a child can be either the perpetrator (against other children) or the victim of cyberbullying.


The following are possible negative effects that cyberbullying can have on users (especially children and adolescents) [17,18]:

– Increased levels of anger, powerlessness, sadness, and fear;
– Loss of confidence, disassociation from friends and school, and a general sense of uneasiness;
– Reactive behaviour that could lead to physical harm, which is likely to escalate the situation further;
– Increased levels of anxiety and depression; and
– Self-harm or suicidal thoughts.

To avoid acting in an uninhibited way, it is important to self-monitor one's actions and discussions online. Avoiding discussions and situations where a person might show signs of disinhibition will reduce the opportunity and impulse to act in a way that the person normally would not behave. In situations where other users make a user uncomfortable, unsafe or unhappy, there are two recommended actions. The first is to avoid online contact with those users by blocking contact with them or avoiding websites or situations where they might make contact. The second is to report these users to the corresponding authorities. These authorities may be the administrators of an application or website in an online capacity, or the parents, guardians or teachers in the case of younger users [2]. Ultimately, however, the bullying behaviour needs to be addressed and corrected to prevent future incidents. It is widely recognised in cybersecurity literature that awareness is a method of improving the unwanted behaviour of the perpetrator while simultaneously informing victims of avenues of recourse. In the next section, digital wellness awareness strategies for children are discussed.

2.2 Digital Wellness Awareness for Children

To better understand how to spread awareness of cybersecurity among children, the focus of this section is to identify how they learn and develop important skills. The early experiences that children have play a big role in their overall development, and exposing them to the concepts of digital wellness can have significant benefits [19]. Children, especially pre-school children, learn in five different ways [20]:

– Observation - Visual learning via observation and imitation;
– Listening - Auditory learning;
– Exploring - Investigative learning;
– Experimenting - Physical learning via trial and error; and
– Asking questions - Inquisitive learning.

Although this shows that children are capable of learning in different ways, it is important to note that not all children learn in the same way. Some children might respond better to teaching methods that involve observing and listening, while others might receive more stimulation from practical experimentation and asking questions. Fortunately, preschool children are still at the age of learning


through play [3]. Play is a fun way for children to learn, regardless of their preferred method of learning. Play allows children the opportunity to observe, listen, explore, experiment and ask questions to solve problems. By using play as a tool for learning, the method of teaching is not limited to only one or two of the different ways of learning, but can instead be set up to include all of these methods. By creating a game specifically aimed at children, it is possible to stimulate all forms of learning using only one learning medium. The observation and listening methods of learning can be achieved by presenting information using both audio and visual methods. The addition of audio and visual feedback can also contribute to the player's overall learning experience [6]. By using different tasks and interactions, it is possible to include both the exploring and experimenting methods of learning, and by adding thought-provoking questions, the inquisitive learning method is also included. Another important factor that should not be overlooked when discussing the awareness of pre-school children is the involvement of the child's parent, teacher or guardian. When the parent, teacher or guardian of a child knows how the child learns best, they can guide the child to optimise learning and thus spread awareness effectively. By showing interest in what the child is doing, playing games with them, reading to them and spending time with them, the child's motivation and productivity will noticeably improve [21]. This is especially important for inquisitive learning, as children who learn by asking questions should feel comfortable asking these questions of people whom they trust. Examples of digital wellness campaigns for children include both traditional paper-based and contemporary game-based attempts to promote digital wellness awareness. Two traditional paper-based examples, with the purpose of making children aware of digital wellness and cybersecurity, are given in Table 1, while a number of serious games to the same effect are summarised in Table 2. Games are further discussed in Sect. 2.3.

Table 1. Traditional paper-based digital wellness content for children

– "Digital wellnests: Let us play in safe nests" [22]: A book consisting of concepts, 14 poems and 14 messages set in the animal kingdom. Furthermore, nine digital wellness and cybersecurity morals are identified.
– "Savvy Cyber Kids" [23]: A book series consisting of three books identifying three elements of digital wellness and cybersecurity specifically for children: online anonymity (not sharing information online); online bullying (tell people you trust when someone is being bullied); and limiting screen time.
Table 2. Serious games to promote digital wellness for children

– Interland: Be Internet awesome [24]: A game and resources (including a curriculum for educators) that explores four different worlds, teaching the user four different lessons about cybersafety: communicate responsibly; know the signs of a potential scam; create a strong password; and set an example and take action against inappropriate behaviour.
– Budd:e [25]: Staying safe online; protection against viruses and malware; using social networks responsibly.
– Carnegie Cadets [26]: Staying safe online; protection against viruses and malware; using social networks responsibly.
– FBI Cyber Game [27]: Staying safe online.
– PBS Cybersecurity Lab [28]: Staying safe online; spotting scams; defending against cyber attacks.

Most cybersecurity content that is available online is of international origin. Videos make use of robots and pirates and provide information in an English accent. Children whose home language is not English find this material difficult to follow [2]. Furthermore, the use of academic terminology to discuss cybersecurity issues also makes the concepts difficult to grasp for most users, and even more so for children. Among the identified related works, the book "Digital wellnests: Let us play in safe nests" [22] aligns best with the aim of this study. This book identifies most digital wellness topics and is specifically aimed at pre-school children. Both "Interland: Be Internet awesome" and "Savvy Cyber Kids" identify core digital wellness issues, but the number of issues is limited compared to that of "Digital wellnests: Let us play in safe nests". The book, which is available electronically (https://www.up.ac.za/african-centre-of-excellence-for-information-ethics/article/2109737/digital-wellness-toolkit), was created with the purpose of promoting a cybersecurity culture amongst children. The book uses simple explanations and depicts animals as the main characters. These animals are familiar to African children. In 2015, while this book was still in its developmental stage, it was presented at a workshop in Nairobi, Kenya. At this workshop the book received much praise for its clear approach and identifiable characters in the African context. Both academic and civil delegates present at the workshop
praised the work for successfully achieving the aim of giving the topic a local flavour [2]. Selected elements of the book are used in the mobile application developed in this study. The book consists of four main sections. The first is a foreword and introduction that are primarily aimed at the parent, guardian or teacher. The second section of the book contains a few technology-related concepts that are both explained and illustrated by a drawn representation. The main content of the book is provided in the third section in the form of poems. Each of these poems features animals interacting with technology and ends with a moral lesson. The fourth and final section of the book consists of short cybersecurity-related lessons that are easy to remember [2].

Concepts - The book describes technology-related concepts such as: cell phone, computer, e-mail and social media.

Poems - The poems in the book serve as the main content and are used to teach the reader a moral lesson regarding cybersecurity and digital wellness. There are 14 poems, each with a cybersecurity-related theme, a moral lesson, and the element or elements of digital wellness it addresses. An example of these poems is Buffy the Bully. The story of Buffy the Bully follows a buffalo that feels sad and alone and acts out by sending cruel messages to his classmates online. After the class reports Buffy to the teacher, the teacher confronts the bully, tells him that he must speak about what is causing him to send the mean messages, and tells him that bullying is not the solution to his problems. The moral of the story is that if one sees that someone is being a bully, it must be reported to a trusted adult. This moral is related to the online disinhibition element of digital wellness, as cyberbullying is a form of toxic disinhibition. A related poem is Happy Hippo, in which the victim of cyberbullying is portrayed. This poem is illustrated in Sect. 3 as part of the discussion of the game that was implemented.

Short Messages from the Animals - The final section of the book contains 14 short messages that convey cybersecurity-related lessons in a way that is fun and easy to memorise. The messages roughly match the lessons of the poems; the following is an example of one of the messages given in the book [22]:

"I got messages from kids at my school.
They said things that were hurtful and mean and cruel.
It made me feel really sad,
Until I told my mom and dad.
When someone is bullying you or a friend,
Tell a grown-up so that it can end."

Digital Wellness for Children - By analysing the content of the book, we can identify the most important digital wellness elements for children as: screen time; the problematic use of technology and the internet; online security and privacy; and online disinhibition. The general morals of the stories and messages can be summarised as follows:


– Do not share personal information online;
– Delete messages and friend requests from unknown people and suspicious sources;
– Report cyberbullying to trusted adults;
– Be honest when going online and do not visit dangerous or suspicious websites;
– Balance using technology (screens) and playing outside;
– Always set up strong passwords and keep them a secret from others;
– Remember to use and update anti-malware tools and software;
– Do not engage in illegal activities when using technology or the internet; and
– Do not physically meet strangers that you met online without adult supervision and consent.

2.3 Serious Games

Games can be defined as a set of actions, restricted by rules and constraints, with a certain objective [29]. Over the last half century, video games have transformed the way in which people play and spend their leisure time. Due to the growth in the popularity and use of games, the primary goal of some games is no longer restricted to the original purpose of pure entertainment. The types of games have now extended beyond entertainment games to also include concepts such as serious games and gamification. A number of approaches to defining serious games exist in the literature, but most of these reduce to a simple definition: a serious game has the primary goal of combining fun and play with a serious or utilitarian aspect [4,5]. Therefore, serious games are video games that do not only aim to be fun and entertaining, but also have a serious motive, such as teaching the user something or spreading awareness on a certain topic. In contrast to entertainment games, learning new information and skills while playing serious games is the intended outcome. It is important to note that in this context, video games refer to all games that are played using a digital medium or on a digital platform. The fact that pre-school children learn through play [3] allows educators and parents to make use of games as an additional method to teach children new skills and build on existing knowledge. When the idea of using games for teaching and spreading awareness is considered in the light of children's growing exposure to digital technology [6], using video games appears to be a viable approach to spreading awareness among children. This has become evident from the number of serious games targeted directly at young children, as well as the increase in their popularity [30]. However, even though more games directed at pre-school children have been developed, it does not mean that these games are optimised for their target audience. In an attempt to address this problem, Callaghan et al. [6] identified four design elements based on how pre-school children learn. The first element is clear and simple goals: children learn best when given clear instructions. Giving clear and simple goals at the start of the game makes the child less likely to become confused and overwhelmed, with the result that they will complete their tasks with minimal disruption.

104

J. Allers et al.

The second is quality of feedback and rewards: feedback is a powerful and important tool both to encourage children and to notify them when they are doing something wrong. Pre-school children are most likely not able to read, and thus text feedback is of no use to them. A more effective approach is to combine visual and auditory feedback that can be easily understood by the child. Next is the structure of the challenge: when structuring a challenge, the level of performance of the target audience should be kept in mind. By adapting the level of challenge of an application to gradually increase in difficulty as the child understands more of the material, as well as decrease in difficulty when the child appears to struggle, it becomes possible to scaffold the child's learning. The final element is motion-based interaction: this refers to physical methods that children can use to interact with applications. These interactions can serve as an alternative to complex touch-screen activities that might be too difficult for many children. By creating the game in such a way that it aligns with the physical capabilities of pre-school children (e.g., touchable object sizes, simplified touch-screen motions, etc.), the overall experience of the child will improve. These four elements are essential to ensure that a game is suitable for a pre-school-aged child. One of the objectives of this study is to create a game that is suitable for pre-school children; therefore, these elements are to be implemented in the created mobile serious game.

3 Overview of the Game

Fig. 1. Main menu of the game

The aim of this study is to implement a mobile serious game that can serve as a method to promote awareness of digital wellness among pre-school children. The resulting game should satisfy three criteria: the game should (1) be fun to play, (2) be appropriate for pre-school children, and (3) spread awareness of digital wellness.


The game consists of four main scenes. Each of these scenes contributes to the main goal of spreading awareness of digital wellness among pre-school children. The first scene that the user can interact with is the main menu screen. This scene primarily serves as a selection screen for picking the poem, quiz and game that the user will play. A screenshot of the main menu scene is given in Fig. 1. The user can navigate through the poems using the two green arrows pointing left and right. The currently selected poem is shown in the centre as a selectable animation; in Fig. 1 it is the Happy Hippo poem. Tapping on this part of the screen will take the user to the selected poem.

Fig. 2. Happy Hippo poem scenes

Once the user selects a poem in the main menu scene, the application displays the poem scene. In this scene the selected poem is displayed on the screen and read to the user. The user interacts with the screen by tapping anywhere to move to the next paragraph or page. The main goal of this scene is to inform the user and spread awareness of the dangers of cyberspace in an enjoyable way. The first screen of the Happy Hippo poem, which covers the aspect of online bullying, is shown in Fig. 2. The second screen continues the conversation between Ron (the Rhino) and Henry (the Hippo). Henry says that he does not know who the messages are from, but that maybe it is not a big deal as they are only text messages. Ron replies that it is still a case of bullying and that

106

J. Allers et al.

they should go and tell the teacher so that the bullying can end. This is then what they do, as is shown on the third screen of the poem. The third screen of the poem also gives the moral of the story, "When you are being bullied or see someone is being bullied, tell someone you trust", and poses the question "Why is online bullying just as hurtful as physical bullying?" The quiz scene is entered immediately after the user completes the reflection questions in the poem scene. The aim of this scene is to determine whether or not the user understands the problem described in the poem by motivating them to answer four questions about the topic. These questions are randomly chosen from a pool of questions to provide a form of replayability and to ensure that a pattern cannot be memorised when answering the questions. The quiz does not block progress if the user's results are sub-par, for two reasons: the goal of the game is not to formally educate children about the dangers, but rather to spread awareness on the matter; and the quiz is only meant to be used by parents, teachers and guardians as a tool to motivate the user and keep track of their efforts and progression. The final scene of the application is the game scene. This scene allows the user to play a mini-game based on the poem that they have chosen. Each of these games is different and serves as the final reward for completing the poem and quiz scenes. Before the game starts, the instructions and goal of the game are displayed on the screen and read out loud (shown on the left in Fig. 3). The user can then use the slider to pick a level and play the game. The Happy Hippo game is shown on the right in Fig. 3. The objective of this game is to tap on the bullies (buffaloes) as they appear, while avoiding tapping on the hippos. The game is won by tapping on twenty bullies and lost if three hippos are tapped.
The bullies and hippos spawn every few seconds at a random location that does not overlap other targets. This game is an exercise in small motor movement and reflexes. The message of the game is to take action against bullies by reporting them to an adult. A higher difficulty level increases the speed at which targets appear and disappear, as well as the size of the targets. When the game is finished, a message is displayed to notify the user whether they won or lost. This message is accompanied by a matching animation relating to the poem. The user is then given the option to play again or return to the main menu. If the user decides to play again, the user is redirected to the level selection screen. After its creation, the serious game was sent to experts in the field of pre-school education for review.
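The quiz and mini-game rules described above can be sketched in code. The following is a hypothetical reconstruction in Python (the paper does not disclose an implementation language or code); the class and function names and the numeric difficulty scalings are assumptions, while the win threshold (twenty bullies), the loss threshold (three hippos) and the random, non-overlapping spawning follow the description in the text.

```python
import random

WIN_BULLIES = 20    # bullies to tap for a win (stated in the text)
MAX_HIPPO_TAPS = 3  # hippo taps that lose the game (stated in the text)


def pick_quiz_questions(pool, rng, k=4):
    """Draw k distinct questions at random from the pool, as the quiz
    scene is described (prevents a memorisable answer pattern)."""
    return rng.sample(pool, k)


class HappyHippoGame:
    """Sketch of the Happy Hippo tap-game state; names are hypothetical."""

    def __init__(self, level=1, width=800, height=600, seed=None):
        self.level = level
        self.bullies_tapped = 0
        self.hippos_tapped = 0
        self.targets = []          # (kind, x, y) of targets on screen
        self.width, self.height = width, height
        self.rng = random.Random(seed)

    def spawn_interval(self):
        # Assumed scaling: higher levels spawn targets faster (seconds).
        return max(0.5, 3.0 - 0.5 * (self.level - 1))

    def target_radius(self):
        # Assumed scaling: target size varies with level (pixels).
        return max(20, 60 - 10 * (self.level - 1))

    def spawn(self):
        """Place one target at a random spot overlapping no existing target."""
        kind = self.rng.choice(["bully", "hippo"])
        r = self.target_radius()
        for _ in range(100):       # retry until a free position is found
            x = self.rng.uniform(r, self.width - r)
            y = self.rng.uniform(r, self.height - r)
            if all((x - tx) ** 2 + (y - ty) ** 2 >= (2 * r) ** 2
                   for _, tx, ty in self.targets):
                self.targets.append((kind, x, y))
                return (kind, x, y)
        return None                # screen too crowded; skip this spawn

    def tap(self, kind):
        """Register a tap on a target and return the resulting game state."""
        if kind == "bully":
            self.bullies_tapped += 1
        else:
            self.hippos_tapped += 1
        if self.hippos_tapped >= MAX_HIPPO_TAPS:
            return "lost"
        if self.bullies_tapped >= WIN_BULLIES:
            return "won"
        return "playing"
```

In this sketch the difficulty slider only changes the two scaling functions, so the win and lose conditions stay constant across levels, matching the fixed twenty-bully and three-hippo thresholds given in the text.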

Children’s Awareness of Digital Wellness: A Serious Games Approach


Fig. 3. Happy Hippo game

4 Expert Review and Validation

In this section, the results of the expert review of the mobile serious game are discussed. Six experts were asked to review the game. The reviews were conducted using a questionnaire, followed up by a brief phone interview to gather more information. The questionnaire consisted of several questions regarding the reviewers' experience and opinion of the problem, and a review of the mobile serious game and the educational elements implemented in it. Based on the questionnaire responses, the reviewers had a combined 168 years of experience working with pre-school children, an average of 28 years each. The ages of the children that the reviewers generally work with ranged between three and nine years. All six reviewers indicated that they have either a good or very good understanding of the different dangers and threats of cyberspace and digital technologies, but also indicated that they believe the parents of the children are not very aware of these dangers. When asked whether they believe that parents effectively teach their children about these dangers, all responded negatively. Even though five of the six reviewers indicated that they consider the current level of children's exposure to digital technologies problematic, none of the reviewers were aware of, or used, any resources to promote digital wellness and cybersecurity. The validation of the mobile serious game was conducted using a scoring system. After playing the game multiple times, the reviewers were asked to score the implementation of each educational element on a scale of one to five, where one indicates poor to no implementation and five indicates excellent implementation. Table 3 shows the average implementation score of each element, with comments from the reviewers.


Table 3. Reviewer scores.

| Educational element | Mean score | Answer range | Comments |
|---|---|---|---|
| Clear and simple goals | 4 | 3–5 | "Perhaps clearer instructions on how the story works. (click to turn the page etc.)" (on suggestion for further development) |
| Feedback and rewards | 4 | 3–5 | "Good feedback, but more colours can be good." (overall comments) |
| Structured challenge | 4 | 3–5 | "I love the different levels of play"; "More difficult levels are too difficult (in envelope game)" (from interview) |
| Appropriate interface | 4.3 | 3–5 | "This application is very child friendly. I will not change a thing." (on suggestion for further development) |
| Appropriate materials | 4.5 | 4–5 | "It [the animal theme] was very fitting and the children will love it."; "The different concepts are explained to them (the children) very well." (from interview) |
| Appropriate method of presenting the materials | 4.5 | 4–5 | "I like that it [the game] uses sounds, pictures and words so everyone can understand it." (from interview) |

From Table 3, it is evident that the reviewers found the implementation of each element satisfactory, though there is still room for improvement. To further validate the game, the reviewers were also asked to score the game on two criteria: how suitable the game is for pre-school children, and how effectively the game will spread awareness of digital wellness. Both criteria were awarded a mean score of 4.5 out of 5 with a range of 4–5, indicating that the reviewers believed the game would be an overall success.

5 Conclusion and Future Work

The aim of this study was to develop a mobile serious game to effectively promote digital wellness among pre-school children. A secondary objective was to gauge the effectiveness of the game by means of expert review. Section 4 presented the experts' opinions: the game scored highly on all of the identified educational aspects. Therefore, this study's main contribution to the current body of knowledge is a successful serious game for pre-school children, developed based on the concepts identified by Fischer and Von Solms [2]. Future studies may consider focusing on both the development and deployment of this mobile serious game. The effectiveness of this game can be compared to that of other, traditional methods of spreading awareness in order to validate


whether a mobile serious game is an effective tool to spread awareness of digital wellness among pre-school children. Furthermore, the validation of both the game and its critical elements could be done by a more diverse group of experts from different, but relevant, fields (pre-school teachers, child psychologists, application developers, etc.) and by observing pre-school children while they play. The knowledge gained from observing children playing the game could lead to better insight into their needs.

References

1. Burton, P., Leoschut, L., Phyfer, J.: South African Kids Online: a glimpse into children's internet use and online activities. Technical report, UNICEF (2016). http://www.cjcp.org.za/uploads/2/7/8/4/27845461/south african kids online brochure.pdf
2. Von Solms, S., Fischer, R.: Digital wellness: concepts of cybersecurity presented visually for children. In: Furnell, S., Clarke, N.L. (eds.) Eleventh International Symposium on Human Aspects of Information Security & Assurance (HAISA 2017), vol. 11, pp. 156–166 (2017)
3. Yogman, M., Garner, A., Hutchinson, J., Hirsh-Pasek, K., Golinkoff, R.M.: The power of play: a pediatric role in enhancing development in young children. Pediatrics 142(3), e20182058 (2018). https://doi.org/10.1542/peds.2018-2058
4. Dörner, R., Göbel, S., Effelsberg, W., Wiemeyer, J. (eds.): Serious Games. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-40612-1
5. Ritterfeld, U., Cody, M., Vorderer, P. (eds.): Serious Games: Mechanisms and Effects. Routledge, New York (2009)
6. Callaghan, M.N., Reich, S.M.: Are educational preschool apps designed to teach? An analysis of the app market. Learn. Media Technol. 43(3), 280–293 (2018). https://doi.org/10.1080/17439884.2018.1498355
7. Allers, J.: A mobile serious game to promote digital wellness among pre-school children. Master's thesis, North-West University (2021)
8. Kissel, R.: Glossary of Key Information Security Terms. Tech. Rep. Revision 2, National Institute of Standards and Technology, Gaithersburg, MD (2013). https://doi.org/10.6028/NIST.IR.7298r2
9. Kirlappos, I., Parkin, S., Sasse, M.A.: Learning from "Shadow Security": why understanding non-compliance provides the basis for effective security. In: Workshop on Usable Security, pp. 1–10 (2014). https://doi.org/10.14722/usec.2014.23007
10. Information Security Forum: From Promoting Awareness to Embedding Behaviours. Tech. Rep. (2014)
11. Ajzen, I.: Perceived behavioral control, self-efficacy, locus of control, and the theory of planned behavior. J. Appl. Soc. Psychol. 32(4), 665–683 (2002). https://doi.org/10.1111/j.1559-1816.2002.tb00236.x
12. Bada, M., Nurse, J.R.: The social and psychological impact of cyberattacks. In: Benson, V., Mcalaney, J. (eds.) Emerging Cyber Threats and Cognitive Vulnerabilities, pp. 73–92. Academic Press, London (2020). https://doi.org/10.1016/B978-0-12-816203-3.00004-6
13. Dauden Roquet, C., Sas, C.: Digital wellbeing: evaluating mandala coloring apps. In: CHI Conference on Human Factors in Computing Systems (2019)


14. McMahon, C., Aiken, M.: Introducing digital wellness: bringing cyberpsychological balance to healthcare and information technology. In: 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, pp. 1417–1422 (2015). https://doi.org/10.1109/CIT/IUCC/DASC/PICOM.2015.212
15. Royal, C., Wasik, S., Horne, R., Dames, L.S., Newsome, G.: Digital wellness: integrating wellness in everyday life with digital content and learning technologies. In: Keengwe, J., Bull, P.H. (eds.) Handbook of Research on Transformative Digital Content and Learning Technologies, pp. 103–117 (2017). https://doi.org/10.4018/978-1-5225-2000-9.ch006
16. Rose, E.: "Would you ever say that to me in class?": exploring the implications of disinhibition for relationality in online teaching and learning. In: Bayne, S., Jones, C., de Laat, M., Ryberg, T., Sinclair, C. (eds.) 9th International Conference on Networked Learning, pp. 253–260 (2014)
17. Hamm, M.P., et al.: Prevalence and effect of cyberbullying on children and young people: a scoping review of social media studies. JAMA Pediatr. 169(8), 770–777 (2015). https://doi.org/10.1001/jamapediatrics.2015.0944
18. Hinduja, S., Patchin, J.W.: Bullying, cyberbullying, and suicide. Arch. Suicide Res. 14(3), 206–221 (2010). https://doi.org/10.1080/13811118.2010.494133
19. McCall, R.B., et al.: Early caregiver–child interaction and children's development: lessons from the St. Petersburg-USA Orphanage intervention research project. Clin. Child Fam. Psychol. Rev. 22(2), 208–224 (2018). https://doi.org/10.1007/s10567-018-0270-9
20. Matthews, D., Lieven, E., Tomasello, M.: How toddlers and preschoolers learn to uniquely identify referents for others: a training study. Child Dev. 78(6), 1744–1759 (2007). https://doi.org/10.1111/j.1467-8624.2007.01098.x
21. Cheung, C.S., Pomerantz, E.M.: Value development underlies the benefits of parents' involvement in children's learning: a longitudinal investigation in the United States and China. J. Educ. Psychol. 107(1), 309–320 (2015). https://doi.org/10.1037/a0037458
22. Fischer, R., Von Solms, S.: Digital Wellnests. ACEIE (2016)
23. Halpert, B.: Savvy Cyber Kids (2014). https://savvycyberkids.org/families/kids/
24. Google: Be Internet Awesome: Google Digital Literacy and Citizenship Curriculum (2019). https://www.google.ch/goodtoknow/web/curriculum/
25. Australian Department of Broadband, Communications and the Digital Economy: Budd:e (2011)
26. Carnegie Mellon University: The Carnegie Cyber Academy - an online safety site and games for kids (2011). http://www.carnegiecyberacademy.com/
27. FBI: Cyber Task Forces (n.d.). https://www.fbi.gov/about-us/investigate/cyber/cyber-task-forces-building-alliances-to-improve-the-nations-cybersecurity-1
28. PBS: Cybersecurity Labs. https://www.pbs.org/wgbh/nova/labs/lab/cyber/
29. Stenros, J.: The game definition game: a review. Games Cult. 12(6), 499–520 (2017). https://doi.org/10.1177/1555412016655679
30. Shuler, C., Levine, Z., Ree, J.: iLearn II: an analysis of the education category of Apple's app store. Technical report, Joan Ganz Cooney Center, January 2012

Environmental Uncertainty and End-User Security Behaviour: A Study During the COVID-19 Pandemic

Popyeni Kautondokwa¹, Zainab Ruhwanya¹, and Jacques Ophoff¹,²

¹ University of Cape Town, Cape Town, South Africa
[email protected], [email protected]
² Abertay University, Dundee, UK
[email protected]

Abstract. The COVID-19 pandemic has forced individuals to adopt online applications and technologies, as well as remote working patterns. However, with changes in technology and working patterns, new vulnerabilities are likely to arise. Cybersecurity threats have rapidly evolved to exploit uncertainty during the pandemic, and users need to apply careful judgment and vigilance to avoid becoming the victim of a cyberattack. This paper explores the factors that motivate security behaviour, considering the current environmental uncertainty. An adapted model, primarily based on the Protection Motivation Theory (PMT), is proposed and evaluated using data collected from an online survey of 222 respondents from a Higher Education institution. Data analysis was performed using Partial Least Squares Structural Equation Modelling (PLS-SEM). The results confirm the applicability of PMT in the security context. Respondents' behavioural intention, perceived threat vulnerability, response cost, response efficacy, security habits, and subjective norm predicted self-reported security behaviour. In contrast, environmental uncertainty, attitude towards policy compliance, self-efficacy and perceived threat severity did not significantly impact behavioural intention. The results show that respondents were able to cope with environmental uncertainty and maintain security behaviour.

Keywords: Information security · Protection Motivation Theory · Theory of Planned Behaviour · Environmental uncertainty · COVID-19

1 Introduction

In recent years, the world has seen an exponential rise in cybersecurity incidents, and many of these incidents can be attributed to human error [1]. Although many of these incidents have occurred in corporate environments, cybersecurity incidents, such as data breaches, have also increased in Higher Education Institutions (HEIs) [2]. A large number of HEIs worldwide experienced an increase in cyber-attacks in 2020. The environmental uncertainty and fear caused by the COVID-19 pandemic have resulted in a spike in cyber-attacks: social engineering scams, phishing, ransomware, and data-harvesting malware have all increased, exploiting the COVID-19 pandemic [3]. It is essential to understand the impact this environmental uncertainty has on security behaviour, especially at HEIs, where a significant number of staff and students have adopted new technologies and work-from-home patterns.

In the context of information security, behaviour comprises the actions generally related to computer use [4]. Behaviour is an essential aspect of studying information security, and one of the research field's critical end goals is to influence positive security behaviour [5]. Security behaviour is at the heart of any organisation's security culture; the attitude and intentions of the individual dictate information security behaviour. Good security behaviour is typically associated with compliance with set policies and guidelines, while bad security behaviour is attributed to non-compliance [5]. Factors that motivate security behaviour are generally drawn from existing research models such as the Protection Motivation Theory (PMT) and the Theory of Reasoned Action (TRA) or Theory of Planned Behaviour (TPB). Although it is common to find constructs from research models other than these, they remain the most consistently applied in behavioural security studies [6].

This study aims to understand the factors that motivate end-user security behaviour at HEIs, considering the environmental uncertainty caused by the COVID-19 pandemic. Therefore, the research question is: What are the factors that motivate end-user security behaviour at HEIs during the COVID-19 pandemic? The paper proceeds with a review of the literature and the development of the research hypotheses. This is followed by a description of the research design. The results of the data analysis are presented and discussed, whereafter the paper concludes with suggestions for further research.

© IFIP International Federation for Information Processing 2021. Published by Springer Nature Switzerland AG 2021. L. Drevin et al. (Eds.): WISE 2021, IFIP AICT 615, pp. 111–125, 2021. https://doi.org/10.1007/978-3-030-80865-5_8

2 Literature Review

2.1 Theories Related to Information Security Behaviour

In the following subsections, the PMT and TPB and their constructs are examined, including their relevance to and impact on recent studies of security behaviour. Many studies have used at least one factor derived from these models to study the motivators of information security behaviour [7–10].

2.2 Protection Motivation Theory

The PMT is used to predict how individuals respond under stress from threats [11]. This theoretical model has been effectively utilised in information security research on behaviour [9,10]: not only to measure individuals' intentions, as when the model is used in its pure form, but also integrated into larger research models to derive actions from behavioural intentions, as demonstrated in previous studies [12].


In the PMT, it is argued that an individual's assessment of the severity of a threat and their vulnerability to it (the threat appraisal), together with the extent to which they can cope with this threat by steering particular behaviour (the coping appraisal), determine the intention [13]. Self-efficacy is the individual's belief in their ability to mitigate the threat; this factor features in the findings of studies in this review as an essential coping appraisal [13,14]. Self-efficacy is one coping appraisal that is triggered across many contexts; because it is so common, one can argue that self-efficacy can also be detrimental to an organisation or institution's security posture, since an individual confident in their ability to mitigate a threat may acquire a false sense of security. It is essential to note that coping appraisals were consistently found to impact findings in previous studies [8,9,14]. In some instances, such as [15], all PMT factors were found to impact the respondents' security behaviour. Response cost is used to measure the costs incurred in adopting a specific response to a threat, and response efficacy is the effectiveness of the response to this threat; some findings have identified response cost and response efficacy as motivating factors in security behaviour [8,9,14].

2.3 Theory of Planned Behaviour

The TPB is adopted in many studies of information security behaviour, largely because it makes it possible to infer actual behaviour from intentions, making the TPB model less restrictive than the PMT. Nevertheless, the TPB has been used alongside the PMT and is viewed as a viable addition to it. The TRA and TPB models are different but related; the TPB is an extension of the TRA [16], and in information security it is not uncommon to see them cited interchangeably. A revised version of the TPB has the attitude, perceived behavioural control and subjective norm constructs. In the TRA and TPB, attitude refers to the degree to which an individual has a positive or negative evaluation or appraisal of the behaviour in question [16]. The subjective norm construct refers to social pressure to perform a particular action; this construct is a common feature of many studies in this review and is seen in previous studies as a motivating factor for certain security behaviours [7,12,17]. The perceived behavioural control construct is seen as vital because it is noted to turn intentions to behave in a particular way into actual behaviour [18]. Perceived behavioural control is the ease or difficulty with which a behaviour is acted upon, typically reflecting past experiences or anticipated obstacles. This construct is not found in the PMT, and therefore some studies have used the TRA and TPB where actions can be derived from intentions [18].

2.4 Intentions and Security Behaviour

An intention is not necessarily the act, and this distinction causes some confusion. Some researchers note that behavioural intention cannot dictate actions when control over the action is incomplete [18]. Studies on information security behaviour state that while actual behaviour and intentions are related and can have the same effect, there is a practical difference between intent and behaviour. Research models such as the PMT make use of behavioural intentions to respond to threats in order to predict behaviour; however, intentions versus actual behaviour raise some research issues [14,19]. Moreover, compliance with information security policy is good security behaviour [14]. Conversely, non-compliance means acting against information security policies, and this behaviour is typically detrimental to the security posture of the organisation. Previous studies on security behaviour have focused specifically on what causes individuals to be non-compliant and the intentions thereof [20,21]. This suggests that studying compliance and attitudes towards security compliance is essential, considering that many organisations and institutions have policies. It can be argued that, in many cases, compliance with security policy prevents harmful security behaviour.

3 Hypotheses Development

This study uses coping and threat appraisals from the PMT and the subjective norm adopted from the TPB to examine and discuss environmental uncertainty and end-user security behaviour.

3.1 Protection Motivation Theory

The PMT predicts how individuals will respond under stress from threats [22]. It is argued that an individual's assessment of the severity of a threat and of their vulnerability to it (the threat appraisal), and the extent to which they can cope with this threat by controlling particular behaviour (the coping appraisal), determine their intention [13]. This study adopts all coping and threat appraisals in the PMT. An individual's assessment of the severity of the threat and of their susceptibility to it constitutes the threat appraisal. In the PMT, perceived severity would be a student's belief about the size of the threat or the harm it will inflict, while perceived vulnerability is the student's belief about their susceptibility to this threat [22]. Threat appraisals significantly impact readiness to perform a specific behaviour, as noted in previous studies on security behaviour [8,14]. As such, the following hypotheses are offered:

Hypothesis 1. Students' perceived vulnerability to losses from security threats positively impacts their behavioural intention to practice information security.

Hypothesis 2. Students' perceived severity of losses from security threats positively impacts their behavioural intention to practice information security.

The coping appraisals in the PMT are response cost, response efficacy and self-efficacy; it is accepted that coping appraisals play an essential role in intentions to perform an action. In the context of this study, self-efficacy is the student's belief in their ability to lessen the threat that they are facing. Response cost is used to estimate the costs incurred in adopting a specific response to a threat, while response efficacy is the effectiveness of the student's response to this threat [11]. Students are expected to consider coping appraisals when deciding whether certain technologies or measures are viable in confronting a particular security threat. The importance of coping appraisals is well noted in previous studies, as is their impact on behavioural intention [13]. As such, the following hypotheses are offered:

Hypothesis 3. Response efficacy positively impacts students' behavioural intention to practice information security.

Hypothesis 4. Self-efficacy positively impacts students' behavioural intention to practice information security.

Hypothesis 5. Response cost negatively impacts students' behavioural intention to practice information security.

3.2 Theory of Planned Behaviour

Central to the TPB are intentions to perform a specific behaviour, motivated by specific factors [16]. In the TPB, the three core constructs are attitude towards the behaviour, subjective norm and perceived behavioural control, which also influence actual behaviour. This study utilises the subjective norm and intention constructs from the TPB. Subjective norms, in the context of this study, are the student's acceptance of pressure from peers, fellow students and anyone close to the student to perform protective information security behaviours; previous studies report that subjective norm has a significant influence on intentions to perform a behaviour [12,14,23]. HEIs have security policies, rules and guidelines. This study's proposed research model uses the attitude toward compliance with an information systems security policy (ISSP) construct. It is also essential to examine whether compliance with these rules and policies influences students' behaviour to practice information security. Compliance tends to be adequate security behaviour [5], and previous studies show that attitude towards ISSP compliance has a positive impact on behavioural intentions [18,24,25]; therefore, it is crucial to look at attitudes towards the ISSP when examining an institution that has one. As such, the following hypotheses are proposed:

Hypothesis 6. Students' behavioural intention to practice information security positively impacts their information security behaviour.

Hypothesis 7. Subjective norms positively impact behavioural intention to practice information security.

Hypothesis 8. Attitude toward compliance with the ISSP positively impacts behavioural intention to practice information security.

3.3 Security Habit

This study's proposed research model also adopts the security habits construct, integrating it with constructs from the PMT and TPB. This construct has been used in previous studies and has been shown to influence information security behaviours [12,26]. In this study's context, a security habit is a continuous action that influences information security behaviour [12,25]. Such actions tend to become routine, which makes this construct a critical moderator when examining end-user behaviour. Hence, we propose:

Hypothesis 9. Security habits positively impact students' information security behaviours.

3.4 Environmental Uncertainty

As defined by [27], uncertainty is an individual's perception of lacking sufficient information to predict accurately, owing to an inability to discriminate between relevant and irrelevant data and a lack of knowledge of response options. Environmental uncertainty means that the source of uncertainty is external to the individual. These external events can be political, economic, cultural, or global, such as the COVID-19 pandemic [3]. Environmental uncertainties affect how individuals decide in times of crisis, owing to the inability to understand changes, events, and causal relationships in the external environment [27]. The environmental uncertainty during the COVID-19 pandemic made it difficult for individuals to distinguish relevant from irrelevant, and accurate from fake, COVID-19 information. Individuals experienced an overload of information, including news, fake news, disinformation and misinformation, all related to the COVID-19 pandemic. Since individuals were desperate to learn about the new disease and understand the unfolding events, they were easily tricked into giving out personal information, clicking on malicious links and fake websites with COVID-19 or corona domain names, and installing malware from attachments [3]. As environmental uncertainty increased during the COVID-19 pandemic, individuals were expected to feel a lower level of behavioural intention to protect their information effectively. We thus hypothesise that:

Hypothesis 10. Environmental uncertainty negatively impacts behavioural intention to practice information security.

4 Research Design

The overall approach taken to perform an empirical test was a survey methodology for data collection. In the following subsections, we discuss the details of the instrument development and survey administration processes.

4.1 Instrument Development

To improve the results' reliability and validity [28], we used previously validated and tested questions. Each item was adapted from the literature: perceived vulnerability [12,29], perceived severity [25], self-efficacy [14,25], response cost [12], response efficacy [25,30], information security behaviour [12,25,29], behavioural intentions [25], security habits [26], subjective norms [14], attitude toward ISSP compliance [24] and environmental uncertainty [31,32]. The items used in this study are presented in Appendix Table 3. Each item used a 7-point Likert scale indicating a respondent's level of agreement with the statements. A pilot test was carried out to ensure initial reliability and check the questionnaire's general mechanics, notably survey instructions, completion time, and appropriate wording. The pilot was conducted with a group of graduate students at the South African HEI.
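Responses on a multi-item 7-point Likert scale are typically aggregated into a mean score per construct before model estimation; a minimal sketch of that step follows. The item names and the reverse-coded item are illustrative, not taken from the paper's instrument:

```python
def score_construct(responses, reverse_items=(), points=7):
    """Mean score of a multi-item construct measured on a `points`-point
    Likert scale. Items listed in `reverse_items` are reverse-coded
    (on a 7-point scale, 1 <-> 7, 2 <-> 6, ...). Item names here are
    hypothetical; the paper does not list its items inline."""
    total = 0
    for item, value in responses.items():
        total += (points + 1 - value) if item in reverse_items else value
    return total / len(responses)
```

For example, a respondent answering 7 and 5 on two self-efficacy-style items would receive a construct score of 6.0.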

4.2 Survey Administration and Participants

This survey was accessible online through a link distributed via e-mail to students at a research-focused South African HEI. The questionnaire setup and data collection were managed using the Qualtrics platform, which was used to design and create the online questionnaire and subsequently collect the results. Ethics approval was obtained before data collection proceeded. The survey was open from 11th September 2020 to 5th October 2020, during the COVID-19 pandemic. At the time of the initial invitations, South Africa was in COVID-19 lockdown alert level 2. Some lockdown restrictions were lifted at level 2, including visits to family and friends, and all inter-provincial travel was permitted while adhering to physical distancing. The transition to lockdown alert level 1 on 20th September 2020 occurred while the survey was still running; at this level the lockdown rules were relaxed, most normal activities resumed, and international travel was allowed with precautions and health guidelines followed at all times. Nevertheless, HEIs in South Africa were still teaching under emergency remote teaching; some students received invitations to return to their residences, while the majority were studying from home. At the end of the survey period, a total of 229 responses had been recorded [33]. However, only 222 responses were complete and considered valid for subsequent data analysis.

5 Data Analysis and Results

The data was analysed using Partial Least Squares Structural Equation Modelling (PLS-SEM) with the SmartPLS software application. The analysis first evaluated the validity and reliability of the measurement items before testing the study's hypotheses.

5.1 Respondent Demographics

The respondents' ages were split into six groups: 18–24, 25–34, 35–44, 45–54, 55–64, and 65+. Most respondents (77%) were between the ages of 18 and 24, 16% were between 25 and 34, 5% between 45 and 54, and 1% between 55 and 64. The gender options were Male, Female, and Prefer not to answer; most respondents were female (70%), 29% were male, and 1% preferred not to answer.

5.2 Internal Consistency Reliability and Convergent Validity

There is internal consistency reliability, as all indicators fall within the recommended 0.6 to 0.9 range [30]; the attitude toward ISSP compliance and self-efficacy constructs are slightly above 0.9. However, this is not problematic as long as values are not significantly above the 0.95 threshold, which these two constructs are not [34]. All constructs have an AVE value above 0.50, indicating the convergent validity of all constructs in this study. The internal consistency reliability and convergent validity for this study are shown in Table 1.

Table 1. Construct reliability and convergent validity.

Construct                              Composite reliability  Average variance extracted (AVE)
Behavioural intention                  0.694                  0.562
Information security behaviour         0.776                  0.634
Attitude toward compliance with ISSP   0.953                  0.87
Perceived severity                     0.894                  0.808
Perceived vulnerability                0.609                  0.509
Response cost                          0.841                  0.725
Response efficacy                      0.826                  0.614
Self-efficacy                          0.948                  0.901
Security habits                        0.619                  0.514
Subjective norm                        0.747                  0.612
Environmental uncertainty              0.687                  0.567
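The two quantities reported in Table 1 can be illustrated with a short sketch of how they are derived from standardized indicator loadings; the loadings below are hypothetical, not the study's estimates.

```python
# Hedged sketch: composite reliability (CR) and average variance
# extracted (AVE) from standardized indicator loadings.
# The loadings here are hypothetical, not taken from this study.

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where each indicator's error variance is 1 - loading^2.
    total = sum(loadings)
    error = sum(1 - l * l for l in loadings)
    return total * total / (total * total + error)

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings.
    return sum(l * l for l in loadings) / len(loadings)

loadings = [0.82, 0.79, 0.75]  # hypothetical construct with three indicators
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
```

With these hypothetical loadings, CR falls inside the recommended 0.6 to 0.9 band and AVE exceeds the 0.50 cut-off used in the text.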

5.3 Discriminant Validity

For discriminant validity to exist under the Heterotrait-Monotrait Ratio (HTMT) criterion, constructs should have a value of less than 0.90. Some constructs did not meet this criterion. One way to improve discriminant validity is to eliminate items with low correlations with items measuring the same construct [35]. However, because the problematic constructs only had two measurement items, it was decided to retain them without modification.
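A minimal sketch of the HTMT computation may clarify the criterion: the mean between-construct item correlation is divided by the geometric mean of the mean within-construct item correlations. The item scores below are synthetic, and all names are illustrative.

```python
import random
from itertools import combinations

# Illustrative HTMT computation for two constructs with two indicators
# each. The data are synthetic, not the study's.

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def htmt(items_a, items_b):
    # Heterotrait-heteromethod correlations: across-construct item pairs.
    hetero = [pearson(a, b) for a in items_a for b in items_b]
    # Monotrait-heteromethod correlations: within-construct item pairs.
    mono_a = [pearson(a, b) for a, b in combinations(items_a, 2)]
    mono_b = [pearson(a, b) for a, b in combinations(items_b, 2)]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(hetero) / (mean(mono_a) * mean(mono_b)) ** 0.5

# Two distinct latent factors, two noisy indicators each (synthetic).
random.seed(42)
f1 = [random.gauss(0, 1) for _ in range(200)]
f2 = [random.gauss(0, 1) for _ in range(200)]
construct_a = [[v + random.gauss(0, 0.3) for v in f1] for _ in range(2)]
construct_b = [[v + random.gauss(0, 0.3) for v in f2] for _ in range(2)]

htmt_ab = htmt(construct_a, construct_b)  # well below the 0.90 threshold here
```

Because the two synthetic factors are independent, the resulting HTMT value stays far below the 0.90 cut-off; two constructs sharing a factor would push it towards 1.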

5.4 Coefficient of Determination

The coefficient of determination, generally denoted R-squared (R2), is used to evaluate a research model's fitness. It is accepted that an R2 value of 0.75 indicates a substantial fit, while an R2 below 0.25 indicates a weaker fit [36]. Information security behaviour has an R2 of 0.174 (17.4%), which makes it a weak fit for the model; the behavioural intention variable has a moderate fit with an R2 value of 0.424 (42.4%).

End-User Security Behaviour During the COVID-19 Pandemic

5.5 Hypotheses Test Results

Bootstrapping was used to obtain the t-values for testing the study's hypotheses. As recommended, a sub-sample of 5,000 was drawn from the original sample [36]. The significance of using the bootstrap method in analysing data is that it gives the closest estimate using a simulated sample [36]. The hypotheses were tested using a two-tailed test with a significance level of 5%. Of the ten hypotheses, four are not supported: H2 (perceived severity), H4 (self-efficacy), H8 (attitude toward ISSP compliance), and H10 (environmental uncertainty). H1 (perceived vulnerability), H3 (response efficacy), H5 (response cost), H6 (intention), H7 (subjective norm), and H9 (security habits) were all supported. The results of the hypotheses tests are shown in Table 2.

Table 2. Hypotheses test results.

Hypothesis  Path                                              Path coefficient  t-values
H1          Perceived vulnerability → Intention               −0.195            3.162**
H2          Perceived severity → Intention                    0.099             1.901
H3          Response efficacy → Intention                     0.296             4.280**
H4          Self-efficacy → Intention                         0.007             0.103
H5          Response cost → Intention                         −0.236            4.172**
H6          Intention → Information security behaviour        0.260             3.521**
H7          Subjective norm → Intention                       0.231             3.121**
H8          Attitude toward ISSP compliance → Intention       0.029             0.423
H9          Security habits → Information security behaviour  0.284             3.963**
H10         Environmental uncertainty → Intention             0.005             0.068

Behavioural intention R2: 0.424; Information security behaviour R2: 0.174
*Significant at the 0.05 level; **Significant at the 0.01 level
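The bootstrap procedure described above can be sketched as follows. A plain least-squares slope stands in for a PLS path coefficient (SmartPLS estimates paths differently), the data are synthetic, and all variable names are illustrative.

```python
import random

# Sketch of bootstrapped t-values: resample the data with replacement,
# re-estimate the coefficient on each sub-sample, and divide the original
# estimate by the standard deviation of the bootstrap estimates.

def slope(xs, ys):
    # Ordinary least-squares slope of y on x.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

random.seed(1)
n = 222  # same size as the valid sample in this study
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.5 * xi + random.gauss(0, 0.5) for xi in x]  # hypothetical true path of 0.5

estimate = slope(x, y)
boot = []
for _ in range(1000):  # the study used 5,000 sub-samples; fewer here for brevity
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(slope([x[i] for i in idx], [y[i] for i in idx]))

mean_boot = sum(boot) / len(boot)
se = (sum((b - mean_boot) ** 2 for b in boot) / (len(boot) - 1)) ** 0.5
t_value = estimate / se  # compared against 1.96 for a two-tailed 5% test
```

A t-value above 1.96 corresponds to significance at the 0.05 level in the two-tailed test used above.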

6 Discussion

This study presents several key findings, each of which contributes to both theory and practice. On the theoretical level, we evaluate the security behaviours of students at an HEI during the COVID-19 pandemic using an integrated model that combines TPB, PMT and a new additional construct, environmental uncertainty. Our results indicate that perceived vulnerability impacts behavioural intention to practice information security; these results are consistent with previous studies [15,37]. We, however, found that perceived severity did not have a significant impact on behavioural intention to perform information security. This hypothesis is supported in one previous study [9], but another study in a similar context did not support it [12].


Our result suggests that students' behavioural intention was influenced by their perception of susceptibility to threats; however, the severity of threats did not influence their intention to practice information security protective behaviours. Our findings indicate that response efficacy impacts behavioural intention to practice information security, which is consistent with previous studies [9,12]. This suggests that students have faith in the effectiveness of their response to threats with whatever recommended measures are at their disposal. Response cost was found to significantly impact behavioural intention to practice information security; this is consistent with previous studies [12] and suggests that students feel that maintaining information security is an expensive exercise. In contrast, self-efficacy did not have an impact on behavioural intention to practice information security. Self-efficacy has produced mixed results in the IS security literature [9,12]. Our result suggests that students do not have faith in their ability to mitigate threats, which might be due to prior experiences with virus infections and security breaches. Subjective norms were also found to significantly impact intention to perform information security; this is consistent with previous research [37]. In testing attitude toward compliance with ISSP, it was found that this attitude had no impact on behavioural intention to practice information security, which contradicts previous studies [28]. Our result suggests that, since the study was conducted while students were studying remotely during the COVID-19 pandemic, students did not feel that following security policy was necessary. Consistent with previous research [12], behavioural intention to practice information security significantly impacts information security behaviour. The study findings also suggest that security habits positively impact students' information security behaviours. These results are consistent with previous research [12]. Our result shows that students' routine habits have a significant impact on their information security behaviour. Furthermore, the study findings suggest that environmental uncertainty did not impact behavioural intention to practice information security. While not consistent with previous research [38], this shows that the study population did not feel that uncertainty was a factor in their behavioural intention to perform information security. Our result suggests that respondents were able to cope with environmental uncertainty with resilience and success and to maintain security behaviour.

On the practical level, the factors of the PMT, such as perceived vulnerability, response cost and self-efficacy, are essential in responding to security threats. So, when designing a security plan for an HEI, it would be beneficial to focus on security training in areas such as technical security tools. We also found that subjective norm plays an essential role in students' intention to perform information security. Thus, security awareness may be necessary for any future security plan for HEI students, as students may significantly influence other students' security behaviour. This study also found that students' security habits play a role in information security behaviour; therefore, it would be beneficial for training to be made routine so that security habits are positively solidified.


The limitations of this study create several opportunities for further research. Environmental uncertainty did not have statistically significant weight. Its insignificant effect on behavioural intention may be due to reasons such as respondents not understanding the questions associated with this factor in the context of information security, given its novelty. Another explanation might be that the study population did not understand that the uncertainty of the COVID-19 pandemic, which brought the desire to search for and receive information about COVID-19/Coronavirus, also brought security issues. Reports have shown a massive increase in security breaches in 2020 [3]; as people searched for and were eager to receive information about the COVID-19 pandemic, they became vulnerable to cyber-attacks. Hence, future research would benefit from exploring the impact that uncertainty has had on the actual security behaviour of HEI students during the COVID-19 pandemic and attitudes towards COVID-19 related cyber-attacks.

7 Conclusion

In this paper, we aimed to understand the factors that motivate end-user security behaviour at HEIs, considering the environmental uncertainty caused by the COVID-19 pandemic. This study determined that the PMT and TPB are indeed suitable for determining factors that motivate security behaviour. We also evaluated the effect of environmental uncertainty on security behaviour intentions. With the help of 222 responses from HEI students, we performed an empirical test of the proposed model. Our results suggest that response efficacy, response cost, and subjective norm significantly affect behavioural intention, which in turn is a significant predictor of HEI students' information security behaviour. Security habits also showed a significant effect on HEI students' information security behaviour, while perceived severity, self-efficacy, and attitude toward ISSP compliance did not significantly affect behavioural intention to practice information security. Future research would benefit from a more comprehensive definition of attitude toward ISSP compliance; it is possible that the treatment of compliance in the HEI context of this study was too simplistic, which may have influenced the results. This study incorporated environmental uncertainty due to the ongoing COVID-19 pandemic, which is new in the context of information security, and did not find any impact of environmental uncertainty on behavioural intention. Future research would benefit from exploring the impact that uncertainty has had on the actual security behaviour of HEI students during the COVID-19 pandemic and attitudes towards COVID-19 related cyber-attacks. This study did not measure how student attitudes evolved as the pandemic developed; further research is needed to investigate this issue, which could be carried out with in-depth interviews and focus group discussions.

A Appendix

Table 3. Instrument documentation

Construct: Information security behaviour (source: [25])
– I regularly check and erase viruses and malicious software
– I instantly delete dubious e-mails without reading them

Construct: Perceived vulnerability
– There is a chance that my personal information has been leaked due to hacking [29]
– There is a chance that my anti-virus has not been updated in a long time [33]

Construct: Perceived severity (source: [25])
– Losing data privacy as a result of hacking would be a serious problem for me
– Becoming a victim of a cyberattack would result in my losing a lot of valuable, important data

Construct: Response efficacy
– Using security technologies is effective for protecting confidential information [25]
– Taking preventive measures is effective for protecting my personal information [30]
– Enabling security measures on my computer is an effective way of preventing computer data from being damaged by malicious software such as viruses [30]

Construct: Self-efficacy
– I am able to protect my personal information from external threats [25]
– I am able to protect data on my computer from being damaged by external threats [14]

Construct: Behavioural intention
– I will aggressively use security technologies to protect confidential information [25]
– I will never share important personal information [33]

Construct: Subjective norm (source: [14])
– If I enthusiastically make use of security technologies, most of the people who are important to me would endorse it
– Most important people in my life think it is a good idea to take precautionary measures to protect personal information

Construct: Response cost (source: [12])
– Obtaining the latest security technology to safeguard confidential information is irritating
– Maintaining security measures (such as changing the password regularly) to protect personal information is a burden

Construct: Security habit (source: [26])
– I should regularly delete viruses and malicious software
– I routinely send dodgy e-mails to the recycle bin

Construct: Attitude toward compliance with ISSP (source: [24])
– Following the institution's ISSP is a good idea
– Following the institution's ISSP is a necessity
– Following the institution's ISSP is beneficial

Construct: Environmental uncertainty (source: [32])
– I feel that e-mails that are COVID-19 related are without a doubt safe to follow
– I feel conflicted about the need to reflect on e-mails requesting my personal information if these e-mails are COVID-19 related

Note: Items were measured on a 7-point scale from "strongly disagree" to "strongly agree."


References

1. Domínguez, C.M.F., Ramaswamy, M., Martinez, E.M., Cleal, M.G.: A framework for information security awareness programs. Issues Inf. Syst. 11(1), 402–409 (2010)
2. Beautement, A., Sasse, M.A., Wonham, M.: The compliance budget: managing security behaviour in organisations. In: Proceedings of the 2008 New Security Paradigms Workshop, pp. 47–58 (2008)
3. Naidoo, R.: A multi-level influence model of COVID-19 themed cybercrime. Eur. J. Inf. Syst. 29(3), 306–321 (2020)
4. Pattinson, M., Parsons, K., Butavicius, M., McCormac, A., Calic, D.: Assessing information security attitudes: a comparison of two studies. Inf. Comput. Secur. 24(2), 228–240 (2016)
5. Rupere, T., Muhonde, M.: Towards minimizing human factors in end-user information security (2012)
6. Nasir, A., Arshah, R.A., Ab Hamid, M.R.: The significance of main constructs of theory of planned behavior in recent information security policy compliance behavior study: a comparison among top three behavioral theories. Int. J. Eng. Technol. 7(2.29), 737–741 (2018)
7. Dang-Pham, D., Pittayachawan, S., Bruno, V.: Why employees share information security advice? Exploring the contributing factors and structural patterns of security advice sharing in the workplace. Comput. Hum. Behav. 67, 196–206 (2017)
8. Tsai, H.S., Jiang, M., Alhabash, S., LaRose, R., Rifon, N.J., Cotten, S.R.: Understanding online safety behaviors: a protection motivation theory perspective. Comput. Secur. 59, 138–150 (2016)
9. Holmes, M., Ophoff, J.: Online security behaviour: factors influencing intention to adopt two-factor authentication. In: 14th International Conference on Cyber Warfare and Security, ICCWS 2019, p. 123 (2019)
10. Moletsane, T., Tsibolane, P.: Mobile information security awareness among students in higher education: an exploratory study. In: 2020 Conference on Information Communications Technology and Society (ICTAS), pp. 1–6. IEEE (2020)
11. Maddux, J.E., Rogers, R.W.: Protection motivation and self-efficacy: a revised theory of fear appeals and attitude change. J. Exp. Soc. Psychol. 19(5), 469–479 (1983)
12. Yoon, C., Hwang, J.W., Kim, R.: Exploring factors that influence students' behaviors in information security. J. Inf. Syst. Educ. 23(4), 407–416 (2012)
13. Tu, Z., Yuan, Y., Archer, N.: Understanding user behaviour in coping with security threats of mobile device loss and theft. Int. J. Mob. Commun. 12(6), 603–623 (2014)
14. Yoon, C., Kim, H.: Understanding computer security behavioral intention in the workplace. Inf. Technol. People (2013)
15. Srisawang, S., Thongmak, M., Ngarmyarn, A.: Factors affecting computer crime protection behavior. In: PACIS, p. 31 (2015)
16. Ajzen, I.: The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 50(2), 179–211 (1991)
17. Chen, Y., Zahedi, F.M.: Individuals' internet security perceptions and behaviors: polycontextual contrasts between the United States and China. MIS Q. 40(1), 205–222 (2016)


18. Safa, N.S., Sookhak, M., Von Solms, R., Furnell, S., Ghani, N.A., Herawan, T.: Information security conscious care behaviour formation in organizations. Comput. Secur. 53, 65–78 (2015)
19. Johnston, A.C., Warkentin, M.: Fear appeals and information security behaviors: an empirical study. MIS Q. 34(3), 549–566 (2010)
20. Cheng, L., Li, Y., Li, W., Holm, E., Zhai, Q.: Understanding the violation of IS security policy in organizations: an integrated model based on social control and deterrence theory. Comput. Secur. 39, 447–459 (2013)
21. Williams, A.S., Maharaj, M.S., Ojo, A.I.: Employee behavioural factors and information security standard compliance in Nigeria banks. Int. J. Comput. Digit. Syst. 8(04), 387–396 (2019)
22. Rogers, R.W.: A protection motivation theory of fear appeals and attitude change. J. Psychol. 91(1), 93–114 (1975)
23. Foltz, C.B., Newkirk, H.E., Schwager, P.H.: An empirical investigation of factors that influence individual behavior toward changing social networking security settings. J. Theor. Appl. Electron. Commer. Res. 11(2), 1–15 (2016)
24. Ifinedo, P.: Information systems security policy compliance: an empirical study of the effects of socialisation, influence, and cognition. Inf. Manag. 51(1), 69–79 (2014)
25. Workman, M., Bommer, W.H., Straub, D.: Security lapses and the omission of information security measures: a threat control model and empirical test. Comput. Hum. Behav. 24(6), 2799–2816 (2008)
26. Limayem, M., Khalifa, M., Chin, W.W.: Factors motivating software piracy: a longitudinal study. IEEE Trans. Eng. Manag. 51(4), 414–425 (2004)
27. Milliken, F.J.: Three types of perceived uncertainty about the environment: state, effect, and response uncertainty. Acad. Manag. Rev. 12(1), 133–143 (1987)
28. Straub, D.W.: Validating instruments in MIS research. MIS Q. 13(2), 147–169 (1989)
29. Woon, I., Tan, G.W., Low, R.: A protection motivation theory approach to home wireless security (2005)
30. Ng, B.Y., Xu, Y.: Studying users' computer security behavior using the Health Belief Model. In: PACIS 2007 Proceedings, p. 45 (2007)
31. Chen, X., Zhang, X.: How environmental uncertainty moderates the effect of relative advantage and perceived credibility on the adoption of mobile health services by Chinese organizations in the big data era. Int. J. Telemed. Appl. 2016 (2016)
32. Pavlou, P.A., Liang, H., Xue, Y.: Understanding and mitigating uncertainty in online exchange relationships: a principal-agent perspective. MIS Q. 31(1), 105–136 (2007)
33. Kautondokwa, P.: Factors that motivate end-user security behaviour in higher education: a study of UCT during COVID-19. Department of Information Systems, University of Cape Town, South Africa (2020). Unpublished Honours Project
34. Hair, J.F., Jr., Hult, G.T.M., Ringle, C., Sarstedt, M.: A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM). Sage Publications, Los Angeles (2016)
35. Henseler, J., Ringle, C.M., Sarstedt, M.: A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 43(1), 115–135 (2014). https://doi.org/10.1007/s11747-014-0403-8
36. Hair, J.F., Risher, J.J., Sarstedt, M., Ringle, C.M.: When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. (2019)


37. Martens, M., De Wolf, R., De Marez, L.: Investigating and comparing the predictors of the intention towards taking security measures against malware, scams and cybercrime in general. Comput. Hum. Behav. 92, 139–150 (2019)
38. Sharma, P., Leung, T.Y., Kingshott, R.P., Davcik, N.S., Cardinali, S.: Managing uncertainty during a global pandemic: an international business perspective. J. Bus. Res. 116, 188–192 (2020)

What Parts of Usable Security Are Most Important to Users?

Joakim Kävrestad1(B), Steven Furnell2, and Marcus Nohlberg1

1 University of Skövde, Skövde, Sweden
{joakim.kavrestad,marcus.nohlberg}@his.se
2 University of Nottingham, Nottingham, UK
[email protected]

Abstract. The importance of the human aspects of cybersecurity cannot be overstated in light of the many cybersecurity incidents stemming from insecure user behavior. Users are expected to engage in secure behavior through the use of security features or procedures, but those struggle to achieve widespread use, and one hindering factor is usability. While several previous papers have studied various usability factors in the cybersecurity domain, a common understanding of usable security is missing. Further, usability covers a large range of aspects, and understanding which aspects users prioritize is integral to the development of truly usable security features. This paper builds on previous work and investigates which usability factors users prioritize and which demographic factors affect the perception of usability factors. This is done through a survey answered by 1452 respondents from Sweden, Italy and the UK. The results show that users prefer security functions that minimize resource consumption in terms of cost, device performance and time. The study further demonstrates that users want security functions to require as little effort as possible and to just work. Further, the study determines that nation of residence and IT-competence greatly impact the perception of usability for security functions, while gender and age do so to a much lesser extent.

Keywords: Usability · Usable security · Cyber security · Human · User · Perception

© IFIP International Federation for Information Processing 2021
Published by Springer Nature Switzerland AG 2021
L. Drevin et al. (Eds.): WISE 2021, IFIP AICT 615, pp. 126–139, 2021.
https://doi.org/10.1007/978-3-030-80865-5_9

1 Introduction

Cybersecurity is a property that is undeniably integral to modern individuals, organisations, and even nations [1,2]. Much like [3], we consider cybersecurity to be a socio-technical property, and a high level of security can only be achieved if social as well as technical factors are considered [4]. The importance of the social, or human, side of security is widely acknowledged by researchers as well as practitioners [5,6]. Several recent industry reports even suggest that the human element is a part of most cybersecurity incidents, further emphasizing its importance [7,8]. On this note, a preferable scenario is that users increase their security


level through use of security functions such as e-mail encryption or multi-factor authentication, and security practices such as good password creation and management strategies [9–12]. However, while such practices have been on the market for several decades, they are not in widespread use. As demonstrated in several previous papers, the (perhaps perceived) lack of usability seems to be a big part of the answer as to why [13–15]. While the primary task of a security function is to provide security, functions designed to be used by end-users cannot do so unless they are adopted by users and correctly used. Incorrect use can lead to a false sense of security or even be harmful [9]. For instance, password managers are considered a good way to use unique passwords for various accounts while only having to remember one master password. However, if that password is compromised, all accounts related to it are also compromised [16]. The consequence of security functions or practices not being adopted is obvious: the security they intend to provide is not provided. This is what often happens with so-called secure password guidelines, which prompt users to use long and complex passwords. Many users are unwilling to follow such guidelines and select insecure passwords instead [17]. There are several theories that can be used to explain how users choose to adopt security functions and procedures. Three theories commonly used in cybersecurity can be briefly described as follows:

– Protection Motivation Theory (PMT), where [18] describe that people's decisions to protect themselves against a supposed threat are influenced by how severe and likely the person perceives the threat to be, how effective a preventative measure is, and the person's perceived ability to engage in that measure.
– Theory of Planned Behaviour (TPB), which highlights that actual behaviour is influenced by a person's perception of how easy or difficult a certain behaviour is [19].
– Technology Acceptance Model (TAM), which in its original form describes that a user's decision to adopt a technology is based on how useful she perceives the technology to be, and how easy she perceives it to be to use [20,21].

Applied to the cybersecurity domain, PMT, TPB, and TAM demonstrate that usability is a precursor to user adoption of security functions and practices. User adoption, in turn, is an obvious precursor to whatever security a function or practice is intended to add. As such, usability is a crucial aspect to research in relation to end-user security. While there has been a fair bit of research conducted on the usability of security functions, a fundamental issue seems to be that there is no common understanding of what usability means to the cybersecurity community. This is demonstrated by how various previous papers consider usability of security functions in vastly different ways. To exemplify, [22] evaluates a subset of usability criteria in the context of phishing, and [23] discusses usability in the context of IoT access control without further describing what usability in that context entails. Further, the System Usability Scale (SUS), presented by [24], has been adopted in the cybersecurity domain by, for instance, [25]. While SUS measures important aspects of usability, it does not factor in all aspects that are


considered important in the cybersecurity domain, for instance the risk associated with incorrect use [9]. A recent literature review [26] summarizes how usable security has been discussed in 70 scientific publications from 2015 to 2020. [26] presents 31 aspects of how usability has been studied in the cybersecurity domain and groups those into 14 themes. Our study seeks to expand on the work conducted by [26] by exploring which of those aspects are considered the most important by users. The study was performed as a survey reaching more than 1400 respondents and provides insight into which usability features users perceive as most important. It also investigates how the perception of usability features is impacted by various demographic variables. As such, it provides insight that can support practitioners in the development of usable security functions and procedures. The study further provides the research community with a better understanding of what users consider to be the most important usability aspects, and how demographic aspects impact the perception of usable security. The rest of this paper is structured as follows: Sect. 2 describes the methodological approach, Sect. 3 presents and analyses the results, which are further discussed in Sect. 4, before the paper is concluded and directions for future work are presented in Sect. 5.

2 Methodology

With the purpose of collecting quantitative data from a large sample of respondents, a web-based survey was used. The survey panel company Webropol was hired for the distribution of the survey; while this approach restricted the range of possible participants to the members of Webropol's panel, it is a practically feasible method of achieving a sample of high quality [27]. It also minimizes demographic bias and the accidental sampling bias common when distributing web-based surveys using, for instance, social media [28]. A stratified sampling approach was used to generate a probability sample [29]. The panel members were split into strata based on gender, age, and geographical region. Equal proportions from each stratum were then recruited using simple random sampling [30]. The primary target of the survey was Swedish users, and the target sample size for Swedes was set to 800 respondents. With the goal of comparing the results to users from other European nations, samples with a target size of 300 respondents were drawn from the UK and Italy. The UK and Italy were chosen since they, according to [31], belong to different culture groups than Sweden. The survey was part of a larger survey and, for the purpose of this paper, contained demographic questions describing the respondents' perceived gender, IT-competence and age. The participants were then asked to pick the five most and least important usability aspects from a list of 21 aspects derived from [26]. The original list by [26] included 31 aspects. However, the survey's development and testing phase revealed that several of those were too similar, or could be perceived in different ways by the respondents. They were therefore combined and/or reworded to ensure that the list of options was easy for the respondents to


understand. For instance, [26] describes compatibility with systems and services, and compatibility with other security solutions, as two separate usability aspects. In this study, they were combined into one answer option stating "It should work with all sites and services I use so that I only need one tool of each type". Further, [26] describes several types of interference with the user's workflow, and those were combined into one statement expressed as "It should not interfere with the way I work". The complete list of aspects is presented along with the results to save space, and appeared to the participants in randomized order to minimize responder bias. Both questions were followed by a free-text field where the respondents could add additional comments. Before the survey was distributed, it was taken through a pilot procedure in three steps:

1. A small sample was recruited using social media, and those respondents were specifically asked to provide feedback on the structure and readability of the survey.
2. Two respondents were asked to fill out the survey under the personal supervision of a researcher; they were also asked to continuously express their thoughts while filling it out.
3. The survey was distributed to a sample of peers who were asked to assess it in relation to the research aim.

For data analysis, the percentage of respondents picking each aspect was first reported, and a maximum 95% confidence interval was computed as suggested by [32]. Next, the impact of various demographic factors was investigated by testing how the distribution of answers was impacted when the results of the full sample were divided based on nation, gender, IT-proficiency and age, respectively. The statistical analysis was performed using chi-square because of the non-parametric nature of the collected data [33], and formally measured whether the distribution of data points within a demographic group differed from an expected distribution with statistical significance. The conventional significance level of 95% was adopted in this study. Note that, while data is presented as percentages throughout this study, frequencies in absolute numbers were used for the chi-square tests.
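The two computations described above can be sketched as follows. The contingency counts are hypothetical (back-derived from percentages reported in the results, rounded), and the helper names are illustrative.

```python
# Sketch of the analyses: the maximum 95% confidence interval for a
# reported proportion, and a Pearson chi-square test on answer
# frequencies for a 2x2 table. The counts below are hypothetical.

def chi_square(table):
    # Pearson chi-square statistic for a contingency table (list of rows).
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = rows[i] * cols[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Maximum half-width of a 95% CI for a proportion (worst case p = 0.5).
n = 1452
max_ci = 1.96 * (0.25 / n) ** 0.5  # about 2.6 percentage points

# Hypothetical 2x2 table: picked / did not pick one option, Sweden vs UK,
# with counts back-derived from the reported percentages, rounded.
observed = [[346, 488],   # Sweden
            [153, 151]]   # UK
stat = chi_square(observed)
significant = stat > 3.841  # 95% critical value with 1 degree of freedom
```

With these counts, the statistic exceeds the 95% critical value, so the difference in answer distribution between the two groups would be deemed significant.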

3 Results and Analysis

Webropol distributed the survey to a sample of 10 times the target sample size, and the survey was open for one week. A total of 1452 respondents completed the survey, distributed over the national answer groups as follows:

– Sweden: 834 participants
– Italy: 314 participants
– UK: 304 participants.

The respondents were rather evenly divided based on gender and spread through various age groups, as shown in Table 1. However, the reported level of IT-competence differed between the groups, with Italian respondents reporting to be more IT-competent, on average.

Table 1. Demographic overview (in percent)

Gender                     Sweden  UK    Italy
Female                     45.6    53.3  43.9
Male                       54.2    46.7  55.7
Other/Prefer not to say    0.3     0     0.2

Age                        Sweden  UK    Italy
18–25                      8.3     0     4.1
26–35                      20.3    18.1  21.3
36–45                      18.8    25.7  31.2
46–55                      23.9    18.1  20.7
56–65                      15.3    21.4  15.3
66–75                      13.3    15.5  7.3
Above 76                   0.1     1.0   0.3

IT-Competence                                                                Sweden  UK    Italy
Professional - working in, hold a degree in, or study IT                     9.4     9.9   22.3
Expert user - interested user that knows my way around IT; usually asked
to help people with home routers, printer installations etc.                 22.3    19.4  27.7
Average user - I use IT with no major problems but need help occasionally    65.1    64.1  38.5
Below average - I have a hard time using IT and feel like I need help
with tasks that others do with ease                                          3.2     6.6   11.5

The respondents were then provided with the following information before they were asked to select the five usability aspects they perceived as most important and the five they perceived as least important:

"This part of the survey concerns what properties a security tool or function should possess for you to use it. We want you to select the five most and the five least important properties from a list of 21 properties. A security tool or function includes anything designed to improve your level of IT-security and that you can choose to use. Some examples are:

– Password creation guidelines (suggestions for password length, complexity, etc.)
– Encryption software, for instance, e-mail encryption tools used to encrypt e-mails or data encryption tools used to encrypt your computer
– Browsing filters that warn you if you are visiting a web site that can be fraudulent
– Malware (e.g. viruses and ransomware) protection software.

We want to know which of the following properties you consider to be the most important and the least important. The first question will ask you to check the five most important properties and the second will ask you to check the five properties you think are least important."

What Parts of Usable Security Are Most Important to Users?

All questions are followed by a text-box where you can input additional comments.

The available options and the percentage of respondents choosing each option are displayed in Table 2, sorted in order of preference according to the complete data set.

Table 2. Percentage of participants picking the respective options as the most important (Sweden / UK / Italy / All)

It should not be costly: 41.5 / 50.3 / 44.0 / 43.7
It should be easy to understand and navigate the interface: 42.6 / 42.8 / 32.8 / 40.5
It should not impact the performance of my device: 42.5 / 46.7 / 30.3 / 40.2
It should not take a lot of time to use: 41.0 / 32.6 / 35.4 / 38.0
Information about how to use it should be easy to find and understand: 36.3 / 32.2 / 32.2 / 34.6
It should work with all sites and services I use so that I only need one tool of each type: 37.4 / 31.0 / 20.4 / 32.3
It should not interfere with the way I work: 31.5 / 30.9 / 24.2 / 29.9
It should require as little interaction from me as possible: 33.3 / 27.6 / 21.0 / 29.5
I should not need to learn how to configure or manage it, and default configuration should be safe to use: 30.7 / 24.3 / 22.3 / 27.6
It should not take a lot of time to install: 25.2 / 28.6 / 27.4 / 26.4
When I need to make a decision, the tool should provide information about the different options: 20.0 / 20.7 / 27.7 / 21.8
It should not put me under time pressure: 20.4 / 19.1 / 20.4 / 21.1
It should be developed by, or recommended by, someone I trust: 20.0 / 19.4 / 19.1 / 19.7
Benefits and effects of using different security options should be clearly presented: 14.0 / 15.8 / 29.9 / 17.8
The tool should provide feedback such as progress updates, system status etc.: 15.4 / 16.5 / 16.2 / 15.8
It should allow me to customize the configuration to my liking and adapt it to my skill level: 13.2 / 14.1 / 21.7 / 15.2
I should be able to adjust the interface to my preference: 11.3 / 14.5 / 25.16 / 14.9
It should be predictable; similar tasks should work in the same way and it should be easy to recognize requirements and conditions during setup: 14.2 / 13.5 / 16.9 / 14.6
It should be possible to handle accounts for different users: 8.2 / 14.5 / 20.7 / 12.2
It should be perceived as cool by others: 2.6 / 4.9 / 12.4 / 5.2
Maximum 95% CI: 3.3 / 5.6 / 5.4 / 2.6

As seen in Table 2, a general tendency is that the respondents favour aspects that minimize cost and resource consumption, as well as aspects speaking to ease of use. The preferred ease of use properties can be summarized as properties where interaction, and the time consumed using the security function, are minimized. On the other hand, properties speaking to customizability are less favoured. National differences can be observed for several properties, and those will be further explored below.
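The paper does not state how the All column is computed, but it is consistent with a sample-size-weighted pooling of the three national percentages (n = 834, 304 and 314). A quick consistency check against two rows of Table 2:

```python
def weighted_all(sweden: float, uk: float, italy: float) -> float:
    """Pool national percentages into an overall value, weighting by
    the reported sample sizes (Sweden 834, UK 304, Italy 314)."""
    return (834 * sweden + 304 * uk + 314 * italy) / 1452

# "It should be easy to understand and navigate the interface"
print(round(weighted_all(42.6, 42.8, 32.8), 1))  # 40.5
# "It should not take a lot of time to use"
print(round(weighted_all(41.0, 32.6, 35.4), 1))  # 38.0
```

Both values match the reported All column, supporting this reading of the table.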


The results for the second question, asking the respondents to pick the five properties they perceived as least important, are presented in Table 3, sorted in order of preference according to the complete data set.

Table 3. Percentage of participants picking the respective options as the least important (Sweden / UK / Italy / All)

It should be perceived as cool by others: 79.6 / 66.8 / 39.8 / 68.3
It should be possible to handle accounts for different users: 48.8 / 38.8 / 25.5 / 41.7
I should be able to adjust the interface to my preference: 36.1 / 29.9 / 28.0 / 33.6
It should allow me to customize the configuration to my liking and adapt it to my skill level: 36.9 / 30.6 / 22.6 / 32.5
It should be developed by, or recommended by, someone I trust: 31.4 / 29.0 / 32.8 / 31.2
The tool should provide feedback such as progress updates, system status etc.: 33.6 / 28.0 / 26.1 / 30.8
It should not take a lot of time to install: 31.1 / 22.0 / 27.4 / 28.4
It should not put me under time pressure: 24.2 / 29.3 / 31.5 / 26.9
When I need to make a decision, the tool should provide information about the different options: 19.8 / 30.3 / 27.7 / 23.7
Benefits and effects of using different security options should be clearly presented: 23.7 / 19.1 / 22.3 / 22.5
It should be predictable; similar tasks should work in the same way and it should be easy to recognize requirements and conditions during setup: 21.1 / 24.0 / 23.9 / 22.3
It should require as little interaction from me as possible: 16.2 / 22.0 / 28.7 / 20.1
It should not be costly: 17.2 / 18.8 / 28.0 / 19.8
I should not need to learn how to configure or manage it, and default configuration should be safe to use: 17.4 / 22.7 / 20.4 / 19.2
It should work with all sites and services I use so that I only need one tool of each type: 13.9 / 20.1 / 16.6 / 15.8
It should not interfere with the way I work: 13.1 / 15.1 / 20.7 / 15.2
It should not take a lot of time to use: 10.0 / 17.1 / 21.0 / 13.8
It should not impact the performance of my device: 10.8 / 14.8 / 19.1 / 13.4
Information about how to use it should be easy to find and understand: 8.2 / 11.8 / 19.4 / 11.4
It should be easy to understand and navigate the interface: 7.1 / 9.9 / 18.5 / 10.1
Maximum 95% CI: 2.7 / 5.3 / 5.4 / 2.4

As seen in Table 3, the least preferred options mirror the most preferred options: customizability options are in this case selected over options speaking to ease of use and a limited need for interaction. Further, Tables 2 and 3 suggest that the participants do not care about how cool the functions are perceived to be by others, and are not interested in spending time and money on security features and functions.


The next part of the analysis investigated how the results are impacted by the examined demographic aspects: nation, perceived gender, perceived IT-competence and age. Chi-square was used to measure whether the distribution of data points within a demographic group differed from an expected distribution, given the complete data set. The analysis first measured the effect of each individual demographic on each data point, and then measured the effect of gender, age and IT-competence within each national answer group. As such, 168 tests were performed and, given the permitted space, they are not presented in detail. To exemplify, the first test measured the impact of nation on the option "It should not be costly" for the question "most". Key statistics are presented in Table 4.

Table 4. Example of statistical analysis using chi-square and the option "It should not be costly" for the question "most" (Sweden / UK / Italy)

Yes - Observed: 344 / 153 / 138
Yes - Expected: 364.7 / 132.9 / 137.3
No - Observed: 490 / 151 / 176
No - Expected: 469.3 / 171.1 / 176.7
Chi-square: 7.475; Sig.: 0.024

The hypothesis tested in this example is that nation impacts the number of respondents who perceive "It should not be costly" as one of the most important usability aspects for security features. The hypothesis is supported when the p-value is below 0.05, which is true in this case.

Table 5 provides an overview of the demographic aspects that were shown to have a significant impact on which usability aspects respondents rank as most important; significant tests are marked with an asterisk (*). As shown by Table 5, demographics do impact which usability aspects respondents perceive as most important: nation of residence and perceived IT-competence are the most prominent demographic aspects, while age and gender impact far fewer of the usability aspects. It could, however, be noted that nation and IT-competence impact the same aspects in nine cases, and that the sample from Italy is distributed differently than the other sampling groups on the demographic of IT-competence (as seen in Table 1). Thus, it is hard to say if the perception of those aspects is impacted by IT-competence, nation, or both. Table 6 provides an overview of the demographic aspects that were shown to have a significant impact on which usability aspects respondents rank as least important; significant tests are marked with an asterisk (*). While there is some variation between Tables 5 and 6, nation and IT-competence are the demographic factors influencing the perception of most usability aspects. Gender and age, on the other hand, influence below 25% of the aspects.
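The reported statistics can be reproduced from the observed counts alone. A plain-Python sketch follows; the national yes/no pairs are matched to nations by their totals (834, 304 and 314), and for a 2×3 table there are two degrees of freedom, for which the chi-square survival function reduces to exp(−χ²/2).

```python
import math

# Observed counts (yes, no) for "It should not be costly" as most important
observed = {"Sweden": (344, 490), "UK": (153, 151), "Italy": (138, 176)}

yes_total = sum(y for y, _ in observed.values())   # 635
no_total = sum(n for _, n in observed.values())    # 817
grand = yes_total + no_total                       # 1452

chi2 = 0.0
for yes, no in observed.values():
    col = yes + no
    exp_yes = col * yes_total / grand  # expected count under independence
    exp_no = col * no_total / grand
    chi2 += (yes - exp_yes) ** 2 / exp_yes + (no - exp_no) ** 2 / exp_no

# df = (2 - 1) * (3 - 1) = 2, so the p-value is simply exp(-chi2 / 2)
p_value = math.exp(-chi2 / 2)

print(round(chi2, 3), round(p_value, 3))  # 7.475 0.024
```

The output matches the chi-square statistic and significance value reported in Table 4.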


Table 5. Overview of which demographics (nation, age, IT-competence, gender) had a significant impact on the usability aspects perceived as most important, in the complete data set (n = 1452), ordered with the most frequently picked option in the complete sample on top. [The options are those listed in Table 2; the per-cell asterisk placements could not be recovered from the source.]


Table 6. Overview of which demographics (nation, age, IT-competence, gender) had a significant impact on the usability aspects perceived as least important, in the complete data set (n = 1452), ordered with the most frequently picked option in the complete sample on top. [The options are those listed in Table 3; the per-cell asterisk placements could not be recovered from the source.]

4 Discussion

The aim of this study was twofold: the first aim was to analyze which usability aspects users consider most important for security functions, and the second was to identify demographic aspects that affect how those usability aspects are perceived. The study continued the work by [26] and derived 21 usability aspects from the list of 31 usability aspects presented there. The study was conducted using a web-based survey in order to generate a large sample of respondents, and resulted in a data set with survey data from 1452 individual respondents from Sweden, Italy and the UK. The survey was carefully developed by the research team and evaluated in a three-step pilot procedure to ensure that it was appropriate for the study aim and easy for respondents to understand.

The first aim was met by two questions where the respondents were asked to select the five usability aspects they perceived as most and least important. The intent was to let the two questions combined serve as a form of triangulation around the question of which aspects the respondents considered to be most important [34]. The results show that the usability aspects perceived as most important are those reflecting resource minimization and ease of use, while the aspects perceived as least important reflect customizability of interfaces and behaviour. One respondent commented as follows: "Needs to be free. Runs in the background with no input from me. Should run without impacting on my use of technology". That quote is a good summary of the study's results with regard to the first aim, and the notion aligns well with previous research suggesting that users just want cybersecurity to work.

The second aim was met by dividing the data set based on nation, gender, age and reported IT-competence and analyzing whether any of those factors significantly impacted the respondents' perception of the usability aspects. Repeated chi-square tests revealed that nation and IT-competence each affected the perception of up to 75% of the included usability factors, while age and gender each impacted only about 25%. It should be noted that the distribution of answers to the demographic question about IT-competence is uneven between the national sample groups, and that can impact the results in this case. Still, the results do suggest that nation of residence impacts how users perceive the importance of usability aspects. This notion aligns well with previous research suggesting that both culture and IT-competence are important factors in human aspects of cybersecurity [35]. However, the study also suggests that age and gender do not affect how users prioritize usability aspects of security functions to any large extent, which is surprising given previous research suggesting age and gender to be important factors in cybersecurity in general [36–39].

By extending the work by [26], this study contributes to the academic community with increased understanding of what the concept of usable security entails. It does so by providing an analysis of which usability aspects users consider to be most important and is, to the best of our knowledge, the first publication of that sort. This paper also demonstrates the complex nature of the human aspects of cybersecurity and emphasizes the need for continued research in pursuit of generalizable results that can help the community move towards better cybersecurity behaviour with cost-effective means. As a contribution to practitioners, the results of this survey provide insight into which usability aspects to focus on when implementing security features. Since the results shed light on which usability features are prioritized by the most users, they also show which aspects a feature appealing to as many users as possible should include. Perhaps at least as important, they uncover which aspects are perhaps not worthwhile to put effort into.

5 Conclusions and Future Work

The first part of this study asked the respondents to select the five most and the five least important usability aspects of security functions; the available aspects are presented in Table 2. In summary, the results suggest that respondents prioritize resource-effective security functions: the functions should not be costly, impact device performance, or require a lot of time to use. This notion is emphasized by several free-text comments stating that "it should just be there and work". The results also show that the respondents want security functions to be easy to use and to understand. As a general rule, usability aspects speaking to more advanced use are found at the bottom of the list of preferred aspects. Those aspects include customizability, the ability to handle multiple accounts, and the existence of feedback from the security features.

The second part of the analysis evaluated whether nation, gender, age or IT-competence had an impact on which usability aspects are most or least preferred. This analysis showed that nation and IT-competence had the most widespread impact, with nation impacting the perception of about 75% of the aspects and IT-competence about 55%. It should here be noted that the answers to the demographic question about IT-competence are unevenly distributed between the national sample groups, and that can impact the results in this case. Finally, age and gender each impacted the perception of less than 25% of the included usability aspects, suggesting that those demographics are not as important when it comes to the perception of usability in relation to security functions.

This paper shows that several demographic aspects can impact which usability aspects users prioritize for security functions. Given the limitations of space, it was not possible to delve into the nature of this effect, and further analysis of the demographic effect in this, or other, data sets is a natural continuation of this work. Further areas for future work could be to expand on this study by including more demographic aspects, such as disabilities, or additional national groups.

References

1. Huskaj, G., Wilson, R.L.: An anticipatory ethical analysis of offensive cyberspace operations. In: Cruz, T., Simoes, P. (eds.) Proceedings of the 15th International Conference on Cyber Warfare and Security. Academic Conferences and Publishing International Ltd. (2020)
2. Vroom, C., Von Solms, R.: Towards information security behavioural compliance. Comput. Secur. 23(3), 191–198 (2004)
3. Al Sabbagh, B., Kowalski, S.: ST(CS)2 - featuring socio-technical cyber security warning systems. In: 2012 International Conference on Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), pp. 312–316 (2012)
4. Paja, E., Dalpiaz, F., Giorgini, P.: Managing security requirements conflicts in socio-technical systems. In: Ng, W., Storey, V.C., Trujillo, J.C. (eds.) ER 2013. LNCS, vol. 8217, pp. 270–283. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-41924-9_23
5. Furnell, S., Esmael, R., Yang, W., Li, N.: Enhancing security behaviour by supporting the user. Comput. Secur. 75, 1–9 (2018)
6. Cybint: 15 alarming cyber security facts and stats (2020). https://www.cybintsolutions.com/cyber-security-facts-stats/
7. EC-Council (2019). https://blog.eccouncil.org/the-top-types-of-cybersecurity-attacks-of-2019-till-date/
8. Soare, B.: Vectors of attack (2020). https://heimdalsecurity.com/blog/vectors-of-attack/
9. Whitten, A., Tygar, J.D.: Why Johnny can't encrypt: a usability evaluation of PGP 5.0. In: USENIX Security Symposium, vol. 348, pp. 169–184 (1999)
10. Aljahdali, H.M., Poet, R.: The affect of familiarity on the usability of recognition-based graphical passwords: a cross-cultural study between Saudi Arabia and the United Kingdom. In: IEEE International Conference on Trust, Security and Privacy in Computing and Communications, pp. 1528–1534
11. Alsaiari, H., Papadaki, M., Dowland, P., Furnell, S.: Graphical one-time password (GOTPass): a usability evaluation. Inf. Secur. J. 25(1–3), 94–108 (2015)
12. Das, S., Dingman, A., Camp, L.J.: Why Johnny doesn't use two factor: a two-phase usability study of the FIDO U2F security key. In: Meiklejohn, S., Sako, K. (eds.) FC 2018. LNCS, vol. 10957, pp. 160–179. Springer, Heidelberg (2018). https://doi.org/10.1007/978-3-662-58387-6_9
13. Florencio, D., Herley, C.: A large-scale study of web password habits. In: Proceedings of the 16th International Conference on World Wide Web, pp. 657–666. ACM (2007)
14. Benenson, Z., Lenzini, G., Oliveira, D., Parkin, S., Uebelacker, S.: Maybe poor Johnny really cannot encrypt: the case for a complexity theory for usable security. In: NSPW 2015, pp. 85–99 (2015)
15. Lerner, A., Zeng, E., Roesner, F.: Confidante: usable encrypted email: a case study with lawyers and journalists. In: 2017 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 385–400 (2017)
16. Chaudhary, S., Schafeitel-Tähtinen, T., Helenius, M., Berki, E.: Usability, security and trust in password managers: a quest for user-centric properties and features. Comput. Sci. Rev. 33, 69–90 (2019)
17. Haga, W.J., Zviran, M.: Question-and-answer passwords - an empirical evaluation. Inf. Syst. 16(3), 335–343 (1991)
18. Rogers, R.W.: A protection motivation theory of fear appeals and attitude change. J. Psychol. 91(1), 93–114 (1975)
19. Ajzen, I.: From intentions to actions: a theory of planned behavior. In: Kuhl, J., Beckmann, J. (eds.) Action Control. SSSP, pp. 11–39. Springer, Heidelberg (1985). https://doi.org/10.1007/978-3-642-69746-3_2
20. Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13, 319–340 (1989)
21. Davis, F.D.: A technology acceptance model for empirically testing new end-user information systems: theory and results. Ph.D. thesis, Massachusetts Institute of Technology (1985)
22. Marchal, S., Armano, G., Gröndahl, T., Saari, K., Singh, N., Asokan, N.: Off-the-hook: an efficient and usable client-side phishing prevention application. IEEE Trans. Comput. 66(10), 1717–1733 (2017)
23. He, W., et al.: Rethinking access control and authentication for the home internet of things (IoT). In: 27th USENIX Security Symposium (USENIX Security 2018), pp. 255–272 (2018)
24. Brooke, J.: SUS - a quick and dirty usability scale. Usability Eval. Ind. 189(194), 4–7 (1996)
25. Khan, H., Hengartner, U., Vogel, D.: Usability and security perceptions of implicit authentication: convenient, secure, sometimes annoying. In: Eleventh Symposium on Usable Privacy and Security (SOUPS 2015), pp. 225–239 (2015)
26. Lennartsson, M., Kävrestad, J., Nohlberg, M.: Exploring the meaning of "usable security". In: Clarke, N., Furnell, S. (eds.) HAISA 2020. IFIP AICT, vol. 593, pp. 247–258. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-57404-8_19
27. Rivers, D.: Sampling for web surveys. In: Joint Statistical Meetings, p. 4 (2007)
28. Culotta, A.: Reducing sampling bias in social media data for county health inference. In: Joint Statistical Meetings Proceedings, pp. 1–12. Citeseer (2014)
29. Henry, G.T.: Practical Sampling, vol. 21. Sage, Newbury Park (1990)
30. Scheaffer, R.L., Mendenhall, W., III, Ott, R.L., Gerow, K.G.: Elementary Survey Sampling. Cengage Learning (2011)
31. Inglehart, R., Welzel, C.: The WVS cultural map of the world. World Values Survey (2010)
32. Wheelan, C.: Naked Statistics: Stripping the Dread from the Data. W.W. Norton & Company, New York (2013)
33. Fowler, F.J., Jr.: Survey Research Methods. Sage Publications, Thousand Oaks (2013)
34. Lincoln, Y.S., Guba, E.G.: Naturalistic Inquiry (1985)
35. Joinson, A., van Steen, T.: Human aspects of cyber security: behaviour or culture change? Cyber Secur. Peer Rev. J. 1(4), 351–360 (2018)
36. Bansal, G., Hodorff, K., Marshall, K.: Moral beliefs and organizational information security policy compliance: the role of gender. In: Proceedings of the Eleventh Midwest United States Association for Information Systems, pp. 1–6 (2016)
37. Anwar, M., He, W., Ash, I., Yuan, X., Li, L., Xu, L.: Gender difference and employees' cybersecurity behaviors. Comput. Hum. Behav. 69, 437–443 (2017)
38. McGill, T., Thompson, N.: Gender differences in information security perceptions and behaviour. In: 29th Australasian Conference on Information Systems (2018)
39. Fatokun, F., Hamid, S., Norman, A., Fatokun, J.: The impact of age, gender, and educational level on the cybersecurity behaviors of tertiary institution students: an empirical investigation on Malaysian universities. J. Phys. Conf. Ser. 1339, 012098 (2019)

WISE Workshops

Foundations for Collaborative Cyber Security Learning: Exploring Educator and Learner Requirements

Jerry Andriessen1, Steven Furnell2(&), Gregor Langner3, Gerald Quirchmayr4, Vittorio Scarano5, and Teemu Tokola6

1 Wise & Munro, The Hague, The Netherlands
2 University of Nottingham, Nottingham, UK
[email protected]
3 Austrian Institute of Technology, Vienna, Austria
[email protected]
4 University of Vienna, Vienna, Austria
5 University of Salerno, Fisciano, Italy
6 University of Oulu, Oulu, Finland

Abstract. This brief paper outlines the background to a workshop session at the 14th World Conference on Information Security Education (WISE 14), drawing upon early findings from the Collaborative Cybersecurity Awareness Learning (COLTRANE) project funded under the European Union ERASMUS+ programme. It presents the background to the COLTRANE project and an outline of the workshop focus. The latter is based upon an investigation of current cyber security education delivery that has been conducted amongst existing educators and learners via prior survey-based data collection and follow-up workshop discussions within individual COLTRANE partner countries.

Keywords: Collaborative learning · Educators · Learners

1 Introduction

Cyber security is now an increasing area of focus within university-level educational provision, with related coverage within both undergraduate and postgraduate programmes, encompassing programmes specifically addressing the topic as well as those that integrate it as a tangible thematic strand. However, while there is clearly a level of broad demand, few other fields require such a holistic and multidisciplinary view as cyber security. As such, it poses a challenge for institutions and educators to make effective provision, and for learners to feel they are receiving an appropriate experience.

© IFIP International Federation for Information Processing 2021 Published by Springer Nature Switzerland AG 2021 L. Drevin et al. (Eds.): WISE 2021, IFIP AICT 615, pp. 143–145, 2021. https://doi.org/10.1007/978-3-030-80865-5

J. Andriessen et al.

2 The COLTRANE Project

COLTRANE aims to enhance cybersecurity education by introducing innovative concepts in the context of collaborative awareness education [1]. Traditional forms of education mainly focus on knowledge transmission, but in highly dynamic areas such as cybersecurity this does not lead to sufficient learning outcomes. We therefore need more innovative forms of education that aim at the development of joint action: being able to act in a variety of situations and knowing how to do this together. The COLTRANE consortium consists of six partners, combining expertise in the areas of higher education learning and teaching, cybersecurity, state of the art technology, and collaborative learning and educational design:

• Austrian Institute of Technology, Austria
• University of Nottingham, United Kingdom
• University of Oulu, Finland
• University of Salerno, Italy
• University of Vienna, Austria
• Wise & Munro, The Netherlands

The work conducted within COLTRANE ultimately aims to contribute by:

• exploiting the affordances of cyber-ranges and collaborative learning platforms to create hands-on activities, as well as collaborative reflection;
• supporting the co-design of learning activities together with teachers, according to a framework for developing cybersecurity awareness education;
• developing a toolkit for teachers for evaluation of learning activities by students; and
• supporting managers and policy makers by co-designing toolkits for institutional implementation planning.

However, a starting point is to benchmark the current position and feelings of educators and students already operating in the space, determine the aspects that they already consider to be well-served, and the areas in which COLTRANE's intended approaches could offer opportunities for improvement.

3 Workshop Focus

The aim of the workshop is to share and further explore the current provision of cybersecurity education, based upon views collected from academics and students in current programmes. The project has conducted a series of data collection activities amongst educators and students in the partner countries. Specifically, a pair of online surveys (one targeting educators, the other addressing learners) were distributed to relevant participants in partner countries, followed by a series of related workshops (each with approximately 6–8 participants) to explore and discuss matters in more detail. The overall focus of these activities has been to establish:

Foundations for Collaborative Cyber Security Learning

• interpretations of cybersecurity (e.g. the extent to which it is seen as an interdisciplinary topic, drawing upon topic areas beyond computer science and IT);
• the existing content and coverage of cybersecurity education (e.g. in terms of knowledge coverage and skills development, with specific reference frameworks provided by the Cyber Security Body of Knowledge [2] and the CIISec Skills Framework [3]);
• the modes of learning and delivery (e.g. incorporation of activities such as collaboration and problem-based learning, as well as the provision of facilities and practical experiences); and
• the extent of professional alignment (e.g. the extent of industry and professional body engagement within programme provision).

The workshop at WISE 14 shares the related findings as a basis for discussion and further sharing of experience amongst the attendees. For COLTRANE this provides a valuable opportunity to share, validate and extend the findings before the project moves into further design and development phases, while for WISE attendees it provides an insight into current challenges and approaches that could help to inform participants' own initiatives and future developments.

References

1. COLTRANE Homepage. https://coltrane.ait.ac.at. Accessed 15 April 2021
2. Rashid, A., Chivers, H., Danezis, G., Lupu, E., Martin, A.: The Cyber Security Body of Knowledge, Version 1.0, 31 October 2019 (2019). https://www.cybok.org/media/downloads/cybok_version_1.0.pdf
3. CIISec: CIISec Skills Framework, Version 2.4, Chartered Institute of Information Security, November 2019. https://www.ciisec.org/CIISEC/Resources/Capability_Methodology/Skills_Framework/CIISEC/Resources/Skills_Framework.aspx

Reimagining Inclusive Pedagogy in Cybersecurity Education (A Workshop Proposal)

Angela G. Jackson-Summers

U.S. Coast Guard Academy, New London, CT 06320, USA
[email protected]

Abstract. Cybersecurity education programs have steadily been working to meet the increasing global demand for cybersecurity professionals. However, academic institutions often struggle to meet such demand because of the lack of enrollment and retention of students from varying backgrounds and reflecting other differences (i.e. cultural, gender, age, racial). Minimal literature exists relating to inclusive pedagogy in cybersecurity education. The goal of this proposed workshop is to rethink inclusive pedagogy in cybersecurity education programs and develop a future research agenda that promotes inclusive pedagogy in cybersecurity education program delivery.

Keywords: Inclusive pedagogy · Inclusive pedagogical practices · Inclusive communications · Curricular design · Cybersecurity education

1 Introduction

1.1 Overview

To support cybersecurity workforce needs, academic institutions [6], governmental agencies [4], and researchers [1] have been working to develop and continuously enhance cybersecurity education programs. In recognizing the need to support cybersecurity workforce growth, the need to focus on diversity and inclusion in our pedagogical approaches has been a concern in recent literature [3]. Reimagining Inclusive Pedagogy in Cybersecurity Education is a workshop that proposes to foster collaborative discussion and continued growth in inclusive pedagogical practices and future research efforts supporting cybersecurity education delivery.

The Center for the Integration of Research, Teaching and Learning (CIRTL) offers an Inclusive Pedagogy Framework (cirtlincludes.net) [2] involving three core competencies, as depicted in Fig. 1 below, that will help frame the workshop into three separate segments. Each segment will correspond to and address a specific core competency, including related skills, strategies, and practices.

Angela G. Jackson-Summers is an Assistant Professor of Information Systems at the U.S. Coast Guard Academy. The views here are her own and not those of the Coast Guard Academy or other branches of the U.S. government.

This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2021. L. Drevin et al. (Eds.): WISE 2021, IFIP AICT 615, pp. 146–149, 2021. https://doi.org/10.1007/978-3-030-80865-5


The objective in addressing each segment separately is to provide a common understanding of each core competency among workshop participants. During each segment, the collective audience will be divided into smaller breakout sessions where participants can engage in brainstorming activities. These brainstorming activities will be designed to capture challenges and desired outcomes in achieving each core competency. Additionally, workshop participants will help define a forward-thinking research strategy and tactical action plan that supports continued research efforts in inclusive pedagogy in cybersecurity education. At the close of the workshop, the facilitator looks to establish a draft future research initiative that drives an inclusive pedagogy in cybersecurity education agenda supported by interested workshop participants.

Fig. 1. Adapted depiction of CIRTL’s Inclusive Pedagogy Framework.

Objectives. This online event is intended to exercise the “workshop as a research methodology” approach, which offers workshop participants an opportunity to address their own research interests and promote future research in cybersecurity education. Workshop as a research methodology [5] has been described as a means of advancing and negotiating a topic, such as inclusive pedagogy in cybersecurity education, among workshop attendees (i.e. researchers, teachers). Additionally, workshop participants will share in the following takeaways:

• Build knowledge in an existing framework used for promoting inclusive pedagogy
• Gain insights on existing literature related to inclusive pedagogy and cybersecurity education needs
• Share their own challenges and desired outcomes in achieving more diverse and inclusive classrooms in cybersecurity education delivery
• Engage with other instructors and researchers having like interests in promoting inclusive pedagogy in cybersecurity education
• Participate in future collaborative-based dialogue and research-driven events that promote inclusive pedagogy in cybersecurity education

Workshop Length and Format. This online workshop is expected to be conducted as a collaborative group event lasting 2 to 2½ hours and is intended for a small audience of 12–15 attendees.


Audience/Participants. Attendees should include instructors and researchers who have a focal interest in the topics of inclusive pedagogy and cybersecurity as well as the use of varying theories and research methods. Additionally, the workshop welcomes attendees who are willing to share prior pedagogical practices intended to foster diversity and inclusiveness in cyber education programs or, more specifically, cyber education course delivery.

Workshop Agenda. The workshop will proceed as follows:

• Give an introduction from each workshop participant (i.e. affiliation, research interest).
• Revisit workshop objectives.
• Address the CIRTL Inclusive Pedagogy Framework and its three core competencies.
• Share existing literature relating to inclusive pedagogy (i.e. inclusive communications, inclusive pedagogical practices, curricular design) and cybersecurity education.
• Present the approach to conducting the break-out sessions, capturing feedback, and presenting results.
• During each break-out session, brainstorm challenges and improvement opportunities relating to inclusive pedagogy.
• Identify next steps, including those attendees interested in actively engaging with colleagues to address specific research efforts.

Outcomes. To help foster and improve inclusive pedagogical practices in cybersecurity education through faculty development and research, the primary outcome of this workshop is as follows:

• A research working group of 2–3 teams (i.e. 3–5 individuals per team), representing interested participants in future periodic forum discussions and collaborative research efforts.

Deliverables. To promote future research and efforts towards strengthening diverse and inclusive cybersecurity education, and related inclusive pedagogical practices, the following workshop deliverables are expected:

• A draft Inclusive Pedagogy in Cybersecurity Education strategy and high-level tactical action plan that shares proposed next steps in future research efforts and continued dialogue on inclusive pedagogical best practices.
• A summary of initial challenges and improvement opportunities relating to inclusive pedagogy in cybersecurity education.


References

1. Cabaj, K., Domingos, D., Kotulski, Z., Respício, A.: Cybersecurity education: evolution of the discipline and analysis of master programs. Comput. Secur. 75, 24–35 (2018)
2. Center for the Integration of Research, Teaching, and Learning: Inclusive pedagogy framework (2018). Retrieved from cirtlincludes.net
3. Mountrouidou, X., Vosen, D., Kari, C., Azhar, M.Q., Bhatia, S., Gagne, G., ..., Yuen, T.T.: Securing the human: a review of literature on broadening diversity in cybersecurity education. In: Proceedings of the Working Group Reports on Innovation and Technology in Computer Science Education, pp. 157–176, Aberdeen, Scotland, UK (2019)
4. Newhouse, W., Keith, S., Scribner, B., Witte, G.: NIST Special Publication 800-181: National Initiative for Cybersecurity Education (NICE) Cybersecurity Workforce Framework. U.S. Department of Commerce, National Institute of Standards and Technology, Gaithersburg, MD, USA (2017)
5. Ørngreen, R., Levinsen, K.: Workshops as a research methodology. Electron. J. e-Learning 15(1), 70–81 (2017)
6. Schneider, F.B.: Cybersecurity education in universities. IEEE Secur. Priv. 11(4), 3–4 (2013)

Author Index

Allers, J. 95
Amador, Tristen K. 64
Andriessen, Jerry 143
Bernard, Leon 47
Bishop, Matt 3, 27, 81
Dai, Jun 81
Drevin, G. R. 95
Drevin, Lynette 3, 95
Fulton, Steven P. 64
Furnell, Steven 126, 143
Futcher, Lynn 3
Hosler, Ryan 27
Jackson-Summers, Angela G. 146
Kautondokwa, Popyeni 111
Kävrestad, Joakim 126
Kaza, Siddharth 47
Kruger, H. A. 95
Langner, Gregor 143
Leung, Wai Sze 3
Likarish, Daniel M. 64
Mancuso, Roberta A. 64
Mian, Shiven 81
Miloslavskaya, Natalia 3, 13
Moore, Erik L. 3, 64
Ngambeki, Ida 81
Nico, Phillip 81
Nohlberg, Marcus 126
Ophoff, Jacques 3, 111
Quirchmayr, Gerald 143
Raina, Sagar 47
Ruhwanya, Zainab 111
Scarano, Vittorio 143
Snyman, D. P. 95
Taylor, Blair 47
Tokola, Teemu 143
Tolstoy, Alexander 13
von Solms, Suné 3
Zou, Xukai 27