Evolution of STEM-Driven Computer Science Education: The Perspective of Big Concepts (ISBN 3031482344, 9783031482342)

The book discusses the evolution of STEM-driven Computer Science (CS) Education based on three categories of Big Concepts.


Language: English · Pages: 376 [368] · Year: 2024



Table of contents:
Preface
Contents
Abbreviations
1 Context and Model for Writing This Book: An Idea of Big Concepts
1.1 Introduction
1.2 A Short Glance to Education Evolution
1.3 A Short Glance to Computing Evolution
1.4 A Short Glance to STEM Evolution
1.5 A Short Glance to Computational Thinking Skills
1.6 Context Model to Define Our Approach
1.7 Evolutionary Model for Change
1.8 The Topics This Book Addresses
1.9 Concluding Remarks
References
Part I Pedagogical Aspects of STEM-Driven CS Education Evolution: Integrated STEM-CS Skills Model, Personalisation Aspects and Collaborative Learning
2 Models for the Development and Assessment of Integrated STEM (ISTEM) Skills: A Case Study
2.1 Introduction
2.2 The Aim and Motivation
2.3 Research Tasks and Methodology
2.4 Related Work
2.5 Defining Context and Functionality for STEM-CS Skills
2.6 Defining the Structure of STEM-CS Skills Model
2.7 Analysis of the Interdependencies Among Different Skills
2.8 Feature-Based STEM-CS Skills Model (RQ3)
2.9 Analysis of Metrics and Defining Metrics Model for Skills Evaluation
2.10 Model for Evaluating and Describing of the ISTEM-CS Skills
2.11 Validation of the ISTEM-CS Skills Model Through Case Study (RQ6)
2.12 ISTEM-CS Skills and Their Metrics Generating Tool
2.13 Summarising Discussion and Evaluation
2.14 Conclusion
Appendix
References
3 Enforcing STEM-Driven CS Education Through Personalisation
3.1 Introduction
3.2 Related Work
3.3 Requirements for Personalised STEM-Driven CS Learning and Research Questions
3.4 Basic Idea and Methodology
3.5 Background
3.6 A Framework for Implementing Personalised STEM-Driven CS Education
3.6.1 Structural Models of Personalised LOs
3.6.2 Personalised Processes and Activities Within the Framework
3.6.3 Tools and Approaches to Implement the Proposed Framework
3.7 Case Study
3.8 Discussion and Concluding Remarks
References
4 Personal Generative Libraries for Personalised Learning: A Case Study
4.1 Introduction
4.2 Related Work
4.3 The Concept of the Personal Generative Library
4.4 Methodology and Background
4.5 Structure and Functionality of PGL
4.6 Integration of PGLs into the Framework of Personalised Learning
4.7 Case Study and Results
4.8 Discussion and Evaluation
4.9 Conclusion
References
5 Enforcing STEM-Driven CS Education Through Collaborative Learning
5.1 Introduction
5.2 Related Work
5.3 Basic Idea of the Approach and Methodology
5.4 The Concept ‘Real-World Task’ in STEM Research and Its Complexity
5.4.1 Complexity Issues of Real-World Tasks
5.4.2 Conceptual Model for Solving Real-World Tasks
5.5 Framework for STEM-Driven Contest-Based Collaborative Learning
5.6 Case Study
5.7 Discussion and Evaluation
5.8 Conclusion
Appendix
References
Part II Internet of Things (IoT) and Data Science (DS) Concepts in K–12 STEM-Driven CS Education
6 Methodological Aspects of Educational Internet of Things
6.1 Introduction
6.2 Related Work
6.3 Research Strategy, Aim, and Requirements
6.4 Motivation and Basic Idea
6.5 Background: Conceptual Modelling of IoT
6.6 A Framework for Introducing IoT into STEM-CS Education
6.7 Interpretation of IoT Architecture for STEM-Driven CS Education
6.8 Discussion on Proposed Methodology
6.9 Conclusion
References
7 Multi-stage Prototyping for Introducing IoT Concepts: A Case Study
7.1 Introduction
7.2 Related Work
7.3 Methodology: Implementation Aspects Through Modelling
7.3.1 A Multi-stage Model for Introducing IoT into STEM-Driven CS Education
7.3.2 A Framework for Solving Real-World Tasks Through IoT Prototyping
7.3.3 A Detailed Specification of IoT Prototype Design Processes
7.3.4 IoT Prototyping Task Solving Through Inquiry-Based and Design-Oriented Collaborative Learning
7.4 Extending Smart Learning Environment with Tools for IoT Prototyping
7.5 Case Study
7.6 Summarising Discussion and Evaluation
7.7 Conclusion
References
8 Introducing Data Science Concepts into STEM-Driven Computer Science Education
8.1 Introduction
8.2 Related Work
8.3 Motivation and Research Methodology
8.4 Conceptual Model for Introducing DS Concepts into K–12
8.5 Implementation of the Methodology: A Three-Layered Framework
8.6 Development of the DS Model
8.7 Extending Smart Learning Environment
8.8 Modelling for Developing the Task Solution System
8.9 Development of the Assessment Model
8.10 A Case Study and Experiments
8.11 Summarising Discussion and Evaluation
8.12 Conclusion
References
Part III Introduction to Artificial Intelligence
9 A Vision for Introducing AI Topics: A Case Study
9.1 Introduction
9.2 Related Work
9.3 Background and AI Key Concepts
9.4 A Framework for Introducing AI Topics
9.5 Methodology for Implementing the Proposed Framework
9.6 Generic Architecture for Introducing AI Tools into SLE
9.7 Adopted Generic Scenario for Delivery of the AI Content
9.8 Summarising Discussion and Conclusion
References
10 Speech Recognition Technology in K–12 STEM-Driven Computer Science Education
10.1 Introduction
10.2 Related Work
10.3 Basic Idea with Motivating Scenario
10.4 Background
10.5 Research Methodology
10.6 Extending Smart Learning Environment for Speech Recognition Tasks
10.7 Case Study to Support Task 1
10.8 Case Study to Support Task 3
10.9 Summarising Discussion and Conclusions
Appendix 1
Appendix 2
Appendix 3
References
11 Introduction to Artificial Neural Networks and Machine Learning
11.1 Introduction
11.2 Related Work
11.3 Operating Tasks and Methodology
11.4 Background: Basic Concepts and Models of ANNs (RQ2)
11.5 Motivating Example: A Binary Classification (RQ3)
11.6 Case Study 1: Implementation of Single-Layered Perceptron Model
11.7 Case Study 2: Implementation of Multi-Layered Perceptron Model
11.8 Summarising Discussion and Evaluation
11.9 Conclusion
References
12 Overall Evaluation of This Book Concepts and Approaches
12.1 Aim and Structure of This Chapter
12.2 What Is the Contribution of This Book?
12.3 Difficulties and Drawbacks of the Proposed Approach
12.4 Rethinking of Discussed Approach
12.5 STEM-Driven Precision Education: A Vision Inspired by Concepts Discussed in This Book
12.6 Topics for Future Work
References
Index


Vytautas Štuikys Renata Burbaitė

Evolution of STEM-Driven Computer Science Education
The Perspective of Big Concepts


Vytautas Štuikys
Department of Software Engineering
Kaunas University of Technology
Kaunas, Lithuania

Renata Burbaitė
Department of Software Engineering
Kaunas University of Technology
Kaunas, Lithuania
Juozas Balčikonis Gymnasium
Panevėžys, Lithuania

ISBN 978-3-031-48234-2
ISBN 978-3-031-48235-9 (eBook)
https://doi.org/10.1007/978-3-031-48235-9

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Paper in this product is recyclable.

Preface

STEM is an interdisciplinary approach to learning that integrates knowledge from Science, Technology, Engineering, and Mathematics. This educational paradigm emerged as a response to the social, economic, and technological challenges of the twenty-first century. STEM is currently evolving rapidly towards a higher level of integration, and the possibilities for integration are extensive: it occurs among the constituent components within particular educational environments, or by incorporating further components such as Art, Medicine, and Design. Multiple approaches to and visions of STEM research and practice therefore exist worldwide. Our approach focuses on STEM research and practice viewed through the lens of computer science (CS) education and robotics. We presented it in a book published by Springer in 2018 under the title “Smart STEM-Driven Computer Science Education: Theory, Methodology and Robot-based Practices”. Since that book appeared, many innovations have taken place in the field of STEM and CS education research and practice globally. Through our intensive research in this field, we have also accumulated much new knowledge and many new scientific results, at the core of which are the new or extended approaches we have developed since then. The basic idea on which we constructed the content of this book is Big Concepts. The title of this new book begins with the term ‘evolution’ to denote the recent changes and innovations of our approach in the context of worldwide initiatives and research concerning Big Concepts. What are the Big Concepts that drive the evolution of our approach? They are the Internet of Things (IoT), Data Science (DS), artificial intelligence (AI), integrated STEM thinking (ISTEMT), and Smart Education. Note that, in our vision, the concept ISTEMT covers computational thinking, design thinking, data thinking, and scientific thinking. What is the role of STEM, computer science, and robotics?
Are they Big Concepts too? Of course, in our vision they are; however, their role is different. In developing the concept and content of this new book, we put STEM-driven computer science education at the centre. We treat this paradigm (STEM, computer science, and robotics) as the object of investigation, research, and practice, viewed from the perspective of evolutionary changes under the impact of the content and technology of the basic Big Concepts, i.e., IoT, DS, and AI. Note that the remaining Big Concepts (ISTEMT, Smart Education) do not have specific content of their own. We argue that it is possible to consider and research them by relying on the content of the basic Big Concepts and the technology (tools) supporting them. The other important aspects regarding the basic Big Concepts are (1) how we introduce these concepts and (2) how we evaluate their impact on the advancement of STEM education and students’ interdisciplinary knowledge. To define the way of introducing the Big Concepts, we apply a wide range of modelling approaches, ranging from contextual modelling, conceptual modelling, feature-based (or structural) modelling, and process-based functional modelling to the virtual and physical modelling applied in implementing the case studies. All of these enable us to provide well-motivated design procedures for the entities under study, i.e., real-world tasks taken from fields relevant to the applications of the Big Concepts. The complexity of real-world tasks creates the preconditions to apply design as a learning approach for enhancing creativity, design thinking, computational thinking, and scientific thinking. To evaluate the outcomes of learning through design, we have developed the Integrated STEM-CS skills model. This model integrates all the skills and modes of thinking gained through design. It is a practical implementation of the concept ISTEMT; in our vision, this concept integrates all the movements known so far in integrated STEM research. What is the structure of this book? We have distributed the whole book’s content into three parts, plus Chaps. 1 and 12 as independent chapters. Chapter 1 is an introduction. It describes the context and our vision for the entire book, and provides an evolutionary model to better understand the book’s intention. Chapter 12 is a summarising re-evaluation of the whole content and concepts, with an indication of future work. Part I comprises Chaps. 2 to 5, which present evolutionary aspects from the pedagogical perspective as follows.
Chapter 2 presents an approach to developing and using the integrated STEM-CS skills model to assess students’ knowledge; it is one of the most important ingredients of integrated STEM-driven CS education. Chapter 3 discusses personalised learning (PL) and personalised content at the component level for smart STEM-driven education. Chapter 4 extends the discussion of PL to the sub-system level, with a focus on the automation of personal libraries (for the teacher and students). Chapter 5 delivers the outcomes of research on robot-contest-based collaborative learning, along with the complexity issues of real-world task solving. Part II includes Chaps. 6–8. Chapter 6 discusses the methodological aspects of introducing the IoT into K–12 STEM-CS education, including the conceptual modelling of IoT, architectural aspects and their interpretation for STEM-CS education, and an adequate framework. Chapter 7 deals with the design of the educational IoT system (through prototyping) and its use in practice. Chapter 8 is about how to introduce and work with Data Science approaches and techniques in the context of integrated STEM-CS education. Part III consists of the following chapters. Chapter 9 introduces a vision of how we have systematically incorporated AI-based approaches into STEM-CS education. Chapter 10 deals with the topics of speech recognition regarding tools, techniques, methods, and content. Chapter 11 provides background on artificial neural networks in STEM-driven CS education. What is the novelty and contribution of this book as compared to the previously published one? The methodological contribution includes (i) a vision and framework for introducing the Big Concepts into STEM-CS education; (ii) the enforcement of personalised and collaborative learning (which we treat as components of Smart Education) through explicit models and outcomes; (iii) new content regarding IoT, DS, and AI, along with the adequately extended Smart Learning Environment; and (iv) lifting integrated STEM-CS education to a higher level in terms of better student involvement and self-involvement, changing the status or role of students from recipients of knowledge to creators of new knowledge. Based on the extended space of real-world tasks and applying previous knowledge, students can create educational prototypes of IoT applications, voice-based smart device control systems, etc. The scientific contribution includes (i) the enforcement of conceptual modelling as the background for motivating and presenting our evolutionary model; (ii) the development and use of the integrated STEM-CS skills evaluation model for assessing new knowledge regarding new STEM-CS topics; (iii) raising the level of automation in designing tools for personalised learning, learning scenarios, and assessment; and (iv) the development and implementation of the multi-stage prototyping model for educational IoT design. (1) How should one read the book? (2) Who will be interested in reading this book? Regarding the first question, one needs to look at the layout of the book’s chapters. We have arranged the material within each part as independently as possible from the other chapters.
In most cases, we have presented each chapter as a complete scientific work on the selected topic, with the relevant attributes (motivation, context and references, contribution, methodology, case studies, conclusions, etc.). Therefore, one can read, share ideas about, and discuss each concept and topic separately. However, the specific title of this book (e.g., the word ‘evolution’) calls for looking back to the previously published book. We have tried to minimise this need by including a very short reference to an item from that book only where it was unavoidable. Regarding the second question, we hope this book will be of interest first of all to researchers working in the field of STEM and computer science (CS) educational research. This book is a monograph, not a textbook for an entire educational course at the K–12 level (for grades 9–12 or higher). Nevertheless, smart STEM teachers, CS teachers, and teachers of related disciplines (e.g., Science, Engineering) will benefit from reading and using this book in their working practice. We expect that the most motivated students will find many interesting topics here for their informal learning and research. We hope that university teachers and students, especially those preparing to be STEM teachers, will also find this book useful and interesting. Finally, we suppose that the book’s methodology, content, and especially its case studies could be helpful for the organisers of teachers’ re-qualification courses.

Kaunas, Lithuania
Kaunas/Panevėžys, Lithuania

Vytautas Štuikys
Renata Burbaitė

Acknowledgements We thank our colleagues at the Faculty of Informatics of Kaunas University of Technology and our colleagues at Panevėžys Balčikonis Gymnasium for their promotion of and support in writing this book. Special thanks go to the students of the gymnasium who spent many hours testing the multiple case studies and the robot-competition-based projects we included in this book. We also thank the anonymous reviewers for their evaluation of the book’s ideas and content and for their suggestions for future work.

Contents

1

Context and Model for Writing This Book: An Idea of Big Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 A Short Glance to Education Evolution . . . . . . . . . . . . . . . . . . . . . . 1.3 A Short Glance to Computing Evolution . . . . . . . . . . . . . . . . . . . . . 1.4 A Short Glance to STEM Evolution . . . . . . . . . . . . . . . . . . . . . . . . . 1.5 A Short Glance to Computational Thinking Skills . . . . . . . . . . . . . 1.6 Context Model to Define Our Approach . . . . . . . . . . . . . . . . . . . . . 1.7 Evolutionary Model for Change . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.8 The Topics This Book Addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.9 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Part I

2

1 1 3 7 10 16 22 25 28 29 30

Pedagogical Aspects of STEM-Driven CS Education Evolution: Integrated STEM-CS Skills Model, Personalisation Aspects and Collaborative Learning

Models for the Development and Assessment of Integrated STEM (ISTEM) Skills: A Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 The Aim and Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3 Research Tasks and Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.5 Defining Context and Functionality for STEM-CS Skills . . . . . . . 2.6 Defining the Structure of STEM-CS Skills Model . . . . . . . . . . . . . 2.7 Analysis of the Interdependencies Among Different Skills . . . . . 2.8 Feature-Based STEM-CS Skills Model (RQ3) . . . . . . . . . . . . . . . . 2.9 Analysis of Metrics and Defining Metrics Model for Skills Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.10 Model for Evaluating and Describing of the ISTEM-CS Skills . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

41 41 42 46 47 50 53 53 55 59 60

ix

x

Contents

2.11 Validation of the ISTEM-CS Skills Model Through Case Study (RQ6) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.12 ISTEM-CS Skills and Their Metrics Generating Tool . . . . . . . . . . 2.13 Summarising Discussion and Evaluation . . . . . . . . . . . . . . . . . . . . . 2.14 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

63 65 66 69 69 75

3

Enforcing STEM-Driven CS Education Through Personalisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81 3.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83 3.3 Requirements for Personalised STEM-Driven CS Learning and Research Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 3.4 Basic Idea and Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90 3.5 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92 3.6 A Framework for Implementing Personalised STEM-Driven CS Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 3.6.1 Structural Models of Personalised LOs . . . . . . . . . . . . . . . . 97 3.6.2 Personalised Processes and Activities Within the Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99 3.6.3 Tools and Approaches to Implement the Proposed Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99 3.7 Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 3.8 Discussion and Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . 100 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

4

Personal Generative Libraries for Personalised Learning: A Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3 The Concept of the Personal Generative Library . . . . . . . . . . . . . . 4.4 Methodology and Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.5 Structure and Functionality of PGL . . . . . . . . . . . . . . . . . . . . . . . . . 4.6 Integration of PGLs into the Framework of Personalised Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.7 Case Study and Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.8 Discussion and Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.9 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

5

Enforcing STEM-Driven CS Education Through Collaborative Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3 Basic Idea of the Approach and Methodology . . . . . . . . . . . . . . . .

109 109 111 114 116 122 123 125 128 130 131 135 135 137 141

Contents

xi

5.4

The Concept ‘Real-World Task’ in STEM Research and Its Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.4.1 Complexity Issues of Real-World Tasks . . . . . . . . . . . . . . . 5.4.2 Conceptual Model for Solving Real-World Tasks . . . . . . . 5.5 Framework for STEM-Driven Contest-Based Collaborative Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.6 Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.7 Discussion and Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Part II 6

7

142 144 148 149 151 156 159 160 162

Internet of Things (IoT) and Data Science (DS) Concepts in K–12 STEM-Driven CS Education

Methodological Aspects of Educational Internet of Things . . . . . . . . 6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3 Research Strategy, Aim, and Requirements . . . . . . . . . . . . . . . . . . . 6.4 Motivation and Basic Idea . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.5 Background: Conceptual Modelling of IoT . . . . . . . . . . . . . . . . . . . 6.6 A Framework for Introducing IoT into STEM-CS Education . . . . 6.7 Interpretation of IoT Architecture for STEM-Driven CS Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.8 Discussion on Proposed Methodology . . . . . . . . . . . . . . . . . . . . . . . 6.9 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Multi-stage Prototyping for Introducing IoT Concepts: A Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.3 Methodology: Implementation Aspects Through Modelling . . . . 7.3.1 A Multi-stage Model for Introducing IoT into STEM-Driven CS Education . . . . . . . . . . . . . . . . . . . . . 7.3.2 A Framework for Solving Real-World Tasks Through IoT Prototyping . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.3.3 A Detailed Specification of IoT Prototype Design Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
7.3.4 IoT Prototyping Task Solving Through Inquiry-Based and Design-Oriented Collaborative Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.4 Extending Smart Learning Environment with Tools for IoT Prototyping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

167 167 169 172 173 175 179 181 184 186 186 191 191 192 195 195 197 198

200 202

xii

Contents

7.5 Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.6 Summarising Discussion and Evaluation . . . . . . . . . . . . . . . . . . . . . 7.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Introducing Data Science Concepts into STEM-Driven Computer Science Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.3 Motivation and Research Methodology . . . . . . . . . . . . . . . . . . . . . . 8.4 Conceptual Model for Introducing DS Concepts into K–12 . . . . . 8.5 Implementation of the Methodology: A Three-Layered Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.6 Development of the DS Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.7 Extending Smart Learning Environment . . . . . . . . . . . . . . . . . . . . . 8.8 Modelling for Developing the Task Solution System . . . . . . . . . . . 8.9 Development of the Assessment Model . . . . . . . . . . . . . . . . . . . . . . 8.10 A Case Study and Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.11 Summarising Discussion and Evaluation . . . . . . . . . . . . . . . . . . . . . 8.12 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

203 209 212 213 217 217 219 223 225 226 227 229 229 232 234 238 241 242

Part III Introduction to Artificial Intelligence 9

A Vision for Introducing AI Topics: A Case Study . . . . . . . . . . . . . . . . 9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3 Background and AI Key Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . 9.4 A Framework for Introducing AI Topics . . . . . . . . . . . . . . . . . . . . . 9.5 Methodology for Implementing the Proposed Framework . . . . . . 9.6 Generic Architecture for Introducing AI Tools into SLE . . . . . . . 9.7 Adopted Generic Scenario for Delivery of the AI Content . . . . . . 9.8 Summarising Discussion and Conclusion . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

10 Speech Recognition Technology in K–12 STEM-Driven Computer Science Education
10.1 Introduction
10.2 Related Work
10.3 Basic Idea with Motivating Scenario
10.4 Background
10.5 Research Methodology
10.6 Extending Smart Learning Environment for Speech Recognition Tasks
10.7 Case Study to Support Task 1
10.8 Case Study to Support Task 3
10.9 Summarising Discussion and Conclusions
Appendix 1
Appendix 2
Appendix 3
References
11 Introduction to Artificial Neural Networks and Machine Learning
11.1 Introduction
11.2 Related Work
11.3 Operating Tasks and Methodology
11.4 Background: Basic Concepts and Models of ANNs (RQ2)
11.5 Motivating Example: A Binary Classification (RQ3)
11.6 Case Study 1: Implementation of Single-Layered Perceptron Model
11.7 Case Study 2: Implementation of Multi-Layered Perceptron Model
11.8 Summarising Discussion and Evaluation
11.9 Conclusion
References
12 Overall Evaluation of This Book Concepts and Approaches
12.1 Aim and Structure of This Chapter
12.2 What Is the Contribution of This Book?
12.3 Difficulties and Drawbacks of the Proposed Approach
12.4 Rethinking of Discussed Approach
12.5 STEM-Driven Precision Education: A Vision Inspired by Concepts Discussed in This Book
12.6 Topics for Future Work
References

Index

Abbreviations

AI        Artificial Intelligence
ANN       Artificial Neural Network
BC        Big Concept
BD        Big Data
CB LO     Component-Based Learning Object
CBL       Contest (Competition)-Based Learning
CL        Collaborative Learning
CS        Computer Science
CT        Computational Thinking
DB        Database
DL        Deep Learning
DS        Data Science
DT        Design Thinking
ESLE      Extended Smart Learning Environment
GLO       Generative Learning Object
IoT       Internet of Things
ISTEM     Integrated STEM
ISTEM SM  Integrated STEM Skills Model
LKSAM     Learner's Knowledge and Skills Assessment Module
LO        Learning Object
LO LG     Learning Object List Generator
MDG       Metadata Generator
ML        Machine Learning
NLP       Natural Language Processing
PCB LO    Personalised Component-Based Learning Object
PGL       Personal Generative Library
PGLO      Personalised Generative Learning Object
PL        Personalised Learning
PLE       Personalised Learning Environment
PSLO      Personalised Smart Learning Object
QG        Query Generator
SLE       Smart Learning Environment
SLO       Smart Learning Object
SPGL      Student's Personal Generative Library
SR        Speech Recognition
SRS       Speech Recognition System
ST        Scientific Thinking
STEM      Science, Technology, Engineering and Mathematics
TPACK     Technological, Pedagogical And Content Knowledge
TPL       Teacher's Personal Library
VTT       Voice-to-Text

Chapter 1

Context and Model for Writing This Book: An Idea of Big Concepts

1.1 Introduction

The title of this book begins with the term 'evolution' to denote the recent changes and innovations in so-called STEM-driven computer science (CS) education. By this paradigm, we mean the concepts, ideas, and approaches presented in the book published by Springer in 2018 under the title "Smart STEM-Driven Computer Science Education: Theory, Methodology and Robot-based Practices". What is this book about? In short, that book [ŠB18] is about how three fields, i.e., STEM, CS, and educational robotics, can be combined seamlessly into a coherent educational methodology to gain synergistic benefits from all three. More specifically, the basic idea we discussed there focused on (1) the high variability of STEM pedagogy, of CS-related content, and of the available technology, and (2) how to project these variabilities onto the high school CS curriculum using modern technologies of two types. The first type is educational robotics as a direct learning facility for introducing interdisciplinary knowledge (i.e., the scientific, engineering, and technological knowledge that robotics strongly promotes) into high (secondary) school. The second type (borrowed from software engineering) comprises supporting technologies for the automated preparation, design, and use of content and learning processes. The supporting technologies include high-level modelling, which defines and expresses the learning variabilities at a high abstraction level through feature models, and then transforms those into the executable code of the teaching content using meta-programming techniques. Note that we integrated those technologies seamlessly using conventional software only. Since 2018, when the book [ŠB18] appeared, many innovations have taken place in STEM and CS education research and practice worldwide. We have also accumulated new knowledge and new scientific results due to our intensive research in this field.
At the core are the novel or extended approaches we have developed since that time. Those rely on the concepts we have introduced in the title as Big Concepts. Why do we use the term "Big Concepts", and what are they? We will present a more

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 V. Štuikys and R. Burbait˙e, Evolution of STEM-Driven Computer Science Education, https://doi.org/10.1007/978-3-031-48235-9_1



extended motivation for using this term later. Here, the reader should accept the following terms (Internet of Things (IoT), Data Science (DS) and its sub-field Big Data (BD), Artificial Intelligence (AI), Integrated STEM Thinking (ISTEMT), and Smart Education) as Big Concepts (further BCs), either intuitively from the context or from his/her previous knowledge. Note that we define ISTEMT as a compound (ISTEMT = Computational Thinking (CT) + Design Thinking (DT) + Data Thinking (DtT) + Scientific Thinking (ST)). In other words, it aggregates all the movements known so far, i.e., the CT movement [Win06, DT19, MRR+22], the DT movement [LSG+19], and the DtT movement [BVW+21, MRR+22], supplemented by scientific thinking. What is the role of STEM, CS, and robotics? Are they BCs or not? Of course, in our vision, they are BCs too; however, their role is different. In developing the concept and content of this new book, we put STEM-driven Computer Science Education at the centre. We treat this paradigm (STEM, CS, and robotics) as our object of investigation, research, and practice from the perspective of evolutionary changes under the impact of content taken from the field of BCs such as IoT, DS, and AI. In other words, this book aims to show how BCs can contribute to extending, enforcing, and enhancing STEM-driven CS education at the K–12 level, at least in this decade of the twenty-first century. Note that, regarding the term 'Big Concept', one can find in the literature the terms 'Big Ideas' [RAE16, Boo19, MRR+22] or "Big Computer Science" [Cer17] used in similar contexts. At the very beginning, we need to motivate and explain two more important aspects. (1) What is the way to introduce BCs into this educational paradigm? (2) How can we measure and evaluate the evolutionary changes based on those concepts?
Regarding the first aspect, it is worth recalling how difficult it is to introduce new courses into the K–12 curriculum without governmental initiatives and support, due to administrative and other reasons. A more reasonable way is to change and extend the already existing relevant courses, because this largely depends on the teacher's readiness and vision. Of course, this way has its own restrictions on the extent and scope of the new content. We argue that it is impossible to cover a large portion of the relevant content for any BC within existing courses such as Computer Programming, Robotics, etc. Therefore, to extend the existing courses, we typically rely on introductory and more engaging themes for the relevant BC. Those need to be adapted to students' levels and their previous knowledge. We admit the relevance of BCs to STEM-CS education. Note that, in our case, some BCs (ISTEMT/CT, Smart Education) do not have specific content. We argue that it is possible to deliver these concepts by providing teaching content regarding other BCs. In addition, we will describe the evolutionary model in this chapter as a framework for systematically introducing BCs throughout this book. Regarding the second aspect, we have developed and researched the integrated STEM-CS skills model [BŠK+22]. It covers multiple skills (CT skills, design thinking, scientific thinking, and other skills) and competencies, integrated into a unified model, that learners need for self-adaptation and self-preparation to manage and overcome the challenges of the twenty-first century. This model serves as an instrument for evaluating the new knowledge obtained based on BCs.


The aim of this chapter is (i) to outline the context of our research and (ii) to motivate and develop the evolutionary model for change based on BCs, along with guidelines for implementing this model, so that it is possible to formulate the tasks and content of the entire book. The tasks of this chapter are: (i) definition of education evolution; (ii) characterisation of the basic attributes of computing evolution; (iii) a review of the evolution of integrated STEM education; (iv) a short glance at computational thinking skills; (v) development of the evolutionary model explaining how we introduce BCs into STEM-driven CS education; (vi) formulation of the objectives, content, and contribution of our research.

1.2 A Short Glance to Education Evolution

Section 1.2 outlines the educational context for the evolutionary model for change and its representation. Typically, the evolution of a physical system (e.g., a software system) is a process of continuous change from a lower, simpler, or worse state to a higher, more complex, or better one [GT03]. That happens over time under the influence of many factors. Changes in education in our age are an inseparable part of technology evolution, though technological innovations in the past stimulated progress in education too. Currently, technology influences education in many ways. Perhaps the commonly used term "Technology-Enhanced Learning" best explains the influence of technology on education. Technology now evolves extremely rapidly, from traditional systems (computers, the Internet, mobile phones, robotics, etc.) to more advanced ones (cloud computing, the IoT, AI systems, and Learning Analytics, to name a few). Therefore, the educational sector should be ready to accept and adopt the new capabilities of modern technology. CS educational research and practice, like other educational fields, is not a purely physical system but a compound of physical and social sub-systems. Nevertheless, the above definition of the evolution of a physical system through continuous change is general enough to be valid in many other contexts, including education. In the education literature, researchers often use the term transformation to define evolutionary aspects in this field. The compound term 'short glance to education evolution' means a restricted view of changes in education in the twenty-first century with regard to two concepts, i.e., Education 4.0 and Smart Education. Let us look at the context in which the first concept emerged. Matt Church [Chu20], the founder of Thought Leaders, describes the evolutionary changes in education from the teacher's perspective in this way.
He introduces three periods (Education 1.0, Education 2.0, and Education 3.0) and interprets them simply, in general terms:

Education 1.0: "I'll deliver content at my pace and in my way and you will learn if you can."
Education 2.0: "I'll adjust how and what I teach so that it gives you the best chance for success."
Education 3.0: "I'll provide you with the resources and content you need so you can learn what you need, in a way that works for you, whenever and wherever you are. I will then make myself available to help you apply that learning in a useful manner."

"If you are in the business of educating others, you need to understand the changes and evolve."


Education 1.0 relates to teacher-centred education, Education 3.0 to student-centred education, and Education 2.0 is an intermediate case. The education community now focuses on a more extensible vision of education evolution by introducing the term Education 4.0. LinkedIn [www.linkedin.com/pulse/education-40] defines Education 4.0 as "a purposeful approach to learning that lines up with the fourth industrial revolution and about transforming the future of education using advanced technology and automation". The foundation of Education 4.0 is creativity and the need to prepare students to take on challenges head-on. The adoption of these four terms by the education community at large is easy to understand if we look at the history of the industrial revolutions, because those have always echoed the needed changes in education. History knows four periods of industrial revolution: Industry 1.0, 2.0, 3.0, and 4.0. Industry 1.0 began in the eighteenth century with steam power and the mechanization of production. Industry 2.0 started in the nineteenth century with electricity and assembly-line production. Industry 3.0 began in the 1970s with partial automation using programmable controls and computers. Industry 4.0 covers the present time, with advances in production automation. Its essential features are wide networking and smart factories, in which production systems, components, and people communicate via a network and production is nearly autonomous. Note that the term Industry 4.0 originates from the German 'Industrie 4.0', proposed in 2011 as an initiative to strengthen the competitiveness of the German manufacturing industry [HPO15]. Industry 4.0 is "the technical integration of Cyber-Physical Systems into manufacturing and logistics and the use of the Internet of Things and Services in industrial processes" [SR20].
The characterising features of Industry 4.0 are (i) automated manufacturing, (ii) augmented reality, (iii) cloud computing, (iv) Big Data, (v) IoT, and (vi) AI and machine learning [DMG+22]. Looking to the future, researchers envision Industry 5.0 and Industry 6.0. According to Duggal et al. [DMG+22], Industry 5.0 will deal with personalisation and synergy between human and machine labour, while Industry 6.0 will be one of renewable energy, machine independence, interplanetary resource gathering and manufacturing, and quantum control. In terms of recent changes, education evolution can be characterised broadly by a set of attributes that have changed over time. These include changes in teaching philosophy and theories, in teacher and student roles, in the approaches used, in learning outcomes, in motivation to learn, and in the available information sources, facilities, and technology. In this context, let us look at the evolutionary changes in the transition from Education 3.0 to Education 4.0. Relying on the more extended model [MNN+21], we compare the evolutionary aspects of education in two periods (the late twentieth century and the present) and summarise them in Table 1.1. By juxtaposing the industrial revolutions and education evolution, the paper [MNN+21] proposes the concept of Education 4.0 in higher education and defines it as follows. Education 4.0 is the current period in which Higher Education institutions apply new learning methods, innovative didactic and management tools, and smart and sustainable infrastructure mainly complemented by new and emerging ICTs to improve knowledge generation


and information transfer processes. Combining these resources during teaching-learning processes will support the training and development of desirable critical competencies in today’s students.

Table 1.1 Education trends in the late twentieth century and present times

Attributes of education | Characteristics of Education 3.0 (period: late twentieth century) | Characteristics of Education 4.0 (period: present times)
Philosophy | Heutagogical connectivity | Heutagogical (a), peeragogical (b), and cybergogical (c)
Educator role | Orchestrator, curator, collaborator | Mentor, coach, collaborator, reference
Student role | Active, "knowledge ownership", initial independent | Active, high independence, trajectory designer, and knowledge creator
Approaches used | Co-constructed, teacher-centred, student-centred | Mostly student-centred, personalised, collaborative, or mixed
Learning/teaching methods | Learning by doing, face-to-face, blended learning | Learning by doing, active learning, inquiry-based, problem-based
Learning outcome, knowledge and skills | Prepared for practice and scenario analysis | Training of key competencies, both soft and hard, including computational thinking and integrated interdisciplinary skills
Enabler to learn | Computer and wide use of Internet | ICT tools and platforms powered by IoT, AI, BD, LA (d)
Information and learning resources | Texts, case studies, second-hand experiences, LO, GLO | Based on online sources
Facilities used | Blended and flexible physical shared spaces | Cyber and physical spaces, both shared and individual
Industrial technology | Internet access, automatization, and control | Connectivity, digitalization, virtualization, pedagogical assistants (chatbots)

(a) Heutagogy is the pedagogy based on the management of self-managed learners
(b) Peeragogy is the pedagogy based on the management of collaborative learning
(c) Cybergogy is the pedagogy based on using cyber technologies
(d) IoT—Internet of Things, AI—Artificial Intelligence, BD—Big Data, LA—Learning Analytics
Adapted from [MNN+21]

Now we return to the concept of Smart Education. Nowadays the word 'smart' is routinely used by the educational research community to form a long list of new terminology, such as Smart Education, Smart Learning, Smart School, Smart University, Smart Learning Environment, Smart Cities, etc. (Uskov et al. [UBH+18], Liu et al. [LHW17]). It is applied not only to teaching content, educational organisations, and environments, but also to the actors involved (smart teacher, smart student). Perhaps one of the earliest announcements on smart education comes from


the “Promotion Strategy for Smart Education” declared by Ju-Ho Lee, Minister of Education, Science and Technology, Korea in 2011 [Lee11]. According to this announcement, Smart Education is a customized learning system that enhances the capacity of learners in the twenty-first century, by moving away from uniform to individualized education, from standardized to diversified knowledge, and from admission-oriented to creativity-based learning. Smart Education will become a momentum for the innovation of the overall education system, including its environment, contents, teaching method, and evaluation.

The International Association of Smart Learning Environments considers smart learning as a more general term, defining it as "an emerging area alongside other related emerging areas such as smart technology, smart teaching, smart education, smart-e-learning, smart classrooms, smart universities, smart society. The challenging exploitation of smart environments for learning together with new technologies and approaches such as ubiquitous learning and mobile learning could be termed smart learning". Therefore, we can accept this as an expression of the dynamic nature of the contemporary evolution of the educational domain, which is now also often characterised in terms of transformation [LHW17, BWM+17]. To characterise Smart Education, Uskov et al. [UBH+18] introduce the term smartness level, defining it through the following attributes: adaptation; sensing, or getting data (or Big Data); inferring, or processing data and making logical conclusions; self-learning; anticipation; and self-optimisation and self-organisation. We do not intend to provide a more comprehensive discussion of this topic; instead, we need to highlight one important aspect. As fast-changing domains mature, they need to be conceptualised to be better understood and optimised for their stakeholders. In this regard, Hoel and Mason [HM18] propose a framework and some guidelines for introducing standardisation into Smart Education. The systematic literature review by Martin et al. [MAK19] aims at finding out how the term "Smart Education" is used, what its implications are, and what its prospects are. The main findings of this paper are the following. (i) The number of papers on this topic grows continuously. (ii) The technologies IoT, Big Data, and Artificial Intelligence dominate among the analysed papers. (iii) Smart Education covers all levels, with a bigger focus on higher education.
(iv) There are wide research opportunities in creating Smart Education systems in terms of connectivity, security, predictability, and data visualisation. Gua et al. [GLG21] present a systematic overview of the Smart Education research literature. Based on this analysis, the authors created a structural map of Smart Education knowledge and proposed a theoretical framework for Smart Education. We present this framework in a shortened version below. Smart education is a system of smart educators, smart students, and smart educational intermediary factors. The relationship between the three main factors is independent and interrelated. Smart educators organize and lead smart educational activities. They grasp the educational purposes and adopt appropriate educational contents and smart means. They create the necessary smart educational environment and regulate the smart learners and the entire smart educational process.


Next, this framework highlights the following: "Smart education is always built around the smart learners. A smart environment based on artificial intelligence and other technologies can meet learners' needs for personalized education. Teachers control and regulate the student by mastering, controlling, and regulating intermediary links. Students develop themselves and react to instruction by using, absorbing, and inheriting intermediary links as media. Teachers and students are connected and interact with each other via the transmission, connection, harmony, regulation, and extradition of smart intermediary factors". Smart intermediary factors include material factors (equipment, location, medium, etc.) and non-material factors (content such as knowledge, skills, and emotions; activities; purpose; method; behaviour; management; technology such as IoT, cloud computing, and big data; and evaluation). We recommend the references [Sha19, BSL+20] for more intensive research on Education 4.0. Readers can learn more about Smart Education from the Proceedings of the SEEL-16, -17, -18, -20, and -22 conferences. In summary, we conclude with the following findings. (1) Though this analysis is restricted, the two terms Education 4.0 and Smart Education characterise well the current transformative or evolutionary changes in education. (2) Both terms rely largely on technologies (Big Data, AI, IoT, etc.) and therefore motivate well our choice of the term Big Concepts. (3) In this book, we are interested in the present education period, with a specific focus on innovations introduced through Big Concepts into STEM-driven, robot-based CS education. (4) The presented short glance at education evolution serves as a wide pedagogical context for defining and researching the evolutionary model, which is the main goal of this chapter.

1.3 A Short Glance to Computing Evolution

There are two terms in the literature, computer science and computing, used to define aspects of modern computational technology. Computing refers to a branch of engineering science that studies the algorithmic processes that describe and transform information. Allison [All21] defines computing in the educational context as "a discipline combining knowledge, skills, and attitudes, which includes aspects such as computational thinking, an ability to use different tools and software, programming, an understanding of the role of technology in society, or using technology to solve business problems". In the following, we prefer the term computer science. At this point, we need to look at computer science (CS) broadly from the perspective of its role and evolution. In comparison to, e.g., mathematics or physics, CS is a modern science (for debates on CS as a science, see [TD17]). CS brings the basic skills and knowledge needed for our lives in the rapidly changing digital age. The advancement of technology is the primary source and driving force that stimulates, supports, and drives change. CS is the core discipline providing the theoretical background of modern technology and innovations, thus contributing to changes in our dynamic world. For a general


understanding of current innovations, we need to add to CS yet another component: Information and Communication Technology (ICT). Broadly, the Association for Computing Machinery (ACM) defines CS as "the study of the theory, design, implementation, and performance of computer software and computer systems, including the study of computability and computation itself". This definition does not cover ICT aspects explicitly, but it does so implicitly. ICT, with its capabilities of the Internet, cloud computing (CC), the Internet of Things (IoT), mobile communication, and more, can be broadly thought of as "computer software and computer systems" of a specific kind with varying capabilities. In other words, these technologies have emerged due to the extremely rapid evolution of CS. However, the first independent discipline that grew from the roots of CS was software engineering, the field of software systems. That is another significant component of the perception of modern technology mentioned in the ACM definition. The next prominent event in computing history was the separation of data management processes from computational processes in the mid-1980s. That became possible due to the advent of high-speed, large-capacity storage devices such as disks. Technologically, those devices enabled the creation of databases (DB) and data management systems (DMS), the core of information systems, to support various applications at that time and later. Those systems were the predecessors of the present Big Data systems, a sub-field of CS. Therefore, CS is a broad and heterogeneous field. It evolves extremely rapidly in multiple dimensions and sub-fields. Take, for example, aspects of human-computer interaction and computers' ability to model human brains. In this context, it is worth recalling Turing's old idea of computer (machine) intelligence, formulated in 1950 at the dawn of CS [Tur50]. That was indeed a far-reaching idea at that time. Later it sparked and grounded the advent of Artificial Intelligence (AI), a modern CS branch, now considered an independent discipline with high promise and expectations. AI is indeed a Big Concept in multiple respects. Firstly, from the perspective of CS itself: AI transforms our traditional understanding of the basic CS terms program, algorithm, and data. How has this understanding evolved over time? We discuss that briefly below, from the perspective of computer programming. Perhaps the simplest way to describe the traditional vision of the essence of programming (in terms of basic concepts) is to look at a program's structure and functionality. To do that abstractly, we present the following formula:

Program = Data Structure + Algorithm    (1.1)
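To make formula (1.1) concrete, here is a minimal Python sketch of our own (not taken from the cited book): the data structure is a plain list, and the algorithm is a human-predefined sequence of operations, insertion sort.

```python
# Illustration of formula (1.1): Program = Data Structure + Algorithm.
# Data structure: a plain list. Algorithm: operations predefined and
# sequenced in advance by a human (insertion sort).

def insertion_sort(data):
    items = list(data)               # the data structure
    for i in range(1, len(items)):   # the predefined algorithm
        key = items[i]
        j = i - 1
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]  # shift larger items to the right
            j -= 1
        items[j + 1] = key
    return items

print(insertion_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```

Swapping either component changes the program: a different data structure (e.g., a linked list) or a different algorithm (e.g., merge sort) yields a different composition in the sense of (1.1).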

This formula is a slightly changed title of Wirth's famous book "Algorithms + Data Structures = Programs", published in 1976 [Vir76]. It defines the structure of a program as a compound of a data structure and an algorithm. The algorithm is a set of operations, typically predefined and sequenced in advance by humans; the data structure is a feasible representation of given data for performing the operations defined by the algorithm. Here, the reader should interpret the sign '+' (meaning operation or process) in two ways: (i) explicitly, as a structural


composition of a program, and (ii) implicitly, as the process of executing the program in some computing environment to define its functionality. Taking into account the efforts in computing history to automate programming (we mean the advent of high-level programming languages and the theory of compilers in the fifties of the last century [AU77], including compiler generators [Ter97]), it is possible to generalise (1.1) as follows:

Metaprogram = Program as Data ++ Meta-algorithm    (1.2)

Here, the term metaprogram can be interpreted abstractly as a compiler of any high-level programming language [She01], as a program generator (in terms of homogeneous or heterogeneous meta-programming [ŠD12]), or even as a system for constructing compilers [Ter97]. The term Program as Data here has the meaning of a lower-level program. The latter stands as an argument for a meta-algorithm, which provides the higher-level manipulations identified in (1.2) by the doubled plus sign. The meta-algorithm is not much different from the interpretation in (1.1); however, it denotes higher-level operations here. You can learn more about the origins of meta-programming and its evolution from [ŠD12, Chap. 1, pp. 5–11]. Looking at the history of computing (e.g., software evolution in the late twentieth and early twenty-first century), we can extend this meta-vision to models and meta-models in terms of Product Line (PL) programming and program/model transformations [HHU08] as follows:

PLB Program = Model & Meta-model as Data +++ Model-transformation Algorithm    (1.2a)
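The program-as-data idea behind formula (1.2) can be sketched in a few lines of Python (our own illustration; the name `generate_scaler` is hypothetical): a meta-algorithm assembles the source text of a lower-level program and then turns that text into executable code.

```python
# Illustration of formula (1.2): Metaprogram = Program as Data ++ Meta-algorithm.
# The meta-algorithm builds the source text of a lower-level program
# (program as data) and turns it into executable code.

def generate_scaler(factor):
    """Meta-algorithm: emits and loads a lower-level program."""
    source = f"def scale(x):\n    return x * {factor}\n"  # program as data
    namespace = {}
    exec(source, namespace)  # higher-level manipulation: compile and load
    return namespace["scale"]

# One metaprogram yields a whole family of generated programs:
double = generate_scaler(2)
triple = generate_scaler(3)
print(double(21), triple(7))  # 42 21
```

This is a homogeneous metaprogram in the sense that the meta-language and the generated language coincide (both Python); in heterogeneous meta-programming they differ.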

The PL-based (PLB) program denotes a family of programs developed using PL methodologies. Those prevailed at the end of the previous century and the beginning of this one [Sch06] and are still prevailing [MSR+19, QVR+20]. In summary, these abstract formulae reflect the efforts of the CS research community to lift the program development process to ever-higher levels of abstraction. That enabled automation to enter the field to cope with the ever-growing complexity of tasks and systems. By repeating the plus symbol in formulae (1.2) and (1.2a), we express the growth of complexity. The triple plus sign in (1.2a) also means adding a third abstraction level (in comparison to (1.2)). Let us return to the AI concepts. In terms of AI and CS (programming) evolution, we can rewrite formulae (1.1) and (1.2) for a simplified understanding of AI in this way:

AI-based Model = Big Data Sets ⊕ AI Algorithm (e.g., Machine Learning)    (1.3)

Here, the AI-based model is quite different from traditional program-based systems. This model provides decisions or problem solving autonomously, typically without human intervention. Machine learning, for example, is a new algorithm type to manipulate data accepted from an environment in large amounts (identified as
Big Data sets here) and to provide a self-learning procedure using that data for autonomous decision-making. The sign ‘⊕’ indicates the radically new form of the relationship between data and algorithm. Therefore, with the advent of AI, technology advancement moves from pre-programmed automated systems (abstractly described by (1.1)–(1.2a)) to autonomous systems (here represented by (1.3)), where human intervention is either minimised or absent altogether. Secondly, concerning both CS and AI, we need to outline their role in a broader context for the future development of the economy (Industry 4.0) and society as a whole. Already, AI is the core technology driving Industry 4.0. The main challenge for the implementation of the Industry 4.0 initiatives, however, is to manage the so-called “Reskilling Revolution” [WEF21] in the right way. Finally, knowing the role of IoT in Industry 4.0 [MNN+21] and preserving the notation used so far, we introduce the IoT model and define it as follows:

IoT-model = Sensors (Smart Devices) as Data Sources ∗ Networks ∗ Computational Infrastructure (1.4)
In formula (1.4), the sign ‘∗’ means the physical and logical aggregation among the indicated items. This model defines the core components of the IoT in a very simplified manner. Note that we have presented a short vision of computing evolution from the perspective of our previous [ŠD12, ŠB18] and current research. One can find more extensive research on this topic, for example, in [DT19, TD16], from the perspective of the evolution of computational thinking. In summary, we conclude the following. (1) Computing/CS evolves with ever-growing acceleration and impacts society. (2) This evolution has resulted in continuous complexity growth of computing systems. (3) CS evolution can be characterised by the efforts to manage this complexity growth by adding ever-higher abstraction levels. (4) All citizens, especially the young generation, have to be prepared to understand and accept those changes. (5) In this regard, STEM-driven CS education is the relevant approach to learning for understanding, accepting, and implementing the newest achievements in computing. (6) Broadly, this is about the skills and knowledge the young generation needs in the twenty-first century for “unlocking a Reskilling Revolution” [WEF21].
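As a minimal, purely didactic illustration of formula (1.3) (our toy example, not an implementation from the book), a tiny learning algorithm consumes a data set and yields a model that then predicts on its own:

```python
# Illustrative sketch of formula (1.3): an "AI-based model" emerges from
# Data ⊕ Algorithm. A tiny learning algorithm (least-squares line fit)
# consumes a data set and yields a model that then makes predictions
# autonomously. A didactic toy only; all names are illustrative.

def fit_line(points):
    """Learn slope a and intercept b of y = a*x + b from (x, y) samples."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# The "data set" part of (1.3): a few noiseless samples of y = 2x + 1.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
model = fit_line(data)  # the learned model: (slope, intercept)

def predict(x):
    """Autonomous decision-making with the learned model."""
    return model[0] * x + model[1]
```

Note that nothing in `predict` was programmed by hand for this particular line; the behaviour was derived from the data, which is the essential difference between formulae (1.1)–(1.2a) and (1.3).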

1.4 A Short Glance to STEM Evolution

Currently, STEM is important for many reasons. First, STEM allows students to acquire interdisciplinary knowledge suited to their future employment and career. Second, the future workforce educated in STEM disciplines will promote the development and prosperity of nations. Third, STEM helps to address issues related to humans and the natural environment. Next, STEM can provide diverse opportunities to facilitate students’ learning through design and can enhance their design thinking [LSG+19], computational thinking, creativity, collaborative skills, and competencies
needed in the twenty-first century. Finally, the integrated STEM approach aims to find relationships between different STEM subjects and to provide a relevant context for learning the contents. Kelley and Knowles [KK16] argue that engineering design is a critical factor in integrated STEM (“When considering integrating STEM content, engineering design can become the situated context and the platform for STEM learning”). Today, a variety of perspectives, approaches, visions, and definitions of STEM exist. Despite this diversity, however, there is a consensus around the following issues: (i) using integrated STEM instruction (Bryan et al. [BMJ+15], English [Eng16]), (ii) exploiting real-world contexts to engage students in authentic and meaningful learning (Bryan et al. [BMJ+15], Kelley and Knowles [KK16]), (iii) employing student-centred pedagogies, including inquiry-based learning and design thinking (Bryan et al. [BMJ+15], Kelley and Knowles [KK16]), (iv) supporting the development of twenty-first century competencies such as creativity, collaboration, communication, and critical thinking (Bryan et al. [BMJ+15]), and (v) making connections among STEM disciplines explicit to students (Burrows et al. [BLB+18], English [Eng16], Kelley and Knowles [KK16]).

Next, we focus on definitions and some terminological issues of STEM. According to [ALV+21], the acronym STEM was first used in 2001 by the director of the Education and Human Resources division of the National Science Foundation, Judith A. Ramaley, in reference to the curricula of the disciplines involved. Since then, it has been gaining importance at both political (economic, social) and educational levels. STEM is an interdisciplinary teaching paradigm that integrates all four disciplines (Science, Technology, Engineering, and Mathematics) into a single, cross-disciplinary program, which offers instruction to solve real-world tasks with a focus on inquiry-based teaching/learning methods.
The National Science Teachers Association defines STEM as an “interdisciplinary approach to learning, where rigorous academic concepts are coupled with real-world lessons as students apply science, technology, engineering, and mathematics in contexts that make connections between school, community, work, and the global enterprise enabling the development of STEM literacy and with it the ability to compete in the new economy” (https://www.nsta.org/topics/stem). Hsu and Fang define STEM education as an educational approach in which the contents of the disciplines involved may be addressed (1) as a group of isolated ideas (multidisciplinary) or (2) as ideas integrated into the process of real-world problem-solving (interdisciplinary and cross-disciplinary) [HF19]. Currently, researchers highlight and focus on integrative STEM. According to Aguilera et al. [ALV+21], the terms “integrative” or “integrated” applied to STEM are redundant, as the acronym already alludes to disciplinary integration. According to Martín-Páez et al. [MAP+19], STEM should be defined as the educational approach that promotes the integration of the content (concepts, skills, and/or attitudes) originating from science, technology, engineering, and mathematics in the resolution of real-world problems.

The driving force to advance STEM is the “STEM workforce”. In this regard, the National Science Foundation (USA) [NSB15] highlights that STEM skills and the “STEM workforce” are of the highest importance for the twenty-first century economy by saying:

1. “The STEM workforce is extensive and critical to innovation and competitiveness. It is also defined in various ways and is made up of many sub-workforces.”
2. “STEM knowledge and skills enable multiple, dynamic pathways to STEM and non-STEM occupations alike. Assessing, enabling, and strengthening workforce pathways is essential to the mutually reinforcing goals of individual and national prosperity and competitiveness.”

STEM research now evolves rapidly. This process includes the following stages: (1) originating in traditional disciplines, (2) emerging in political agendas, (3) establishing in the education systems, and (4) evolving towards integrated pedagogies (Zhou et al. [ZGW+22]). In the 1990s, the National Science Foundation (1996) first introduced the acronym ‘SMET’ to denote instructional innovation in Science, Mathematics, Engineering, and Technology. Later, in 2001, the acronym STEM emerged due to J. Ramaley. Currently, integrated STEM education is the latest development stage; again, there is no consensus among researchers on what the term “integrated STEM” means in essence, with multiple visions proposed. The report prepared by the Committee on Integrated STEM Education [HPS14] has developed a descriptive framework that includes four interdependent large components (Goals, Nature and Scope of Integration, Outcomes, and Implementation), each containing a set of sub-components. Goals for Students include STEM literacy, twenty-first century competencies, STEM workforce readiness, interest and engagement, and making connections. Goals for Educators include increased STEM content knowledge and increased pedagogical content knowledge.
Nature and Scope of Integration covers the type of STEM connections; disciplinary emphasis; and the duration, size, and complexity of the initiative. This report emphasises “connections” between and among the STEM subjects. “Between and among” refers to connections between any two STEM subjects (e.g., most commonly math and science) and those among three or more. Outcomes for Students covers learning and achievement; twenty-first century competencies; STEM course taking, educational persistence, and graduation rates; STEM-related employment; STEM interest; development of a STEM identity; and the ability to make connections among STEM disciplines. Outcomes for Educators include changes in practice and increased STEM content and pedagogical content knowledge. Kelley and Knowles define integrated STEM education as “the approach to teaching the STEM content of two or more STEM domains, bound by STEM practices within an authentic context for the purpose of connecting these subjects to enhance student learning” [KK16]. Bryan et al. [BMJ+15], for example, define integrated STEM as “the teaching and learning of the content and practices of the interdisciplinary knowledge which include science and/or mathematics through the integration of the practices of engineering and engineering design of relevant technologies”. Nadelson and Seifert [NS17] define integrated STEM education as “the seamless amalgamation of content and
concepts from multiple STEM disciplines, where knowledge and process are jointly considered and applied in a problem-based context”. Cheng and So [CS20] consider a typology of integration in STEM learning as a combination of three items: (i) content integration, (ii) pedagogical integration, and (iii) learner integration. Content integration covers different types of content and knowledge taken from the four STEM domains or/and subject knowledge (e.g., physics, chemistry, computers, etc.). Pedagogical integration covers learning approaches such as inquiry-based learning, cognitive integration, etc. Learner integration covers the integration of special education needs and diverse abilities. Roehrig et al. [RDE+21] propose an integrated STEM framework that includes seven key characteristics: (a) focus on real-world problems, (b) centrality of engineering, (c) context integration, (d) content integration, (e) STEM practices, (f) twenty-first century skills, and (g) informing students about STEM careers. In the context of the Australian education system, Zhou et al. [ZGW+22] define integrated STEM education as “the science of teaching across two or more STEM-related subjects to address and solve authentic problems through design solutions. Its three attributes include (i) transdisciplinary integration, (ii) authentic contexts, and (iii) design problem-solving”. Vasquez and English [Eng16] define four integration levels: (1) Disciplinary, when students learn concepts separately in each discipline. (2) Multidisciplinary, when students learn disciplines still separately but with reference to a common theme. (3) Interdisciplinary, when students learn concepts from two or more disciplines that are tightly linked, to deepen knowledge and skills. (4) Transdisciplinary, when students undertake real-world problems and apply knowledge from two or more disciplines, shaping their learning experiences.
Currently, STEM evolves (i) towards a higher degree of integration among the basic disciplines, or (ii) by adding new ones such as Art or Social. D. Trevallion and T. Trevallion indicate a few variations of STEM, mainly STEAM (adding Art to the traditional components), STEMM (adding a Medicine component), and D-STEM (focusing on Design in STEM education), and evaluate their role in the STEM curriculum in secondary education [TT20]. Problems of integrated STEM are typically ill-structured and require knowledge from multiple disciplines. According to Cheng and So [CS20], the current movement and related initiatives of STEM learning suffer from a vague and unclear conception of integration in STEM learning in school. This has tightly limited its further development, as well as the discussion among policymakers, educators, teachers, change agents, researchers, social leaders, and other stakeholders who place high expectations on the role of STEM learning in students’ future development and in the innovation and technology development of the society/nation. As a result, the authors proposed a typology with three categories (content integration, pedagogical integration, and learner integration), six subcategories, and four related basic models of integration in STEM learning. The subcategories are subject integration, domain integration, method integration, cognitive integration, special education needs, and diverse ability integration in STEM learning. Depending on the extent of content and pedagogical integration, this constitutes a new comprehensive framework for conceptualising
and managing various types of STEM learning in the school curriculum, locally and internationally. Cheng’s and So’s four models are as follows [CS20]. Model I covers total integration (High integration in content, High integration in pedagogy, Maximised exposure in content and pedagogy, and the Highest complexity in learning). Model II covers content integration—pedagogy separation (High integration in content, Low integration in pedagogy, Maximised exposure in content, and High complexity in content but low complexity in pedagogy). Model III covers content separation—pedagogy integration (Low integration in content, High integration in pedagogy, Maximised exposure in pedagogy, and High complexity in pedagogy but low complexity in content). Model IV covers total separation (Low integration in content, Low integration in pedagogy, Limited exposure in content and pedagogy, Separated and fragmented learning, and the Lowest complexity in learning). The authors evaluate the level of integration (“high”, “low”) using some examples from practice, however, without more precise criteria. STEM knowledge includes epistemological knowledge, procedural knowledge, and technical knowledge associated with each STEM discipline (Science, Technology, Engineering, and Mathematics) and with how the associated ideas, concepts, principles, and theories overlap and interrelate [Boo19]. Procedural knowledge provides the foundation for the acquisition, application, and practice of STEM skills such as measuring data; ascertaining its precision, validity, and reliability; or selecting and displaying data. Learners obtain procedural knowledge through investigative and hands-on activities, progressively and dynamically, in or/and outside the classroom. Technical knowledge relates to applying knowledge, skills, attitudes, and values to a specific field, career, or task, such as software engineering.
STEM competence refers to an individual’s ability to apply STEM knowledge, skills, and attitudes appropriately in everyday life, the workplace, or an educational context. It should not be confined to and developed within the traditional boundaries of discrete bodies of knowledge (e.g., physics competence or computing competence). STEM competence covers both the ‘know-what’ (the knowledge, attitudes, and values associated with the disciplines) and the ‘know-how’ (the skills to apply that knowledge, taking account of ethical attitudes and values to act appropriately and effectively in a given context). In the information age of the 4IR (the Fourth Industrial Revolution), STEM (i.e., the ‘know-what’ and the ‘know-how’) encompasses the traditional components of knowledge, skills, values, and attitudes and the all-important expansion of information, big data, and technology. It is important not to view these components as isolated or ‘stand-alone’ but in a connected, contextualised, and holistic manner [Boo19]. The other trend in the integrated STEM paradigm is an enhanced focus on engineering and design (English and King [EK15], English [Eng18]). Design is an interdisciplinary domain that uses approaches, tools, and thinking skills that help designers devise more and better ideas for creative solutions (Danah et al. [DRM17]). Typically, design is an engineering activity. From the engineer’s mindset, design is a process of generating abstract ideas and then moving to concrete
ones, seeking out design patterns, experimenting, testing, and rethinking to find the needed solution. “Design and Technology should be used as the platform for integrating STEM and creative design, and for raising the profile of engineering in schools”, states the report [Fin16] of the Institution of Mechanical Engineers. Design promotes creativity and design thinking. Therefore, the design-based approaches that develop students’ ability in design thinking are at the heart of engineering education, and of STEM education too. Razzouk and Shute [RS12] define design thinking as “an analytic and creative process that engages a person in opportunities to experiment, create and prototype models, gather feedback, and redesign”. Li et al. [LSG+19] argue that design and design thinking are important to creativity and innovation not only in engineering and technology, in the current movement of developing and implementing integrated STEM education, but also for everyone (similarly to computational thinking, the far-reaching concept proposed by Wing [Win06]). According to Li et al., design thinking can and should be viewed from a broader perspective, that is, as a model of thinking in school education to help nurture and develop creativity and innovation for every student in the twenty-first century. The authors (Li et al. [LSG+19]) (i) call for students’ learning in STEM education through design, (ii) emphasise the benefits of design practices for integrated STEM education, and (iii) call for developing students’ design thinking not only in Technology and Engineering but also in Mathematics and Science. The authors’ concluding observation is methodologically valuable to highlight: systematic studies on students’ design thinking and its development, especially in and through STEM education, would help provide important foundations for developing sound educational programs and instruction.

Note that, concerning the design movement in education, researchers use other terms with a close or similar meaning, such as Do-it-yourself, Maker initiatives (Halverson and Sheridan [HS14], Papavlasopoulou et al. [PGJ17]), or Design sprint (Arce et al. [ASL+22]). Standards for Technological and Engineering Literacy (STEL) promotes STEM education in which technology and engineering play a critical role as subject integrators. The integration of science, technology, engineering, and mathematics (STEM) in project-based learning is an ideal approach, with the benefits of enhancing student learning and twenty-first century competencies [HKB+20]. As a concluding remark, we present the OECD Learning Framework 2030 (the Future of Education and Skills 2030 project), which distinguishes four different types of knowledge. (i) Disciplinary knowledge includes subject-specific concepts and detailed content, such as that learned in the study of mathematics and language, for example. (ii) Interdisciplinary knowledge relates the concepts and content of one discipline/subject to the concepts and content of other disciplines/subjects. (iii) Epistemic knowledge is the understanding of how expert practitioners of disciplines work and think. This knowledge helps students find the purpose of learning, understand the application of learning, and extend their disciplinary knowledge. (iv) Procedural knowledge is the understanding of how something is done, the series of steps or actions taken to accomplish a goal (www.oecd.org/education/2030-project/teaching-and-learning/learning/knowledge/Knowledge_for_2030.pdf).

1.5 A Short Glance to Computational Thinking Skills

Computational thinking (CT) is one of the most needed skills in the twenty-first century. The primary source for developing this skill lies within computer science (CS) disciplines such as programming. CT skills are needed by everyone, not only by computing professionals. Many papers, reports, and discussions with different visions on this topic exist now. Here, we aim to motivate CT as a Big Concept at large. We do that by (i) presenting a variety of existing definitions, (ii) analysing worldwide initiatives, (iii) showing the research intensiveness in the field, and (iv) outlining the relationship of CT with STEM, CS, and robotics.

Definitions of CT. As there is great interest and many driving forces to advance CT, the research stream in this field is very diverse and intensive. Multiple visions, attitudes, concepts, frameworks, and initiatives exist. Despite that, there is no unique understanding of the concept itself. In addition, this understanding evolves rapidly along with computing technology advances, the challenges of the twenty-first century, and the context of use. This understanding begins with definitions. It is reasonable to have a starting point for considering the CT topics. Many researchers accept J. Wing’s seminal paper “Computational Thinking” (published in Communications of the ACM in 2006) as a reference point. We do the same. J. Wing emphasises that the roots of CT are within computer science (CS) by saying [Win06]:

Computational thinking involves solving problems, designing systems, and understanding human behavior, by drawing on the concepts fundamental to computer science. Computational thinking includes a range of mental tools that reflect the breadth of the field of computer science. Informally, computational thinking describes the mental activity in formulating a problem to admit a computational solution.
The solution can be carried out by a human or machine, or more generally, by combinations of humans and machines.

She indicates the following characteristics of CT: (i) Conceptualising, not programming. (ii) Fundamental, not rote skill. (iii) A way that humans, not computers, think. (iv) Complements and combines mathematical and engineering thinking. (v) Ideas, not artifacts. (vi) For everyone, everywhere. The ideas expressed in this paper have sparked a new wave of discussions on the role of CT. Aho [Aho11] provides a more concise and precise definition by stating that CT is “the thought processes involved in formulating problems so their solutions can be represented as computational steps and algorithms.” He, therefore, emphasises the need to connect CT with computational models. Later, in an essay published on March 6, 2011, Wing provides a more extended panorama of CT issues [Win11] compared to her paper [Win06]. This panorama includes such topics as CT and Other Disciplines, CT in Daily Life, Benefits of CT, and CT in Education. She concludes her essay by saying:

Computational thinking is not just or all about computing. The educational benefits of being able to think computationally transfer to any domain by enhancing and reinforcing intellectual skills. Computer scientists see the value of thinking abstractly, thinking at multiple levels of abstraction, abstracting to manage complexity, abstracting to deal with scale, etc. We know the value of these capabilities. Our immediate task ahead is to better explain to
non-computer scientists what we mean by computational thinking and the benefits of being able to think computationally.

Denning and Tedre [DT19], other famous proponents of CT ideas, classify research efforts in CT into three periods. The first covers the pre-Wing period (before 2006), identified as the “evolution of computing’s disciplinary ways of thinking and practicing”. The second covers Wing’s initiatives (roughly 2006–2015), defined as “educational research and efforts in computing”. The third, identified as the post-Wing period, is defined as the “emergence of computational science and digitalization of society”. These authors define CT as a distilled product of many existing attitudes and visions in this way: CT is (i) “the mental skills and practices for designing computations that get computers to do jobs for people” and (ii) “explaining and interpreting the world as a complex of information processes” [DT19]. Selby and Woollard define CT as a cognitive or thought process that reflects “the ability to think in abstractions, the ability to think in terms of decomposition, the ability to think algorithmically, the ability to think in terms of evaluations, and the ability to think in generalizations” [SW13]. As there is no universal definition, researchers often define CT depending on the context. For example, Digital Technologies Hub (2018) states: “Computational thinking describes the processes and approaches we draw on when thinking about how a computer can help us to solve complex problems and create systems”. In the context of the International Computer and Information Literacy Study (2018), Fraillon et al. [FAS+18] define CT as “an individual’s ability to recognize aspects of real-world problems, which are appropriate for the computational formulation, and to evaluate and develop algorithmic solutions to those problems so that the solutions could be operationalized with a computer”.
Based on the existing CT literature, interviews with mathematicians and scientists, and exemplary computational, the paper [WBH+16] presents a definition of CT for mathematics and science classrooms in the form of a taxonomy. The latter consists of four main categories: data practices, modelling and simulation practices, computational problem-solving practices, and systems thinking practices. The International Society for Technology in Education (ISTE) and the Computer Science Teachers Association (CSTA) developed an operational definition: “Computational thinking (CT) is a problem-solving process that includes (but is not limited to) the following characteristics: (i) Formulating problems in a way that enables us to use a computer and other tools to help solve them. (ii) Logically organizing and analyzing data. (iii) Representing data through abstractions such as models and simulations. (iv) Automating solutions through algorithmic thinking (a series of ordered steps). (v) Identifying, analyzing, and implementing possible solutions with the goal of achieving the most efficient and effective combination of steps and resources. (vi) Generalizing and transferring this problem-solving process to a wide variety of problems. These skills are supported and enhanced by a number of dispositions or attitudes that are essential dimensions of CT; they include the following:
• Confidence in dealing with complexity
• Persistence in working with difficult problems
• Tolerance for ambiguity
• The ability to deal with open-ended problems
• The ability to communicate and work with others to achieve a common goal or solution”.

Looking at these and other definitions [CC19], it is possible, for example for practical use, to express CT through the following list of categories: Abstraction, Algorithm, Data, Patterns (structural or component vision), Automation, Decomposition, Parallelization, Simulation, Analysis, and Generalization.

Worldwide initiatives to promote computing and CT. As stated in [Kon18], there are multiple examples and efforts to develop CT skills worldwide in K–12 education:

• In the USA: a CS curriculum (2003) and a new K–12 CS framework (2016).
• In England: a new computing curriculum (2014).
• In Japan: computer programming is compulsory at all levels of schools.
• In Singapore: a CT-based curriculum for K–12 (2014).
• In India: a new CS curriculum in 2013; textbooks with CT curricula used by 300 schools from K1–8 in 2016.
• In China: the inclusion of CT in the K–12 curriculum dates from 2013.
• In South Korea: high schools adopted coding as an elective course (2018).
• In Finland: core coding curricula with CT (2016).
• In Estonia: an initiative to bring programming to K–12 began in 2012.
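To connect such curricula back to the practical CT categories listed above (decomposition, abstraction, algorithms, automation, generalisation), a classroom-style toy example (ours, with illustrative names, not taken from the cited sources) might look as follows:

```python
# A toy illustration of several CT categories in code:
# decomposition, abstraction, algorithmic thinking, and generalisation.
# Task: report the average word length of a sentence.

def words(text: str) -> list:
    """Decomposition: split the big task into a smaller, named sub-task."""
    return text.split()

def mean(values) -> float:
    """Abstraction + generalisation: works for any numeric sequence."""
    values = list(values)
    return sum(values) / len(values)

def average_word_length(text: str) -> float:
    """Algorithmic thinking: an ordered composition of the steps above."""
    return mean(len(w) for w in words(text))

# Automation: once expressed as an algorithm, the solution runs on any input.
result = average_word_length("computational thinking for everyone")
```

The point for learners is not the arithmetic itself but the process: the problem is decomposed into `words` and `mean`, each sub-solution is abstracted into a reusable function, and the final algorithm composes them so the computer can automate the whole task.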

This list of initiatives is by no means comprehensive. For other initiatives, see [Iye19]. The report [BCD+16] discusses the most significant CT developments for compulsory education in Europe and provides a comprehensive synthesis of evidence, including implications for policy and practice. Among other key issues, the report focuses on the following questions. How can teachers be trained to effectively integrate CT into their teaching practice? Should CT be addressed within a specific subject (e.g., CS, as part of STEM) or as a cross-curricular topic? What does it mean to assess CT? What is needed to further the CT agenda in compulsory education settings? The philosophy of the project includes three issues: (i) developing computer fluency, not just computer literacy; (ii) developing thinking process skills, not just content mastery; and (iii) highlighting the interconnectedness of knowledge, not just addressing a topic in isolation. The paper [KAL19] focuses on the pre-Wing period of CT research initiatives in K–12 (computing without the use of the term CT) and summarises:

One common approach to incorporating computation into the K–12 curriculum is to emphasize computer literacy, which generally involves using tools to create newsletters, documents, Web pages, multimedia presentations, or budgets. A second common approach is to emphasize computer programming by teaching students to program in particular programming languages such as Java or C++. A third common approach focuses on programming applications such as games, robots, and simulations.

The paper [Eic19] presents a research module on CT as a part of the international comparative study, offered as an international option for the first time in ICILS 2018. Countries with education systems that participated in this forum
were able to choose whether they wanted to take part in this additional module. The option included computer-based tests (two test modules for each student) in the domain of computational thinking for Grade 8 students, as well as additional computational-thinking-related questions and items in the study’s questionnaires for students, teachers, school principals, and IT coordinators. Palts and Pedaste [PP20] propose a model for developing CT skills in three stages: (1) defining the problem, (2) solving the problem, and (3) analysing the solution. Together, the stages comprise ten CT skills: problem formulation, abstraction, problem reformulation, decomposition, data collection and analysis, algorithmic design, parallelisation and iteration, automation, generalisation, and evaluation. Berglund et al. [BDD+21] present the issues of an international project in cooperation between European and South Asian institutions in STEM–CT education.

Research intensiveness in CT. The interest in researching CT issues has grown extremely rapidly, especially in the last decade, and has been identified as the “CT movement”. To illustrate that, the paper [RZN+18] presents the results of searching for publications using the keyword “computational thinking” in only three databases. Such a search resulted in 2296 (IEEE), 15,387 (Springer), and 43,729 (Science Direct) published articles about CT found in academic journals. Based on reading the abstracts of 6326 articles, the authors of this paper selected and analysed 12 articles along three dimensions (CT concepts, CT practices, and CT perspectives) to identify a unified model of teaching CT. The paper [RZN+18] concludes that a unified model can be developed by including elements from the three analysed models, namely [BDŠ18, XY17, MMH17], which cater to various dimensions of CT for better learning.

CT links with STEM and robotics.
The report “The Global STEM Paradox”, prepared by The New York Academy of Sciences (2015), defines STEM as “the integration of Science, Technology, Engineering and Math instruction, combined in classroom learning with real-world experiences to provide students with both the technical and personal professional skills they need to succeed”. Therefore, STEM education and teaching are key to innovation and economic growth in this digitally connected world, where we are highly dependent on technology and innovation. This is so because STEM relates to critical thinking, computational thinking, teamwork, communication, collaboration, and creativity (UNESCO, [iCGL16]). Carnegie Mellon University is the curator of computational thinking, and this concept is applied in their STEM programs, which include the following issues [iCGL16]:
(i) Their vision “to empower ALL with STEM education” resonates with Wing’s attitude “Computational thinking for ALL”.
(ii) Fundamental CT practices (Decomposition, Algorithms, Abstraction, and Debugging) combined with communication and collaboration.
(iii) The basis of the pedagogical approach is “learning-by-doing”, embedded in the integrative curriculum and empowered by learner-centric Deep Learning [FS14].
(iv) Focus on dispositions and attitudes outlined by the Computer Science Teachers Association (CSTA), which include: Confidence in dealing with complexity; Persistence in working with difficult problems; Tolerance for ambiguity; The ability to deal with open-ended problems; The ability to communicate and work with others to achieve a common goal or solution.
(v) Continuous support for STEM educator training.
(vi) Assessment issues (for teachers,
students, schools, and parents) and types of assessment tools (Comprehension checks; Assessing processes; Performance tasks; Pre- and post-tests; Assessing prior knowledge; Reflection assignments).

Sengupta et al. [SDF18] argue that computing and computational thinking should be viewed as discursive, perspectival, material, and embodied experiences, among others. These experiences include, but are not subsumed by, the use and production of computational abstractions. The paper illustrates what this paradigmatic shift toward a more phenomenological account of computing can mean for teaching and learning STEM in K–12 classrooms by presenting a critical review of the literature, as well as a review of several studies the authors have conducted in K–12 educational settings grounded in this perspective. Thus, STEM and CT go together. According to Cheung [Che17], there are at least three ways of introducing CT and STEM together: (i) CT skills and computing; (ii) CT skills and programmable robotics; and (iii) CT skills and simulations.

The ever-growing popularity of computer science (CS) in the USA and worldwide has fostered the need for computational thinking (CT) in K–12 education. The paper [WF17] describes a 5th–9th grade STEM outreach program that includes classes on microcontrollers and computer programming. In addition, it focuses on determining the effectiveness of these curricula in improving confidence in CT and problem-solving skills. Swaid reports on a project, HBCU-UP II, that works on bringing CT to STEM disciplines [Swa15]. A CT-based strategy is adopted to enforce thinking computationally in STEM gatekeeping courses. This paper presents the framework, implementation, and outcomes. This ongoing project contributes to efforts to establish computational thinking as a universally applicable attitude that is meshed within STEM conversations, education, and curricula. Robotics is a relevant instrument for supporting the development of computational thinking in students.
This is so because robots (including educational ones) are “dedicated computers” for solving purely mechanical tasks. In addition, educational robotics activities unfold in a multidimensional problem space that requires the integration of programming, modelling, and building environmental aspects of the activity. Students working collaboratively with robots have the opportunity to adopt roles within the group that are aligned to these multiple dimensions (e.g., programmer, designer, and analyst) [KSP19]. Swanson et al. [SAB+19] consider CT-STEM practices (data practices, modelling and simulation practices, computational problem-solving practices, and systems thinking practices) to develop computational science curricula that teach both CT-STEM practices and science content in biology. This paper presents findings from a quantitative analysis of students’ written responses to assessments given before and after their participation in three computational biology units. Results suggest that the computational biology curriculum helped students develop some important competencies for the strand on modelling and simulation practices.

Recently, many countries in Europe, the USA, and Australia have moved towards activities for reconsidering, re-evaluating, or developing new computing curricula in order to meet the requirements and needs of the twenty-first century. Among other requirements, CT stands as a key theme in those curricula. Crick, for example,
evaluates the new computing curriculum in England [Cri17] with regard to CT and reports the following: “A high-quality computing education equips pupils to use computational thinking and creativity to understand and change the world”. In this context, Larke [Lar19] evaluates England’s National Curriculum as “a much-needed modernization of the country’s digital skills curriculum, replacing a poorly regarded ICT program of study with an industry-supported scheme of computer science, robotics, and computational thinking”. However, this paper also reveals the efforts of teachers (based on data collected from four state-maintained primary school classrooms) to resist or even block a curriculum that they view “as narrow, difficult to teach and in conflict with their beliefs and practices as educational professionals”.

The results obtained by Tang et al. [TYL+20] through a systematic review of CT assessment indicate the following. (a) More CT assessments are needed for high school students, college students, and teacher professional development. (b) Most CT assessments focus on students’ programming or computing skills. (c) Traditional tests and performance assessments are often used to assess CT skills, and surveys are used to measure students’ CT dispositions. (d) More reliable and valid evidence needs to be collected and reported in future studies.

Angeli and Giannakos [AG20] present a cyclic (evolutionary) five-step plan for how CT research can be addressed in future research studies. The first step tackles the definition of CT competencies in order to provide a baseline and a common language across different contexts (e.g., different countries, educational levels, school subjects, disciplines) about the concept of CT. The next step is that of creating powerful metaphors as a mechanism for transforming abstract CT concepts into more concrete notions that are easier to understand.
The third step is to research the effectiveness of pedagogies and technologies in enhancing and enabling the development of CT competencies. The fourth step focuses on the crucial issue of preparing teachers and instructors to teach CT, as well as integrating appropriate technological tools to enable the teaching of CT in their respective teaching contexts. Lastly, the fifth step deals with the measurement and assessment of CT competencies, an area of research that is currently in its infancy.

In summary, we can state the following. Though the provided analysis is by no means comprehensive, the introduced attributes characterise CT well. Furthermore, CT is an essential attribute for defining STEM components from different perspectives (Science, especially CS, where algorithms, data, abstractions, and computations are at the focus and where abstract and scientific thinking is needed; Engineering and Technology, where design is the main activity that requires design thinking). Therefore, based on the findings of Sects. 1.3 and 1.4, we introduce the term Integrated STEM Thinking (ISTEMT), which we define as ISTEMT = Computational Thinking (CT) + Design Thinking (DT) + Data Thinking (DtT) + Scientific Thinking (ST). ISTEMT is an extended vision of CT for the STEM context and beyond this paradigm. It aggregates all movements known so far (CT, DT, and DtT), supplemented with ST. We have coined this term taking into account the known visions (Wing’s [Win06], Li et al.’s [LSG+19], Denning and Tedre’s [DT19], Mike et al.’s [MRR+22], and Reeve’s STEM thinking [Ree15]) and, additionally, the evolution of integrated STEM (see Sect. 1.4). We present these visions below.


Computational thinking involves solving problems, designing systems, and understanding human behavior, by drawing on the concepts fundamental to computer science [Win06].
Design thinking can and should be viewed as a model of thinking in school education to help nurture and develop creativity and innovation for every student in the twenty-first century [LSG+19].
Computational thinking is (i) “the mental skills and practices for designing computations that get computers to do jobs for people” and (ii) “explaining and interpreting the world as a complex of information processes” [DT19].
STEM thinking is purposely thinking about how STEM concepts, principles, and practices are connected to most of the products and systems we use in our daily lives [Ree15].
Data thinking integrates computational thinking, statistical thinking, and domain thinking [MRR+22].
In addition, we have taken into account Benita et al.’s ideas on Data Thinking from [BVW+21].

In our interpretation, ISTEMT is a human mind and behavioural model for understanding the essence of Integrated STEM through solving complex real-world tasks that require integrated skills, including computational, design, data-driven, and scientific thinking. In our vision, the integrated STEM-CS skills model—the product of our research [BŠK+22]—is a practical implementation of the ISTEMT conceptual model. In terms of STEM evolution and our approach, we treat both CT and ISTEMT as Big Concepts. Furthermore, in terms of “smartness”, we place ISTEMT and Smart Education on an equal footing, due to their importance and because they supplement each other. They will appear in this role in multiple contexts that we discuss throughout the book.

1.6 Context Model to Define Our Approach

Context is a very important attribute in discovering models and in conceptual modelling. Based on an analysis of the context-related literature [Dey01, ZLO07, Dou04, LCW+09, VMO+12, Tha10, BBH+10, All21, MT21], we have drawn the following findings.
(1) Context is a multi-dimensional category that, in general, may include the following entities: space and time, physical conditions, computing resources, user, activity, and social environment, to name a few.
(2) Context is always bound to an entity, and the information that describes the situation of an entity is its context. In this regard, the ideas and concepts discussed in Sects. 1.1–1.4 are an external (broad) context.
(3) Typically, context is a hierarchical entity, because many constituents form its structure at different levels. Furthermore, if we focus on one constituent and investigate it in more detail, the others may serve as a context for the first.
(4) Any entity, process, design, or system we want to study has two essential parts: the base part and its context. Typically, context enables us to extract additional information to understand better the base part of the system
under consideration. Our topic, the evolution of STEM-CS education, is no exception to this rule.

Based on an analysis of the sources on models, conceptual models, and conceptual modelling [Myl08, TDK+10, Tha10, RAL+15, Vri17, Fra18, HS19, MT21, All21], we conclude the following.
(1) There are many visions of conceptual modelling (e.g., as a conceptualisation of the ontological structure of the problem domain [TDK+10], as a process of abstracting a model from the real world [KR08], or as an activity of knowledge acquisition from the real world [RAL+15]).
(2) Context plays a significant role in conceptual modelling. Conceptual modelling is a powerful approach that helps researchers understand the domain and its challenges better. It helps to reason about the domain, communicate the domain details, and document the domain for future reference.
(3) Multiple attributes specify conceptual modelling, such as: relation to origins, concerns and their usage, purpose and function (e.g., understand, analyse, assess, realise and explore, predict, use), concept space, domain and context, representation, and concept relationships.
(4) We treat conceptual modelling as a background for explaining how we implement our ideas systematically.

Therefore, having this background, we discuss two conceptual models (the Big Concepts context model and the evolutionary model for change) in this and the next section. In this section, we propose the context model for introducing the Big Concepts into STEM-driven CS education. The aim is to define the role of each concept, because their roles differ, and to outline their relationships conceptually. To do that, we have combined ideas taken from a few sources (Biggs’s 3P model [Big93], Allison’s 4P model [All21], and the TPACK framework proposed by Koehler and Mishra [KM09]). The model [Big93] represents education/learning by three items: Presage as context, Process used, and Product.
Allison uses this model to define challenges in K–12 Computing education and extends it with a fourth P dimension (meaning Policy). In classroom learning, Policy encompasses government priorities and regulatory bodies. Policy is a driving force that highly influences the remaining components. The Presage factors are students and the teaching context. The Process is task processing. The Product is the nature of the outcomes. We have borrowed ideas from both the [Big93] and [All21] models. In Fig. 1.1, we outline the proposed model. Structurally, it looks like the 3P model. The interpretation of all P dimensions, however, is quite different. We define Presage at a much higher context level than in [Big93] or [All21]. We interpret it as a hierarchical context that covers external and internal contexts. We refer to the external context as the ideas from documents reflected in the initiatives of Industry 4.0 regarding educational technology and the skills needed for the twenty-first century, as defined in Sects. 1.3 and 1.4. Concerning the 4P model, we consider its first dimension (Policy) as an external context here. We refer to the internal context as the four Big Concepts (Educational Robotics—ER, Internet of Things—IoT, Data Science—DS, and Artificial Intelligence—AI) we intend to integrate into STEM-driven CS education.

Fig. 1.1 Adapted 3P context model [Big93] to define the integration of Big Concepts in our approach. The figure shows Presage (external context: Industry 4.0 and challenges for STEM-driven CS education; internal context: STEM-CS education and the Big Concepts of Educational Robotics, Internet of Things, Data Science, and Artificial Intelligence), the Process (mapping the Big Concepts onto the TPACK framework of Pedagogy, Technology, and Content, and then onto Smart STEM-driven CS education), and the Product (integrated STEM-driven skills, including Computational Thinking).

We accept that the connection between the external and internal contexts is a mental high-level bidirectional activity. Therefore, the Presage is a two-staged item (see Fig. 1.1). We define the next component of our model, i.e., Process, through two actions. The first is the mapping of the four Big Concepts onto the TPACK framework; the second is the mapping of the latter onto Smart STEM-driven CS education. We represent the first mapping by a complete bipartite graph. This graph abstractly defines the relationships among the Big Concepts and the three components of the TPACK framework (Pedagogy, Technology, and Content). Finally, the Process results in defining the Product—Integrated STEM-driven Skills—expected or real, when the Process realisation occurs. In our vision, Integrated Skills are a multidimensional entity that covers computational thinking (it is also a Big Concept), scientific thinking, design thinking, engineering thinking, and soft skills.

In summary, the presented vision specifies the role, intention, relationships, and consequences of using Big Concepts within the STEM-driven CS education paradigm. This model provides a categorisation of Big Concepts according to their role. The four Big Concepts (Educational Robotics, IoT, DS, and AI) serve as a source for the delivery of new real-world tasks, new educational technology, new content, and new aspects of pedagogy in the following sense: all of these either were not considered at all in the previously researched STEM paradigm (e.g., DS, AI) or were considered to a limited extent (e.g., IoT). STEM-CS and robotics are our problem domains under investigation; therefore, they appear at the top level. The remaining Big Concepts appear at the Process level (Smart Education for our STEM) and computational thinking at the Product level. Smart Education and Computational Thinking are both derivative Big Concepts.
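The first mapping, a complete bipartite graph between the four Big Concepts and the three TPACK components, can be sketched in a few lines of Python. This is a minimal illustration of the relationship structure only; the variable names are ours, not part of the model:

```python
from itertools import product

# The four Big Concepts of the internal context (Fig. 1.1).
big_concepts = ["Educational Robotics", "Internet of Things",
                "Data Science", "Artificial Intelligence"]
# The three components of the TPACK framework.
tpack = ["Pedagogy", "Technology", "Content"]

# A complete bipartite graph K(4,3): every Big Concept is abstractly
# related to every TPACK component, giving 12 relationships in total.
edges = list(product(big_concepts, tpack))

assert len(edges) == len(big_concepts) * len(tpack)
```

The twelve edges make the point of the text concrete: none of the four concepts maps onto a single TPACK component alone; each contributes new tasks, technology, content, and pedagogy at once.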


What is the value of the proposed model? It serves as a pre-condition to achieving the goal of this chapter—constructing the evolutionary model for change. We present it in the next section.

1.7 Evolutionary Model for Change

Now, having a general understanding of models, conceptual models, and the role of context, we can define our evolutionary model for change. We define the process of building our model using the following terms (concepts): problem domain, solution domain, state and vision (of a domain), external context, internal context, requirements for change, and mapping (e.g., of the solution domain onto the problem domain). Our problem domain under consideration is STEM-driven CS education. At the top level, we can represent it using the TPACK framework as a compound of three interdependent components, i.e., Technological, Pedagogical, and Content Knowledge [KM09]. We define the solution domain as a set of explicit requirements for change, along with approaches related to Big Concepts considered implicitly here. We refer to the state of the problem domain by applying three visions (contextual, structural, and functional). The contextual vision covers the external context, i.e., the evolution of education in terms of Education 4.0, Smart Education, Industry 4.0, and the skills needed for the twenty-first century, including skills brought by Big Concepts (see the previous sections for details). We refer to the internal context as the structure depicted in Fig. 1.2. The structural vision defines the components and their possible links within a domain. The functional vision defines an interaction among the components. When implemented, this interaction appears as behavioural (learning) processes, e.g., within the Extended Smart Learning Environment. We define mapping (e.g., in constructing our conceptual model) as the expression of the problem domain concepts through the solution domain concepts to obtain the solution itself.

Fig. 1.2 A framework of internal context built on the TPACK model [KM09] and derived from [ŠB18]. The figure depicts the STEM Technological Context (TC), Pedagogical Context (PC), and Content Context (CC), their pairwise overlaps (TC+CC, TC+PC, PC+CC), and the STEM-driven Learning Processes (LPs) at the intersection of all three STEM contexts.

Next, we focus on the essential part of the solution domain, i.e., requirements for change. We categorise requirements for change into four categories (pedagogical, technological, content-based, and requirements common to the first three).

Pedagogical requirements for change include the following:
(1) Focusing on learning individualisation in terms of personalised learning.
(2) Focusing on more complex and diverse real-world tasks, in terms of both introducing collaborative learning and tasks related to using Big Concepts (e.g., speech recognition).
(3) Shifting from evaluating separate STEM-CS skills (CT, design thinking, scientific thinking, etc.) to integrated STEM-CS skills, where these skill components are evaluated as a unified compound according to the proposed model [BŠK+22].
(4) Enforcing STEM-driven pedagogical approaches (inquiry-based, problem-based, and design-based) and learning and learners’ models taken from the requirements of Education 4.0 and Smart Education.
(5) Reconsidering teaching scenarios.
(6) Focusing on internal relationships among Pedagogy, Content, and Technology.
(7) Enforcing feedback among students, teachers, parents, and professionals.

Content requirements for change include:
(1) Reconsidering curriculum topics regarding ideas taken from Industry 4.0.
(2) Selecting new topics for introducing new content regarding the basic Big Concepts, i.e., IoT, DS, and AI.
(3) Defining the scope and extent of the new content for those Big Concepts according to the learner’s model.
(4) Defining the learning content models for selecting, creating, modifying, and evaluating the content (each separately).
(5) Focusing on internal relationships between Content and Pedagogy (learner’s model, learning models, scenarios), as we plan to implement within the Extended Smart Learning Environment (ESLE).
(6) Focusing on internal relationships between Content and Technology (model, learning models, and scenarios).
Technology requirements for change include:
(1) Focusing on stable and freely accessible dedicated software (SW) regarding the basic Big Concepts, i.e., IoT, DS, and AI.
(2) Focusing on modes of using SW tools online and offline.
(3) Focusing on well-known educational robotics platforms by extending them with new smart items.
(4) Introducing metrics for tool selection (if any).
(5) Focusing on extending and integrating the newly selected tools into the previously developed SLE.
(6) Focusing on the internal relationships between Technology and Pedagogy (selecting adequate technological tools for pedagogical needs, such as supporting learning processes).
(7) Focusing on the internal relationships between Technology and Content (selecting adequate technological tools for content representation, content creation by students, and using the tools as a physical part of learning objects).
(8) Focusing on networking challenges (especially for introducing IoT).

The common requirements for all are:
(1) Adaptability and flexibility.
(2) Tolerance of all requirements within the Extended SLE (ESLE).
(3) Tolerance of diversity, variability, and complexity management.
(4) Correspondence of the requirements to the guidelines for introducing standardisation in Smart Education (according to Hoel and Mason’s recommendations [HM18]).

Next, our intention is to visualise the evolutionary model in three dimensions: (i) as a development process; (ii) as a structural representation; and (iii) as a template of functional representation. It is possible to visualise the development process semi-formally using a Y-chart (we have borrowed this notation from [GK83], where it was used to visualise the design process of hardware components). The left branch of the Y-chart represents the problem domain and its state and visions (see Fig. 1.3). The right branch represents the solution domain. The vertical branch depicts the solution itself, obtained after performing the mapping. We present the mapping process as a full bipartite graph whose nodes (vertices) lie on the left and right branches, respectively (see Fig. 1.3). In Fig. 1.4, we summarise the structural vision as a result derived from the previous discussion. In addition, Fig. 1.4 outlines the requirements regarding Big Concepts when implementing evolutionary changes. In summary, the proposed evolutionary model for change is a conceptual background for formulating the content of each chapter.

Fig. 1.3 Vision to define the process of creating the evolutionary model (P—pedagogical, T—technological, and C—content)

The figure’s left branch holds the previous state of STEM-driven CS education (its context, structure, and functionality); the right branch holds the requirements for change driven by Big Concepts (pedagogical, technological, and content); the vertical branch holds the new state of STEM-driven CS education driven by Big Concepts, with its context, structure, and functionality each considered in the P, T, and C dimensions.
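Under the same illustration-only assumption as before, the full bipartite mapping of Fig. 1.3 can be enumerated: crossing the three visions of the problem-domain state with the three requirement categories yields the nine facets of the new state shown on the vertical branch.

```python
from itertools import product

# Left branch of the Y-chart: visions of the problem-domain state.
visions = ("Context", "Structure", "Functionality")
# Right branch: requirement categories driving the change.
categories = ("Pedagogical", "Technological", "Content")

# Full bipartite mapping: each vision is refined along each category,
# producing the nine facets of the new state of STEM-driven CS education.
new_state = [f"{vision} ({category})"
             for vision, category in product(visions, categories)]
```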


Fig. 1.4 A structural vision to define the evolutionary model for our problem domain. The figure relates the external context (the top level, i.e., Industry 4.0, plus derivatives from the Big Concepts) to the internal STEM-driven CS education context, which links Pedagogy, Content, and Technology: pedagogical requirements for change inspired by Smart Education (learner’s model, learning models, assessment models, learning scenario models), content requirements for change inspired by Big Concepts (content creation models, content selection models), and technological requirements for change inspired by Big Concepts (software models, hardware models).

1.8 The Topics This Book Addresses

This book is the story of research and practice in STEM-driven CS education that considers new requirements and approaches arising from the Industry 4.0 and Education 4.0 frameworks. We have identified those requirements as evolutionary changes by extending the previously developed STEM-CS paradigm presented in [ŠB18]. In the previous two sections, we have outlined the evolutionary model for change. This section aims to highlight the content of those aspects that we defined abstractly in the evolutionary model.

What is the structure of this book? The book consists of twelve chapters. We have distributed the book’s content (Chaps. 2–11) into three parts (Part I, Part II, and Part III). Two chapters (this chapter and Chap. 12) stand separately. This chapter is introductory; it is about the context and our vision of the entire book. It provides an evolutionary model for a better understanding of the book’s intention. Chapter 12 is a summarising re-evaluation of the whole content and concepts, with an indication of future work.

Part I includes Chaps. 2–5. They present evolutionary aspects from the pedagogical perspective as follows. Chapter 2 brings an approach for the development and usage of the integrated STEM-CS skills model for assessing students’ knowledge. It is one of the most important ingredients of integrated STEM-driven CS education. Chapter 3 discusses Personalised Learning (PL) and personalised content at the component level for smart STEM-driven education. Chapter 4 extends the discussion on PL to the sub-system level, with a focus on the automation of Personal Generative Libraries (for the teacher and students). Chapter 5 delivers
the outcomes of research regarding robot contest-based collaborative learning, along with the complexity issues of solving real-world tasks. The complexity of real-world tasks is a topic loosely covered in the research literature on STEM.

Part II includes Chaps. 6–8. Chapter 6 discusses the methodological aspects of introducing the IoT into K–12 STEM-CS education. Those aspects include the conceptual modelling of IoT, architectural aspects and their interpretation for STEM-CS education, and an adequate framework. Chapter 7 deals with the design of the educational IoT system (through prototyping) and its use in practice. Chapter 8 is about how to introduce and play with Data Science approaches and techniques in the context of integrated STEM-CS education.

Part III consists of the following chapters. Chapter 9 introduces a vision and the way in which we have systematically incorporated AI-based approaches into STEM-CS education. Chapter 10 deals with the topics of speech recognition regarding tools, techniques, methods, and content. Chapter 11 provides background on Artificial Neural Networks in STEM-driven CS education.

What are the novelty and contribution of this book compared to the previously published one? The novelty and contribution of this book, as compared to the book [ŠB18], comprise the following aspects. The methodological contribution includes (i) visions, frameworks, and models for introducing Big Concepts into STEM-driven CS education; (ii) enforcement of student-centred learning, such as personalised and collaborative learning (we treat them as components of Smart Education), by presenting explicit models and outcomes; and (iii) raising integrated STEM-CS education to a higher level, with better involvement and self-involvement of students, changing the status of students from recipients of knowledge to creators of new knowledge.
Using a larger space of real-world tasks and applying previous knowledge, students can focus on STEM more deeply and broadly, due to Integrated STEM Thinking (which includes CT, DT, DtT, and ST), to solve more complex tasks, such as designing prototypes for IoT applications, smart devices controlled by voice, etc. The scientific contribution is (i) the enforcement of various modelling approaches (conceptual, structural, and functional) as a background for motivating, presenting, and implementing the proposed evolutionary model; (ii) the flexible development and implementation of the extensible Smart Learning Environment; and (iii) the development and use of the integrated STEM-CS skills evaluation model for assessing learning outcomes.

1.9 Concluding Remarks

In this chapter, we have presented the basic idea of this book, i.e., the evolution of STEM-driven CS education research and practice. We have accepted Big Concepts (BC) as a driver to deliver the book’s content. We have categorised BCs into three groups. Group 1 consists of STEM, CS, and Robotics. It is our object of investigation, research, and practice from the perspective of evolutionary changes under the impact of the content taken from the BCs of Group 2. Group 2 includes the
Internet of Things (IoT), Data Science (DS, including Big Data—BD), and Artificial Intelligence (AI)—the new learning content to drive the evolutionary changes of STEM-driven CS education. Group 3 includes Integrated STEM Thinking/Computational Thinking (CT) and Smart Education. This group of BCs serves us in considering the evolutionary aspects of our approach from the pedagogical perspective.

Integrated STEM Thinking is a new term; it is a product of our research. We have defined it as ISTEMT = Computational Thinking + Design Thinking + Data Thinking + Scientific Thinking, taking into account the multiple views of other authors known so far. In our interpretation, ISTEMT is a human mind and behavioural model for understanding the essence of Integrated STEM through solving complex real-world tasks that require integrated skills, including computational, design, data-driven, and scientific thinking. In our vision, the integrated STEM-CS skills model—the product of our research [BŠK+22]—is a practical implementation of the ISTEMT conceptual model. Here, we have motivated its usage only partially. A more extended motivation will appear in other chapters, including the final discussion on the book’s outcomes.

To motivate the book’s idea, we have taken a short glance at the evolution of Smart Education, Computing/CS, STEM, and Computational Thinking. This analysis is a broad external context motivating the evolutionary model we have presented in this chapter, aiming at explaining the writing of the remaining chapters. This model is a background for the development of the entire book’s content.

References

[AG20] Angeli C, Giannakos M (2020) Computational thinking education: Issues and challenges. Computers in Human Behavior, 105, 106185
[Aho11] Aho AV (2011) What is Computation? Computation and Computational Thinking. Ubiquity, an ACM publication, http://ubiquity.acm.org
[All21] Allison J (2021) The Importance of Context: Assessing the Challenges of K–12 Computing Education through the Lens of Biggs’ 3P Model. In: Koli Calling ’21: 21st Koli Calling International Conference on Computing Education Research. ACM, New York, pp. 1–10. ISBN 9781450384889
[ALV+21] Aguilera D, Lupiáñez JL, Vílchez-González JM, Perales-Palacios FJ (2021) In Search of a Long-Awaited Consensus on Disciplinary Integration in STEM Education. Mathematics, 9, 597
[ASL+22] Arce E, Suarez-García A, Lopez-Vazquez JA, Fernandez-Ibanez MI (2022) Design Sprint: Enhancing STEAM and engineering education through agile prototyping and testing ideas. Thinking Skills and Creativity, 44, www.elsevier.com/locate/tsc
[AU77] Aho AV, Ullman JD (1977) Principles of Compiler Design. Pearson Education Australia
[BBH+10] Bettini C, Brdiczka O, Henricksen K, Indulska J, Nicklas D, Ranganathan A, Riboni D (2010) A Survey of Context Modelling and Reasoning Techniques. Pervasive and Mobile Computing 6(2):161–180
[BCD+16] Bocconi S, Chioccariello A, Dettori G, Ferrari A, Engelhardt K (2016) Developing computational thinking in compulsory education – Implications for policy and practice; EUR 28295 EN; doi: https://doi.org/10.2791/792158.

References

[BDD+21]

[BDŠ18]

[Big93]

[BLB+18]

[BMJ+15]

[Boo19]

[BŠK+22]

[BSL+20]

[BVW+21]

[BWM+17]

[CC19] [Cer17] [Che17]

[Chu20] [Cri17] [CS20]

[Dey01] [DMG+22]

31 Retrieved from http://publications.jrc.ec.europa.eu/repository/bitstream/JRC104188/ jrc104188_computhinkreport.pdf Berglund A, Dagiene V, Daniels M, Dolgopolovas V, Rouvrais S, Tardell M (2021) Euro-Asia Collaboration for Enhancing STEM Education. https://cte-stem2021.nie. edu.sg/assets/docs/CTE-STEM_Compiled-Proceedings.pdf Burbaite R, Dr˛asut˙e V, Stuikys V (2018) Integration of Computational Thinking Skills in STEM-Driven Computer Science Education, 2018 IEEE Glob. Eng. Educ. Conf., pp. 1824–1832 Biggs B (1993) From Theory to Practice: A Cognitive Systems Approach, Higher Education Research & Development, 12:1, 73–85, DOI: https://doi.org/10.1080/072 9436930120107 Burrows A, Lockwood M, Borowczak M, Janak E, Barber B (2018) Integrated STEM: Focus on Informal Education and Community Collaboration through Engineering Educ. Sci. 8, 4; doi:https://doi.org/10.3390/educsci8010004 Bryan LA, Moore T J, Johnson CC, Roehrig GH (2015) Integrated STEM education. In C. C. Johnson, E. E. Peters-Burton, & T. J. Moore (Eds.), STEM roadmap: A framework for integration (pp. 23–37). London: Taylor & Francis. https://doi.org/10. 4324/9781315753157-3 Boon SN (2019) Exploring STEM Competencies for the 21st Century. In-Progress Reflection No. 30 On Current and Critical Issues in Curriculum, Learning and Assessment. UNESCO Burbait˙e R, Štuikys V, Kubili¯unas R, Ziberkas G (2022) A Vision to Develop Integrated Skills for STEM-driven Computer Science Education. In INTED 2022 16th International Technology, Education and Development Conference Bonfield CA, Salterh M, Longmuir A, Benson M, Adachi C (2020) Transformation or evolution?: Education 4.0, teaching and learning in the digital age. Higher Education Pedagogies, Vol. 5, No. 1, 223–246 Benita F, Virupaksha D, Wilhelm E et al. (2021) A smart learning ecosystem design for delivering Data-driven Thinking in STEM education. Smart Learn. Environ. 8, 11. 
https://doi.org/10.1186/s40561-021-00153-y Bell D, Wooff D, McLain M, Morrison-Love D (2017) Analysing design and technology as an educational construct: an investigation into its curriculum position and pedagogical identity. The Curriculum Journal, 1–20. doi:https://doi.org/10.1080/095 85176.2017.1286995 Cansu SK, Cansu FK (2019) An Overview of Computational Thinking. International Journal of Computer Science Education in Schools, Vol. 3, No. 1 ISSN 2513–8359 Ceri S (2017) On the Big Impact of “Big Computer Science“ H. Werthner, F. van Harmelen (eds.), Informatics in the Future, Springer, pp.17–26 Cheung KS (2017) Computational Thinking and STEM Education. The Education University of Hong Kong http://www.hkaect.org/hkaect-aect-2017/download/paper/ KS3.pdf Church M (2020) The Evolution of Education. Matt Church Pty Ltd 2020, www.mat tchurch.com/talkingpoint/education-evolution Crick T (2017) Computing Education: An Overview of Research in the Field. Royal Society Cheng YC, So WWM (2020) Managing STEM learning: a typology and four models of integration. International Journal of Educational Management, Vol. 34 No. 6, 2020, pp. 1063–1078 Dey AK (2001) Understanding and using context. Personal and ubiquitous computing, 5(1):4–7 Duggal AS, Malik PK, Gehlot A, Singh R, Gaba GS, Masud M, Al-Amri JF (2022) A sequential roadmap to Industry 6.0: Exploring future manufacturing trends. IET Commun.; 16:521–531, Willey


[Dou04] Dourish P (2004) What we talk about when we talk about context. Personal and Ubiquitous Computing, 8, 19–30
[DRM17] Danah H, Richardson C, Mehta R (2017) Design thinking: A creative approach to educational problems of practice. Thinking Skills and Creativity, 26:140–153
[DT19] Denning PJ, Tedre M (2019) Computational Thinking (MIT Press Essential Knowledge Series). The MIT Press, Kindle Edition
[Eic19] Eickelmann B (2019) Measuring secondary school students' competence in computational thinking in ICILS 2018: Challenges, concepts, and potential implications for school systems around the world. In: Kong S-C, Abelson H (eds.) Computational Thinking Education. https://doi.org/10.1007/978-981-13-6528-7_4
[EK15] English LD, King DT (2015) STEM learning through engineering design: Fourth-grade students' investigations in aerospace. International Journal of STEM Education, 2, 14. https://doi.org/10.1186/s40594-015-0027-7
[Eng16] English LD (2016) STEM education K–12: Perspectives on integration. International Journal of STEM Education, 3:3. https://doi.org/10.1186/s40594-016-0036-1
[Eng18] English LD (2018) Learning while designing in a fourth-grade integrated STEM problem. International Journal of Technology and Design Education. https://doi.org/10.1007/s10798-018-9482-z
[FAS+18] Fraillon J, Ainley J, Schulz W, Duckworth D (2018) IEA International Computer and Information Literacy Study 2018: Assessment Framework. Springer
[Fin16] Finegold P (2016) Big Ideas: The future of engineering in schools. Institution of Mechanical Engineers
[Fra18] France B (2018) Modeling in technology education: A route to technological literacy. In: de Vries MJ (ed.) Handbook of Technology Education. Springer, Dordrecht
[FS14] Fullan M, Scott G (2014) New pedagogies for deep learning: Education plus
[GK83] Gajski DD, Kuhn RH (1983) New VLSI tools. IEEE Computer, 16(12), 11–14
[GLG21] Guo X-R, Li X, Guo Y-M (2021) Mapping knowledge domain analysis in smart education research. Sustainability, 13, 13234. https://doi.org/10.3390/su132313234
[GT03] Grubb P, Takang AA (2003) Software Maintenance: Concepts and Practice. World Scientific Publishing
[HF19] Hsu Y-S, Fang S-C (2019) Opportunities and challenges of STEM education. In: Asia-Pacific STEM Teaching Practices, pp. 1–16. Springer International Publishing, Berlin/Heidelberg
[HHU08] Hubaux A, Heymans P, Unphon H (2008) Separating variability concerns in a product line re-engineering project. In: EA-AOSD'08, Brussels, Belgium, 31 Mar 2008
[HKB+20] Han J, Kelley T, Bartholomew S, Knowles JG (2020) Sharpening STEL with integrated STEM. Technology and Engineering Teacher, November 2020, pp. 24–29
[HM18] Hoel T, Mason J (2018) Standards for smart education – towards a development framework. Smart Learning Environments, 5, 3. https://doi.org/10.1186/s40561-018-0052-3
[HPO15] Hermann M, Pentek T, Otto B (2015) Design principles for Industrie 4.0 scenarios: A literature review. Working Paper No. 1. Technical University Dortmund, Germany
[HPS14] Honey M, Pearson G, Schweingruber H (2014) STEM Integration in K–12 Education: Status, Prospects, and an Agenda for Research. Committee on Integrated STEM Education. The National Academies Press, Washington, DC
[HS14] Halverson ER, Sheridan K (2014) The maker movement in education. Harvard Educational Review, 84(4), 495–504. https://doi.org/10.17763/haer.84.4.34j1g68140382063
[HS19] Hallström J, Schönborn KJ (2019) Models and modelling for authentic STEM education: Reinforcing the argument. International Journal of STEM Education, 6, 22. https://doi.org/10.1186/s40594-019-0178-z


[iCGL16] iCarnegie Global Learning (2016) Applying computational thinking in STEM education. SOLUTIONS by iCarnegie Global Learning
[Iye19] Iyer S (2019) Teaching-learning of computational thinking in K–12 schools in India. In: Kong S-C, Abelson H (eds.) Computational Thinking Education. https://doi.org/10.1007/978-981-13-6528-7_4
[KAL19] Kong S-C, Abelson H, Lai M (2019) Introduction to computational thinking education. In: Kong S-C, Abelson H (eds.) Computational Thinking Education. https://doi.org/10.1007/978-981-13-6528-7_4
[KK16] Kelley TR, Knowles JG (2016) A conceptual framework for integrated STEM education. International Journal of STEM Education, 3(1), 11. https://doi.org/10.1186/s40594-016-0046-z
[KM09] Koehler M, Mishra P (2009) What is technological pedagogical content knowledge (TPACK)? Contemporary Issues in Technology and Teacher Education, 9(1):60–70
[Kon18] Kong S-C (2018) Computational thinking and STEM education. The Education University of Hong Kong
[KR08] Kotiadis K, Robinson S (2008) Conceptual modelling: Knowledge acquisition and model abstraction. In: 2008 Winter Simulation Conference, pp. 951–958. IEEE
[KSP19] Keith PK, Sullivan FR, Pham D (2019) Roles, collaboration, and the development of computational thinking in a robotics learning environment. In: Kong S-C, Abelson H (eds.) Computational Thinking Education. https://doi.org/10.1007/978-981-13-6528-7_4
[Lar19] Larke LR (2019) Agentic neglect: Teachers as gatekeepers of England's national computing curriculum. British Journal of Educational Technology, 50(3), 1137–1150
[LCW+09] Liu L, Chen H, Wang H, Zhao C (2009) Construction of a student model in contextually aware pervasive learning. In: Pervasive Computing (JCPC), 2009 Joint Conferences, pp. 511–514. IEEE
[Lee11] Lee J-L (2011) Korea's choice: "Smart Education". OECD. https://community.oecd.org/community/educationtoday/blog/2011/07/26/korea-s-choice-smart-education (accessed 15 March 2022)
[LHW17] Liu D, Huang R, Wosinski M (2017) Smart Learning in Smart Cities. Springer Lecture Notes in Educational Technology, 1–240
[LSG+19] Li Y, Schoenfeld AH, diSessa AA, Graesser AC, Benson LC, English LD, Duschl RA (2019) Design and design thinking in STEM education. Journal for STEM Education Research, 2:93–104
[MAK19] Martín AC, Alario-Hoyos C, Kloos CD (2019) Smart education: A review and future research directions. Proceedings, 31, 57. https://doi.org/10.3390/proceedings2019031057
[MAP+19] Martín-Páez T, Aguilera D, Perales-Palacios FJ, Vílchez-González JM (2019) What are we talking about when we talk about STEM education? A review of literature. Science Education, 103, 799–822
[MMH17] Maryam SAS, Mahyuddin A, Hasnah M (2017) The framework for the integration of computational thinking in ideation process. 2017 IEEE International Conference on Teaching, Assessment, and Learning for Engineering, pp. 61–65
[MNN+21] Miranda J, Navarrete C, Noguez J, Molina-Espinosa J-M, Ramírez-Montoya M-S, Navarro-Tuch SA, Bustamante-Bello M-R, Rosas-Fernandez J-B, Molina A (2021) The core components of Education 4.0 in higher education: Three case studies in engineering education. Computers and Electrical Engineering, 93, 107278. Elsevier
[MRR+22] Mike K, Ragonis N, Rosenberg-Kima RB, Hazzan O (2022) Computational thinking in the era of data science. Communications of the ACM, 65(8), 33–35
[MSR+19] Marques M, Simmonds J, Rossel PO, Bastarrica MC (2019) Software product line evolution: A systematic literature review. Information and Software Technology, 105:190–208


[MT21] Mayr HC, Thalheim B (2021) The triptych of conceptual modeling: A framework for a better understanding of conceptual modeling. Software and Systems Modeling, 20:7–24
[Myl08] Mylopoulos J (2008) Conceptual modelling and Telos. In: Conceptual Modeling, Databases, and CASE, pp. 363–376
[NS17] Nadelson LS, Seifert AL (2017) Integrated STEM defined: Contexts, challenges, and the future. The Journal of Educational Research, 110(3), 221–223. https://doi.org/10.1080/00220671.2017.1289775
[NSB15] National Science Board (2015) Revisiting the STEM Workforce: A Companion to Science and Engineering Indicators 2014. National Science Foundation, Arlington, VA (NSB-2015-10). http://www.nsf.gov/nsb/publications/
[PGJ17] Papavlasopoulou S, Giannakos MN, Jaccheri L (2017) Empirical studies on the Maker Movement, a promising approach to learning: A literature review. Entertainment Computing, 18, 57–78
[PP20] Palt T, Pedaste M (2020) Model for developing computational thinking skills. Informatics in Education, (1), 113–128
[QVR+20] Quinton C, Vierhauser M, Rabiser R, Baresi L, Grünbacher P, Schuhmayer C (2020) Evolution in dynamic software product lines. Journal of Software: Evolution and Process, 00:1–6. John Wiley & Sons
[RAE16] The Royal Academy of Engineering (2016) Big Ideas: The future of engineering in schools. www.imeche.org/policy-and-press/reports/detail/big-ideas-report-the-future-of-engineering-in-schools (retrieved 2 May 2022)
[RAL+15] Robinson S, Arbez G, Birta LG, Tolk A, Wagner G (2015) Conceptual modeling: Definition, purpose and benefits. In: Yilmaz L, Chan WKV, Moon I, Roeder TMK, Macal C, Rossetti MD (eds.) Proceedings of the 2015 Winter Simulation Conference
[RDE+21] Roehrig GH, Dare EA, Ellis JA, Ring-Whalen E (2021) Beyond the basics: A detailed conceptual framework of integrated STEM. Disciplinary and Interdisciplinary Science Education Research, 3(1), 1–18
[Ree15] Reeve EM (2015) STEM thinking! Technology and Engineering Teacher, 75(4), 8–16
[RS12] Razzouk R, Shute V (2012) What is design thinking and why is it important? Review of Educational Research, 82(3), 330–348
[RZN+18] Rozali NF, Zaid NM, Noor NM, Ibrahim NH (2018) Developing a unified model of teaching computational thinking. IEEE 10th International Conference on Engineering Education, Kuala Lumpur, Malaysia, Nov. 8–9
[SAB+19] Swanson H, Anton G, Bain C, Horn M, Wilensky U (2019) Introducing and assessing computational thinking in the secondary science classroom. In: Kong S-C, Abelson H (eds.) Computational Thinking Education. Springer, Singapore
[ŠB18] Štuikys V, Burbaitė R (2018) Smart STEM-Driven Computer Science Education: Theory, Methodology and Robot-Based Practices. Springer
[Sch06] Schnieders A (2006) Variability mechanism centric process family architectures. In: Proceedings of ECBS'06. IEEE Computer Society Press
[ŠD12] Štuikys V, Damaševičius R (2012) Meta-Programming and Model-Driven Meta-Program Development: Principles, Processes and Techniques (Vol. 5). Springer Science & Business Media
[SDF18] Sengupta P, Dickes A, Farris AV (2018) Toward a phenomenology of computational thinking in K–12 STEM. In: Khine MS (ed.) Computational Thinking in STEM Discipline: Foundations and Research Highlights. Springer
[Sha19] Sharma P (2019) Digital revolution of Education 4.0. International Journal of Engineering and Advanced Technology (IJEAT), 9(2), December 2019. ISSN 2249-8958
[She01] Sheard T (2001) Accomplishments and research challenges in meta-programming. In: Taha W (ed.) Semantics, Applications, and Implementation of Program Generation. SAIG 2001. Lecture Notes in Computer Science, vol 2196. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44806-3_2


[SR20] Stentoft J, Rajkumar C (2020) The relevance of Industry 4.0 and its relationship with moving manufacturing out, back and staying at home. International Journal of Production Research, 58(10), 2953–2973. https://doi.org/10.1080/00207543.2019.1660823
[SW13] Selby CC, Woollard J (2013) Computational thinking: The developing definition. University of Southampton, UK. https://eprints.soton.ac.uk/id/eprint/356481
[Swa15] Swaid SI (2015) Bringing computational thinking to STEM education. Procedia Manufacturing, 3, 3657–3662. Elsevier
[TD16] Tedre M, Denning PJ (2016) The long quest for computational thinking. Proceedings of the 16th Koli Calling Conference on Computing Education Research, November 24–27, 2016, Koli, Finland, pp. 120–129
[TD17] Tedre M, Denning P (2017) Shifting identities in computing: From a useful tool to a new method and theory of science. In: Informatics in the Future, pp. 1–16. Springer
[TDK+10] Tolk A, Diallo SY, King RD, Turnitsa CD, Padilla JJ (2010) Conceptual modeling for composition of model-based complex systems. In: Conceptual Modeling for Discrete-Event Simulation, pp. 355–381. CRC Press, Taylor and Francis Group, Boca Raton, Florida
[Ter97] Terry PD (1997) Compilers and Compiler Generators: An Introduction with C++. International Thomson Computer Press, London
[Tha10] Thalheim B (2010) Towards a theory of conceptual modelling. Journal of Universal Computer Science, 16(20), 3102–3137
[TT20] Trevallion D, Trevallion T (2020) STEM: Design, implement and evaluate. International Journal of Innovation, Creativity and Change, 14(8). www.ijicc.net
[Tur50] Turing AM (1950) Computing machinery and intelligence. Mind, LIX(236), October 1950, 433–460. https://doi.org/10.1093/mind/LIX.236.433
[TYL+20] Tang X, Yin Y, Lin Q, Hadad R, Zha X (2020) Assessing computational thinking: A systematic review of empirical studies. Computers & Education, 2020. Elsevier
[UBH+18] Uskov VL, Bakken JP, Howlett RJ, Jain LC (eds.) (2018) Smart Universities: Concepts, Systems, and Technologies. Springer
[Vir76] Wirth N (1976) Algorithms + Data Structures = Programs. Prentice Hall, Englewood Cliffs
[VMO+12] Verbert K, Manouselis N, Ochoa X, Wolpers M, Drachsler H, Bosnic I, Duval E (2012) Context-aware recommender systems for learning: A survey and future challenges. IEEE Transactions on Learning Technologies, pp. 318–335
[Vri17] de Vries MJ (2017) Technology education: An international history. In: de Vries MJ (ed.) Handbook of Technology Education. Springer, Dordrecht
[WBH+16] Weintrop D, Beheshti E, Horn M, Orton K, Jona K, Trouille L, Wilensky U (2016) Defining computational thinking for mathematics and science classrooms. Journal of Science Education and Technology, 25:127–147
[WEF21] World Economic Forum (2021) Building a Common Language for Skills at Work: A Global Taxonomy. http://www3.weforum.org/docs/WEF_Skills_Taxonomy_2021.pdf
[WF17] Weese JL, Feldhausen R (2017) STEM outreach: Assessing computational thinking and problem solving. American Society for Engineering Education
[Win06] Wing JM (2006) Computational thinking. Communications of the ACM, 49(3), 33–35
[Win11] Wing JM (2011, March 6) Research notebook: Computational thinking – what and why? (Togyer J, ed.) The Link Magazine. Retrieved March 29, 2016, from http://www.cs.cmu.edu/link/researchnotebook-computational-thinking-what-and-why
[XY17] Xiao M, Yu X (2017) A model of cultivating computational thinking based on visual programming. 2017 International Conference on Educational Innovation through Technology, pp. 75–80


[ZGW+22] Zhou D, Gomez R, Wright N, et al. (2022) A design-led conceptual framework for developing school integrated STEM programs: The Australian context. International Journal of Technology and Design Education, 32, 383–411. https://doi.org/10.1007/s10798-020-09619-5
[ZLO07] Zimmermann A, Lorenz A, Oppermann R (2007) An operational definition of context. Proceedings of the Sixth International and Interdisciplinary Conference on Modeling and Using Context (CONTEXT '07), pp. 558–572

Part I

Pedagogical Aspects of STEM-Driven CS Education Evolution: Integrated STEM-CS Skills Model, Personalisation Aspects and Collaborative Learning

Summary. Part I covers four chapters, Chaps. 2 to 5. These represent the pedagogy-related evolutionary aspects of smart STEM-driven computer science education. We express those aspects through (i) the introduction of the integrated STEM-CS skills model (Chap. 2); (ii) personalised learning with a focus on smart content personalisation at the component level (Chap. 3); (iii) personalised learning at the sub-system level using the personal teacher's and personal students' libraries (Chap. 4); and (iv) robot-contest-based collaborative learning combined with task-complexity research issues (Chap. 5). The content of all these chapters taken together is a significant component of Smart Education. The latter, in our vision, is a Big Concept along with computational thinking and the other skills types within the integrated STEM-CS skills model. Next, a summary of each chapter of this part follows. In Chap. 2, we introduce the term integrated STEM-driven CS skills (shortly ISTEM-CS skills) and define it as ISTEM-CS skills = STEM skills + Metrics for Skills Assessment. We refer to STEM skills as a compound of four top-level components: (1) learning and innovation skills; (2) scientific and mathematical thinking skills; (3) computational thinking skills; and (4) technological and engineering thinking skills. The introduced model enables us, to some extent, to systematise and form a unified picture of the skills-metrics relationships in STEM-driven CS education. Despite its indicated inaccuracy, this model is suitable for manual or automated modes of use. It is applicable and valuable for content designers, teachers, and students. Content designers are able to extend metadata descriptions by including prognostic skills values for a more targeted content (i.e., learning object, LO) search. Teachers benefit from a skills evaluation template automatically derived using the developed generating tool. Using the discussed approach, students know in advance what skills-development capabilities a given LO contains and can flexibly adapt their learning paths.
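As a minimal illustration only, the idea of deriving a teacher's evaluation template from the four top-level skills components can be sketched as follows. All names, the template shape, and the blank rating slots are our assumptions for this sketch, not the actual generating tool described in Chap. 2.

```python
# Hypothetical sketch: deriving a blank skills-evaluation template for a
# learning object (LO) from the four top-level ISTEM-CS skills components.
# Names and structure are illustrative assumptions, not the book's tool.

TOP_LEVEL_SKILLS = (
    "learning and innovation",
    "scientific and mathematical thinking",
    "computational thinking",
    "technological and engineering thinking",
)

def evaluation_template(lo_title):
    """Return a blank template pairing each top-level skills component
    with an empty assessment slot, to be filled in by the teacher."""
    return {
        "learning_object": lo_title,
        "assessment": {skill: None for skill in TOP_LEVEL_SKILLS},
    }

template = evaluation_template("Robot line-follower task")
for skill, value in template["assessment"].items():
    print(f"{skill}: {value}")
```

A tool like the one described could generate such templates automatically from LO metadata, one per learning object, rather than requiring teachers to prepare them by hand.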


In Chap. 3, we introduce a generic structure of personalised learning content identified as learning objects (LOs) in three categories: component-based LOs, generative LOs, and smart LOs; the latter is a combination of the first two. The generic structure integrates those entities with the assessment modules and specifies the distributed interface for connecting them with digital libraries. The other contribution is the learner's knowledge assessment tool implementing the model that integrates attributes defined by the revised Bloom taxonomy [AKA+01] and computational thinking skills [Cum16, AM16] with adequate questionnaires or with solving the exam tasks we have developed. Both the generic structure and the assessment module predefine a variety of personalised learning paths. The personalised smart LO is a mini-scenario enabling the learner to form personalised learning paths and to drive the personalised learning process. The basis of our methodology is the recognition, extraction, explicit representation, and then implementation of STEM-driven learning variability in four dimensions: social, pedagogical, technological, and content. All these enforce the integrative aspects of STEM-driven CS education and contribute to the evolution of the pedagogical aspects we discuss in Part I of this book. Personalisation of STEM-driven CS education represents a new way of developing computational thinking and other skills in the process of gaining interdisciplinary knowledge; it contributes to achieving faster and deeper knowledge for decision-making skills and to measuring progress through multiple assessments. In Chap. 4, we introduce the Personal Generative Library (PGL) concept. It describes automated content design and automated management for personalised learning. Based on this concept, we have built an experimental system that integrates conventional repositories, the teacher's PGL, the students' PGLs, their individual repositories, and personalised learning processes using the previously developed framework. The main contributions of this research are (i) the distributed architecture of the proposed system and (ii) the generative capabilities of its constituents. These ensure flexibility and effectiveness in managing the delivery of personalised content, its renewal, and its extension for the needs of personalised learning. It was possible to achieve that using model-driven approaches borrowed from software engineering and adapted in the context of our research. With the ever-growing types and amount of educational content, retrieving it from conventional digital libraries to meet a specific learner's needs, e.g., for personalised learning or course specificity, encounters many issues, such as time, quality, or even an empty search result. One way to overcome those issues and enforce learning performance is to apply a specialised approach, such as personalised digital libraries with personalised content. The basis of this methodology is a deep separation of concepts at the component level (i.e., content items) and at the sub-system level (i.e., the student's PGL/repository, the teacher's PGL/repository, and their tools), along with the generative technology applied. The content within the student's PGL/repository is either a direct product of personalised learning obtained during classroom activities or a by-product created during outside activities. We have presented a survey completed by students to evaluate the personalised content of the teacher's PGL/repository. This survey, constructed using a well-known methodology, gave a good evaluation. We have also presented some quantitative characteristics of


the students’ PGLs/repositories. We have obtained that a student can create 3–5 entities for PGL during classroom activities only. We have tested this approach for STEMdriven CS education in one high school. This approach, however, in terms of the concept itself and PGL tools proposed, is independent neither of the teaching course nor the teaching environment. Combining the personalised content with management procedures of this content using the developed tools enables enforcing and enhancing personalised learning significantly in terms of higher flexibility, more efficient search, and more efficient procedures, e.g., in creating the personalised learning paths. One can treat the student’s PGL/repository as an individual portfolio, as evidence of the progress, achievements, and competencies. In Chap. 5, we discuss collaborative learning through competition-based learning (CBL), such as robot contests, internationally recognised as the First Lego League (FLL) in informal learning. We consider this topic in two dimensions (i.e., preparation for this contest and the contest itself) by complex task solving. Both are highly related among themselves and with STEM education. We have proposed the complexity model of a real-world task that integrates a few known approaches and the specificity of STEM. We argue that, for broader context, we need a precise or, at least, explicit definition and interpretation of the complexity issues due to multiple reasons. (1) With the ever-growing technology advancements, the complexity of systems in all domains, including education, grows at the same or similar rate. (2) STEM evolves rapidly towards a higher integration using complex learning environments for solving more complex real-world tasks. (3) The complexity issues are essential to educational robotics used in two modes, for formal or informal learning, e.g., CBL.

References

[AKA+01] Anderson LW, Krathwohl DR, Airasian P, Cruikshank K, Mayer R, Pintrich P, et al. (2001) A Taxonomy for Learning, Teaching and Assessing: A Revision of Bloom's Taxonomy. Longman Publishing, New York
[Cum16] Cummins K (2016) Teaching Digital Technologies & STEM: Computational Thinking, Coding and Robotics in the Classroom. Amazon.com. Accessed 22 January 2019
[AM16] Atmatzidou S, Demetriadis S (2016) Advancing students' computational thinking skills through educational robotics: A study on age and gender relevant differences. Robotics and Autonomous Systems, 75, 661–670

Chapter 2

Models for the Development and Assessment of Integrated STEM (ISTEM) Skills: A Case Study

2.1 Introduction

STEM is the teaching/education paradigm that focuses on obtaining the interdisciplinary knowledge and skills needed for life and work in the twenty-first century. In the digital age, all of those are of great importance not only from the STEM education perspective but much more broadly, from the perspective of the Industry 4.0 and Education 4.0 initiatives [SGT21, SF20]. Two communities (industry and education) therefore deal with the skills issues needed now and in the near future with ever-growing intensity. The first does so aiming at up-skilling, reskilling, and deploying the workforce in the context of continuous technology advancements [Zah20]. The second deals with education transformations due to the expansion of new technology and enhanced requirements for a higher quality of education [HBK+19]. Broadly, the Global Partnership for Education defines skills for the twenty-first century as follows [GPE20]. Twenty-first-century skills are abilities and attributes that can be taught or learned in order to enhance ways of thinking, learning, working and living in the world. The skills include creativity and innovation, critical thinking/problem solving/decision making, learning to learn/metacognition, communication, collaboration (teamwork), information literacy, ICT literacy, citizenship (local and global), life and career skills and personal and social responsibility (including cultural awareness and competence).

Typically, STEM is defined as "an interdisciplinary approach to learning where rigorous academic concepts are coupled with real-world lessons as students apply science, technology, engineering, and mathematics in contexts that make connections between school, community, work, and the global enterprise enabling the development of STEM literacy and with it the ability to compete in the new economy" [TKHT9]. Currently, STEM education evolves rapidly towards a higher integration, either among the separate constituents (Science, Technology, Engineering, and Mathematics) or by adding new aspects and skills, such as social skills.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024. V. Štuikys and R. Burbaitė, Evolution of STEM-Driven Computer Science Education, https://doi.org/10.1007/978-3-031-48235-9_2

Nadelson & Seifert [NS17]


define integrated STEM as the seamless amalgamation of content and concepts taken from different STEM disciplines. The integration appears in ways such that "knowledge and processes of the specific STEM disciplines are considered simultaneously without regard to the discipline, but rather in the context of a problem, project, or task". In this chapter, we claim that teaching and learning based on integrated STEM through solving real-world tasks is a reliable way of obtaining, producing, and applying integrated skills. The term 'integrated skills' comes from the field of language teaching and applies when classes focus on more than one skill (i.e., listening, speaking, writing, and reading) and on the components of language that support the acquisition of these skills, such as grammar, vocabulary, discourse, and pronunciation (https://www.igi-global.com/dictionary/integrated-skills/95960). In addition, one encounters the term STEM skills, widely discussed in the literature. Note that definitions of this term may vary depending on the context, i.e., educational or labour market. For example, in the context of the twenty-first-century skills framework [FRA19] and in vocational education and training contexts, the report [SK16] gives the following definition: STEM skills are "a combination of the ability to produce scientific knowledge, supported by mathematical skills, in order to design and build (engineer) technological and scientific products or services". For other STEM skills definitions, see Sect. 2.4 (Table 2.1). We introduce the term Integrated STEM-driven CS skills (shortly ISTEM-CS skills) and define it as ISTEM-CS Skills = STEM Skills + Metrics for Skills Assessment. We refer to STEM skills as a compound of four top-level components: (1) learning and innovation skills; (2) scientific and mathematical thinking skills; (3) computational thinking skills; and (4) technological and engineering thinking skills. Each top-level component consists of a set of lower-level skills and the relationships among them, as defined by the models that we have developed [BŠK+22] using ideas taken from recent (selected) Web of Science papers. In addition, we have extracted metrics for skills assessment from the relevant literature.
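To make the two-level structure concrete, the definition above can be sketched as a toy encoding: top-level components hold lower-level skills, and each skill carries a metric used for its assessment. The particular skill names included and the rubric strings below are illustrative assumptions for this sketch; the actual model in [BŠK+22] is richer.

```python
# Hypothetical two-level encoding of the ISTEM-CS skills model:
# top-level components contain lower-level skills, and each skill
# is paired with a metric for assessment. Illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Skill:
    name: str
    metric: str  # e.g. a rubric scale used to assess this skill

@dataclass
class Component:
    name: str
    skills: List[Skill] = field(default_factory=list)

model = [
    Component("computational thinking", [
        Skill("abstraction", "rubric 0-4"),
        Skill("decomposition", "rubric 0-4"),
    ]),
    Component("scientific and mathematical thinking", [
        Skill("investigation skills", "rubric 0-4"),
    ]),
]

total_skills = sum(len(c.skills) for c in model)
print(total_skills)  # → 3 lower-level skills in this toy model
```

The point of the pairing is that every lower-level skill is assessable by construction: no skill enters the model without an accompanying metric.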

2.2 The Aim and Motivation

This chapter presents research on defining the ISTEM-CS skills categories, with a focus on the interrelationships among types of skills, on defining the metrics for the adequate skills categories to enable their assessment, and on building explicit executable models. Those models serve the systematisation and a better understanding of the domain. In addition, they enable building tools to generate skills descriptions for more effective use. Skills are abilities and attributes to be taught or learned by dealing with


Table 2.1 Definitions of STEM-driven skills by categories

1. Learning and innovation [HAC21]
  1.1 Critical thinking and problem solving: The intentional application of rational, higher-order thinking skills, such as analysis, synthesis, problem recognition and problem-solving, inference, and evaluation [BO18]
  1.2 Creativity and innovation: Developing ideas, processes, and/or products that are novel (original) and functional (effective or useful) [SR21]
  1.3 Communication: Discussing new ideas, giving and sharing information, communicating in teamwork [AAN19, KD17, PK-B19]
  1.4 Collaboration: Typically embedded in the problem-solving process and critical to honing negotiation skills to arrive at potential solutions [HQA+17]

2. Scientific and mathematical thinking [ZK18]
  2.1 Forming and refining hypotheses: Asking questions, developing and using models [ZK18]
  2.2 Investigation skills: Planning and carrying out investigations [ZK18]
  2.3 Evaluating evidence: Analysing and interpreting data/evidence, constructing explanations [ZK18]

3. Computational thinking [Cum16, AD16]
  3.1 Abstraction: The process of simplifying from the concrete (something complicated) to the general as solutions are developed (by leaving out irrelevant details, finding the relevant patterns, and separating ideas from tangible details) [Cum16, AD16]
  3.2 Decomposition: The process of breaking down problems into smaller parts that may be more easily solved [Cum16, AD16]
  3.3 Generalization: Transferring a problem-solving process to a wide variety of problems, allowing an existing solution of a given problem to be expanded to cover more cases [Cum16, AD16]
  3.4 Data representation: Any sequence of one or more symbols given meaning by specific act(s) of interpretation; something more fundamental than an algorithm [Cum16, AD16]
  3.5 Algorithm: The practice of writing specific and explicit step-by-step instructions for carrying out a process [Cum16, AD16]

4. Technological thinking [GXH19, NN15, Doy21, Pir21, AJ18]
  4.1 Technological attitude: Consists of technological interest and motivation, refers to technological beliefs, and includes technological critical thinking, decision-making, and an individual’s inclination towards a certain technological context [GXH19, AJ18]
  4.2 Technological knowledge: Includes technological concepts, technological theories, and technological methods [GXH19, AJ18]
  4.3 Technological capacity: Refers to a psychological process of executing skills and abilities in technological thinking, based on a larger share of psychomotor skills, ensuring that technological tasks are accomplished or that participation in technological activities takes place [GXH19, AJ18]

5. Engineering thinking [YMV+17, Gre17]
  5.1 Design thinking: “Closed” systems bounded with set parameters; prototype-driven, human-centred, solution-oriented [Gre17]
  5.2 System thinking: Open systems with interaction and interdependence; abstraction-driven, system-centred, problem-oriented [Gre17]
  5.3 Intersection between 5.1 and 5.2: Requires a similar cognitive skill set (analogy, ability to overcome fixation), empathy/faculty for human relations, and similar inquiry [Gre17]

© With kind permission from IATED [BŠK+22]


and interpreting the learning content. We motivate this research for the following reasons:

(1) There is a direct relationship between rapid technological change and the need for new integrated skills.
(2) Skills changes appear along with technological innovations in practically all sectors of the economy, especially in its flagship, i.e., education.
(3) Two large communities, industry and education, are highly interested in various aspects of skills-related research and devote many efforts and resources to tackling the challenges in this domain.
(4) Despite this enhanced interest, there are many open problems, such as inconsistency of terminology, managing the complexity caused by the high variability in types and categories of skills, and a lack of quantitative measures and systematisation.
(5) STEM-driven skills are highly heterogeneous and require specific approaches. Their specificity lies, for example, in real-world task solving using robots as physical Learning Objects (LOs) plus robot and sensor control programs as soft LOs.
(6) We argue that solving skills-related problems should not be postponed to the final learner’s knowledge assessment phase. On the contrary, the teacher and learner should be aware of the prognostic capabilities for acquiring adequate skills at the phase of content selection, before learning starts, or even at the LO design phase. In our view, the description of the potential to deliver skills is to be incorporated, among other information, into the metadata description of LOs. Therefore, we distinguish between the factual level of skills (achieved by the learner), measured during the learner’s assessment, and the prognostic level (as defined in the curriculum). The latter should be embedded into the metadata description as additional information for a more targeted search of LOs.
We introduce the concept ‘skills development model’ as evolutionary change by moving from the prognostic level of skills to the factual level of skills through solving a predefined set of real-world tasks. The essential aspect of these evolutionary changes is an extension of the generic scenario (see pp. 259–278 in [ŠB18]) by adding the quantitative measurement of integrated STEM-CS skills to ensure the skills requirements for the twenty-first century.
(7) Finally, we have a purely methodological intention for this research, given the book’s aim. This book is about the evolution and changes of STEM-driven CS education research and practice in recent years. The continuous expansion of technology has significantly stimulated education and educators to accept and adopt new topics in the curriculum, such as AI, BD, IoT, etc. These topics have opened new possibilities for knowledge and skills development; on the other hand, this process has become more complex and challenging. Therefore, research on skills development in school settings should be systemic, and we need new approaches.
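The distinction between prognostic and factual skills can be illustrated with a hypothetical LO metadata record. All field names below are assumptions made for illustration; they do not correspond to an existing metadata standard or to the tools described in this book.

```python
# Hypothetical LO metadata record (field names are illustrative only):
# prognostic skills come from the curriculum and are written into the
# metadata before the LO enters the library; factual skills are filled
# in only after the learner's assessment.
learning_object = {
    "title": "Line-following robot control program",
    "kind": "soft LO",                       # vs. a physical LO (the robot)
    "prognostic_skills": {
        "Computational thinking": ["Decomposition", "Algorithm"],
        "Engineering thinking": ["Design thinking"],
    },
    "factual_skills": None,                  # set after assessment
}

def promises(lo, category, skill):
    """Teacher-side search: does this LO promise the given skill?"""
    return skill in lo["prognostic_skills"].get(category, [])

print(promises(learning_object, "Computational thinking", "Algorithm"))
```

A search over such records would let the teacher or student filter LOs by the skills they promise before selection, which is exactly the role assigned to the prognostic level above.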


2.3 Research Tasks and Methodology

To achieve the formulated aim, we consider the following tasks, identified as Research Questions (RQs) below.

RQ1: Identification and analysis of relevant research papers to define the context, STEM-driven skills and their categories
RQ2: Analysis of the interdependencies among different skills
RQ3: Development of the feature-based skills model for STEM-driven CS education
RQ4: Analysis of known metrics for skills assessment and development of a skills metrics model
RQ5: Development of the ISTEM-CS skills model combining the skills model, the metrics model and their dependencies (constraints)
RQ6: Validation of the ISTEM-CS skills model semantics through a case study (task solving)
RQ7: Implementation of the ISTEM-CS skills model through the generation of skills-metrics description instances for a given Learning Object (LO) selection or the learner’s knowledge assessment

RQ6 and RQ7 concern the process by which skills development can take place. To make this process more effective, the teacher needs to know in advance what skill types are embedded and reflected in a given LO before selecting it. The student, for example in the case of personalised learning, needs to know in advance what type of knowledge and skills he/she can expect to obtain by selecting LOs from the library. We argue, therefore, that it is important to enrich the metadata description of the LO by indicating the prognostic skills possibilities before sending this LO to a library. To deal with these RQs, we have applied the following research methodology. (1) We have compiled a collection of recent, relevant and reliable information sources for analysis, including publications published between 2015 and 2021. Most of the papers we collected (about 150) were from Web of Science journals covering STEM skills or related topics.
(2) In the analysis, we first focused on the context, definitions, categories, and possible relationships among skills categories. We identify that as structural analysis, dealing with RQ1. (3) Then we applied an extended analysis (RQ2), called functional analysis, that connects the analysis with skills metrics and modelling approaches (RQ3 and RQ4) to define more precise relationships for building the explicit executable model of ISTEM-CS skills. (4) Finally, we have validated the latter model with a case study (RQ6) and then implemented it (RQ7) using meta-programming techniques (for an extended view, see [ŠD12]; for the application of those techniques in STEM education, see [ŠB18]).


2.4 Related Work

Stream A: General work on skills for the twenty-first century and integrated STEM. Skills topics are now in focus. A huge number of organisations from the industry and academic sectors are involved in discussions on this topic. The reason is that, in the context of the Fourth Industrial Revolution (Industry 4.0), there is an urgent need to revisit and update education systems worldwide to equip children with the skills to navigate the future of work and the future of societies. Therefore, the report “Schools of the Future” [SF20], prepared by the WEF in 2020, introduces a framework and strategy called “Education 4.0” on how to transform education to achieve high-quality learning. This report identifies eight critical characteristics of learning content and experiences for the twenty-first century: (i) Global citizenship skills; (ii) Innovation and creativity skills; (iii) Technology skills; (iv) Interpersonal skills; (v) Personalised and self-paced learning; (vi) Accessible and inclusive learning; (vii) Problem-based and collaborative learning; and (viii) Lifelong and student-driven learning.

In the literature on twenty-first-century skills, one encounters a variety of categories of this term, such as hard skills, soft skills, cognitive skills, non-cognitive skills, socio-emotional skills, etc. Therefore, we first need to look at some studies to clarify the terminological issues. For example, the National Academy of Sciences produced a comprehensive study in 2012 entitled “Education for Life and Work: Developing Transferable Knowledge and Skills in the 21st Century”, edited by James W. Pellegrino and Margaret L. Hilton. The study analyses three broad domains of human competence, i.e., Cognitive, Intrapersonal and Interpersonal, and provides the following categorisation of skills.

• The Cognitive Domain includes three clusters of competencies: cognitive processes and strategies; knowledge; and creativity. These clusters include competencies such as critical thinking, information literacy, reasoning and argumentation, and innovation.
• The Intrapersonal Domain includes three clusters of competencies: intellectual openness; work ethic and conscientiousness; and positive core self-evaluation. These clusters include competencies such as flexibility, initiative, appreciation for diversity, and metacognition (the ability to reflect on one’s own learning and make adjustments accordingly).
• The Interpersonal Domain includes two clusters of competencies: teamwork and collaboration, and leadership. These clusters include competencies such as communication, collaboration, responsibility, and conflict resolution.

Competence is a compound of knowledge, skills and attitudes. Technically oriented and industry people often prefer the terms hard skills and soft skills. Hard skills concern an employee’s ability to do a specific task using specialised knowledge. Soft skills are more about how an employee does that, i.e., adapts, collaborates, solves problems, and makes decisions. According to LinkedIn Learning [And20], for example, the top ten hard skills include Blockchain, Cloud


Computing, Analytical reasoning, Artificial intelligence, UX design, Business analysis, Affiliate marketing, Sales, Scientific computing, and Video production. The top five soft skills include Creativity, Persuasion, Collaboration, Adaptability, and Emotional intelligence. Hard skills are often easier to define and measure than soft skills.

The paper [GAK15] categorises skills as cognitive and non-cognitive. Cognitive skills are basic academic skills. Non-cognitive skills consist of intrapersonal and interpersonal skills. This paper refers to all kinds of non-cognitive skills as Mindsets, Essential Skills, and Habits (MESH) and claims that learners with stronger non-cognitive skills are able to demonstrate higher academic achievement throughout schooling. Howard et al. [HBK+19] analyse the skills needed for the twenty-first century and introduce the concept of integrative transformative education called the Living School, which connects K–12 educational reform with Education for Sustainability, sustainable community development, and individual well-being.

The report prepared by the National Academy of Engineering and the National Research Council (USA) [HPS14] broadly defines integrated STEM education in the K–12 context through four general features or subcomponents (Goals, Outcomes, Nature and Scope of Integration, and Implementation). These features, when considered in practice, are interdependent. Hasanah [Has20] provides a literature review that focuses on four key definitions based on the selected studies: (a) STEM as a discipline, (b) STEM as instruction, (c) STEM as a field, and (d) STEM as a career. According to the author, the STEM disciplines are a fundamental part of STEM education because most initiatives in STEM education relate to the disciplines. Kubat & Guray [KG18] argue that these four disciplines are to be taught as a holistic, undifferentiated collective rather than independently.

Thibaut et al. [TCL+18] propose a framework containing five key principles: integration of STEM content, problem-centred learning, inquiry-based learning, design-based learning and cooperative learning. According to the authors, this framework has several benefits, such as its applicability in the classroom and the possibility of describing integrated STEM along multiple dimensions. In addition, this paper calls for further research to investigate the effects of integrated STEM on students’ cognitive and affective learning outcomes. Hobbs et al. [HCP18] reveal four models of STEM implementation based on the discipline paradigm: (1) “Four STEM disciplines are taught separately”; (2) “Teaching all four but more emphasis on one or two”; (3) “Integration of at least three disciplines”; (4) “The integration of all four subjects by a teacher”.

Typically, acquiring scientific knowledge proceeds through solving problems that are feasible, worthwhile, contextualised, meaningful, ethical, and sustainable [Kra15]. In addition, solving authentic problems transcends a single discipline. Hence, problem-solving is an activity intrinsic to many (if not all) domains and, thus, can serve as a general and common approach to teaching. Therefore, problem-solving must be seen as a multidisciplinary challenge, along with the corresponding practices and processes, for example, in STEM education, in which different domains like science,


technology, engineering, mathematics, and computer science are involved simultaneously. To do that systematically, Priemer et al. [PEF+20] present a fine-grained, integrated, and interdisciplinary framework of problem-solving for education in STEM and CS by cumulatively including ways of problem-solving from all of these domains. This framework includes twelve activities, represented as processes within a flowchart, to foster students’ problem-solving competencies. These competencies (e.g., identifying problems, reviewing related information, developing and evaluating options, implementing solutions, and many others) are often seen as important twenty-first-century skills [Jan16], along with computational and scientific thinking [Win06]. Sandall et al. [SSW18] analyse integrated STEM from the phenomenological and educators’ perception perspectives and define the subject as follows: Integrated STEM education involves the purposeful integration of science, technology, engineering, and mathematics as well as other subject areas through project-based learning experiences that require the application of knowledge to solve authentic, real-world problems in collaborative environments for the benefit of students.

Moore et al. [MJG20] discuss different ways in which STEM integration is conceptualised and defined and provide an overview of the perspectives on this integration. Among other aspects of integration, the paper indicates the role of the educator’s background, the model in the school and the purpose of learning. English et al. [EAK20] focus on design learning in K–12 STEM education, considering design thinking an important component of complex problem-solving that includes problem identification, generating solution ideas, creating prototypes, and testing and refining outcomes. All of these are borrowed from technology and engineering education. In all analysed papers, researchers commonly agree that a real-world problem (often identified as a complex one) is a critical component of integrated STEM teaching. In support of this thesis, see also [RDR+21, KK16, MSW+14].

Stream B: STEM-related skills definitions. The representation of the results of the analysis in this part differs from the previous one. Here, we summarise them in Table 2.1. This format allows capturing the explicit categorisation that will later serve to derive a more formalised model for skills description. With this representation, it is easy to capture the rank or priority of skills. At the top level, we included five categories of skills (learning and innovation, scientific and mathematical thinking, computational thinking, technological thinking, and engineering thinking). This categorisation follows from the analysis of integrated STEM definitions and the role of real-world tasks (see Stream A). For example, solving real-world tasks indeed requires learning and innovation skills. We present the results of the skill metrics analysis in Sect. 2.9, along with the devised skills model, for more convenient reading and understanding.


2.5 Defining Context and Functionality for STEM-CS Skills

We refer to skills as the human ability to apply in practice the facts (i.e., knowledge) obtained through learning and experience. From this definition, it implicitly follows that learning processes and their basic components form the context for dealing with skills-related problems. In the most general way, we characterise learning/education as a domain of gaining knowledge through the interaction among three basic components (pedagogy, content and technology). More specifically, the TPACK framework [KM09] summarises that as the intersection of Technological, Pedagogical And Content Knowledge within the overall educational context. This framework, in fact, is a conceptual model for understanding the essence of the education domain. We have applied this framework (with adequate adaptation) in the context of STEM-driven CS education for designing smart content (see Sect. 7.5 in Chap. 7 of [ŠB18]). We applied it for the discovery of the evolutionary model (see Chap. 1, Sect. 1.6). We use it here again, with the needed adaptation, to develop the conceptual context model for defining STEM-CS skills (Fig. 2.1).

In the STEM paradigm, the process of obtaining knowledge goes through solving real-world tasks (also lessons, prototypes) (see Sect. 2.4). What is happening now, with the advent of new technology, is that we need to consider ever more complex real-world tasks. The real-world tasks and their applications give the external context of our context model (see Fig. 2.1). In constructing our model, we need to combine task solving with gaining integrated skills. We put the STEM-CS skills at the centre of this model. The components for task solving are as follows: (i) Pedagogy-driven activities for STEM. (ii) Content resources along with tools and knowledge transfer channels. (iii) Technology-driven learning processes. (iv) Assessment of learning outcomes.

Fig. 2.1 Conceptual model for defining ISTEM skills context


In our model, we separate the assessment of learning outcomes from the remaining learning processes for the following reasons. The assessment is, or should be, more individualised due to the focus on personalised learning using the STEM paradigm. The task complexity, on the other hand, requires more flexibility in managing the feedback across a variety of learning paths. This makes the assessment process more complex because we need to reserve room for the learner’s self-assessment through feedback. Pedagogy-driven activities for STEM include any item from the list (inquiry-based, design-based, project-based, and game-based learning) or some combination of the items. By content resources, we mean smart libraries, i.e., students’ personalised libraries and the teacher’s library (see Chap. 4). Altogether, the libraries, tools and knowledge transfer channels are components of the Smart Learning Environment. By technology-driven learning processes, we mean those that occur during real-world task solving at the student’s working place equipped with a PC, Internet facilities, a robot and smart devices (sensors, cameras, etc.). Two-directional arrows in the model indicate functional relationships and data transfer channels among components.

In Fig. 2.2, we represent the generalised model for defining the ISTEM skills as a top-level structure composed of five skill categories (learning and innovation, scientific and mathematical thinking, computational thinking, engineering thinking and technological thinking), admitting that each top-level ingredient consists of a list of lower-level skills as identified in Table 2.1. Therefore, the devised models define the structural components of the context for dealing with RQ1 and defining the ISTEM-CS skills. Next, we focus on the functional aspects of this model. We present that as the state diagram in Fig. 2.3. The right part of this model (from real-world task initialisation to the discussion, reflection, and

Fig. 2.2 Structure of ISTEM skills: top-level skills derived from Table 2.1


evaluation of outcomes) is a learning scenario derived from the generic scenario (see pp. 259–278 in [ŠB18]). The left part includes the quantitative measurement of integrated STEM-CS skills to ensure the skills requirements for the twenty-first century. To ensure the skills assessment functionality, we need to rely on an explicit model that connects skills categories with skills metrics, which we identify in Fig. 2.3 as the integrated STEM-CS skills model. Note that the assessment is a multi-staged process, taking place after each LP step as outlined in the model (Fig. 2.3). In addition, the model describes how the expected (prognostic) skills defined in the curriculum are transformed into a factual level of skills through the multi-stage learning processes and activities. This diagram motivates the need for explicit models to define and assess skills using adequate metrics.

Fig. 2.3 State diagram for defining a scenario for ISTEM-CS skills development (from curriculum-related integrated STEM-CS driven skills and student characteristics, through real-world task initialisation and the learning processes (LP): task analysis, tools selection, problem investigation through task solving, analysis of results, and discussion/reflection/evaluation, with an integrated STEM-CS driven skills assessment after each LP driven by the integrated STEM-CS driven skills model, to the achieved (factual) integrated STEM-CS driven skills; content resources (LOs) and technological resources support the learning activities)


2.6 Defining the Structure of STEM-CS Skills Model

In Sect. 2.5, we outlined the interaction of the external components for defining ISTEM-CS skills and the scenario of their functionality. In this section, we analyse the STEM-CS skills themselves, i.e., we look at the skills’ internal structure without the assessment model. The first step in doing so is to categorise these skills. We rely on the introduced research methodology (see Sect. 2.3) and the structural vision of STEM. Structurally, STEM is a compound of four components (Science, Technology, Engineering, and Mathematics), i.e., STEM = S + T + E + M. Each component brings specific knowledge and forms the ability to obtain adequate skills. For example, the M-component is responsible for scientific and mathematical thinking skills. Computer Science (which we treat here as an essential part of the S-component) is responsible for forming the knowledge needed to obtain computational thinking skills. The remaining two components (T and E) are responsible for engineering thinking and technological thinking skills. In addition, we still need to add learning and innovation skills, due to their importance for solving complex real-world tasks in the digital age [HAC21]. Altogether, the context model and the scenario describing its functionality (Figs. 2.1 and 2.3), and the generalised structural model (combining Fig. 2.2 and Table 2.1), were obtained using structural analysis. This is the outcome of RQ1.

2.7 Analysis of the Interdependencies Among Different Skills

To provide this analysis, we first selected 30 papers from the whole list of retrieved and analysed papers (57 in total) and then analysed the frequency with which STEM skills appear in the text of each paper. We analysed papers published in Web of Science journals from 2015 to 2021; among them, 23 were published in 2020–2021. Table 2.2 presents the list of journals used. We selected the six papers with the most strongly expressed relationships among skills [AS21, MSB+19, RG19, SC21, SJ19, SP-B19] (see references in Appendix) and applied an ordinal Likert scale (from 0 to 3) to assess the importance of STEM-driven skills. We treated the interdependency among different skills as strong (denoted by the number 3) if we were able to find at least two attributes from the list (framework, model and example(s)) and one or both were presented explicitly in the paper. We treated this interdependency as medium (2) if we found only one explicit attribute from the list. We defined the interdependency as weak (1) if there was only an implicit attribute. We identified this relationship as zero if we did not find any quantitative attributes. One can interpret these as a four-point (0, 1, 2, 3) Likert metric. We summarise this result in Fig. 2.4 and Table 2.5 (see Appendix). The analysis of the frequency of the terms used in the selected papers has enabled us to choose the most representative ones (we defined only six),


Table 2.2 Web of Science journals related to STEM education and the number of papers taken for analysis

Journal | Impact factor | Citation indicator | # Analysed papers
International Journal of STEM Education | 5.012 | 2.04 | 9
ACM Transactions on Computing Education | 1.526 | 1.18 | 6
Computer Applications in Engineering Education | 1.532 | 0.60 | 4
Journal of Science Education and Technology | 2.315 | 1.29 | 3
Computers and Education | 8.538 | 3.44 | 2
Journal of Engineering Education | 3.146 | 1.57 | 2
Int. Journal of Technology and Design Education | 2.177 | 1.04 | 2
International Journal of Engineering Education | 0.969 | 0.38 | 2

© With kind permission from IATED [BŠK+22]

where the possible relationships (interdependencies) are outlined most evidently (see Fig. 2.5). The next step was to draw out criteria for a quantitative representation of these interdependencies.
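The four-point scoring rules described above (strong, medium, weak, zero) can be restated directly as code. The function below only re-expresses those rules; its inputs, counts of explicit and implicit attributes (framework, model, example(s)) found in a paper, are our assumed encoding of the annotation step, not the authors' tooling.

```python
# Restatement of the paper-scoring rules as code (input encoding is assumed):
#   3 (strong): at least two attributes from {framework, model, example(s)},
#               with one or both presented explicitly
#   2 (medium): only one explicit attribute
#   1 (weak):   only an implicit attribute
#   0 (zero):   no quantitative attributes found

def likert_score(explicit_attrs: int, implicit_attrs: int) -> int:
    total = explicit_attrs + implicit_attrs
    if total >= 2 and explicit_attrs >= 1:
        return 3
    if explicit_attrs == 1:
        return 2
    if implicit_attrs >= 1:
        return 1
    return 0

print(likert_score(explicit_attrs=1, implicit_attrs=1))  # strong
print(likert_score(explicit_attrs=0, implicit_attrs=1))  # weak
```

Making the rule explicit like this also documents the annotation procedure, so that a second rater could reproduce the same 0-3 scores.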

Fig. 2.4 Frequency of describing STEM-driven skills in the 30 analysed research papers

Fig. 2.5 Summary evaluation (sum of evaluations) of the analysed papers (30 is the full list of papers; the 6 papers are those with a strong focus on interdependencies among skills)

Figure 2.5 shows a summary of the evaluation charts. The trends in the charts are similar. Having these interdependency metrics, we were able to calculate Spearman’s rank correlation coefficients (see Table 2.5 in Appendix). These coefficients evaluate the strength of interdependencies among different skills (see Fig. 2.6). In addition, we present the interdependency model as a three-dimensional matrix in Fig. 2.6. In summary, in addressing RQ2, we have achieved the following findings. (1) We have identified which skills are most frequently analysed in the selected research papers (see, e.g., Figs. 2.4 and 2.5). (2) We have introduced a model for evaluating the strength of interdependencies among different types of skills in each category (through calculating Spearman’s rank correlation coefficients; see Table 2.6 in Appendix). (3) Using this model, we have found that in the analysed papers the highest interdependency of skills (in terms of Spearman correlation coefficient values of 0.900–0.999) is among the following skills categories (see Fig. 2.6):

• Scientific-Mathematical thinking and Computational thinking
• Scientific-Mathematical thinking and Technological thinking
• Computational thinking and Technological thinking.

Next, in Sect. 2.8, we consider RQ3.
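As a sketch of the computation behind finding (2), Spearman's rank correlation over two skills' Likert scores across papers can be computed as follows. The data here are toy values chosen for illustration, not the authors' dataset, and the helper functions are our own minimal implementation (a library routine such as scipy.stats.spearmanr would normally be used).

```python
# Minimal Spearman's rank correlation with average ranks for ties.
from statistics import mean

def rank(xs):
    # Assign each value its average position in the sorted order (ties share
    # the mean of their positions).
    sorted_xs = sorted(xs)
    return [mean(i + 1 for i, v in enumerate(sorted_xs) if v == x) for x in xs]

def spearman(a, b):
    ra, rb = rank(a), rank(b)
    ma, mb = mean(ra), mean(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    var_a = sum((x - ma) ** 2 for x in ra)
    var_b = sum((y - mb) ** 2 for y in rb)
    return cov / (var_a * var_b) ** 0.5

# Toy Likert scores (0-3) of two skill categories across six papers:
computational = [3, 2, 3, 1, 2, 3]
technological = [3, 2, 2, 1, 2, 3]
print(round(spearman(computational, technological), 3))
```

With such a routine, one coefficient is computed per pair of skill categories, and the resulting matrix of coefficients is what Fig. 2.6 visualises by coefficient range.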

2.8 Feature-Based STEM-CS Skills Model (RQ3)

According to our methodology, we seek to create executable models. To do that, we need an adequate formalism for model representation. In this section, we use the feature-based notation adopted from the software engineering domain for defining models [KCH+90]. This graphical notation (which has much in common with ontology-based approaches) is both human-readable and machine-readable. One can understand it intuitively; therefore, it is applicable in educational research too. For

Fig. 2.6 Model describing the strengths (Spearman rank correlation coefficient) of interdependencies among different types of skills in each category (a matrix over the lower-level skills of Table 2.1, from critical thinking & problem solving through intersection thinking; legend: Spearman correlation coefficient ranges 0.600–0.699, 0.700–0.799, 0.800–0.899, 0.900–0.999)

the use of this notation by the education research community, see papers [SAB18, CNC12, DDA12] or the book [ŠB18, pp. 71–94]. According to this notation, a type of skills is a feature. Typically, feature is defined as a distinguishing characteristic of a domain (system, component) to be modelled [KCH+90]. In this notation, we represent features by boxes with parent–child relationships (for feature types and other relationships, see legends within Figs. 2.7 and 2.8). In the tree, the parent–child relationships define the structure of the domain under consideration. With this notation, it is possible to express the functional relationship only partially, e.g., through feature types (mandatory, optional, OR-grouped features, see legend in Fig. 2.7 and Table 2.4) and possible constraints (e.g., Requires). In Fig. 2.7, we outline the feature-based STEM-CS skills model. Conceptually, we have derived it through analysis of the two previous models (see Figs. 2.2 and 2.3) by adding the feature–based notation. Here, skills appear as features represented by boxes combined within the tree in the parent–child relationship. It is a pure structural representation. What we have missed in this model is the constraints relationships.

Fig. 2.7 STEM-CS skills feature-based model (RQ3). [The figure itself did not survive text extraction. It shows the feature tree rooted in 'STEM-CS skills' with four sub-trees: Learning & innovation skills (Critical thinking & problem solving, Creativity & innovation, Communication, Collaboration); Scientific & Mathematical thinking (Forming & refining hypotheses, Investigation skills, Evaluating evidence); Computational thinking (Abstraction, Decomposition, Generalization, Data representation, Algorithm); and Technological & Engineering thinking (Knowledge: Core concepts; Practices: Designing; Applying, Maintaining, and Assessing Technological Products and Systems; Systems thinking; Creativity; Making & doing; Critical thinking; Collaboration; Attention to ethics). Legend: mandatory feature, OR group.]

Fig. 2.8 Feature-based model to describe metrics for skills assessment (RQ4). [The figure itself did not survive text extraction. It shows the feature tree rooted in 'Metrics for skills assessment' with sub-trees: Cognitive skills (Knowledge: F, C, P, M; Cognitive process: R, U, Appl, A, E, Cr); Creativity & innovation skills (Quantity, Novelty, Variety, Quality); Communication skills (Give information, Share information, Discussing ideas); Collaboration skills (Peer interaction D1.1–D1.5, Positive communication D2.1–D2.3, Inquiry rich/Multiple paths D3.1–D3.2, Authentic approach and tasks D4.1–D4.3, Transdisciplinary thinking D5.1–D5.2); and Metric values (levels) L1–L5. Legend: mandatory feature, OR group.]


We did that consciously for two reasons. (1) We have considered interdependencies among skills categories in another model (Fig. 2.5), though with a different intention, i.e., how the analysed papers reflect interdependencies among skills in general. (2) In this chapter, our focus is not on the interaction among skills, but on that among skills and metrics. This is because knowing skills without their metrics (we introduce them later) is of low value. In comparison with the previous model (Fig. 2.5), we have changed the interpretation of some kinds of skills (technological thinking and engineering thinking skills) here. In the model (Fig. 2.7), we represent these skills as a single feature because they overlap strongly; this finding follows from the literature analysis. Therefore, by considering RQ3, we have made only one step towards building a STEM-CS skills model. Next, we focus on skills metrics evaluation (Sect. 2.9) and its feature model.
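As a compact illustration of the feature notation used here, the following is a small Python sketch of a feature tree. The names are excerpts from Fig. 2.7; the encoding itself is our assumption for illustration, not the SPLOT input format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Feature:
    name: str
    kind: str = "mandatory"          # "mandatory", "optional" or "or-group"
    children: List["Feature"] = field(default_factory=list)

# An excerpt of the STEM-CS skills tree (parent-child relationships only).
model = Feature("STEM-CS skills", children=[
    Feature("Learning & innovation skills", children=[
        Feature("Critical thinking & problem solving"),
        Feature("Creativity & innovation"),
    ]),
    Feature("Computational thinking", kind="or-group", children=[
        Feature("Abstraction", kind="optional"),
        Feature("Decomposition", kind="optional"),
    ]),
])

def leaves(f):
    # Depth-first walk collecting the concrete skill types (leaf features).
    return [f.name] if not f.children else [n for c in f.children for n in leaves(c)]

print(leaves(model))
```

The parent–child edges express the structure; the `kind` field stands in for the mandatory/optional/OR-group markers of the graphical legend.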

2.9 Analysis of Metrics and Defining Metrics Model for Skills Evaluation

Skills metrics help to assess learning outcomes and to build an adequate assessment model. In addition, skills metrics serve to incorporate this information into a given learning content item, i.e., a learning object (LO), for example, when specifying its metadata before sending this LO to a repository or library. According to the papers [Jag19, KZ19, KBS+19], the metrics for Creativity and innovation skills are (i) Quantity, (ii) Novelty, (iii) Variety, and (iv) Quality. The first defines the total number of ideas generated. The second shows how unusual or unexpected an idea is. The third indicates the number of categories of ideas. Finally, the fourth describes the feasibility of an idea and how closely it satisfies the design specifications. Communication metrics summarised from [AAN19, KD17, PK–B19] include (i) Discussing ideas, (ii) Giving information, and (iii) Sharing information. Collaboration metrics are more elaborate, and we have included their defining attributes in Table 2.3. Cognitive skills metrics are based on the Revised Bloom's taxonomy, which covers Knowledge (Factual—F, Conceptual—C, Procedural—P and Meta—M) and Processes (Remembering—R, Understanding—U, Applying—Appl, Analysing—A, Evaluating—E, and Creating—C) [AK01]. To build the ISTEM-CS skills model, we first need to represent the skill metrics discussed above using the feature-based notation. We present the skill metrics feature model in Fig. 2.8. This model was devised (1) using the analysed definitions, (2) by adding metrics values or levels, and (3) by applying the feature-based notation in the same way as described in Sect. 2.8. Therefore, the skill metrics feature model has an additional feature (named 'Metric values (levels)', see Fig. 2.8). This feature has five sub-features combined into an OR-group. Semantically, the sub-features L1, L2, L3, L4, and L5 are items to define the level of a metric's value according to the ordinal Likert scale (L1 is the weakest and L5 is the strongest). Again, this model


Table 2.3 Collaboration metrics derived from [HQA+17]

Dimension D1: Peer Interaction
  D1.1. Monitors tasks/project with peers
  D1.2. Negotiates roles within group
  D1.3. Divides and works toward task completion
  D1.4. Checks for understanding regarding process and/or content
  D1.5. Provides peer feedback, assistance, and/or redirection
Dimension D2: Positive Communication
  D2.1. Respects others' ideas
  D2.2. Uses socially appropriate language and behaviour
  D2.3. Listens and takes turns
Dimension D3: Inquiry Rich/Multiple Paths
  D3.1. Develops appropriate questions towards solving the problem
  D3.2. Verifies information and sources to support inquiry
Dimension D4: Authentic Approach and Tasks
  D4.1. Shares connections to relevant knowledge
  D4.2. Negotiates method or materials relevant to solving the problem posed
  D4.3. Uses tools collaboratively to approach task
Dimension D5: Transdisciplinary Thinking
  D5.1. Discusses approaching task, activity, or problem using multiple disciplines
  D5.2. Co-creates products by incorporating multiple disciplines

(Fig. 2.8) has no constraints, because we treat metrics as independent features here. However, this does not necessarily hold in other contexts. All the above-mentioned metrics and the model are findings of RQ4. Next, we describe the ISTEM-CS skills model for defining the relationship between the ISTEM-CS skills and their metrics.
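The metric definitions above are verbal; to make them operational, one has to fix formulas and level thresholds. The following is a hypothetical Python sketch in which the idea records, the formulas and the mapping onto the L1–L5 levels are all illustrative assumptions, not the book's definitions.

```python
# Hypothetical idea records: (category, novelty score in [0, 1], meets_spec).
ideas = [
    ("sensor", 0.9, True),
    ("sensor", 0.2, True),
    ("display", 0.6, False),
    ("power", 0.8, True),
]

quantity = len(ideas)                                 # total number of ideas
variety = len({cat for cat, _, _ in ideas})           # number of idea categories
novelty = max(nov for _, nov, _ in ideas)             # how unusual the best idea is
quality = sum(ok for _, _, ok in ideas) / len(ideas)  # share satisfying the spec

def likert_level(value, lo=0.0, hi=1.0):
    # Map a raw metric value onto the five-point L1..L5 scale.
    step = (hi - lo) / 5
    return "L" + str(min(5, int((value - lo) / step) + 1))

print(quantity, variety, novelty, likert_level(quality))
```

With four ideas in three categories, three of which meet the specification, the sketch yields quantity 4, variety 3 and a quality level of L4 under these thresholds.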

2.10 Model for Evaluating and Describing of the ISTEM-CS Skills

We refer to the ISTEM skills model (RQ5) as a structure having three parts: (1) the STEM-CS skills model (Fig. 2.7); (2) the model defining metrics for skills assessment (Fig. 2.8); (3) the constraint relationships between the two mentioned models (see Table 2.4). This partitioning is due to representation issues: the model represented by its constituents is clearer to read and easier to understand. The ISTEM-CS skills model (when presented in the modelling tool [http://www.splot-research.org/]) has a root with two sub-features. The first sub-feature is the root of the skills model, i.e., the feature 'STEM-CS skills' (see Fig. 2.7). The second


Table 2.4 Constraints (dependencies) between STEM-CS skills and metrics for skills assessment

STEM-CS skills | Cognitive skills | Creativity and innovation skills | Communication skills | Collaboration skills

Learning and innovation skills
  Critical thinking and problem-solving | Requires | * | * | *
  Creativity and innovation | * | Requires | * | *
  Communication | * | * | Requires | *
  Collaboration | * | * | * | Requires

Scientific and mathematical thinking skills
  Forming and refining hypotheses | Requires | Requires | * | *
  Investigation skills | Requires | Requires | * | *
  Evaluating evidence | Requires | * | * | Requires

Computational thinking skills
  Abstraction | Requires | * | * | *
  Decomposition | Requires | * | * | *
  Generalization | Requires | * | * | *
  Data representation | Requires | * | * | *
  Algorithm | Requires | Requires | * | *

Technological and Engineering thinking skills
  Knowledge
    Core concepts | Requires | * | * | *
  Practices
    Designing | Requires | Requires | * | *
    Applying, Maintaining, and Assessing Technological Products and Systems | Requires | Requires | Requires | Requires
    Systems thinking | Requires | * | * | *
    Creativity | * | Requires | * | *
    Making and doing | Requires | Requires | Requires | Requires
    Critical thinking | Requires | * | * | *
    Collaboration | * | * | * | Requires
    Attention to ethics | * | * | Requires | Requires

© With kind permission from IATED [BŠK+22]

sub-feature is the root of the metrics model (i.e., the feature named 'Metrics for skills assessment', see Fig. 2.8). In other words, this root is a joining feature. Since we have already described the first two parts, we focus on the constraint relationships here. In the general case of using the feature-based notation, there are two types of constraints, defined as the logical relations 'Requires' (feature A 'Requires' feature B) and 'Excludes' (feature C 'Excludes' feature D, when selected in some configuration of a feature model). Each constraint describes the relationship between two features and has two logical values (true or false). In graphical representations of feature models, we typically indicate this by a directed broken line between two features A and B, writing the word 'Requires' beside the line. It means that this relationship exists (i.e., is true); otherwise, no graphical or textual notation is needed. Further, we speak about the constraint 'Requires' only, because the second one ('Excludes') is not applicable in our ISTEM skills feature model. We use the matrix form to present the constraint relationships between two sets of features, i.e., skills categories and metrics. The essential question is as follows: how should the constraint 'Requires' be filled in this matrix? There is no simple answer to this question. It relates to another question: to what extent is it feasible and correct to express human mental (cognitive, psychological) attributes using a binary relation? One can see that this relation is not truly binary. However, as an approximation, we can admit that there is some range of values between [false, true], or between [0, 1]. In other words, a fuzzy logic relation (strong 'Requires', weak 'Requires', etc.) would be better than propositional logic. We cannot do that, due to the consistency of feature notations and the functionality of the existing tools that support this approach. We are unaware of applications of fuzzy logic to represent constraints in feature-based modelling. What solution to this problem do we suggest? We admit that there are cases we can conceive intuitively, or take from reliable sources with great certainty, in which to write the word 'Requires' in Table 2.4. We admit that, for some hypothetical tasks, all 'Requires' in Table 2.4 are strong and used in the conventional meaning of propositional logic. For the others, we ignore the weak (or non-defined) 'Requires' by representing them with an asterisk in Table 2.4. The third row of Table 2.4, for example, has the following meaning: 'Critical thinking and problem solving' 'Requires' the metrics of 'Cognitive skills' for most tasks. Here, we ignore the remaining metrics (marked '*' in the corresponding columns). What will occur if we move to another, more complex task, e.g., in the case of collaborative learning? In this case, we need to revise and change this model

by inserting 'Requires' in all columns of the third row. Therefore, as presented, this model is a core or initial model. To make it attractive for practical use, we need tools for managing the possible re-engineering of this core model. We return to this discussion later, when considering automated generation to evaluate this approach. The essential finding of solving RQ5 is the ISTEM-CS skills model and its quantitative characteristics. It is an executable, verified model. In Table 2.7 (see Appendix), we present the characteristics of this model (feature counts by type, cross-tree constraints and the number of valid configurations). A valid configuration is a subtree derived from the model's tree with the following property. Syntactically, it is a path from the root to any variant node of degree 1, including the existing constraints. Semantically, it is a description of skill sequences with applied metrics to measure them. A valid configuration defines the variability of the domain. The number of valid configurations is huge, more than a billion. Therefore, the manual selection of the most relevant description from this space is impossible; we need an automatic tool.
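To make the notions of a 'valid configuration' and of combinatorial growth concrete, the following is a self-contained Python sketch over a toy model. The features, constraints and counts are illustrative assumptions, not those of the full ISTEM-CS model.

```python
from itertools import product

# Toy OR-groups: one Likert level is chosen per metric dimension.
levels = ["L1", "L2", "L3", "L4", "L5"]
metrics = ["Cognitive", "Creativity", "Communication", "Collaboration"]

# A toy 'Requires' matrix in the spirit of Table 2.4 (an excerpt only).
requires = {
    "Designing": {"Cognitive", "Creativity"},
    "Creativity and innovation": {"Creativity"},
}

def valid(skill, chosen_metrics):
    # A 'Requires' constraint holds iff every required metric is selected
    # (A Requires B is the propositional implication: not A or B).
    return requires.get(skill, set()) <= set(chosen_metrics)

# Every combination of one level per metric is a configuration of the
# metric sub-tree: 5 ** 4 = 625 for four metrics alone.
n_configs = len(list(product(levels, repeat=len(metrics))))
print(n_configs)
print(5 ** 13 > 10 ** 9)   # thirteen such dimensions already exceed a billion
print(valid("Designing", ["Cognitive", "Creativity", "Communication"]))
print(valid("Designing", ["Cognitive"]))
```

The exponential growth of level combinations is what pushes the configuration count of the real model past a billion and makes tool support necessary.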

2.11 Validation of the ISTEM-CS Skills Model Through Case Study (RQ6)

For this purpose, we need a learning object (LO) to be learned by learners. We have extracted the LO 'A set of LEDs' from our published paper [ŠBD+19]. Its aim was to explain the concept of the array for acquiring appropriate skills in applying loops and functions in personalised learning. This LO consists of a physical part (see Fig. 2.9a) that represents an electrical circuit, and a generative learning object (GLO) that represents a generic control program of the LEDs (see Fig. 2.9b; in fact, only the interface is given). First, the learner constructs the electrical circuit and then generates the LED control program (see Fig. 2.9c) using the GLO's specification. The learner should make changes in the electrical circuit and choose adequate parameters in the meta-interface of the GLO. After completing the planned tasks, the student performs additional practical tasks.

Task 1. Modify the program so that the first LED lights up for one second and then turns off; the actions are repeated for all LEDs.

Task 2. Add one more LED to the electrical circuit and create a program that represents a decimal number in the binary numeral system, with each LED corresponding to one bit.

Here, the aim of this LO is to validate the developed ISTEM skills model. Each practical task covers only some aspects of the ISTEM-CS skills that we need to extract from the ISTEM skills model. For that, we need to transform the ISTEM skills model into a more convenient representation, say, the table form. The teacher should know what skills, and what level of achieving these skills, the proposed task can deliver to the learner. Of course, this requires an enhanced understanding of and competency in the pedagogical and content issues.

Fig. 2.9 Physical component (a) and software parts (b) and (c) of the LO [ŠBD+19]

The actual level of knowledge and skills achievable by a concrete student in a concrete situation is not a matter of this validation. Note that we speak about semantic validation here, because the syntactic validation and correctness of the model are the matter of the automated tool used (SPLOT in this case). In Table 2.7, we have presented the results of the semantic validation for this LO. It is a sparse matrix in which the achievable levels of skills are indicated. Here, this matrix was derived from the ISTEM skills model manually. In Sect. 2.12, we discuss the essence of the automated generation of this matrix. The empty elements mean that adequate skills are not deliverable by the given task (i.e., LO). What is the value of this matrix? It gives additional information to extend the metadata description for moving this LO to a digital library. For the teacher, it is valuable information, because she/he knows in advance what teaching result to expect. It is helpful for students too, for example, in planning learning paths and strategies in the case of personalised learning. Therefore, the non-empty elements (see Fig. 2.10) present the levels of skills (L1 lowest, L5 highest), along with the relevant applied metrics, that the given LO can potentially deliver when the learner works with it during the learning process. That is the solution of RQ6. The factual level of skills the student is able to gain is a matter of the student's knowledge assessment, not considered here.
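The LO itself wraps a generative control program for the microcontroller; as a hardware-free illustration of the logic behind Tasks 1 and 2 above, the following is a hypothetical Python simulation (the function names and encodings are ours, not the GLO's).

```python
def leds_for(number, n_leds):
    # Task 2 analogue: on/off state of each LED so the LEDs display
    # `number` in binary, with LED 0 holding the least significant bit.
    return [(number >> bit) & 1 for bit in range(n_leds)]

def blink_sequence(n_leds):
    # Task 1 analogue: each LED in turn is on for one time step, then off.
    return [[1 if i == step else 0 for i in range(n_leds)]
            for step in range(n_leds)]

print(leds_for(5, 4))        # [1, 0, 1, 0]  (5 = 0b0101)
print(blink_sequence(3))
```

On the physical circuit, each inner list would drive one update of the LED pins, with a one-second delay between updates for Task 1.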

Fig. 2.10 Assessment of ISTEM-CS skills for LO “A set of LEDs”. © With kind permission from IATED [BŠK+22]

2.12 ISTEM-CS Skills and Their Metrics Generating Tool

As is clear from the previous discussion, the skills categories and the metrics for their evaluation are entities with extremely high variability, and so is their relationship. Therefore, flexible and effective manual management of the task of building the 'skills categories'–'metrics types' relationships is impossible. The complexity of the task lies in the fact that each LO differs in its power and capability to deliver the skills. In addition, the ISTEM skills model is only an approximation (we have called it the core or initial model) in terms of the constraint 'Requires' used to define the binary relationship among skills and their metrics for a given LO. Therefore, the core model typically needs to change when it is used in a concrete context. All this motivates the need for automated tools for both more flexible changes of the core model and automatic generation of these relationships.


Fig. 2.11 Model of ISTEM-CS skills assessment generator. [The figure itself did not survive text extraction. It shows a meta-interface model (a module for the initial matrix, fed by the list of ISTEM-CS skills metrics and the list of the levels of metrics values, plus a module for improvement and extension of ISTEM skills, metrics and metrics values) and a meta-body model (preparation of the prognostic matrix for assessment of ISTEM-CS skills; preparation for improvement and extension of the matrix of ISTEM-CS skills), producing as outputs the prognostic matrix and the revised matrix for assessment of ISTEM-CS skills.]

In Fig. 2.11, we present the model of this generative tool, which uses the meta-programming approach. In this implementation, it uses two languages: a meta-language (C++) and a target language (a formalised natural language in matrix format); for more details on meta-programming, see, e.g., Chap. 6 in [Štu15]. The output of the generator (meta-program) is a prognostic matrix for the assessment of ISTEM skills. In addition, the presented model (Fig. 2.11) includes capabilities for possible improvement, due to the model's limited accuracy. In Fig. 2.12, we present the use of this tool by the teacher and the student, assuming that the given SLO comes without metadata regarding the skills. Branch a (see Fig. 2.12) stands for the teacher's actions in revising the prognostic model (if needed), and branch b means that the student has not achieved the needed level of skills and feedback follows.
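The book implements the generator in C++ meta-programming terms; as a language-agnostic illustration of the idea (generating a textual prognostic matrix from lists of skills, metrics and assigned levels), the following is a hypothetical Python sketch with invented example data.

```python
# Hypothetical inputs to the generator (excerpts, not the book's data).
skills = ["Abstraction", "Algorithm"]
metrics = ["Cognitive", "Creativity"]
prognostic = {                      # prognostic levels a designer assigns
    ("Abstraction", "Cognitive"): "L3",
    ("Algorithm", "Cognitive"): "L4",
    ("Algorithm", "Creativity"): "L2",
}

def generate_matrix(skills, metrics, levels):
    # Emit the prognostic assessment matrix as formatted text; empty cells
    # ("-") mean the skill/metric pair is not deliverable by the LO.
    header = "Skill".ljust(14) + "".join(m.ljust(12) for m in metrics)
    rows = [header]
    for s in skills:
        cells = "".join(levels.get((s, m), "-").ljust(12) for m in metrics)
        rows.append(s.ljust(14) + cells)
    return "\n".join(rows)

print(generate_matrix(skills, metrics, prognostic))
```

The sparse dictionary plays the role of the meta-interface parameters, and the emitted text corresponds to the prognostic matrix that the meta-body prepares.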

2.13 Summarising Discussion and Evaluation

Rapid technological change strongly challenges and influences industry, the educational sector and society as a whole. Among these challenges, developing the skills and competencies needed in the twenty-first century to keep pace with technological advancements is perhaps the biggest one. Competency is a compound of knowledge, skills and attitudes, all of which we obtain through learning and experience. Knowledge consists of learned facts, either theoretical or practical. Skills are a person's abilities to apply knowledge in practice. In this chapter, we have focused on skills-related problems as they appear in the context of integrated

Fig. 2.12 Usage of the ISTEM-CS skills assessment generator by the teacher and student. [The figure itself did not survive text extraction. It shows the teacher and the Smart Learning Object (SLO) feeding an assessment model of initial ISTEM-CS skills into the ISTEM-CS skills assessment matrix generator, which produces the prognostic ISTEM-CS skills assessment matrix; the SLO joined with this matrix goes to the student, whose work yields the factual ISTEM skills assessment matrix. The prognostic and factual matrices are compared: if the result of the comparison is suitable, the ISTEM-CS skills are gained; otherwise, branch a (teacher's revision) or branch b (feedback to the student) applies.]
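The comparison step in Fig. 2.12 can be sketched as a simple check that every factual skill level reaches the prognostic one; the level encoding and the rule itself are our assumptions for illustration.

```python
def level(v):
    # 'L1'..'L5' -> 1..5 for ordinal comparison.
    return int(v[1])

def skills_gained(prognostic, factual):
    # Suitable iff, for every prognosticated skill/metric cell, the factual
    # level is at least the prognostic one; otherwise feedback follows.
    return all(level(factual.get(cell, "L1")) >= level(lvl)
               for cell, lvl in prognostic.items())

prog = {("Abstraction", "Cognitive"): "L3"}
print(skills_gained(prog, {("Abstraction", "Cognitive"): "L4"}))   # True
print(skills_gained(prog, {("Abstraction", "Cognitive"): "L2"}))   # False
```

A missing factual cell is treated here as the lowest level L1, which triggers the feedback branch.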

STEM-driven Computer Science education. We have called the skills that this teaching paradigm can propose and bring to a learning setting Integrated STEM-CS (ISTEM-CS) skills. The essential properties of the ISTEM-CS skills are their structure and the interdependencies among their constituents. The structure reflects the specific knowledge introduced by each separate STEM component (the S-, T-, E-, and M-components; here we take CS to be a part of the S-component). This specificity appears in the categories and sub-categories of skills arising from solving real-world tasks using robotics. We have identified the top-level skill categories as learning and innovation skills, scientific and mathematical thinking, computational thinking, and technological and engineering thinking. Each category contains sub-categories with very rich sets of skill types. The level of these skills in each category and sub-category is not constant: it evolves and changes along with the process of analysis and the teacher's or learner's efforts to understand and learn the essence of the analysed domain. To capture changes, we distinguish between two dimensions of skills levels, prognostic and factual. Experts or CS teachers define the prognostic skills for each Learning Object (LO) at the content design phase; this information is a part of the metadata description used for the targeted search of the needed LO. The factual level of skills is a result of the learner's knowledge assessment. To research these aspects, we have proposed a methodology based on the structural analysis and modelling of the domain of skills. We summarise the applied methodology as follows. (1) The basic assumption behind implementing this methodology is the retrieval and selection of reliable information sources to define the skills categories (and sub-categories) and relevant metrics for skills measurement. We have used information extracted from recent papers published in Web of Science journals for both topics, skills and their possible metrics. (2) These sources have enabled us to define the categories of skills (including STEM and CS) and the metrics to evaluate these skills. (3) We have provided a systematic analysis of


these two domains and constructed the ISTEM skills executable model using the feature-based notation. This type of modelling captures the structural aspects of both skills and metrics well. It allows us to systemise the representation of skills and metrics uniformly, by categorising the items in each category and sub-category and anticipating possible constraints among these items. If one added definitions of skills and metrics taken from reliable sources, the structural representation of the ISTEM skills model would, in fact, be a formalised taxonomy of this domain. However, using this notation, we can express the constraint relationships only approximately. The notation relies on propositional logic. Features or attributes in education, however, do not have the binary values of propositional logic (true, false); typically, these attributes are fuzzy-logic variables transformed onto a desirable scale (a three-, four-, or five-point scale, such as the ordinal Likert scale we have applied here). The drawback of the proposed model, therefore, is its limited accuracy. However, this is not only a limitation of the expressiveness of feature models, but is largely due to the complexity of the analysed domain. To mitigate this drawback, we have proposed an iterative approach of changing/adapting the model and then regenerating the description of the skills–metrics relationship using the developed tool. The basis of this is the concept of a 'skills development model', from which the prognostic and factual levels of skills follow. We argue that it is possible to achieve a more accurate description through a few iterations using the developed model tool. What, then, are the contribution and value of the proposed approach? An explicit model, even of limited accuracy, is better than an implicit, intuitive one. Firstly, we have derived the explicit model (Table 2.7) manually from the formal model (the feature-based one, see Sect. 2.8). Note that Table 2.7 contains the same information as the formal model, but the former is more convenient to read and use in practice. Even the empty Table 2.7 (not filled in with levels of skills values) is a valuable template for teachers in practical use. The developed tool generates the same information automatically and in a more concise form; therefore, automated generation adds additional value. In our view, the discussed approach is valuable and applicable in the following contexts. For course designers, it enables the extension of metadata descriptions by including skills–metrics attributes in the content (LO) description for a more accurate search. For teachers, it enforces personalised assessment and enables (partially) semi-automated assessment of the skills gained by the student. For students, the approach enables a more effective learning process through rapid qualitative results that help improve the learning path. We also note that the developed models are conceptually domain-independent and can be transformed and adapted to other subject contexts. The research value in our context is as follows. The use of the developed generator enables (1) extending the LO metadata description for Personal Digital Libraries (see Chap. 3), (2) supplementing the LO with an assessment module, and (3) assessing the gained skills after a student solves open-ended and control tasks.


2.14 Conclusion

Integrated skills for the twenty-first century are of great importance for education, industry, and society as a whole, due to continuous technological advancements. STEM-driven Computer Science education has great potential to deliver integrated skills. The domain of integrated STEM skills is very complex, with a large variability space in terms of the structural and functional relationships among skills and metrics. Analysis through modelling is perhaps the best way to understand this domain. The introduced model defining Integrated STEM-CS (ISTEM-CS) skills enables us, to some extent, to systemise and obtain a unified picture of the skills–metrics relationships in STEM-driven CS education. The ISTEM-CS skills model is complex, and we were not able to reveal all cognitive aspects of this complexity, such as design thinking and data-driven thinking. We return to these in other chapters, where design concepts (e.g., the Internet of Things) and data concepts are the focus. This model, despite its indicated inaccuracy, is suitable for manual or automated modes of use. It is applicable and valuable for content designers, teachers, and students. Content designers can extend the metadata description by including prognostic skills values for a more targeted LO search. Teachers benefit from the skills evaluation template automatically derived using the developed generating tool. Using the discussed approach, students know in advance what skills development capabilities a given LO contains and can flexibly adapt their learning paths.

Appendix

See Tables 2.5, 2.6 and 2.7.

Critical thinking and problem solving

3

3

3

3

2

1

0

0

0

0

0

0

0

3

0

0

0

0

3

Source

AS21

BMV+20

BPP21

CGP+20

CRF+21

Eng16

EOP15

ERC21

FIC17

GBB+17

GHB+20

HPH+21

I-FH21

YHT+19

JDD21

KK16

LGM+20

LM21

LS19

0

0

0

0

0

3

3

3

0

0

0

0

Table 2.5 Relationships among STEM skills: 0—no, 1—weak, 2—medium, 3—strong

[Chapter 2 Appendix. Table 2.5 is a matrix of pairwise relationship ratings (0–3) among the skills Critical thinking and problem solving, Creativity and innovation, Communication, Collaboration, Forming and refining hypotheses, Investigation skills, Evaluating evidence, Abstraction, Decomposition, Generalization, Data representation, Algorithm, Technological attitude, Technological knowledge, Technological capacity, Design thinking, System thinking and Interaction thinking, compiled per source (AS21, BMV+20, BPP21, CGP+20, CRF+21, Eng16, EOP15, ERC21, FIC17, GBB+17, GHB+20, HPH+21, I-FH21, YHT+19, JDD21, KK16, LGM+20, LM21, LS19, MC19, MCK17, MSB+19, RG19, SC21, SJ19, SP-B19, ŠŽA20, WHS+17, WLC+20, WSK+21). The individual cell values are not reliably recoverable from the extracted text.]
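The 0–3 relationship codes of Table 2.5 lend themselves to programmatic analysis as a labelled matrix. The sketch below (with illustrative, made-up cell values over a small subset of skills, not the table's actual data) shows how the strongly related skill pairs could be extracted:

```python
# Sketch: querying a skills-relationship matrix coded 0 (no) .. 3 (strong).
# The cell values below are illustrative placeholders, NOT Table 2.5 data.
RELATION = {  # RELATION[a][b] = rated strength of relation between skills a and b
    "Abstraction":     {"Decomposition": 3, "Algorithm": 2, "Design thinking": 0},
    "Decomposition":   {"Abstraction": 3, "Algorithm": 3, "Design thinking": 1},
    "Algorithm":       {"Abstraction": 2, "Decomposition": 3, "Design thinking": 2},
    "Design thinking": {"Abstraction": 0, "Decomposition": 1, "Algorithm": 2},
}

def strong_relations(matrix, threshold=3):
    """Return the unordered skill pairs rated at or above the threshold."""
    pairs = set()
    for a, row in matrix.items():
        for b, strength in row.items():
            if strength >= threshold:
                pairs.add(frozenset((a, b)))
    return pairs

print(sorted(tuple(sorted(p)) for p in strong_relations(RELATION)))
```

Representing each pair with a `frozenset` collapses the symmetric entries (a, b) and (b, a) into one record, which matches the undirected reading of the table.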

Table 2.6 Spearman correlation coefficients to define STEM skills relationships

[Table 2.6 is a matrix of Spearman correlation coefficients (ranging from about 0.600 to 0.971) among the skill codes 1.1–1.4, 2.1–2.3, 3.1–3.5, 4.1–4.3 and 5.1–5.3. The cell alignment is not reliably recoverable from the extracted text.]
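Coefficients of the kind reported in Table 2.6 come from the standard Spearman computation: Pearson correlation applied to rank vectors, with ties assigned average ranks. A minimal self-contained sketch (the two rating vectors are illustrative placeholders, not the study's data):

```python
# Sketch: Spearman rank correlation, as used for Table 2.6-style analyses.

def ranks(values):
    """Average ranks (1-based); tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend the tie group
        avg = (i + j) / 2 + 1           # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical 0-3 expert ratings of two skills across seven sources:
skill_a = [3, 2, 3, 1, 0, 2, 3]
skill_b = [3, 1, 3, 1, 0, 2, 2]
print(round(spearman(skill_a, skill_b), 3))
```

In practice a library routine such as `scipy.stats.spearmanr` would be used; the hand-rolled version above just makes the rank-then-correlate logic explicit.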

74 2 Models for the Development and Assessment of Integrated STEM …

Table 2.7 Characteristics of ISTEM-CS skills model obtained using SPLOT tools (http://www.splot-research.org/)

Characteristics          Value
#Features                79
#Mandatory               15
#Optional                0
#XOR groups              0
#OR groups               16
#Grouped                 63
Cross-tree constraints   31
Consistency              Consistent
Dead features            None
Core features            12
Valid configurations     More than 1 billion
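Characteristics such as the valid-configuration count in Table 2.7 are computed by SPLOT with specialised solvers; for a toy feature model they can be checked by brute-force enumeration. The sketch below uses a hypothetical six-feature model of our own (mandatory children, one OR group, one cross-tree "requires" constraint), not the actual 79-feature ISTEM-CS model:

```python
from itertools import product

# Sketch: brute-force counting of valid configurations for a *toy*
# FODA-style feature model. Feature names and rules are illustrative
# assumptions, not the ISTEM-CS model itself.
FEATURES = ["root", "skills", "metrics", "cognitive", "social", "rubric"]

def valid(cfg):
    """cfg maps feature -> bool; encodes the toy model's constraints."""
    ok = cfg["root"]                             # root is always selected
    ok &= cfg["skills"] and cfg["metrics"]       # mandatory children of root
    ok &= cfg["cognitive"] or cfg["social"]      # OR group under 'skills'
    ok &= (not cfg["rubric"]) or cfg["metrics"]  # cross-tree: rubric requires metrics
    return ok

count = sum(
    valid(dict(zip(FEATURES, bits)))
    for bits in product([True, False], repeat=len(FEATURES))
)
print(count)  # prints 6
```

Enumeration is exponential in the number of features, which is why counts like "more than 1 billion" for 79 features are obtained with SAT/BDD-based analysis rather than this loop.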

References

[AAN19] Ariyani F, Achmad A, Nurulsari N (2019, February) Designing an Inquiry-based STEM Learning strategy as a Powerful Alternative Solution to Enhance Students' 21st-century Skills: A Preliminary Research. In Journal of Physics: Conference Series (Vol. 1155, No. 1, p. 012087). IOP Publishing
[AD16] Atmatzidou S, Demetriadis S (2016) Advancing students' computational thinking skills through educational robotics: A study on age and gender relevant differences. Robotics and Autonomous Systems, vol. 75, pp. 661–670
[AJ18] Avsec S, Jamšek J (2018) A path model of factors affecting secondary school students' technological literacy. International Journal of Technology and Design Education, 28(1), 145–168
[AK01] Anderson LW, Krathwohl DR (2001) A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. Longman
[And20] Anderson BM (2020) The Most In-Demand Hard and Soft Skills of 2020. www.linkedin.com/business/talent/blog/talent-strategy/linkedin-most-in-demand-hard-and-soft-skills
[BO18] Belecina RR, Ocampo JM (2018) Effecting change on students' critical thinking in problem solving. Educare, 10(2)
[BŠK+22] Burbaitė R, Štuikys V, Kubiliūnas R, Ziberkas G (2022) A Vision to Develop Integrated Skills for STEM-driven Computer Science Education, INTED2022 Proceedings, 5082–5091
[CNC12] Castro J, Nazar JM, Campos F (2012) EasyT: Apoiando a Construção de Objetos de Aprendizagem para uma Linha de Produtos de Software. Conferencias LACLO, 3(1)
[Cum16] Cummins K (2016) Teaching Digital Technologies & STEM: Computational Thinking, coding and robotics in the classroom
[DDA12] Díez D, Díaz P, Aedo I (2012) The ComBLA Method: The Application of Domain Analysis to the Development of e-Learning Systems. Journal of Research & Practice in Information Technology, 44(3)
[Doy21] Doyle A (2021) Important Technical Skills with Examples. www.thebalancecareers.com/technical-skills-list-2063775
[EAK20] English LD, Adams R, King D (2020) Design Learning in STEM Education. In Handbook of Research on STEM Education (C. Johnson, Margaret J. Mohr-Schroeder, Tamara J. Moore, Lyn D. English, eds.) Routledge


[FRA19] Framework for 21st Century Learning (2019). Partnership for 21st Century Learning. A Network for Battelle for Kids. http://static.battelleforkids.org/documents/p21/P21_Framework_Brief.pdf
[GAK15] Gabrieli C, Ansel D, Krachman SB (2015) Ready to Be Counted: The Research Case for Education Policy Action on Non-Cognitive Skills. A Working Paper. Transforming Education
[GPE20] 21st-Century Skills: What potential role for the Global Partnership for Education? A Landscape Review. www.globalpartnership.org/sites/default/files/document/file/2020-01-GPE-21-century-skills-report.pdf
[Gre17] Greene M (2017) Design Thinking vs. Systems Thinking for Engineering Design: What's the difference? In DS 87–2 Proceedings of the 21st International Conference on Engineering Design (ICED 17), vol. 2, pp. 21–25
[GXH19] Gu J, Xu M, Hong J (2019) Development and validation of a technological literacy survey. International Journal of Science and Mathematics Education, 17, 109–124
[HAC21] Hacıoğlu Y (2021) The effect of STEM education on 21st century skills: Preservice science teachers' evaluations. Journal of STEAM Education, 4(2), 140–167
[Has20] Hasanah U (2020) Key Definitions of STEM Education: Literature Review, Interdisciplinary Journal of Environmental and Science Education, 16(3), e2217
[HBK+19] Howard P, O'Brien C, Kay B, O'Rourke K (2019) Leading Educational Change in the 21st Century: Creating Living Schools through Shared Vision and Transformative Governance. Sustainability 2019, 11, 4109. https://doi.org/10.3390/su11154109
[HCP18] Hobbs L, Clark JC, Plant B (2018) Successful students–STEM program: Teacher learning through a multifaceted vision for STEM education, in STEM education in the junior secondary, Springer, Singapore, pp. 133–138
[HPS14] Honey M, Pearson G, Schweingruber H (eds.) (2014) STEM Integration in K–12 Education: Status, Prospects, and an Agenda for Research. National Academy of Sciences, The National Academies Press, Washington, D.C., 201
[HQA+17] Herro D, Quigley C, Andrews J, Delacruz G (2017) Co-Measure: developing an assessment for student collaboration in STEAM activities. International Journal of STEM Education, 4(1), 1–12
[Jag19] Jagta S (2019) Design creativity: Refined method for novelty assessment. International Journal of Design Creativity and Innovation, 7(1–2), 99–115
[Jan16] Jang H (2016) Identifying 21st Century STEM Competencies Using Workplace Data, Journal of Science Education and Technology, 25, pp. 284–301
[KBS+19] Kershaw TC, Bhowmick S, Seepersad CC, Hölttä-Otto K (2019) A decision tree based methodology for evaluating creativity in engineering design. Frontiers in Psychology, 10, 32
[KCH+90] Kang KC, Cohen SG, Hess JA, Novak WE, Peterson AS (1990) Feature-oriented domain analysis (FODA) feasibility study (No. CMU/SEI-90-TR-21). Carnegie Mellon University, Pittsburgh, PA, Software Engineering Institute
[KD17] Krajcik J, Delen İ (2017) Engaging learners in STEM education. Eesti Haridusteaduste Ajakiri. Estonian Journal of Education, 5(1), 35–58
[KG18] Kubat U, Guray E (2018) To STEM, or not to STEM? That is not the question, Cypriot Journal of Educational Sciences, 13(3), 388–399
[KK16] Kelley TR, Knowles JG (2016) A conceptual framework for integrated STEM education. International Journal of STEM Education, 3(1), 1–11. https://doi.org/10.1186/s40594-016-0046-z
[KM09] Koehler M, Mishra P (2009) What is Technological Pedagogical Content Knowledge (TPACK)? Contemporary Issues in Technology and Teacher Education, 9(1):60–70
[Kra15] Krajcik J (2015) Three-Dimensional Instruction: Using a New Type of Teaching in the Science Classroom, Science Scope, 39(3), pp. 16–18
[KZ19] Kim SH, Zimmerman HT (2019, June) Understanding the practices and the products of creativity: Making and tinkering family program at informal learning environments. In Proceedings of the 18th ACM International Conference on Interaction Design and Children (pp. 246–252)


[MJG20] Moore TJ, Johnston AC, Glancy AW (2020) Integrated STEM: A Synthesis of Conceptual Frameworks and Definitions. In Handbook of Research on STEM Education (C. Johnson, Margaret J. Mohr-Schroeder, Tamara J. Moore, Lyn D. English, eds.) Routledge
[MSW+14] Moore TJ, Stohlmann MS, Wang HH, Tank KM, Roehrig GH (2014) Implementation and integration of engineering in K–12 STEM education. In J. Strobel, S. Purzer, & M. Cardella (Eds.), Engineering in pre-college settings: Research into practice. Rotterdam: Sense Publishers
[NN15] Nigmatov ZG, Nugumanova IN (2015) Methods for developing technological thinking skills in the pupils of profession-oriented schools. Asian Social Science, 11(8), 207
[NS17] Nadelson LS, Seifert AL (2017) Integrated STEM defined: Contexts, challenges, and the future, The Journal of Educational Research, 110(3), pp. 221–223
[PEF+20] Priemer B, Eilerts K, Filler A, Pinkwart N, Rösken-Winter B, Tiemann R, Zu Belzen AU (2020) A framework to foster problem-solving in STEM and computing education, Research in Science & Technological Education, 38(1), pp. 105–130
[Pir21] Pirhonen A (2021) Promoting Technological Thinking: The Objective and the Means. Techne serien-Forskning i slöjdpedagogik och slöjdvetenskap, 28(2), 48–53
[PK-B19] Perignat E, Katz-Buonincontro J (2019) STEAM in practice and research: An integrative literature review. Thinking Skills and Creativity, 31, 31–43
[RDR+21] Roehrig GH, Dare EA, Ring-Whalen E, Wieselmann JR (2021) Understanding coherence and integration in integrated STEM curriculum. International Journal of STEM Education, 8:2
[SAB18] Sarinho VT, de Azevedo GS, Boaventura FM (2018, October) Askme: A feature-based approach to develop multiplatform quiz games. In 2018 17th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames) (pp. 38–3809). IEEE
[ŠB18] Štuikys V, Burbaitė R (2018) Smart STEM-Driven Computer Science Education: Theory, Methodology and Robot-based Practices, Springer
[ŠBD+19] Štuikys V, Burbaitė R, Drąsutė V, Ziberkas G, Drąsutis S (2019) A framework for introducing personalisation into STEM-driven computer science education. The International Journal of Engineering Education, 35(4), 1176–1193
[ŠD12] Štuikys V, Damaševičius R (2012) Meta-programming and model-driven metaprogram development: principles, processes and techniques (Vol. 5). Springer Science & Business Media
[SF20] Schools of the Future: Defining New Models of Education for the Fourth Industrial Revolution, 2020. http://www3.weforum.org/docs/WEF_Schools_of_the_Future_Report_2019.pdf
[SGT21] World Economic Forum (2021, January) Building a Common Language for Skills at Work: A Global Taxonomy. http://www3.weforum.org/docs/WEF_Skills_Taxonomy_2021.pdf
[SK16] Siekmann G, Korbel P (2016) Defining 'STEM' skills: review and synthesis of the literature support document 2, NCVER, Adelaide. Published by NCVER, ABN 87 007 967 311. https://files.eric.ed.gov/fulltext/ED570655.pdf
[SR21] Stretch E, Roehrig G (2021) Framing failure: Leveraging uncertainty to launch creativity in STEM education. International Journal of Learning and Teaching, 7(2), 123–133
[SSW18] Sandall BK, Sandall DL, Walton ALJ (2018) Educators' Perceptions of Integrated STEM: A Phenomenological Study. Journal of STEM Teacher Education, Vol. 53, No. 1, 27–42
[Štu15] Štuikys V (2015) Smart learning objects for smart education in computer science. Springer, 10, 978–3
[TCL+18] Thibaut L, Ceuppens S, De Loof H, De Meester J, Goovaerts L, Struyf A, Boeve-de Pauw J, Dehaene W, Deprez J, De Cock M, Hellinckx L, Knipprath H, Langie G, Struyven K, Van de Velde D, Van Petegem P, Depaepe F (2018) Integrated STEM education: A systematic review of instructional practices in secondary education, European Journal of STEM Education, 3(1), p. 2


[TKH09] Tsupros N, Kohler R, Hallinen J (2009) STEM education: A project to identify the missing components. Intermediate Unit 1: Center for STEM Education and Leonard Gelfand Center for Service Learning and Outreach
[Win06] Wing JM (2006) Computational thinking, Communications of the ACM, 49(3), pp. 33–35
[YMV+17] Yasar O, Maliekal J, Veronesi P, Little LJ (2017, June) The essence of scientific and engineering thinking and tools to promote it. In 2017 ASEE Annual Conference & Exposition
[Zah20] Zahidi S (2020) We need a global reskilling revolution – here's why. https://www.weforum.org/agenda/2020/01/reskilling-revolution-jobs-future-skills/
[ZK18] Zimmerman C, Klahr D (2018) Development of scientific thinking. Stevens' Handbook of Experimental Psychology and Cognitive Neuroscience, 4, 1–25

References for Skills Interdependences

[AS21] Apiola M & Sutinen E (2021) Design science research for learning software engineering and computational thinking: Four cases. Computer Applications in Engineering Education, 29(1), 83–101
[BMV+20] Van den Beemt A, MacLeod M, Van der Veen J, Van de Ven A, van Baalen S, Klaassen R & Boon M (2020) Interdisciplinary engineering education: A review of vision, teaching, and support. Journal of Engineering Education, 109(3), 508–555
[BPP21] Bosman L, Paterson K & Phillips M (2021) Integrating Online Discussions into Engineering Curriculum to Endorse Interdisciplinary Viewpoints, Promote Authentic Learning, and Improve Information Literacy. International Journal of Engineering Education
[CGP+20] Chevalier M, Giang C, Piatti A & Mondada F (2020) Fostering computational thinking through educational robotics: A model for creative computational problem solving. International Journal of STEM Education, 7(1), 1–18
[CRF+21] Conde MÁ, Rodríguez-Sedano FJ, Fernández-Llamas C, Gonçalves J, Lima J & García-Peñalvo FJ (2021) Fostering STEAM through challenge-based learning, robotics, and physical devices: A systematic mapping literature review. Computer Applications in Engineering Education, 29(1), 46–65
[Eng16] English LD (2016) STEM education K–12: Perspectives on integration. International Journal of STEM Education, 3(1), 1–8
[EOP15] Erduran S, Ozdem Y & Park JY (2015) Research trends on argumentation in science education: A journal content analysis from 1998–2014. International Journal of STEM Education, 2(1), 1–12
[ERC21] Ehsan H, Rehmat AP & Cardella ME (2020) Computational thinking embedded in engineering design: capturing computational thinking of children in an informal engineering design activity. International Journal of Technology and Design Education, 1–24
[FIC17] Fronza I, Ioini NE & Corral L (2017) Teaching computational thinking using agile software engineering methods: A framework for middle schools. ACM Transactions on Computing Education (TOCE), 17(4), 1–28
[GBB+17] Grover S, Basu S, Bienkowski M, Eagle M, Diana N & Stamper J (2017) A framework for using hypothesis-driven approaches to support data-driven learning analytics in measuring computational thinking in block-based programming environments. ACM Transactions on Computing Education (TOCE), 17(3), 1–25
[GHB+20] Garry F, Hatzigianni M, Bower M, Forbes A & Stevenson M (2020) Understanding K–12 STEM education: a framework for developing STEM literacy. Journal of Science Education and Technology, 29(3), 369–385


[HPH+21] Han J, Park D, Hua M & Childs P (2021) Is group work beneficial for producing creative designs in STEM design education? International Journal of Technology and Design Education, 1–26
[I-FH21] Israel-Fishelson R & Hershkovitz A (2021) Studying interrelations of computational thinking and creativity: A scoping review (2011–2020). Computers & Education, 104353
[YHT+19] Yin Y, Hadad R, Tang X & Lin Q (2019) Improving and assessing computational thinking in maker activities: The integration with physics and engineering learning. Journal of Science Education and Technology, 1–26
[JDD21] Juškevičienė A, Dagienė V & Dolgopolovas V (2021) Integrated activities in STEM environment: Methodology and implementation practice. Computer Applications in Engineering Education, 29(1), 209–228
[KK16] Kelley TR & Knowles JG (2016) A conceptual framework for integrated STEM education. International Journal of STEM Education, 3(1), 1–11
[LGM+20] Lee I, Grover S, Martin F, Pillai S & Malyn-Smith J (2020) Computational thinking from a disciplinary perspective: Integrating computational thinking in K–12 science, technology, engineering, and mathematics education. Journal of Science Education and Technology, 29(1), 1–8
[LM21] Lyon JA & Magana AJ (2021) The use of engineering model-building activities to elicit computational thinking: A design-based research study. Journal of Engineering Education, 110(1), 184–206
[LS19] Li Y & Schoenfeld AH (2019) Problematizing teaching and learning mathematics as "given" in STEM education. International Journal of STEM Education, 6(44), 1–13
[MC19] Merkouris A & Chorianopoulos K (2019) Programming embodied interactions with a remotely controlled educational robot. ACM Transactions on Computing Education (TOCE), 19(4), 1–19
[MCK17] Merkouris A, Chorianopoulos K & Kameas A (2017) Teaching programming in secondary education through embodied computing platforms: Robotics and wearables. ACM Transactions on Computing Education (TOCE), 17(2), 1–22
[MSB+19] Marandi RJ, Smith BK, Burch RF & Vick SC (2019) Engineering soft skills vs. engineering entrepreneurial skills. The International Journal of Engineering Education, 35(4), 988–998
[RG19] Richard GT & Giri S (2019) Digital and physical fabrication as multimodal learning: Understanding youth computational thinking when making integrated systems through bidirectionally responsive design. ACM Transactions on Computing Education (TOCE), 19(3), 1–35
[SC21] Simarro C & Couso D (2021) Engineering practices as a framework for STEM education: a proposal based on epistemic nuances. International Journal of STEM Education, 8(1), 1–12
[SJ19] Serrano Pérez E & Juárez López F (2019) An ultra-low cost line follower robot as educational tool for teaching programming and circuit's foundations. Computer Applications in Engineering Education, 27(2), 288–302
[SP-B19] Stehle SM & Peters-Burton EE (2019) Developing student 21st Century skills in selected exemplary inclusive STEM high schools. International Journal of STEM Education, 6(1), 1–15
[ŠŽA20] Šuligoj V, Žavbi R & Avsec S (2020) Interdisciplinary critical and design thinking. Int. J. Eng. Educ, 36, 84–95
[WHS+17] Witherspoon EB, Higashi RM, Schunn CD, Baehr EC & Shoop R (2017) Developing computational thinking through a virtual robotics programming curriculum. ACM Transactions on Computing Education (TOCE), 18(1), 1–20
[WLC+20] Wen CT, Liu CC, Chang HY, Chang CJ, Chang MH, Chiang SHF, ... & Hwang FK (2020) Students' guided inquiry with simulation and its relation to school science achievement and scientific literacy. Computers & Education, 149, 103830


[WSK+21] Wu, X. B., Sandoval, C., Knight, S., Jaime, X., Macik, M., & Schielack, J. F. (2021). Web-based authentic inquiry experiences in large introductory classes consistently associated with significant learning gains for all students. International Journal of STEM Education, 8(1), 1–18

Chapter 3

Enforcing STEM-Driven CS Education Through Personalisation

3.1 Introduction

In this chapter, we discuss personalised learning in STEM/CS education with a focus on content personalisation, the processes for presenting this content, and the learner's knowledge assessment and implicit self-assessment. Personalised learning (PL) is one of the current trends of Education 4.0 (see Table 1.1 in Chap. 1). PL places the learner's needs at the centre of education. The report 'Schools of the Future', prepared by education experts, characterises personalised and self-paced learning as one of the most important attributes of education in the twenty-first century [SF20]. This is because PL seeks higher motivation and engagement: it takes into account the learner's differences and preferences throughout the continuous education cycle. This vision is widely believed to lead to faster and deeper learning. A majority of writers on PL accept the formal definition provided by the U.S. Department of Education in the 2017 National Education Technology Plan Update [NETP18]. This document defines PL as "instruction in which the pace of learning and the instructional approach are optimised for the needs of each learner. Learning objectives, instructional approaches, and instructional content (and its sequencing) may all vary based on learner needs. In addition, learning activities are meaningful and relevant to learners, driven by their interests, and often self-initiated". As our analysis of the literature shows, personalised learning is currently widely discussed by educational strategists, individual researchers, practitioners, and organisations. This interest arises from the numerous efforts to improve and advance the quality of education at all levels worldwide, from primary school to university and even lifelong learning. Indeed, PL is a long-standing vision in education [Doc18]. Guo et al.'s theoretical framework [GLG21] views PL as a process within the smart educational environment.
What is our vision for PL in this book? We treat PL as an important aspect (among multiple others) characterising Smart Education (i.e., one from the list of Big Concepts) within the evolutionary

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 V. Štuikys and R. Burbait˙e, Evolution of STEM-Driven Computer Science Education, https://doi.org/10.1007/978-3-031-48235-9_3


context of STEM-driven CS education. It is difficult to define Smart Education precisely because it encompasses multiple attributes (smart teacher, smart student, smart pedagogy, smart technology, smart educational environment, smart content, etc.). Zhu et al.'s research framework [ZYR16] defines Smart Education as an interaction among three components, i.e., smart pedagogy, smart learning environment, and smart learner. Zhu and He [ZH12] state in this regard: "The essence of smart education is to create intelligent environments by using smart technologies, so that smart pedagogies can be facilitated as to provide personalised learning services and empower learners, and thus talents of wisdom who have better value orientation, higher thinking quality, and stronger conduct ability could be fostered" (p. 6). According to Spector and others [SS20], a digital learning environment, to be classified as a personalised learning environment, has to be adaptive to individual knowledge, experience and interests, and to be effective and efficient in supporting and promoting desired learning outcomes. The recent review of literature on smart learning [LW21] indicates that more than 14 features characterise smart education/learning. According to this review, in 2010–2014 the most frequently reported features were context-awareness, personalised content and facilitation of interaction (33% each), while in 2015–2019 personalised content accounted for 19%. The figures on personalised learning supports were 0% and 6% in those periods, respectively. On the other hand, the recent trends in STEM research focus on Integrated STEM Education (see Sect. 1.4). There is no consensus on the definition of this term. Bryan et al.
[BMJ+15], for example, define integrated STEM as "the teaching and learning of the content and practices of the interdisciplinary knowledge which include science and/or mathematics through the integration of the practices of engineering and engineering design of relevant technologies". Cheng and So [CS20] consider a typology of integration in STEM learning as a combination of three items: (i) content integration, (ii) pedagogical integration, and (iii) learner integration. Roehrig et al. [RDE+21] propose an integrated STEM framework that includes seven key characteristics: (a) focus on real-world problems, (b) centrality of engineering, (c) context integration, (d) content integration, (e) STEM practices, (f) twenty-first-century skills, and (g) informing students about STEM careers. Despite the broad stream of research in each field (PL and STEM), we still know little about how the two approaches could be combined seamlessly into a coherent methodology that gains the benefits of each. Therefore, in this chapter, we propose an approach explaining the way to introduce PL into STEM-driven CS education, thus extending this paradigm with new aspects of content integration. This approach outlines basic activities relevant to PL in STEM and focuses on three essential attributes of PL, i.e., content personalisation, the processes of content delivery, and the learner's knowledge assessment and self-assessment. From the evolution perspective, the aim is to re-design the previously developed content for CS education using the STEM paradigm [Štu15, ŠB18], taking into account the requirements for the explicit personalisation of this content and the explicit assessment of the learner's knowledge and skills. The contribution of this chapter is a generic structure of personalised learning content identified as Learning Objects (LOs) in three categories: component-based


LO, generative LO and smart LO (the latter is a combination of the first two [ŠBB+17]). The generic structure integrates those entities with the assessment modules and specifies the distributed interface for connecting them with digital libraries. The other contribution is the learner's knowledge assessment tool implementing the model that integrates attributes defined by the revised Bloom taxonomy [AKA+01] and computational thinking skills [Cum16, AM16] with adequate questionnaires and exam tasks we have developed. The basis of our methodology is the recognition, extraction, explicit representation, and then implementation of STEM-driven learning variability in four dimensions, i.e., social, pedagogical, technological, and content. All these enforce the integrative aspects of STEM-driven CS education and contribute to the evolution of the pedagogical aspects we discuss in Part 1 of this book. Next, in Sect. 3.2, we analyse the related work. In Sect. 3.3, we formulate requirements for personalised STEM-driven CS education and two research questions. In Sect. 3.4, we present the basic idea of the approach and the methodology applied. In Sect. 3.5, we outline the theoretical background to motivate the research questions, proposed solutions and outcomes. In Sect. 3.6, we present the proposed framework for introducing PL into STEM-driven CS education, with a focus on the generic structure for representing personalised content and assessment facilities. In Sect. 3.7, we present and analyse two case studies and some outcomes from practice. In Sect. 3.8, we provide a summarising discussion and evaluation of this approach, indicating some drawbacks and providing concluding remarks.
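The three LO categories and their attached assessment modules can be pictured as a simple data model. The sketch below is our own illustration; all class and field names are hypothetical, not the book's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Sketch of the three LO categories with an attached assessment module.
# All names here are illustrative assumptions, not the authors' code.

@dataclass
class AssessmentItem:
    question: str
    bloom_level: str   # e.g. "remember", "apply", "analyse" (revised Bloom taxonomy)
    ct_skill: str      # e.g. "abstraction", "decomposition" (computational thinking)

@dataclass
class ComponentLO:
    """A fixed, ready-to-use piece of learning content."""
    name: str
    content: str
    assessment: List[AssessmentItem] = field(default_factory=list)

@dataclass
class GenerativeLO:
    """Content produced on demand from the learner's profile."""
    name: str
    generator: Callable[[dict], str]  # learner profile -> concrete content
    assessment: List[AssessmentItem] = field(default_factory=list)

    def instantiate(self, profile: dict) -> ComponentLO:
        return ComponentLO(self.name, self.generator(profile), self.assessment)

@dataclass
class SmartLO:
    """Combination of component-based and generative LOs."""
    components: List[ComponentLO]
    generatives: List[GenerativeLO]

    def materialise(self, profile: dict) -> List[ComponentLO]:
        return self.components + [g.instantiate(profile) for g in self.generatives]

# Usage: a generative LO specialised by the learner's level.
demo = GenerativeLO(
    "loops-intro",
    lambda p: f"Loop exercises at {p.get('level', 'basic')} level",
)
print(demo.instantiate({"level": "advanced"}).content)
```

The point of the sketch is the composition: a smart LO only becomes concrete content once a learner profile is supplied, which is where the personalisation enters.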

3.2 Related Work

This section covers the literature on two topics, i.e., (A) the characterisation of personalised learning (PL) and (B) STEM/CS content and environments for PL. A. Despite some hype in this field, the term 'personalised learning' is not yet well understood, and multiple definitions have been proposed so far. Apart from the definition given in Sect. 3.1, we present another: "Personalised learning seeks to accelerate learning by tailoring the instructional environment—what, when, how and where students learn—to address the individual needs, skills and interests of each student. Students can take ownership of their own learning, while also developing deep personal connections with each other, their parents and other adults." (The Bill & Melinda Gates Foundation statement taken from [PSB+17]). The White paper [PL_C14] indicates four defining structural elements of the PL system, i.e., competency-based learning, multiple paths of study, the use of variable time, and the inclusion of meaningful assessment and accountability. In addition, this paper concludes that PL is a conceptual subset of student-centred learning. The report [DP_PL13] highlights only three components that should form the core of PL structures: (i) Learner Profiles (they convey how a student learns best using a customised learning environment); (ii) Customised Learning Paths based on individual interests, strengths and learning


styles; (iii) Proficiency-Based Progress, whereby advancement is tied to performance, not seat time or credit. At a 2010 symposium on PL, titled "Innovate to Educate: [Re]Design for Personalised Learning", experts identified five essential elements central to PL: (i) Flexible, Anytime/Everywhere Learning; (ii) Redefine Teacher Role and Expand "Teacher"; (iii) Project-Based and Authentic Learning Opportunities; (iv) Student-Driven Learning Path; (v) Mastery- or Competency-based Progression/Pace. The Centre for Curriculum Redesign [Gof17] presents an extensive overview and focuses on a holistic approach, offering a complete framework across the four dimensions of education, i.e., knowledge, skills, character, and meta-learning. According to this report, knowledge must reveal a better balance between traditional and modern subjects, including interdisciplinary subjects. Skills should relate to the use of knowledge and engage in a feedback loop with knowledge. Character qualities describe how one engages with and behaves in the world. Meta-learning fosters the process of self-reflection and learning how to learn, as well as the building of the other three dimensions. This report also delivers an extended list of technology used in PL. The paper [BHC+16] indicates five research areas to focus on for advancing PL: (1) how educators and researchers use data; (2) how technology is designed to support learners and associated pedagogical practice; (3) how to educate personnel who are prepared to work in personalised settings; (4) how content is designed; and (5) how curriculum is designed to support PL. In addition, this paper argues that PL requires "a unique approach to the design, implementation, and assessment of learning". When implementing PL, "teachers become designers or engineers of learning". The latter can design environments that "meet the parameters of success for all learners, and when these environments fail, must work to identify, solve, and test solutions through an iterative design process". A group of experts [WD14] proposes a "working definition" containing four attributes of PL: (1) Competency-based progression, meaning that each student advances and gains credit as soon as he/she demonstrates mastery through continually provided assessment. (2) Flexible learning environments, meaning that the student's needs drive the learning environment, i.e., all operational elements respond and adapt to support students in achieving their goals. (3) Personal learning paths, meaning that each student follows a customised path that responds and adapts based on his/her learning progress, motivations, and goals. (4) Learner profiles, meaning that each student has an up-to-date record of his/her strengths, needs, motivations, and goals. The paper [MS02] addresses the problem of dynamically selecting a knowledge route through a set of learning resources that best suits the individual learner's needs and profile. This paper engages parameters of the learner's cognitive style to create a multi-criteria utility model that evaluates available didactic methods, using initial evaluations upon a set of basic cognitive categories. Järvelä [Jär06] examines seven critical dimensions of PL. They include (i) the development of key skills, which are often domain-specific; (ii) levelling the educational playing field through guidance for the improvement of students' learning skills and motivation; (iii) encouragement of learning through "motivational scaffolding". The others include (iv) collaboration in knowledge-building; (v) development of new models of
assessment; (vi) use of technology as a personal cognitive and social tool; (vii) the new role of teachers in better integrating education within the learning society. Peirce et al. [PCW08] first observe that adapting a game to enhance its educational benefit endangers intrinsic motivation and flow. With this in mind, the paper proposes a novel approach for non-invasively adapting a game to enable a personalised learning experience. Verpoorten et al. [VGK+09] present a different view on personalisation than what typically occurs in this field. The paper states that “personalisation occurs when learning turns out to become personal in the learner’s mind”, meaning that there is a need for a special focus on confronting learners with tracked information. Dockterman [Doc18] presents insights to guide contemporary efforts in PL. He gives a brief overview of historical efforts to create a scaled system of education for all, while acknowledging individual learner variability. He proposes a concept of personalisation-based pedagogy. The latter should focus on variability across multiple dimensions, not just domain knowledge and skill. In this regard, he states the following: Instructional design and materials, informed by data and learning science, can focus on the anticipated variability within the target population that will matter most for learning and demonstrating competence with the academic goals.

The use of AI-based approaches in education, including personalised learning (PL), has become a common trend. In this regard, Aeiad and Meziane [AM19] describe the development of an architecture for a Personalised and Adaptable E-Learning System (APELS) that aims to provide a personalised and adaptable learning environment to users from freely available resources on the Web. This system uses natural language processing techniques to evaluate the content extracted from relevant resources against a set of learning outcomes as defined by standard curricula, to enable appropriate learning of the subject. Linnel et al. [LDG+20] discuss a web app, called Curated Pathways to Innovation, that uses machine learning to customise students’ experience of discovering and learning CS (STEM) content to support PL. In addition, this paper discusses lessons learned around teacher engagement, student incentives, and data collection. The AI-based system proposed by Maghsudi et al. [MLX+21] recommends the most appropriate content among numerous accessible ones, advises a well-designed long-term curriculum, connects appropriate learners by suggestion, provides accurate performance evaluation, and so forth. The study [PSK+22] investigates the effects of an andragogical design of Teachers Pedagogical Development (TPD) with an embedded personalised learning system on the technological, pedagogical, and content knowledge (TPACK) of in-service teachers. The results are promising regarding the TPD intervention’s ability to improve in-service teachers’ professional knowledge (TPACK) for integrating digital technologies into their STEM teaching practice, particularly in integrated STEM-related situations. Major and Francis [MF20] provide a Rapid Evidence Report on the potential benefits of technology-supported personalised learning, as well as identifying possible limitations and challenges in the context of the global shutdowns of schools due to
COVID-19. Kucirkova et al. [KGL21] discuss three research dilemmas in relation to three categories of personalisation: (i) customisation by designers/teachers to support specific needs (e.g., grade level); (ii) individualisation to support learner choice (e.g., project topic); and (iii) adaptation of instructional activities based on automated analysis of logged user performance (performance metrics, natural language processing, etc.). B: Next, we focus on some STEM/CS content issues. Motivating and engaging students to participate in STEM learning is a big issue [AMG15, LBP22], as is integrating STEM-oriented aspects into the school curriculum [Rob15]. Note that authors (we among them) often treat CS as a part of STEM. This is because “the U.S. Congress passed the STEM Education Act of 2015, which officially made computer science part of STEM” [GM16]. Gander [Gan15] observes that CS (aka Informatics) is the leading science of the twenty-first century. Similarly to mathematics, practically all sciences use CS approaches. According to the author, it has to be a part of general knowledge in education. In the scientific literature on STEM and CS education, one can find the term “smart” used very often. However, researchers assign different meanings to this term, depending on the context. In this regard, for example, Brusilovsky et al. [BEK+14] state: “Computer science educators are increasingly using interactive learning content to enrich and enhance the pedagogy of their courses. A plethora of such learning content, specifically designed for computer science education, such as visualization, simulation and web-based environments for learning programming, are now available for various courses. We call such content smart learning content”. In [Štu15], we have defined the smart GLO as an entity with enhanced functionality, which implements a pre-specified learning variability.
Note that Boyle and his colleagues pioneered this field, introducing the GLO concept as early as 2004 [BLC04]. For other followers of this approach, see the papers [CCS15, Chi16]. Smart GLOs evolve over time (for the evolution curve, see pp. 138–140 in [ŠB18]). In this chapter, we have further reinforced the GLO “smartness” by changing its structure and introducing the concept of the smart personalised LO that includes the GLO. Regarding other characteristics of GLO, there is a generic attribute (we mean computational thinking) that characterises the learner’s ability to acquire knowledge in CS or other disciplines. In this regard, Wing observes in [Win06] that (1) “Computational thinking is a fundamental skill for everyone, not just for computer scientists”. (2) “To reading, writing, and arithmetic, we should add computational thinking to every child’s analytical ability”. (3) “Computational thinking involves solving problems, designing systems, and understanding human behavior, by drawing on the concepts fundamental to computer science”. Computational thinking is about separation of concepts; it is about the use of abstractions and generalisations too. Rad et al. [RRB+18] extend the understanding of the concept “computational thinking” to “AI thinking” and present a Cloud-eLab education platform. It delivers personalised content for each student with the flexibility to repeat the experiments at his or her own pace, which allows the learner to be in control of the whole learning process. Attwell [Att07] considers social and technological aspects and emphasises two important issues of personal learning environments (PLEs). The first is that PLEs “provide learners with their own spaces under their control to develop and share their
ideas”. The second is that PLEs “are not an application but rather a new approach to the use of new technologies for learning”. Boondao et al. [BHS08] consider principles for designing a personalised e-learning system, taking into account the influence of cultural backgrounds, i.e., differences among ethnic groups from Eastern and Western countries, on student learning approaches and learning styles. In doing so, the following need to be considered: educational value differences, cultural background differences, cultural communication differences, language usage differences, and students’ individual learning style preferences. The study by Buchem et al. [BAT11] takes a broader view on PLEs and provides an extensive literature review aimed at creating a better understanding of PLEs and developing a knowledge base to inform further research and effective practice. The authors treat PLEs as “a concept related to the use of technology for learning, focusing on the appropriation of tools and resources by the learner”. Castaneda et al. [CDT17] define a PLE as a “structure and process that helps learners organise the influx of information, resources and interactions that they are faced with daily into a personalised learning space or experience”. Using a PLE, the student is able to develop “an individualised digital identity” through the perceptual cues and cognitive affordances that the PLE provides. Kiy and Lucke [KL16] review the variety of efforts and approaches on how to establish a PLE and suggest a categorisation for them. Wild et al. [WMS08] first clarify key concepts and assumptions for personalised learning environments. Then they summarise the authors’ critique of contemporary models for personalised adaptive learning. Subsequently, the paper proposes an alternative concept of a mash-up personal learning environment that provides adaptation mechanisms for learning environment construction and maintenance.
Sampson and Karagiannidis [SK02] discuss the paradigm shift towards personalised learning from the educational, technological and standardisation perspectives. Brady et al. [BCW08] present the motivation behind, the workflow supported by, and the evaluation of the Learning Object Generator, a tool that offers personalised support and scaffolding for users. The paper emphasises that users need not be content-creation or pedagogical experts to assemble pedagogically sound personalised Learning Objects. Costello and Mundy [CM09] discuss an adaptive intelligent personalised learning environment. The findings of this research are the development of a model and intelligent algorithms for personalised learning. This model is based on the premise that an ideal system does not just consider the individual, but also considers groupings of like-minded individuals and their power to influence learner choice. An important conceptual and methodological step towards the integration and possible personalisation within different educational paradigms is the recognition that, since 2015, CS has been a part of STEM in the U.S. education system [GM16]. This paper appeals for building the infrastructure for CS classes and providing steps towards pervasive computing education in order to reach the pervasiveness of mathematics and science education. The report [CSSC16] presents an extensive exploration and extended vision of CS education in the framework of the K–12 program in the U.S. This document also highlights the tight relationship between CS and STEM.


The next papers consider modern approaches to support personalised search, retrieval and delivery of personalised content. Those approaches are important in general, not only for STEM or CS. Pukkhem and Vatanawood [PV11] introduce a multi-agent model based on learning styles and a word analysis technique to create a LO recommendation system that filters out unsuitable learning concepts from a given course. This model classifies learners into eight styles and implements compatible computational methods consisting of three recommendations: (i) non-personalised, (ii) preferred feature-based, and (iii) neighbour-based collaborative filtering. The algorithm implementing the second recommendation was experimentally shown to be the best in terms of preference error. Zapata et al. [ZMP+11] propose a hybrid recommendation method to assist users’ personal needs in the search and selection of learning objects in Learning Object Repositories. The proposed method uses a combination of different filtering techniques, such as content comparison and collaborative and demographic searches. To achieve this goal, metadata information, resource management activities and user profiles are used. Drachsler et al. [DVS+15] present an extensive review of recommender systems aimed at supporting the educational community by personalising the learning process. The review includes 82 systems developed between 2000 and 2014 and classifies them into seven clusters according to their characteristics and their contribution to the evolution of this research field. In the context of using Learning Management Systems (LMS), Sun et al. [SWO+03] propose a method and techniques by which an LMS can deliver personalised material suited to learners’ learning requirements and learning styles. Dolog et al. [DKC+07] survey the existing approaches for the authoring and engineering of personalisation and adaptation in e-learning systems.
This study enables the comparison of various methods and techniques and facilitates their integration and reuse. In the context of the personalisation of e-learning environments, Santally and Senteni [SS13] discuss learning preferences according to the VAK model, which classifies learners as visual, auditory or kinaesthetic. The findings of this experimental research show that learning styles, as determined by self-assessment using the VAK model, do not necessarily improve performance. The other observation is that “working towards more flexibility and adaptability of the environment might be a better approach rather than to work on the adaptability of the environment”. In summary, we state the following. The research in both fields (personalised learning and STEM/CS education) is currently very broad and intensive. In the given format, however, we were able to reveal that only partially. Both are highly heterogeneous in multiple aspects, i.e., conceptual, pedagogical, social, technological, methodological, and others. Concepts of personalisation range from the learner’s needs [Doc18, WD14] and the different learner profiles used [MS02, Jär06], to personalised content search, retrieval and delivery [ZMP+11, SWO+03, DVS+15, DKC+07] and personalised systems and environments [BAT11, KL16, CDT17, WMS08]. STEM and CS education, on the other hand, are highly related fields [GM16], especially in terms of applying robotics [ŠB18]. In addition, there is some intersection or commonality between personalised and STEM/CS approaches in terms of students’ engagement and improving the quality of education [PCW08, VGK+09, AMG+15, Rob15,
AEM+14]. Nevertheless, we have found that there is considerable room for further research, especially in terms of integrative and quality-assurance aspects. This motivates our next steps in presenting the concepts we propose and consider throughout this chapter.

3.3 Requirements for Personalised STEM-Driven CS Learning and Research Questions

Personalised learning (PL) puts the learner’s needs at the centre of the educational process. As our literature survey shows, the following attributes are essential to characterise personalisation in learning [WD14]: (1) The teacher’s role in directing the educational process is minimal, and students are willing to accept and provide self-initiation and self-direction of the process. (2) Competency-based progress has to be measured. This means that each student progresses and earns points as they demonstrate their proficiency through continuously provided assessments. (3) There are multiple options for designing individual learning paths. This means that each student follows an individual path that accommodates and adapts to the individual learning progress, motivation, and goals. (4) The learners are aware of their profile, i.e., each student has an up-to-date inventory of their individual strengths, needs, abilities, level of previous knowledge, motivation, and goals. (5) Due to the flexible learning environment, all operational elements respond and adapt to help students achieve their goals. (6) Technology is a key factor that predetermines the capabilities of PL. These attributes are general and not very dependent on the learning paradigm used. However, attributes (5) and (6) can be very specific, for example, in the case of STEM-driven CS education using robotics and smart devices [ŠB18]. The listed attributes have enabled us to formulate requirements for PL using the STEM-driven paradigm. In this case, the basic requirements (R) for PL are as follows. R1. Due to the high learning variability and the aim of automation, the model-driven vision in defining the functionality of PL and its environments in STEM-driven CS education should play a key role. R2. The learner’s profile model (learner’s model for short) has to be as generic as possible.
This enables a concrete model for each student, or for a group of students, to be extracted semi-automatically from the generic specification. R3. The learner’s model (a part of the pedagogical-social model) must have the highest priority compared to the remaining models (technological and content) in PL.


R4. One needs to focus on the explicit representation of the variability aspects of the models so that it is possible to design a large space of personalised choices for selecting and managing personalised learning paths. R5. Relevant personalised learner assessment models and tools should be developed or selected. The assessment process has to be at least at two levels: (a) during the learning process; (b) after completing this process (https://hundred.org/en/innovations/personalised-learning-paths). R6. The PL and the environments supporting PL should be designed to allow the personalised processes to be analysed, investigated (e.g., through applying the inquiry-based approach), or sometimes even changed. To achieve this, and due to the complexity of this problem, we need to consider the personalised learning system (its structure and functioning processes) at three levels, i.e., component level, sub-system level, and system level. R7. The personalised processes of the component level cover activities with personalised Component-Based (CB) Learning Objects (shortly PCB LOs), personalised Generative Learning Objects (PGLOs), and personalised Smart Learning Objects (PSLOs), each having a separate/individual extension for providing the learner’s assessment procedure. R8. The sub-system level additionally includes such sub-systems as personalised libraries [ŠBK+20]. R9. The processes of the system level include all types of components and sub-systems within the Smart Learning Environment [ŠB18], now extended with facilities for personalised learning. R10. A specific focus has to be placed on implementing multiple feedback links in dealing with PL. R11. Simplicity and flexibility in redesigning existing entities must be ensured, for example, providing changes in the interface part while preserving the same functionality in the body. Based on the formulated requirements and taking into account the continuous evolution of our approaches ([Štu15, and Chap.
10 in ŠB18]), we consider the following research questions (RQs) in this chapter. RQ1: Transforming the structure of previously developed GLOs/SLOs into the personalised structure (PGLOs/PSLOs) enriched by the assessment modules. RQ2: Developing a component-level framework for integrating PCB LOs, PGLOs and PSLOs into the personalised learning process.

3.4 Basic Idea and Methodology

The formerly developed content entities are Component-Based (CB) Learning Objects (LOs), Generative Learning Objects (GLOs) and Smart Learning Objects (SLOs) [ŠB18]. A rough understanding of these entities is needed here. CB
LO is an instance of a content piece (e.g., it may represent a film for enhancing motivation, an instructional text on theory explaining a robot’s control program algorithm, the physical characteristics of robots or other smart devices or their parts, etc.). Typically, a CB LO is a fine-grained component retrieved from an external repository and used in the use-as-is mode. The teacher, however, is able to modify it, or even to create it anew from scratch if needed. A GLO represents a set of predefined related instances (such as CS teaching content, typically control programs for educational robots or microcontrollers) in which pedagogical-social, technological (e.g., robot characteristics) and content aspects are woven together in the same specification using heterogeneous meta-programming techniques (see Chap. 6 in [Štu15]). Note that the essence of those techniques is external parameterisation, i.e., expressing different aspects of content through parameters. Therefore, a GLO along with a meta-language processor is a content generator producing instances on the learner’s demand (by selecting parameter values). An SLO is a pre-specified set of both CB LOs and GLOs [ŠBB+17]. We have designed those entities so that they not only cover all topics of the curriculum completely, but also do so with a great degree of surplus. For example, a topic may have a few CB LOs, GLOs or even SLOs. This is the result of the continuing evolution of the GLO concept [ŠB18, pp. 138–140]. We introduced the SLO in the context of STEM-driven robot-based CS education, aiming to extend the capabilities of inquiry-based learning in this paradigm. We previously developed those GLOs and SLOs without taking into account explicit requirements for personalisation and explicit assessment. Now, having these requirements (see Sect. 3.3), we are able to enrich the content with new capabilities for PL. To do that, it was necessary to make three innovations.
The first refers to the explicit separation of the learner’s profile and pedagogy from the technological and content aspects. This requires changing the interface within the GLO and SLO specifications without changing their functionality, and developing a unified interfacing structure for the external storage of those content entities (CB LOs, GLOs, and SLOs). The second innovation refers to the introduction of assessment capabilities for learners. Finally, the third innovation is the introduction of multiple feedback links to make it possible to measure progress in the learner’s skill development during the learning process. By introducing those innovations, we were able to create truly personalised learning entities, renamed accordingly as the personalised CB LO (shortly PCB LO), personalised GLO (PGLO), and personalised SLO (PSLO). We explain the idea of our approach in this way. We have developed a framework reflecting the processes, personalised learning paths, the use of personalised entities, and assessment and implicit self-assessment activities, all closed together by multiple feedback links, as we will explain in more detail later. The basis of our methodology for designing GLOs/SLOs is the STEM-driven learning variability, i.e., a compound of pedagogical-social variability, technological variability, content variability, and the interaction variability among the listed variabilities [ŠB18]. This compound, in fact, is an anticipated variability predefined by modelling. By redesigning GLOs/SLOs, we have extended and reconstructed this variability space in the process dimension by adding new features related to personalisation, such as assessment. In addition, we have extended the learning variability space by
adding the personalised learning path variability. With regard to the role of variability aspects in PL, the reader should look at the papers in Sect. 3.2 once again, especially at the extract from [Doc18]. To exploit the variability aspects as fully and efficiently as possible, we first need to use well-defined design principles. Those are the separation of concepts and reusability, followed by analysis and modelling. We have adopted them from software engineering and incorporated them into our content design and redesign methodology. Analysis and modelling give us the possibility to recognise and extract variability in its various kinds and then to represent it explicitly by adequate models. Finally, generative reuse brings the technology to implement the constituents of the framework as effectively as possible. A more extensive description of the methodology we use can be found in [BBD+14, ŠBB+16]. Note that it is the same for the development of the assessment modules. Largely, this methodology is for researchers, content designers and, of course, smart CS teachers who are able to play the role of the content designer. In the next section, we present the background of our approach.

3.5 Background

The background includes definitions of the basic terms used, their properties, and relevant models to specify the adequate entities more precisely, as follows. Definition 1 Personalised learning (PL) is the approach that places the learner’s needs at the centre of learning, uses personalised learning objects, and exploits through learning activities the attributes that enforce personalisation (e.g., self-guidance, self-assessment, use of personalised learning paths) as much as possible (revised from [WD14]). Property 1 In the case of personalised STEM-driven CS education, there are three types of personalised learning objects, i.e., Personalised Component-Based Learning Objects (shortly PCB LOs), Personalised Generative Learning Objects (PGLOs) and Personalised Smart Learning Objects (PSLOs). Definition 2 In terms of the IEEE definition of LO [LOStd07], a Component-Based Learning Object (CB LO) is an instance, typically digital, either retrieved from external sources, or modified/created by the user. Definition 3 PCB LO is the structure consisting of two entities, i.e., the CB LO and the module that provides the learner’s knowledge and skills assessment and the measurement of progress in learning, using this CB LO (revised from [LOStd07]). Simply, PCB LO = CB LO + LKSAM (where LKSAM is the learner’s knowledge and skills assessment module). Definition 4 GLO is the specification that implements the pre-defined learning variability aspects (i.e., pedagogical, social, technological and content), using heterogeneous meta-programming techniques [ŠD12]. This specification consists of two
interrelated parts, i.e., the meta-interface and the meta-body. The first stands for delivering parameters and their values. The second stands for implementing the functionality, i.e., the learning variability aspects woven together in a specific way. Property 2 GLO is a domain-specific heterogeneous metaprogram; in other words, the latter, along with the meta-language processor, is a learning content generator working on the user’s demand. The user (teacher or learner) operates with the meta-interface, seeing parameters and their values (which represent the variability space for choice). The meta-body is completely hidden from the user; only the system operates with it. Property 3 GLOs evolve over time in terms of their types, numbers, structure or even functionality (see the evolution curve in [ŠB18, pp. 138–140] for more details). The earlier developed GLOs have an integrated meta-interface. The currently developed GLOs are PGLOs, having a distributed meta-interface, i.e., the social (learner’s) and pedagogical parameters are placed within the metadata in the local library, while the remaining parameters, i.e., technological and content, are within the specification itself, stored in the external repository. Definition 5 A heterogeneous metaprogram is a specification implemented using at least two languages, i.e., a meta-language and a target language (or languages). The latter stands for delivering the base functionality through a domain (target) program, for example, a robot control program in our case. The former stands for expressing the generalisation, i.e., the learning variability aspects [ŠB18, p. 116]. Property 4 It is possible to use any programming language as a meta-language in the structured programming mode [Dij72]. One can find a confirmation of this property in [ŠD12]. Therefore, designers (e.g., CS teachers) have broad possibilities in creating GLOs/SLOs. Example 1 For a long time, we have used PHP as a meta-language and C, RobotC and SQL as target languages.
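The generator idea behind Property 2 and Example 1 can be sketched minimally as follows. Python stands in for the meta-language below (the book itself uses PHP), the emitted target program is RobotC-like, and all parameter names, values and target-language function names are our illustrative assumptions, not taken from the book:

```python
# Sketch of a GLO as a heterogeneous metaprogram: the meta-level
# exposes a meta-interface (parameters and their value sets); the
# target-level robot control program is derived by substitution.
META_INTERFACE = {
    "speed": [25, 50, 75],          # motor power, % (content aspect)
    "sensor": ["touch", "sonar"],   # stopping condition (technological aspect)
    "comments": ["on", "off"],      # annotated code or not (pedagogical aspect)
}

TARGET_TEMPLATE = """\
task main() {{
{comment}  setMotorSpeed(motorA, {speed});
  waitUntil({condition});
  stopAllMotors();
}}
"""

def generate_instance(speed, sensor, comments):
    """Derive one target-language instance from the meta-specification."""
    assert speed in META_INTERFACE["speed"]
    assert sensor in META_INTERFACE["sensor"]
    condition = ("getTouchValue(S1) == 1" if sensor == "touch"
                 else "getDistance(S4) < 20")
    comment = "  // drive until the sensor fires\n" if comments == "on" else ""
    return TARGET_TEMPLATE.format(comment=comment, speed=speed,
                                  condition=condition)

print(generate_instance(50, "sonar", "on"))
```

The user only selects parameter values through the meta-interface; the template (the meta-body) stays hidden, which mirrors Property 2.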
Property 5 The following expression (3.1) estimates the possible number of instances N that the user, using a meta-language processor, can derive from the given GLO (PGLO) specification:

N ≤ |P1| × |P2| × ··· × |Pi| × ··· × |Pn|,    (3.1)

where |Pi| is the number of values of the parameter Pi, and n is the total number of parameters. Note that the equality sign (=) in expression (3.1) holds when all parameters are independent, i.e., not interacting [ŠB18, pp. 119–120]. Example 2 Let us have a GLO (PGLO) with six independent parameters, each having five values. In this case, the variability space of possible instances is N = 5 × 5 × 5 × 5 × 5 × 5 = 15,625.
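Expression (3.1) and Example 2 are easy to check programmatically; a small sketch (the parameter names and value sets are placeholders):

```python
from math import prod

def variability_space(parameters):
    """Upper bound (3.1) on the number of derivable instances:
    the product of the value-set sizes of all parameters
    (equality holds when parameters are independent)."""
    return prod(len(values) for values in parameters.values())

# Example 2 from the text: six independent parameters, five values each.
glo_params = {f"P{i}": ["v1", "v2", "v3", "v4", "v5"] for i in range(1, 7)}
print(variability_space(glo_params))  # 15625
```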


Property 6 The variability space predefined by (3.1) also specifies the possibilities for the learner’s choices in forming personalised learning paths while using a single PGLO. Since, in personalised learning, the learner uses some set of PGLOs even during a short session such as a lesson, there is indeed a huge space for making choices to form personalised learning paths. Note that, typically, the size (in symbols) of a single GLO (PGLO) is only 3–5 times larger than that of an instance derived from this specification. Considering this observation and Property 3, the following property holds. Property 7 GLO (1) contributes to saving space within a digital library; (2) substitutes the instance search procedure with a generation procedure; (3) the latter yields exactly the LO instance one intends to receive, i.e., resolves the problem of synonyms. Definition 6 PGLO is the structure of two entities, i.e., the GLO and the module that provides the learner’s knowledge and skills assessment and progress in learning, using this GLO. Simply, PGLO = GLO + LKSAM (where LKSAM is the learner’s knowledge and skills assessment module). Definition 7 A pre-designed (Pd) SLO is a set consisting of two subsets taken from the available sources: (1) a pre-specified subset of CB LOs (PCB LOs) and (2) a pre-specified subset of GLOs (PGLOs) (revised from [ŠBB+17]), additionally containing the cumulative assessment module CAM. Formally, PdSLO = {PCB1, …, PCBm} ∪ {PGLO1, …, PGLOk} ∪ CAM, where {PCB1, …, PCBm} is the m-size pre-specified subset of PCB LOs, and {PGLO1, …, PGLOk} is the k-size pre-specified subset of PGLOs.
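Definitions 3, 6 and 7 describe structures, not an implementation; still, they can be mirrored directly in code. A sketch with hypothetical class and field names (our illustration, not the book's software):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LKSAM:
    """Learner's knowledge and skills assessment module (placeholder)."""
    owner_id: str

@dataclass(frozen=True)
class PGLO:
    """Definition 6: PGLO = GLO + LKSAM."""
    glo_id: str
    lksam: LKSAM

@dataclass(frozen=True)
class PdSLO:
    """Definition 7: PdSLO = {PCB LOs} ∪ {PGLOs} ∪ CAM."""
    pcb_los: frozenset   # m-size pre-specified subset of PCB LOs
    pglos: frozenset     # k-size pre-specified subset of PGLOs
    cam: str = "CAM"     # cumulative assessment module (placeholder)

pglo = PGLO("GLO-loops", LKSAM("GLO-loops"))
slo = PdSLO(pcb_los=frozenset({"PCB1", "PCB2"}), pglos=frozenset({pglo}))
print(len(slo.pcb_los), len(slo.pglos))  # 2 1
```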
Definition 8 PSLO is the structure consisting of two entities, i.e., the SLO and the module that provides the learner’s knowledge and skills assessment and progress in learning, using this SLO. Simply, PSLO = SLO + LKSAM (where LKSAM is the learner’s knowledge and skills assessment module). Property 8 The structure and functionality of the special part of LKSAM are different for each category of the personalised learning entities (we will explain that in more detail later). Definition 9 The learner’s knowledge and skills assessment model (shortly, assessment model) is the structure consisting of generic and specific parts. The generic part represents and integrates concepts taken from the revised Bloom taxonomy [AKA+01] with those that characterise computational thinking skills [Cum16, AM16]. The specific part represents a set of tasks related to the personalised content components (LOs). Definition 10 The feature-based assessment model is the feature diagram that consists of the following elements: (i) features; (ii) parent–child relationships of features; and (iii) constraints (Fig. 3.1).

[Figure 3.1 is a feature diagram. The root feature, Assessment model, decomposes into a Generic part and a Special part (Task 1 … Task n). The Generic part comprises Computational thinking skills (Abstraction, Decomposition, Generalization, Data representation, Algorithm), the Knowledge dimension (Factual, Conceptual, Procedural, Metacognitive) and the Process dimension (Remembering, Understanding, Applying, Analyzing, Evaluating, Creating), linked by “requires” constraints. Legend: constraint (requires); mandatory feature; optional feature; OR-group of features.]

Fig. 3.1 Feature-based learner’s knowledge assessment model. Adapted from [BDŠ18, ŠBD+19]

Property 9 The learner's knowledge and skills assessment module LKSAM is the implementation of the feature-based assessment model, for example, using meta-programming techniques and the methodology [ŠBB+16, ŠD12].

Definition 11 Cumulative assessment is the process that involves continuous monitoring of student progress and is composed of partial assessments retrieved from LKSAMs.

Definition 12 Learning variability is the characteristic of personalised learning that covers the following attributes: social variability (variation in learners' profiles, demands, etc.), pedagogical variability (STEM pedagogy, including inquiry-based approaches, etc.), technological variability (variation in technology types and characteristics, e.g., software-based or hardware-based, such as robots and smart devices), and content variability (data, algorithms, programs) (revised from [Štu15], p. 106).

Property 10 Social variability and pedagogical variability are attributes of higher priority with respect to the remaining variability types; therefore, they form the context for the remaining variability types.

Property 11 A personalised learning path expresses the possibilities for choosing among the processes and activities predefined by the educational environment that implements and supports learning variability in order to achieve the learning objectives.

Note that the material of Sect. 3.5 has been taken (© with kind permission) from [ŠBD+19].
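Definition 11 says the cumulative assessment is composed of partial assessments retrieved from the LKSAMs. A minimal sketch of such composition is given below; the weighted-mean aggregation rule and the function name are assumptions for illustration, since the text does not fix a particular aggregation formula.

```python
from typing import Optional, Sequence


def cumulative_assessment(partial_scores: Sequence[float],
                          weights: Optional[Sequence[float]] = None) -> float:
    """Combine partial assessments retrieved from LKSAMs (Definition 11)
    into one cumulative score. A weighted mean is assumed here."""
    if weights is None:
        # By default, every partial assessment contributes equally.
        weights = [1.0] * len(partial_scores)
    if len(weights) != len(partial_scores):
        raise ValueError("one weight per partial assessment is required")
    total = sum(weights)
    return sum(s * w for s, w in zip(partial_scores, weights)) / total
```

For instance, three equally weighted partial scores of 80, 90, and 100 combine into a cumulative score of 90; a teacher could instead weight later assessments more heavily to reflect progress.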


3 Enforcing STEM-Driven CS Education Through Personalisation

3.6 A Framework for Implementing Personalised STEM-Driven CS Education

Figure 3.2 presents a framework outlining how personalised learning (PL) can be implemented using personalised LOs. The framework includes components such as actors, learning tasks and learning plans, learning activities and processes, resources used, learning pathways with feedback, learner assessment, and learner progress measurement. Bold rectangles represent learning activities and processes, and thin rectangles represent resources. In PL, the teacher's role changes significantly because of the large differences in learner profiles: the teacher moves from knowledge provider to personalised content creator and mentor for students. In the first stage, having a task and the plan for its interpretation, students need to identify the category they belong to (beginner, intermediate, advanced) in order to select an appropriate learning path. Next, we outline the personalised content, i.e., personalised learning objects (see Definitions 3, 6 and 8 in Sect. 3.5), in Sect. 3.6.1. Then, in Sect. 3.6.2, we describe processes and activities through the personalised learning paths.

Fig. 3.2 Proposed framework outlining activities, processes, and feedback loops in PL. © With kind permission from [ŠBD+19]


3.6.1 Structural Models of Personalised LOs

With regard to interfacing capabilities, all types of personalised LOs have an identical structure, identified here as the generic one. The generic structure consists of a metadata part and a connection to the learner's knowledge and skills assessment module. However, the metadata part and the special part of the assessment modules are different for each type of entity (cp. Figs. 3.3, 3.4b, and 3.5). The commonality of this structure is that all entities have to be stored in the same way for retrieval at use time. In Fig. 3.3, we present the structure of PCB LOs. This structure corresponds to Definition 2 (see Sect. 3.5). In general, the components taken from external repositories may include presentation objects, practice objects, simulation objects, conceptual models, information objects, and objects for contextual representation [Chu07]. In the case of STEM-driven CS education, LOs are specific to robotics (e.g., electrical circuits, guides for the mechanical design of robots). Furthermore, students are able to create their own LOs

Fig. 3.3 Structure of personalised component-based LOs. © With kind permission from [ŠBD+19]. (The PCB LO combines a metadata part (resource description and link to repository) with a CB LO instance taken from the repository and the learner's assessment module. Types of CB LOs: presentation objects, practice objects, simulation objects, conceptual models, information objects, contextual representations.)

Fig. 3.4 Former GLO structure with the integrated interface (a) and PGLO with distributed interface (b). © With kind permission from [ŠBD+19]. (In (a), the GLO meta-interface integrates pedagogical/social, technological, and content parameters above the meta-body with its context-related and technology/content-related functionality. In (b), the concrete context parameter values (curricular and learner's profile) move into the metadata part, while the technological and content parameters remain in the meta-interface; the learner's assessment module is attached.)

Fig. 3.5 Structure of personalised smart LO. © With kind permission from [ŠBD+19]. (The PSLO combines a metadata part (resource description, link to repository, concrete context parameter values) with pre-designed sets of CB LOs and GLOs in the meta-body and the learner's cumulative assessment module.)

through individualised learning paths and processes and collect them in personalised libraries for future use and sharing. Figure 3.4a highlights the structure of the previously researched GLO [Štu15, ŠB18]. In this specification, we have the integrated interface (parameters in three categories) and the meta-body (see Definition 4 and Properties 2 and 3 in Sect. 3.5). The meta-body implements the functionality and hides it from the user (i.e., learner or teacher); therefore, we present it as a darkened box (see also Property 2 in Sect. 3.5). Figure 3.4b outlines the structure of the personalised GLO with the distributed interface, where the pedagogical/social parameters appear not in the structure of the GLO itself but within the metadata description, because they are common to most GLOs. Note that, in the development phase of those structures, we widely exploited the design principles formulated in Sect. 3.4. For example, the separation of concepts is evident in Fig. 3.4 (e.g., the meta-interface is separated from the meta-body, the context parameters from the remaining parameters, and the local library from external repositories). Regarding the reuse-based aspects and principles for designing context-aware GLOs, readers can learn more from [BBD+14]. Figure 3.5 illustrates the structure of the personalised smart LO (PSLO). It can also be treated as a PdSLO. According to Definition 7 (see Sect. 3.5), a PdSLO consists of a pre-designed set of CB LOs and GLOs. Why do we need those sets? The reason is to ensure flexibility in making choices to form a variety of personalised learning paths for students with different profiles. The teacher creates those sets in advance. A given topic and the learning objectives predefine the size, i.e., the number of entities of the different types (CB LO or GLO), within the set. In STEM-driven CS education, a CB LO represents content used for motivation, various tutorials, and user guides. This also includes hardware-oriented items such as electrical circuits and robot design guidelines. A GLO represents a generic (generative) control program for robots and microcontrollers.


3.6.2 Personalised Processes and Activities Within the Framework

When describing processes and activities, we intentionally do not use STEM learning semantics (we have discussed that more extensively in [ŠB18]). As is clear from Fig. 3.2, personalised learning paths predefine PL. This process begins when the student receives a task set given by the teacher. In addition, the teacher creates a task solution plan. The teacher's role varies in this way: (a) for the beginner, the teacher provides comprehensive support; (b) for the intermediate student, the teacher provides some support only; (c) for the advanced student, there is no help at all. The teacher's assistance may include explaining tasks, clarifying learning goals, and providing keywords as part of metadata for students to retrieve the personalised content. The given plan allows learners to create learning scenarios with little or no teacher intervention. In Fig. 3.2, we call this the preparation activity. A personalised learning path has three components: (1) pre-learning (planning and procurement of required resources); (2) learning (activities and processes that involve personalised content); (3) after-learning (knowledge assessments combined with multiple feedback links). The variation in the structure of the personalised SLO and the multiple assessment options create a large space for students to make decisions when designing their personalised learning paths. Furthermore, a student has the possibility to repeat the path, or to modify it and the process itself, through feedback based on assessment and self-assessment. The learner drives those actions, fully or partially, taking ownership of his/her own learning. We show how that works in practice in Sect. 3.7. In Sect. 3.6.3, we outline software tools to support the implementation of the proposed framework.

3.6.3 Tools and Approaches to Implement the Proposed Framework

For the aims of this chapter, we use the following tools. To specify the content (PGLO, PSLO) at a high level of abstraction with feature models, we use SPLOT (Software Product Lines Online Tools, www.splot-research.org). For the implementation of generative content (PGLOs within PSLOs), we apply heterogeneous meta-programming approaches [Štu15, ŠB18]. Regarding component-based objects (CB LOs), we apply general-purpose and/or domain-specific software (Fritzing for circuit modelling, fritzing.org; LEGO Digital Designer for modelling LEGO robots, https://www.lego.com/en-us/ldd). The general-purpose software includes browsers, PDF readers, text editors, spreadsheets, various video and audio players, etc. Regarding domain-specific software, we use environments for robot programming (ArduinoC, RobotC, Java). For assessment purposes, we apply online quiz makers (Socrative, https://socrative.com; Google Forms).


3.7 Case Study

To demonstrate what knowledge and skills students are able to gain, we describe a task on programming the functionality of LEDs (light-emitting diodes) using Arduino microcontrollers. By developing PSLOs, learners improve knowledge and skills in Science (Physics, e.g., electricity; Computer Science, e.g., programming), Technology (microcontrollers), Engineering (electrical engineering) and some aspects of Math (e.g., representing decimal numbers in the binary numeral system). From the personalisation perspective, the presented PSLOs are extended by adding learner's assessment modules aimed at monitoring the student's progress and ensuring effective feedback. The aim of this case study (we call it "A set of LEDs") is to explain the concept of the array and to acquire appropriate skills in applying loops and functions while programming. The PSLO "A set of LEDs" consists of a CB LO (see Fig. 3.6a) that represents the physical part of the PSLO, and the interface of a GLO (see Fig. 3.6b). This GLO represents a generic control program for LEDs. By a generic program, we mean a parameterised one (in this case, by two parameters with multiple values; see also Definitions 4 and 5 in Sect. 3.5). Firstly, the learner constructs the electrical circuit (see Fig. 3.6a). Then the user generates (from the GLO) the LED control program (see Fig. 3.6c). In the PL mode, the student needs to make changes in the electrical circuit and choose adequate values of parameters in the meta-interface of the GLO, working at his/her own pace. In fact, those activities amount to selecting an adequate learning path. After completing the planned tasks, the student performs additional practical tasks (see Fig. 3.7b, c) that check the student's progress according to the generic part of the assessment model (see Fig. 3.7a). Each practical task covers some aspects of the knowledge, cognitive process, and computational thinking skills dimensions. For example, the task presented in Fig. 3.7c covers aspects of factual, conceptual and procedural knowledge; involves cognitive processes such as remembering, understanding, applying, analysing and evaluating; and includes computational thinking skills such as decomposition, generalisation, data representation, and algorithm. The teacher indicates the values of the adequate dimensions in advance, depending on the given PSLO. Note that this assessment model is a part of the integrated STEM skills assessment model [BŠK+22] (see also Chap. 2 for details).
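The generative idea behind the "A set of LEDs" GLO can be illustrated with a small template meta-program: parameter values chosen in the meta-interface drive the generation of a concrete Arduino C instance. The sketch below is a simplified illustration in Python (the book's GLOs use heterogeneous meta-programming; the function name, parameter names, and pin numbering are assumptions, not the authors' implementation):

```python
def generate_led_program(num_leds: int, delay_ms: int, first_pin: int = 2) -> str:
    """Emit an Arduino C sketch that lights `num_leds` LEDs in sequence.
    The arguments play the role of the GLO meta-interface parameters;
    the template below stands in for the meta-body."""
    pins = list(range(first_pin, first_pin + num_leds))
    lines = [
        # Array declaration: the pedagogical target concept of the case study.
        f"int leds[{num_leds}] = {{{', '.join(map(str, pins))}}};",
        "",
        "void setup() {",
        f"  for (int i = 0; i < {num_leds}; i++) pinMode(leds[i], OUTPUT);",
        "}",
        "",
        "void loop() {",
        f"  for (int i = 0; i < {num_leds}; i++) {{",
        "    digitalWrite(leds[i], HIGH);",
        f"    delay({delay_ms});",
        "    digitalWrite(leds[i], LOW);",
        "  }",
        "}",
    ]
    return "\n".join(lines)
```

Choosing, say, three LEDs and a 500 ms delay in the meta-interface yields one concrete instance; changing either parameter value yields a different instance, which is exactly how the variability space of the GLO maps onto personalised learning choices.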

3.8 Discussion and Concluding Remarks

Two learning paradigms (STEM and personalised learning) are currently central topics among educational strategists, researchers, and practitioners. This interest arises from the urgent need to improve education at all levels, responding to the existing and emerging economic and social challenges of the twenty-first century. In this chapter, we have discussed an approach to combining both paradigms in


Fig. 3.6 A set of LEDs: a—physical part of SLO (CB LO), b—meta-interface of the generic control program (GLO), c—generated instance of the LED control program


Fig. 3.7 Learner’s assessment module related to SLO “A set of LEDs”: a—interface of the generic part of assessment module (see also Fig. 2.1); b and c—items of special part of assessment module (see also Fig. 2.1): b—practical task covering F, C; U, App, A; G, DR, Alg; c—practical task covering F, C, P; R, U, App, A, E; D, G, DR, Alg

the case of STEM-driven CS education using robotics and microcontrollers at high school. We have focused here on content personalisation and the learner's knowledge and skills assessment problems. Both aspects are important attributes of Smart Education [Pal20]. Our aim was to make explicit the personalisation aspects of the integrated content researched extensively, in its variety of forms, in [Štu15, ŠB18]. By the integrated content, we mean Generative and Smart Learning Objects that integrate the social, pedagogical, technological, and content variability aspects in the same specification using meta-programming techniques. We have formulated requirements and outlined the methodology and theoretical background used. On this basis, we have developed the framework explaining how it was possible to deal with and solve the personalisation problems (research questions) in this case. Two aspects are at the centre of this proposal. The first is the personalisation of the so-called Component-Based Learning Objects (CB LOs), Generative Learning Objects (GLOs) and Smart LOs (SLOs) through redesigning their structure, i.e., moving the pedagogical/social parameters into metadata within a library (see Fig. 3.4). The second is the extended facilities for the learner's knowledge and skills assessment through partial, cumulative assessment and progress measurement combined with multiple feedback links. To build this, we have proposed the learner's knowledge assessment model that integrates two attributes taken from known approaches (the revised Bloom taxonomy and computational thinking skills) along with the adequate tasks. We have implemented the model by developing an adequate tool for assessing the learner's knowledge. This tool is a generator that produces parameters and their values for assessment. Therefore, we can summarise the contribution of this chapter as follows. (1) The proposed personalised GLO has an improved structure, i.e., a distributed interface (meta-interface) as compared to the integrated interface in the structure of the former GLO (compare Fig. 3.4a, b). (2) The personalised SLO significantly enlarges the space for learners to choose personalised learning paths. (3) There are


possibilities for measuring the learner's progress through multiple assessments. (4) We have introduced a generic (common) structure for all types of personalised LOs and combined them with assessment facilities. The basis of our methodology is the concept of learning variability covering the pedagogical, social (with the focus on learners), technological, and content aspects. The personalisation attributes we have taken into account significantly extend the space and possibilities for forming personalised learning paths. (5) With regard to the contribution to STEM education, personalisation makes learning more effective due to personalised content and the capabilities of learner progress measurement. Is this approach applicable to other teaching contexts? There are a few possibilities. Firstly, the developed Component-Based, Generative and Smart Learning Objects could be used in the use-as-is mode, for example, for further modification of generated instances when learning to create robot control programs. Secondly, it is possible to apply those entities in physical and engineering experiments. Next, the assessment model can be adapted in the context of other subjects and schools. The limitation of personalised learning, as we have presented it here, is that it requires additional collaborative efforts and consideration of a broader educational context. This is due to (i) the interdisciplinary nature of the STEM paradigm, (ii) the diversity of its own possibilities and of those introduced through personalisation, and (iii) the diversity of skills and results achieved by different learners through their personalised learning paths. There is a need to compose the solution of the real-world task from sub-tasks defined by fulfilling the separate personalised learning paths. All these factors result in the need to share the gained knowledge and to provide discussions with others, including teachers, parents and other stakeholders. At the current state of this research, we have had only limited possibilities to research self-assessment issues more extensively in terms of learning progress measurement. In summary, we have introduced a generic structure to compose Personalised Smart Learning Objects (LOs) combined with multiple assessment facilities. Both predefine a variety of personalised learning paths. In fact, the personalised smart LO is a mini scenario used by the learner to form personalised learning paths and to drive the personalised learning process. In addition, the introduced generic structure of the smart personalised LOs enables a flexible integration of those entities into personalised libraries or repositories and supports flexible managing procedures. Personalisation of STEM-driven CS education represents a new way of developing computational thinking skills in the process of gaining interdisciplinary knowledge; it contributes to achieving faster and deeper knowledge for decision-making and to measuring progress through multiple assessments. This approach by no means excludes the teacher from the education process. Rather, it reinforces the need for the teacher's participation, however with a changed role. In this case, the teacher becomes an initiator and moderator of the process; he/she acts as a provider of the needed support and as a designer of the personalised content. From the perspective of evolution, the smart generative LOs transform into personalised LOs by adding attributes for personalised learning, i.e., we have introduced the distributed interface for content personalisation along with the learning assessment model for the learner's progress measurement. The assessment model,


in turn, evolves towards the integrated skills model for STEM-driven CS education [BŠK+22] (see also Chap. 2).

References

[AEM+14] Arshavsky N, Edmunds J, Mooney K, Thrift B, Wynn L, Center S, Samonte K, Janda L (2014) Race to the top STEM affinity network. http://cerenc.org/wp-content/uploads/2014/12/FINAL-STEM-final-report-12-4-14.pdf, Accessed 6 February 2019
[AKA+01] Anderson LW, Krathwohl DR, Airasian P, Cruikshank K, Mayer R, Pintrich P, et al. (2001) A taxonomy for learning, teaching and assessing: A revision of Bloom's taxonomy. Longman Publishing, New York
[AM16] Atmatzidou S, Demetriadis S (2016) Advancing students' computational thinking skills through educational robotics: A study on age and gender relevant differences. Robotics and Autonomous Systems, 75, pp. 661–670
[AM19] Aeiad E, Meziane F (2019) An adaptable and personalised E-learning system applied to computer science programmes design. Education and Information Technologies, 24:1485–1509
[AMG+15] Ardies J, De Maeyer S, Gijbels D, van Keulen H (2015) Students attitudes towards technology. International Journal of Technology and Design Education, 25(1), pp. 43–65
[Att07] Attwell G (2007) Personal Learning Environments - the future of eLearning? eLearning Papers, 2(1), pp. 1–8
[BAT11] Buchem I, Attwell G, Torres R (2011) Understanding personal learning environments: Literature review and synthesis through the activity theory lens. The PLE Conference 2011, Southampton, UK
[BBD+14] Burbaitė R, Bespalova K, Damaševičius R, Štuikys V (2014) Context-aware generative learning objects for teaching computer science. International Journal of Engineering Education, 30(4), pp. 929–936
[BCW08] Brady R, Conlan O, Wade V, Dagger D (2008) Supporting users in creating pedagogically sound personalised learning objects. International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems, July 2008, Springer, Berlin, Heidelberg, pp. 52–61
[BDŠ18] Burbaitė R, Drąsutė V, Štuikys V (2018) Integration of computational thinking skills in STEM-driven computer science education. IEEE Global Engineering Education Conference (EDUCON), pp. 1824–1832
[BEK+14] Brusilovsky P, Edwards S, Kumar A, Malmi L, Benotti L, Buck D, ... Urquiza J (2016) Increasing adoption of smart learning content for computer science education. Proceedings of the Working Group Reports of the 2014 on Innovation & Technology in Computer Science Education Conference, ACM, pp. 31–57
[BHC+16] Basham JD, Hall TE, Carter RA, Stahl WM (2016) An operationalised understanding of personalised learning. Journal of Special Education Technology, 31(3), pp. 126–136
[BHS08] Boondao R, Hurst AJ, Sheard JI (2008) Understanding cultural influences: Principles for personalised e-learning systems. World Academy of Science, Engineering and Technology, 48, pp. 1326–1330
[BLC04] Boyle T, Leeder D, Chase H (2004) To boldly GLO - towards the next generation of learning objects. World Conference on eLearning in Corporate, Government, Healthcare and Higher Education, Washington, USA, November 2004
[BMJ+15] Bryan LA, Moore TJ, Johnson CC, Roehrig GH (2015) Integrated STEM education. In Johnson CC, Peters-Burton EE, Moore TJ (Eds.), STEM roadmap: A framework for integration (pp. 23–37). Taylor & Francis, London. https://doi.org/10.4324/9781315753157-3


[BŠK+22] Burbaitė R, Štuikys V, Kubiliūnas R, Ziberkas G (2022) A vision to develop integrated skills for STEM-driven computer science education. In INTED2022 Proceedings (pp. 5082–5091). IATED
[CCS15] Chirila CB, Ciocârlie H, Stoicu-Tivadar L (2015) Generative learning objects instantiated with random numbers based expressions. BRAIN: Broad Research in Artificial Intelligence and Neuroscience, 6(1–2), pp. 70–83
[CDT17] Castaneda L, Dabbagh N, Torres-Kompen R (2017) Personal learning environments: Research-based practices, frameworks and challenges. Journal of New Approaches in Educational Research, 6(1), January 2017, pp. 1–2
[Chi16] Chirila CB (2016) Generative learning object assessment items for a set of computer science disciplines. In Balas V, Jain LC, Kovačević B (eds.), Soft Computing Applications, Advances in Intelligent Systems and Computing, 356, Springer
[Chu07] Churchill D (2007) Towards a useful classification of learning objects. Educational Technology Research and Development, 55(5), pp. 479–497
[CM09] Costello R, Mundy DP (2009) The adaptive intelligent personalised learning environment. Advanced Learning Technologies (ICALT 2009), Ninth IEEE International Conference, July 2009, pp. 606–610
[CS20] Cheng YC, So WWM (2020) Managing STEM learning: a typology and four models of integration. International Journal of Educational Management, 34(6), pp. 1063–1078
[CSSC16] K–12 Computer Science Framework Steering Committee (2016) K–12 Computer Science Framework. Technical Report, ACM
[Cum16] Cummins K (2016) Teaching Digital Technologies & STEM: Computational thinking, coding and robotics in the classroom. Amazon.com, Accessed 22 January 2019
[Dij72] Dijkstra EW (1972) Notes on structured programming. In Dahl OJ, Dijkstra EW, Hoare CAR (eds.), Structured Programming, Academic, London
[DKC+07] Dolog P, Kravcik M, Cristea A, Burgos D, Bra P, Ceri S, ... Melis E (2007) Specification, authoring and prototyping of personalised workplace learning solutions. International Journal of Learning Technology, 3(3), pp. 286–308
[Doc18] Dockterman D (2018) Insights from 200+ years of personalised learning. NPJ Science of Learning, 3(15), pp. 1–6
[DP_PL13] Design Principles for Personalised Learning Environments, The Institute at CESA #1, Cooperative Educational Service Agency #1. http://www.wasb.org/websites/convention/File/2013, Accessed 20 December 2018
[DVS+15] Drachsler H, Verbert K, Santos OC, Manouselis N (2015) Panorama of recommender systems to support learning. In Recommender Systems Handbook, Springer, Boston, MA, pp. 421–451
[Gan15] Gander W (2015) Informatics - new basic subject. Bulletin of EATCS, 2(116)
[GLG21] Guo X-R, Li X, Guo Y-M (2021) Mapping knowledge domain analysis in smart education research. Sustainability, 13, 13234. https://doi.org/10.3390/su132313234
[GM16] Guzdial M, Morrison B (2016) Growing computer science education into a STEM education discipline. Communications of the ACM, 59(11), pp. 31–33
[Gof17] Groff JS (2017) Personalised learning: The state of the field & future directions. Center for Curriculum Redesign, www.curriculumredesign.org, Accessed 23 November 2018
[Jär06] Järvelä S (2006) Personalised learning? New insights into fostering learning capacity. In OECD-CERI (eds.), Personalising Education, OECD/CERI, Paris, pp. 31–46
[KGL21] Kucirkova N, Gerard L, Linn MC (2021) Designing personalised instruction: A research and design framework. British Journal of Educational Technology, 52, 1839–1861. https://doi.org/10.1111/bjet.13119
[KL16] Kiy A, Lucke U (2016) Technical approaches for personal learning environments: Identifying archetypes from a literature review. Advanced Learning Technologies (ICALT 2016), IEEE 16th International Conference, pp. 473–477


[LBP22] De Loof H, Boeve-de Pauw J, Van Petegem P (2022) Engaging students with integrated STEM education: a happy marriage or a failed engagement? International Journal of Science and Mathematics Education, 20:1291–1313. https://doi.org/10.1007/s10763-021-10159-0
[LDG+20] Linnel N, Dayal A, Gonsalves P, Kakodkar M, Ribiero B, Starr A, Urdan T, Zdankus J (2020) Curated Pathways to Innovation: Personalised CS education to promote diversity. Journal of Computing Sciences in Colleges, Papers of the 13th Annual CCSC Southwestern Conference, March 20–21, 2020, California State University
[LOStd07] The Learning Object Metadata Standard, IEEE Learning Technology Standards Committee, 2007 Revision, Accessed 18 January 2019
[LW21] Li KC, Wong BT-M (2021) Review of smart learning: Patterns and trends in research and practice. Australasian Journal of Educational Technology, 37(2)
[MF20] Major L, Francis G (2020) Technology-supported personalised learning: A rapid evidence review. (EdTech Hub Rapid Evidence Review). https://doi.org/10.5281/zenodo.4556925
[MLX+21] Maghsudi S, Lan A, Xu J, Schaar M (2021) Personalised education in the artificial intelligence era: What to expect next. IEEE Signal Processing Magazine, 38(3), pp. 37–50. https://doi.org/10.1109/MSP.2021.3055032
[MS02] Manouselis N, Sampson D (2002) Dynamic knowledge route selection for personalised learning environments using multiple criteria. Applied Informatics Proceedings, No. 1, pp. 448–453
[NETP18] Reimagining the Role of Technology in Education: 2017 National Education Technology Plan Update, U.S. Department of Education, http://tech.ed.gov, Accessed 6 February 2018
[Pal20] Palanivel K (2020) Emerging technologies to smart education. International Journal of Computer Trends and Technology (IJCTT), 68(2), pp. 5–16. https://doi.org/10.14445/22312803/IJCTT-V68I2P102
[PCW08] Peirce N, Conlan O, Wade V (2008) Adaptive educational games: Providing non-invasive personalised learning experiences. Digital Games and Intelligent Toys Based Education, Second IEEE International Conference, pp. 28–35
[PL_C14] A Look to the Future: Personalised Learning in Connecticut. White Paper on Personalised Learning. https://www.capss.org/uploaded/2014_Redesign/Educational_Transformation/CAPSS_Whitepaper_FINAL_12-23-14_copy_2.pdf, Accessed 23 November 2018
[PSB+17] Pane JF, Steiner ED, Baird MD, Hamilton LS, Pane JD (2017) Informing progress: Insights on personalised learning implementation and effects. https://www.rand.org/content/dam/rand/pubs/research_reports/RR2000/RR, Accessed 23 November 2018
[PSK+22] Chaipidech P, Srisawasdi N, Kajornmanee T, Chaipah K (2021) A personalised learning system-supported professional training model for teachers' TPACK development. Computers and Education: Artificial Intelligence, 3, Elsevier
[PV11] Pukkhem N, Vatanawood W (2011) Personalised learning object based on multi-agent model and learners' learning styles. Maejo International Journal of Science and Technology, 5(3), p. 292
[RDE+21] Roehrig GH, Dare EA, Ellis JA, Ring-Whalen E (2021) Beyond the basics: a detailed conceptual framework of integrated STEM. Disciplinary and Interdisciplinary Science Education Research, 3(1), pp. 1–18
[Rob15] Robertson C (2015) Restructuring high school science curriculum: a program evaluation. http://scholarworks.waldenu.edu/dissertations/270/, Accessed 6 February 2019
[RRB+18] Rad P, Roopaei M, Beebe N, Shadaram M, Yoris A (2018) AI thinking for cloud education platform with personalised learning. Proceedings of the 51st Hawaii International Conference on System Sciences


[ŠB18] Štuikys V, Burbaitė R (2018) Smart STEM-Driven Computer Science Education: Theory, Methodology and Robot-based Practices. Springer
[ŠBB+16] Štuikys V, Burbaitė R, Bespalova K, Ziberkas G (2016) Model-driven processes and tools to design robot-based generative learning objects for computer science education. Science of Computer Programming, 129, pp. 48–71
[ŠBB+17] Štuikys V, Burbaitė R, Blažauskas T, Barisas D, Binkis M (2017) Model for introducing STEM into high school computer science education. International Journal of Engineering Education, 33(5), pp. 1684–1698
[ŠBD+19] Štuikys V, Burbaitė R, Drąsutė V, Ziberkas G, Drąsutis S (2019) A framework for introducing personalisation into STEM-driven computer science education. International Journal of Engineering Education, 35(4), pp. 1176–1193
[ŠBK+20] Štuikys V, Burbaitė R, Kubiliūnas R, Valinčius K (2020) Personal generative libraries for smart computer science education. In Smart Education and e-Learning 2020 (pp. 207–219). Springer, Singapore
[ŠD12] Štuikys V, Damaševičius R (2012) Meta-Programming and Model-Driven Meta-Program Development: Principles, Processes and Techniques. Springer
[SF20] Schools of the Future: Defining New Models of Education for the Fourth Industrial Revolution (2020). https://www3.weforum.org/docs/WEF_Schools_of_the_Future_Report_2019.pdf
[SK02] Sampson D, Karagiannidis C (2002) Personalised learning: Educational, technological and standardisation perspective. Interactive Educational Multimedia (IEM), (4), pp. 24–39
[SS13] Santally MI, Senteni A (2013) Effectiveness of personalised learning paths on students learning experiences in an e-learning environment. European Journal of Open, Distance and E-learning, 16(1), pp. 36–52
[SS20] Shemshack A, Spector JM (2020) A systematic literature review of personalised learning terms. Smart Learning Environments, 7:33. https://doi.org/10.1186/s40561-020-00140-9
[Štu15] Štuikys V (2015) Smart Learning Objects for Smart Computer Science Education: Theory, Methodology and Robot-based Implementation. Springer
[SWO+03] Sun L, Williams S, Ousmanou K, Lubega J (2003) Building personalised functions into dynamic content packaging to support individual learners. Proceedings of the 2nd European Conference on e-Learning, Glasgow, pp. 439–448
[VGK+09] Verpoorten D, Glahn C, Kravcik M, Ternier S, Specht M (2009) Personalisation of learning in virtual learning environments. European Conference on Technology Enhanced Learning, Springer, Berlin, Heidelberg
[WD14] Personalised Learning: A Working Definition (October 22, 2014, Education Week). https://www.edweek.org/ew/collections/personalised-learning/, Accessed 5 February 2019
[Win06] Wing JM (2006) Computational thinking. Communications of the ACM, 49(3), pp. 33–35
[WMS08] Wild F, Mödritscher F, Sigurdarson S (2008) Designing for change: mash-up personal learning environments. eLearning Papers, 9
[ZH12] Zhu ZT, He B (2012) Smart Education: new frontier of educational informatization. E-education Research, 12, pp. 1–13
[ZMP+11] Zapata A, Menendez VH, Prieto ME, Romero C (2011) A hybrid recommender method for learning objects. IJCA Proceedings on Design and Evaluation of Digital Content for Education (DEDCE), 1, pp. 1–7
[ZYR16] Zhu ZT, Yu MH, Riezebos P (2016) A research framework of smart education. Smart Learning Environments, 3:4. https://doi.org/10.1186/s40561-016-0026-2

Chapter 4

Personal Generative Libraries for Personalised Learning: A Case Study

4.1 Introduction

Nowadays, in the era of the Internet and omnipresent computing, digital libraries (DLs) play a significant role in multiple fields such as Data Science, e-commerce, health care, and education, to name a few. In education, DLs are either local or distributed sub-systems that support the functionality of smart educational environments [ZYR16]. The trend in the development of DLs is towards transforming their static complex structures into systems with a dynamic federation of functional units [Gro17]. The primary role of DLs is to provide the teaching content along with the management procedures. The teaching content fuels educational processes, resulting in the creation of new content in various formats of data and knowledge.

In the digital age, education encounters multiple challenges in content search and management. Firstly, with the extremely rapid advance of technology, the amount and variety of teaching content tend to increase significantly, leading to the advent of novel approaches in education, such as big data and data-centric learning [HSS19], educational ecosystems [FKM19], etc. On the one hand, that largely extends the role of conventional DLs; on the other hand, it exacerbates their own problems and issues, such as semantic interoperability [DK07], incompleteness of metadata standards [HPD+12], and the quality of the content per se [CO14]. Secondly, nowadays there is a great need for gaining interdisciplinary knowledge as effectively and efficiently as possible, e.g., through adequate facilities for creating, retrieving, and managing the relevant content [SEJ+18]. Teaching and learning based on the STEM paradigm are a well-established way of introducing interdisciplinary knowledge and content in schools, colleges, and universities. STEM, in fact, is an interdisciplinary approach to learning. Its aim is to respond to the challenges the twenty-first century poses for the labour market and education [MW19].
This approach brings rigorous academic concepts coupled with real-world lessons and tasks, as students apply Science, Technology, Engineering, and Mathematics in contexts that make connections among school, community, work, and the global enterprise [Byb13]. Finally, the content search, the

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 V. Štuikys and R. Burbait˙e, Evolution of STEM-Driven Computer Science Education, https://doi.org/10.1007/978-3-031-48235-9_4



quality of learning, and enhancement of individual and organisational effectiveness by connecting people and supporting them with a broad range of content, processes, and technologies to drive learning performance are big issues too, especially in terms of STEM [Byb13]. To overcome the existing challenges, there are extensive research efforts. Among other proposals, personalisation of the content and personalised learning are now in focus [Gro17, Doc18, ABB+16].

Personalised learning (PL) places the learner's needs at the centre of education. That is a long-standing vision in education [PSB+17]. A personal space of content makes adaptation easier and enables the construction of unique learning processes that suit the learner's needs best [CRS15, ROD15]. PL seeks higher motivation and engagement, considering the learner's differentiation and preferences in the continuing education cycle. This vision, it is widely believed, leads to faster and deeper knowledge. The U.S. Department of Education defines PL as "instruction in which the pace of learning and the instructional approach are optimized for the needs of each learner. Learning objectives, instructional approaches, and instructional content (and its sequencing) may all vary based on learner needs. In addition, learning activities are meaningful and relevant to learners, driven by their interests, and often self-initiated" [i3c19].

In the paper [ŠBD+18], we introduced the concept of the personalised content and a framework to support PL in STEM-driven computer science (CS) education (see also Chap. 3). There, we focused on three formats of the personalised content, i.e., component-based learning objects, generative learning objects, and Smart Learning Objects (a pre-specified compound of the first two), and on the learner's knowledge assessment through multiple feedbacks. We identified that as a personalisation problem in PL at the component level.
The aim of this chapter is to extend PL by considering this problem at the sub-system, i.e., library, level. Here, we introduce the concept of the Personal Generative Library (PGL) to support PL using the STEM paradigm, though this concept is independent of any particular learning approach. By PGL, we mean either the teacher's personal DL or the student's personal DL. Both kinds of libraries are generative tools due to the following features. (1) These libraries contain constituents implemented as generators [e.g., Metadata Generator (MDG), Query Generator (QG), Learning Object (LO) List Generator (LG)], using a generative approach such as metaprogramming [ŠD12]. (2) The user, having those tools and a metadata model, can generate his/her library semi-automatically and with little effort. (3) We represent a large body of the personalised content within the library as a generative learning object [ŠBD+18]. In this context, we do not exclude the use of existing digital repositories and libraries. On the contrary, we integrate them into our approach for finding some part of the relevant teaching materials.

The contribution of this chapter is a distributed structure, or architecture, that integrates the Personal Generative Library, i.e., MDG, QG, LO LG, and a Database storing the descriptive part of the content, with the external and individual repositories where the content itself resides.

The structure of this Chap. 4 includes the following sections. Section 4.2 analyses related work. Section 4.3 explains the concept of the Personal Generative Library (PGL) in some detail. Section 4.4 outlines the methodology and theoretical background


used in creating PGLs. Section 4.5 describes the distributed architecture of the PGL, its functionality, and the user's actions in creating it. Section 4.6 presents the integration of PGLs into the personalised learning framework developed previously. Section 4.7 analyses a case study covering a set of real-world tasks (prototypes) used in STEM-driven CS education at the high school, some characteristics of student PGLs, and a survey of student opinions about the content taken from the teacher's PGL. Section 4.8 discusses and evaluates this approach. Section 4.9 concludes with the main results.

4.2 Related Work

This analysis focuses on the design and use of educational Digital Libraries (DLs) in the context of personalised learning (PL) and STEM-driven CS education. Basham et al. [BHC+16] focus on five research areas for advancing PL: (i) how educators and researchers use data; (ii) how technology is designed to support learners and the associated pedagogical practice; (iii) how to educate personnel who are prepared to work in personalised settings; (iv) how content is designed; and (v) how the curriculum is designed to support PL.

With the continuous advance of technology, the volume and complexity of educational content grow extremely rapidly. A large body of that information resides within educational DLs to fulfil users' learning needs. Many conventional DLs aggregate resources taken from different content providers. That may lead to challenges with metadata management and metadata integration, and to serious search problems (Brusilovsky et al. [BCD+10]). One problem, known as semantic interoperability, is that the library users should understand and interpret any information given by the library creators correctly and in the same way [DK07]. The internal structure of DLs, such as the clustering of learning objects (LOs), is not always relevant to the teacher's or learner's profile, knowledge level, and learning style [DVN+12, SM12]. Therefore, the quality of DLs [Che17], and of the LOs within them, is also a big issue [CO14, Och11].

There is now extremely intensive research to overcome those and other problems and difficulties in this field. One research stream focuses on automation issues in designing and supporting the use of DLs [Fox16]. Among other concepts and approaches, the personalisation of educational resources and personalised education is now in focus [Doc18, ABB+16].
This interest stems from the numerous efforts to improve and advance education in the twenty-first century at all levels worldwide, starting from the primary school and ending with the university or even lifelong learning. Personalised learning (PL) places the learner's needs at the centre of education. This is because the personal learning space encourages the reuse of learning materials, makes adaptation easier, and enables the construction of unique learning processes and personalised learning paths that suit the learner's needs best [CRS15]. In fact, personalisation marks the paradigm shift from teacher-centred learning to student-centred learning [ROD15]. This change also poses new problems. Among others, the report [ABB+16] emphasises the tight relationship between personalisation and the STEM paradigm. The STEM-driven education, on the


other hand, brings the interdisciplinary knowledge so needed in the modern age. This knowledge has to ensure that learners are better prepared to enter the modern labour market after graduating from school, college, or university [Byb13]. A key challenge for personalised education is to foster a robust stream of diverse groups of students into STEM and related disciplines. Personalisation can meet the demands of different backgrounds and a variety of learning styles and ensure engagement and retention.

Deng and Ruan [DR07] introduce the concept of a Personal Digital Library (PDL) for e-learning, consider its characteristics and functionality, and briefly outline constructing and managing aspects. According to the authors, the distinguishing characteristics of a PDL in comparison to conventional DLs are pertinence of information collection; substantiality of data storage; convenience of information utilisation; facility of information management; and economy of structuring and maintaining the PDL. Ferran et al. [FMM05] discuss a browsing and searching personalisation system for digital libraries based on ontologies that describe the relationships between all the elements taking part in a digital library scenario of use. The elements that determine the functionalities of the desired personalisation system include the user's profile, navigational history, and user preferences, together with the information collected from the navigational behaviour of the digital library users. A large-scale research project investigates federated online laboratories for education in STEM at school [GDS+13]. Its main educational focus is on inquiry learning and on personalised learning spaces. The learning spaces are offered through a single European social media portal supporting teacher communities and student learning activities simultaneously.
The study [CRS15] explores personal spaces in a big learning object repository as a facilitator for the adoption of Open Educational Resources. According to the authors, the personal space encourages the reuse of learning materials and enables the construction of unique learning processes that suit the learner's needs. The personal space, or personal collection, may offer the possibility of personalising public repositories and promoting more insight into different types of resources and user behaviour regarding those resources. The report [ABB+16] focuses primarily on college-level STEM subjects, including CS, with the understanding that training high school students in these topics is essential to success. This report presents a survey of the emerging trends in personalised education tools and the science of learning; considers problems central to computer-aided personalised education, such as assessment, feedback, and content generation, as computational problems; outlines a collective vision of how technology can transform learning; and concludes with research challenges to achieve this vision.

Brusilovsky et al. [BCD+10] discuss the problem of social navigation in the context of Ensemble, the Computing Portal in the US National Science Digital Library, which provides access to learning materials and resources for education in the STEM disciplines at all ages and education levels. The core component of Ensemble is the Web portal www.computingportal.org. It presents easy access to recognised collections and to tools found useful by computing educators. It is also a central meeting place for communities interested in various aspects of computing education. The three themes of the portal, i.e., collections, communities, and tools,


are interrelated and together represent a social connection between people, and between people and resources. The paper [AP-MP16] discusses various techniques for customising and personalising user access in DLs. That includes user modelling and profiling, tracking the user's behaviour and personalising the user's stay in the library, and enhancing the learning experience to support e-learning on top of the DL. The paper [BAN17] proposes a semantic software ecosystem for metadata enrichment to support multi-platform metadata-driven approaches for integrating distributed content management applications, such as digital libraries. The paper [YAC+16] aims at bridging the interoperability gap between DLs and e-learning applications so that e-learning applications that easily exploit DL contents can be constructed. It provides solutions to support the construction of e-learning applications on top of DLs, enabling them to exploit the content residing in DLs effectively. The paper [GMM17] reviews the trends and challenges encountered in integrating visual search interfaces into DLs and repositories, highlighting that the aspects associated with access to digital resources are based on human–computer interaction principles. The paper [Bra18] aims at assessing the efficacy of the IEEE Xplore DL search engine in returning relevant materials on the information visualisation of the pedagogical literature, and at recommending search strategies to assist the DL academic readership and improve the efficacy of their search tasks. This study also formulates some information retrieval issues: relevancy, incomplete taxonomy, non-standard lexicon, diverse disciplines, and under-representation.
Among others, the recommendations include search strategies, alternative digital collections, a potential opportunity for research in information visualisation pedagogy to address this gap in an emerging field, and the need for more effective interactive tools to assist with keyword selection. Chen and Fox [CF15] apply machine learning methods to add thousands of new, relevant records to Ensemble, an educational digital library for computing education. There already are 20 metadata collections from content providers/contributors, amounting to more than 5000 educational materials, along with archives of the discussions of 58 community groups. Thousands of users and a broad set of CS communities have engaged with Ensemble, which now has many new records and an automated process for adding YouTube videos and SlideShare presentations. That should extend Ensemble's value and level of usage for computing education.

Park and Brenza [PB15] examine semi-automatic metadata generation tools, describe their features, functions, and techniques, and point to challenges and barriers in this field. Most of the semi-automatic generation tools (39 are reviewed) address only part of metadata generation, providing solutions for one or a few metadata elements but not the full range of elements. This indicates the need to integrate the various tools into a coherent set of working tools. Miller et al. [MSS+12] present a framework, called the Intelligent Learning Object Guide (iLOG), to investigate the problem of automating metadata generation for SCORM-compliant LOs. The basis of this study is user interactions with the LO and static information about LOs and users. The framework involves real-time tracking of each user session, offline data mining to identify key attributes or patterns in how the LOs have been used as well as characteristics of the


users, and the selection of these findings as metadata. The implementation of iLOG includes five LOs on introductory CS topics and collected data for over 1400 sessions. The success of DLs and the evolution of the Semantic Web rely on efficient metadata generation. In this regard, Greenberg [Gre03] presents a metadata generation framework as part of a dedicated research project. The framework involves human metadata generation and automatic generation processes along with adequate tools, i.e., automatic indexing tools, metadata generators, and editors.

Educational recommender systems have gained major attention as a means of providing personalised information access in educational DLs. Klašnja-Milićević et al. [KVI11] propose such a system that automatically recommends educational resources based on the learner's learning style, learning characteristics, interests, habits, and knowledge levels. The study [SPI+17] proposes an architecture for developing a Personal Learning Recommendation System that aims at supporting learners, via a Learning Management System (LMS), in finding the relevant materials for enhancing the student learning experience. In addition, this study presents a methodology for building a collaborative dataset via an LMS and educational repositories. This dataset enhances student learning by recommending learning materials based on former students' competence qualifications. Alaofi and Rumantir [AR15] explore the application of implicit personalisation techniques to information retrieval in the context of education. This research investigates the potential of incorporating student enrolment information, that is, published information on the units/subjects they are enrolled in, to identify students' learning needs and produce personalised search results.

In summary, there are extremely intensive research efforts in the field of educational DLs. With regard to personalisation and computing aspects, multiple ideas and approaches have been proposed so far.
They vary from content search improvement and the enhancement of search engines and interfaces to personal libraries, specialised portals for computing educators, and recommendation systems. However, we still know little about how to combine personalised learning and personal DLs into a coherent system, and how to build such a system with extended capabilities for design and use automation. This analysis is by no means exhaustive; however, it gives evidence of the role and importance of PDLs and motivates well the approach we consider in this chapter.

4.3 The Concept of the Personal Generative Library

Educational digital libraries (DLs) and repositories serve as tools for providing the educational content. The user, i.e., teacher or learner, can find some part of the needed content within existing conventional repositories, especially content of common use, such as instructive or motivating material. For specific courses, however, there might be specific requirements for DLs in terms of their functionality and the content itself, though the structure should not be much different. In STEM-driven CS personalised learning, we need to design and prepare a large portion of the personalised content anew and in advance, for the following reasons. (1) It is difficult (in terms of the search efforts needed [GMM17]) or even impossible to retrieve this content from


the existing resources, especially in the case of specific courses. (2) The quality and the level of personalisation of the found content are not always sufficient. (3) We use a specific format for representing the content, i.e., generic/generative content, aiming to enable automation and a more effective use of the content. Finally, in personalised learning, we need to ensure ease and flexibility in creating and using the content management tools by the users themselves. In other words, in this context, we need to have not only a personalised content but also tools that support the managing procedures for handling the personalised content as effectively and efficiently as possible.

Therefore, we introduce the concept of the Personal Generative Library (PGL) here. This library is personal in the following sense. The teacher has ownership of his/her PGL. A student may have ownership of his/her PGL too. The content within the teacher's PGL is highly personalised, as outlined in [ŠBD+18]. The library is generative because of its two distinguishing attributes: (a) the structure of the PGL contains constituents implemented as generators (e.g., MDG, QG, LO LG), using the generative approach; (b) a large body of the personalised content is represented as a generative learning object. Therefore, the content types for PL include Component-Based (CB) Learning Objects (LOs), Generative Learning Objects (GLOs), and Smart Learning Objects (SLOs) [ŠBD+18]. Roughly, one should understand those entities here as follows. A CB LO is an instance of a content piece, e.g., a quiz, a film for enhancing motivation, an instructional text, etc. Typically, it is a fine-grained component retrieved from existing repositories. A GLO represents a set of related instances woven together in the same specification using generative techniques [ŠB18]. Usually, the instances within the GLO structure represent either fine-grained or middle-grained components.
An SLO is a pre-specified suite consisting of CB LOs and GLOs. Typically, an SLO covers middle-grained LOs. This suite depends on the learning objectives and the task the learner needs to solve during personalised learning (PL). The aim of the SLO is to enlarge the learning variability space to support PL through a variety of possible PL paths with multiple feedbacks. By learning variability space, we mean variants of the content represented in the SLO with multiple pre-designed attributes of pedagogical, social, technological, and content aspects. Therefore, considering that we integrate the learner's model through social aspects into the GLO as a constituent of the SLO, the content is already personalised. For effective PL, this is very important, but not enough. We are able to enforce the personalised process by measuring and assessing how deeply the learner has studied and understood the personalised content [BDŠ18].

The specification of this content has two parts, a descriptive part and the content itself. We separate the descriptive part from the content explicitly, using the design principle known as separation of concepts [ŠD12]. Those parts are stored in different places. The descriptive part resides within the PGL and forms the so-called Database (DB) using the following tools (MDG, QG, and LO LG). The content items are within the external repositories. Those are of two kinds: public (i.e., already existing, typically open source) and personally created to support personalised learning. The descriptive part contains the metadata of a given content item and a link to the repositories where the content itself resides. Typically, the content of CB LOs for personalised


learning resides within public repositories, though the user may transfer this content to his/her own repository, making the needed corrections in the metadata of his/her PGL. Typically, the teacher's PGL and his/her repository contain all types of LOs, i.e., CB LOs, GLOs, and SLOs, pre-designed or searched for in advance to cover the topics of the curriculum of a given course. Therefore, when we use the term PGL, we mean both parts, i.e., the descriptive part of the PGL along with the adequate tools within it, plus the personal/individual repository of learning resources.

Regarding the student's PGL, there are two possibilities. The first possibility is to make clones of the teacher's PGL and, if needed, of his/her personal repository. At this point, the following question may arise: Why is the student's PGL needed at all? There are multiple reasons. The concept of the student's PGL has far more importance than the immediate learning process of the given course. First, during learning, a student may want to create his/her own content, quite different from the one given by the teacher. Second, a student may accumulate the descriptions of additional resources derived from external repositories. Third, when learning requires refining previous knowledge, it is much easier to find the needed resources in one's own library. Finally, the student's PGL may stand for an individual portfolio for lifelong learning. The second possibility is that a student creates his/her PGL and personal repository from scratch, of course with the teacher's help, using the metadata model and adequate tools. Therefore, we have a system of PGLs. It opens a way for considering various scenarios for personalised learning. We outline them in the implementation part. Now, we focus on the methodology and background used.
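To make the separation between the PGL's descriptive part and the externally stored content concrete, here is a minimal sketch in Python. All names, titles, and URLs are hypothetical, invented for illustration only; the actual PGL stores its descriptive part via generator tools rather than dataclasses.

```python
from dataclasses import dataclass
from enum import Enum

class LOType(Enum):
    CB_LO = "component-based"   # instance of a content piece (quiz, film, text)
    GLO = "generative"          # related instances woven into one specification
    SLO = "smart"               # pre-specified suite of CB LOs and GLOs

@dataclass
class LORecord:
    """Descriptive part of one learning object, stored inside the PGL Database.

    The content itself stays in an external repository (public or individual);
    the record carries only metadata plus a link to where the content resides.
    """
    title: str
    lo_type: LOType
    topic: str
    repository_url: str   # link to the external repository holding the content
    repository_kind: str  # "public" or "individual"

# Example: a CB LO whose content lives in a public repository.
quiz = LORecord(
    title="Sorting algorithms quiz",
    lo_type=LOType.CB_LO,
    topic="Algorithms",
    repository_url="https://example.org/repo/quiz-17",
    repository_kind="public",
)
print(quiz.lo_type.value)  # component-based
```

Cloning a teacher's PGL for a student then amounts to copying such records, after which the student may redirect `repository_url` to his/her individual repository and correct the metadata accordingly.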

4.4 Methodology and Background

At the conceptual level, our methodology includes meta-analysis and conceptual modelling of those aspects of the educational domain that we consider relevant to the research aims and tasks. We carry out those activities with learning variability in mind [ŠB18]. The personalisation attributes we deal with here add extra learning variability features. Those activities result in creating conceptual models. At the implementation level, i.e., in creating PGL components and GLOs, in developing personalised learning processes, etc., we apply principles and model-driven approaches borrowed from software engineering. They include (i) separation of concepts, (ii) feature-based modelling at a higher level of abstraction, and (iii) the synergistic use of component-based and generative reuse approaches. The first simplifies the design process and brings flexibility to the use of the designed product. The second ensures correctness in applying the model-driven design approaches. The third enables an effective implementation, more effective use, and maintenance. Using those principles, we suggest the distributed architecture of the teacher's and students' PGLs and present personalised learning processes integrated along with those libraries. We identify that as a sub-system level framework for personalised learning. In this section, we present the theoretical background by giving semi-formal definitions of the basic terms and providing their relationships through key properties.


One can find some definitions related to the component-level vision of our approach in [ŠBD+18]. Here, for consistency purposes, we have added the system-level definitions (we mean the concepts related to PGLs and the extended personalised processes).

Definition 1 Personalised learning (PL) is the approach that places the learner's needs at the centre of learning, uses personalised learning content, and exploits learning activities that enforce personalisation (e.g., self-guidance, personalised learning paths, self-assessment, multiple feedbacks, etc.) as much as possible. Personalised STEM-driven CS education is the learner-centred approach covering learning processes and personalised resources (content and its supporting tools) designed for this paradigm (adapted from [Gro17] and [ŠB18]).

Definition 2 Personalised learning content is content that is relevant to the learner's learning objectives and preferences and is presented so that it is possible to choose among adequate content variants in forming a variety of personalised learning paths.

Property 1 The personalised learning content consists of a pre-defined set of learning objects at the beginning of learning. The learner may extend this set by retrieving learning objects from the repositories or creating new ones in the course of the process. This set may contain learning objects of different types, i.e., Component-Based LOs (shortly CB LOs), Generative LOs (shortly GLOs), or Smart LOs (shortly SLOs), i.e., composites of the aforementioned LO types. We omit the definitions of those LOs because they can be found in Sect. 3.5 (see Chap. 3). Note that in our approach, there are specific learning objects for STEM education as physical entities consisting of two highly interrelated parts: hardware (mechanical parts, actuators, sensors of the robot) and software (robot control programs).
A GLO, in fact, is a generalised set of control programs for those physical entities.

Property 2 A GLO is a domain-specific heterogeneous metaprogram; in other words, the latter, along with the meta-language processor, is a learning content generator working on the user's demand. The user (teacher or learner) operates with the meta-interface, seeing parameters and their values (which represent the variability space for a choice). The meta-body hides implementation details from the user entirely; only the system operates with it (taken from [Štu15]).

Definition 3 A heterogeneous metaprogram is a specification with two parts (meta-interface and meta-body) implemented using at least two languages, i.e., a meta-language and a target language (or languages). The latter stands for delivering the base functionality through a domain (target) program, for example, a robot control program in our case. The former stands for expressing the generalisation, i.e., the learning variability aspects (taken from [ŠD12, Štu15]).

Definition 4 The meta-interface is the pre-designed set of parameters and their values represented for the user so that it is possible to make interactive choices easily.
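As a rough illustration of Definitions 3 and 4, the following sketch uses Python as the meta-language and a C-like robot control language as the target language. The parameter set and the target-language primitives (`motors_set`, `wait_ms`) are hypothetical, not taken from the book's actual GLO specifications; a real heterogeneous metaprogram would be processed by a dedicated meta-language processor.

```python
# Meta-interface: pre-designed parameters and their admissible values;
# the user makes interactive choices only here (Definition 4).
META_INTERFACE = {
    "direction": ["FORWARD", "BACKWARD"],
    "speed": [30, 50, 100],            # percent of maximum motor power
    "duration_ms": [500, 1000, 2000],
}

def generate_control_program(direction: str, speed: int, duration_ms: int) -> str:
    """Meta-body: hidden from the user; emits one target-language (C-like)
    robot control program per choice of parameter values."""
    for name, value in [("direction", direction), ("speed", speed),
                        ("duration_ms", duration_ms)]:
        if value not in META_INTERFACE[name]:
            raise ValueError(f"{value!r} is not an admissible value of {name}")
    return (
        "void run(void) {\n"
        f"    motors_set({direction}, {speed});\n"
        f"    wait_ms({duration_ms});\n"
        "    motors_stop();\n"
        "}\n"
    )

print(generate_control_program("FORWARD", 50, 1000))
```

Each admissible combination of parameter values yields one concrete target program, so the single specification generalises the whole variability space of related control programs.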


4 Personal Generative Libraries for Personalised Learning: A Case Study

Property 6 There are metaprograms having (a) an integrated meta-interface, whose parameters are stored along with the meta-body in the same environment (library); (b) a distributed meta-interface, whose parameters are distributed among different environments (libraries); and (c) a generated meta-interface, whose parameter values are generated by another, external program (generator).

Definition 5 A Personal Generative Library (PGL) is a tool with a distributed structure that provides content-managing support, i.e., metadata generation, query generation, LO list generation, and the formation of the Database (DB). The latter contains the descriptive part only and supplies links to the personalised learning content stored in the external repositories (public or individual).

Property 7 The conceptual structural model (see Fig. 4.1) predefines the structure, functionality, and state of the PGL.
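The structural model of Fig. 4.1 can be approximated by the following minimal Python sketch, in which the PGL composes three generators and a DB holding only descriptive records. The class design and function names are hypothetical illustrations, not the book's implementation.

```python
# Hypothetical sketch of the PGL structure (Definition 5, Property 7):
# three generators plus a DB that stores only the descriptive part;
# the content itself stays in external repositories (here, bare links).

def metadata_gen(link, **features):
    """Produces a descriptive record for content stored at `link`."""
    return {"link": link, **features}

def query_gen(db, **filters):
    """Selects the records matching the user's query."""
    return [r for r in db if all(r.get(k) == v for k, v in filters.items())]

def lo_list_gen(records):
    """Forms the LO list: links into the external repositories."""
    return [r["link"] for r in records]

class PersonalGenerativeLibrary:
    def __init__(self):
        self.db = []                          # descriptive part only

    def fill(self, link, **features):         # creation state
        self.db.append(metadata_gen(link, **features))

    def use(self, **filters):                 # use state
        return lo_list_gen(query_gen(self.db, **filters))

pgl = PersonalGenerativeLibrary()
pgl.fill("repo/traffic-light", lo_type="GLO", level="Beginner")
pgl.fill("repo/fire-alarm", lo_type="SLO", level="Intermediate")
```

Calling `pgl.use(level="Beginner")` then returns `["repo/traffic-light"]`, a list of links into the repositories rather than the content itself, mirroring the descriptive-only role of the DB.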

[Figure 4.1 appears here. Its left part shows the content & PGL design phase: the designer's actions in creating the content (SLOs/GLOs/CB LOs), the Assessment Model, the Metadata Model, and the PGL constituents (Metadata Generator, Query Generator, LOs List Generator). Its right part shows the PGL usage phase: the user's actions in forming the LO list, with links to LOs and assessment tasks in external repositories. Numbered solid arrows mark the PGL filling-in phase, and primed broken arrows (1'-4') mark the PGL using phase.]

Fig. 4.1 Structural model of PGL along with external repositories

4.4 Methodology and Background


Property 8 Four constituents form the internal PGL structure, i.e., the Metadata Generator (MDG), Query Generator (QG), LO List Generator (LO LG), and Database (DB). The DB contains the content's descriptive part only, while the content itself resides in the external repositories, either public or individual. Therefore, we treat the PGL as a distributed system (sub-system).

Property 9 The PGL has two basic states, i.e., the creation state (see the sequence of solid lines in Fig. 4.1) and the use state (see the sequence of broken lines in Fig. 4.1). We present more details later.

Property 10 In personalised learning, there are two kinds of PGLs for a given course, i.e., the teacher's PGL and the student's PGL. Each student may have his/her own PGL. Both kinds of libraries have the same structure; however, their content may differ significantly, especially after some time of use, though the same entities can appear in both.

Definition 6 The metadata model is the model that formally (or semi-formally) specifies the learning resources, i.e., the content along with related technological and social information, e.g., using a feature-based notation (see Fig. 4.2).

Definition 7 A feature model is a specification representing a domain (system, component) at a higher level of abstraction by a feature diagram (or diagrams) using the following attributes: (a) features, typically represented by boxes; (b) a feature hierarchy representing parent-child relationships; (c) constraints of possible types (requires and excludes, if any) among the features [ŠD12].

Property 11 An abstract feature model is one in which some lowest-level features, i.e., terminal vertices or leaves, have not yet been specified by concrete values (see Fig. 4.2). If all terminal features have concrete values, the model is a concrete feature model. The latter should be derived from its abstract model.
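Definition 7 and Property 11 can be illustrated by a minimal sketch, assuming a feature is modelled as a tree node and concretisation means attaching variant values to a leaf. The class design and the feature names are hypothetical.

```python
# Hypothetical feature-model sketch (Definition 7, Properties 11-12).
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    children: list = field(default_factory=list)   # feature hierarchy
    values: list = field(default_factory=list)     # variants, if concretised

    def is_abstract(self) -> bool:
        """Abstract if any terminal (leaf) feature still lacks concrete values."""
        if not self.children:                      # a leaf
            return not self.values
        return any(c.is_abstract() for c in self.children)

# Abstract model: the leaf 'Learning pace' has no values yet (Property 11).
pace = Feature("Learning pace")
profile = Feature("Learner's profile", children=[pace])

# Concretisation (Property 12): the abstract leaf becomes a variation point
# with adequate variants (values).
pace.values = ["Slow", "Medium", "Fast"]
```

After the last assignment the model is concrete: every terminal feature now carries its set of possible values, ready to be mapped onto metaprogram parameters.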
Property 12 A concrete feature model is derived from its abstract feature model (or some sub-model) by adding to all terminal abstract features their possible values, i.e., by transforming each abstract feature into a variation point with adequate variants (values).

Property 13 A concrete feature model describes the metaprogram at a higher level of abstraction in the following way: variation points represent the parameters of the metaprogram, and the variants of a given variation point represent the values of this parameter.

Definition 8 The Metadata Generator (MDG) is a heterogeneous metaprogram that (1) accepts as inputs parameter values taken from the metadata concrete feature model, (2) accepts from the user (learner or teacher) the link to the content in a repository (public or individual), and (3) produces the descriptive part of this content and stores it into the DB automatically.
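A minimal sketch of the MDG's behaviour (Definition 8) follows, assuming an SQLite database plays the role of the DB; the table layout, column names, and the example link are illustrative assumptions.

```python
# Hypothetical MDG sketch (Definition 8): feature values + repository link
# in, descriptive DB record out. Only metadata is stored; the content
# itself remains in the repository behind the link.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE lo_metadata
                (title TEXT, lo_type TEXT, knowledge_level TEXT, link TEXT)""")

def metadata_generator(title, lo_type, knowledge_level, link):
    """(1) accepts concrete feature values, (2) accepts the content link,
    (3) stores the descriptive part into the DB."""
    conn.execute("INSERT INTO lo_metadata VALUES (?, ?, ?, ?)",
                 (title, lo_type, knowledge_level, link))

metadata_generator("Obstacle detection", "GLO", "Beginner",
                   "https://example.org/repo/obstacle-glo")
```

The record inserted here is exactly the kind of descriptive entry the Query Generator later searches; the repository content behind the link is never copied into the DB.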

[Figure 4.2 appears here. It depicts the metadata abstract feature model as a feature diagram. The root Metadata feature branches into Curricular (Topics T1...Tn, Objectives O1...On), Educational (e.g., Learning context with an XOR group: Basic reference, Quick introduction, Assignment, Seminar, Project; Learning intention; Learning state: To be studied, Studied, To be revised; Comprehension level: Not understood, Little, Well, Completely; Learning pace: Slow, Medium, Fast), Technological (LO type: Expositive, Active, Mixed; Interactivity type; Interactivity level), and Learner's profile (Age, Gender, Preferences: Questionnaire, Research survey; Learning style: Visual, Auditory, Kinesthetic; Learning preference: Picture, Text, Video/audio; Knowledge level: Beginner, Intermediate, Expert; Expertise level: Low, Medium, High) features. Mandatory features, XOR groups, and requires constraints are marked in the diagram.]

Fig. 4.2 Metadata abstract feature model (adapted from IEEE LOM Standard [IEEE19] and learner's preferences context model [DBC+10])


Definition 9 The Query Generator (QG) is a heterogeneous metaprogram that (1) forms the user interface for designing the query using parameters derived from the metadata model; (2) creates a query in SQL and sends it to the DB; (3) returns the description of the adequate content to form the LO list as a space for creating personalised content.

Definition 10 The LO List Generator (LO LG) is a specific kind of metaprogram without a meta-interface. Its meta-body performs the function of representing query results for the user as a list of LOs with links to the repositories, public or individual.

Definition 11 A personalised lesson plan is a learning scenario containing the following attributes: task/sub-tasks, learning objectives, learning model, and the needed personalised resources, i.e., the LO list.

Definition 12 The LO list is a set of independent objects formed from all types of pre-designed entities, i.e., CB LOs, GLOs, and/or SLOs [ŠBD+18], that corresponds to the requirements of the given task and is derived from the repositories using the QG and LO LG.

Property 14 The user forms the LO list dynamically through personalised queries to support PL.

Property 15 The independent objects of the LO list ensure the variability of learning paths.

Property 16 The LO list differs from an SLO by the following attributes. (1) LOs in a given SLO are interrelated through parameter values, while in the list, LOs are independent entities. (2) An SLO is for solving a sub-task, while the LO list is for solving a task, i.e., they have different granularity levels. (3) An SLO is a pre-designed entity, while the LO list is created at run time.

Property 17 To design the constituents of the PGL (MDG, QG, LO LG) and the GLOs as entities of individual repositories, our approach uses a unified methodology and the same theoretical background, i.e., feature-based modelling to define personalisation-driven learning variability models explicitly and metaprogramming techniques to implement those models.
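Definitions 9 and 10 can be sketched as follows, again assuming an SQLite DB with an illustrative `lo_metadata` table; function and column names are hypothetical. Note how the LO List Generator receives its parameter values from the generated query rather than from a user-facing meta-interface (cf. the generated meta-interface of Property 6).

```python
# Hypothetical QG / LO LG sketch (Definitions 9 and 10).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lo_metadata (title TEXT, lo_type TEXT, level TEXT, link TEXT)")
conn.executemany("INSERT INTO lo_metadata VALUES (?, ?, ?, ?)", [
    ("Obstacle detection", "GLO", "Beginner", "repo/obstacle"),
    ("Fire alarm system", "SLO", "Intermediate", "repo/fire-alarm"),
])

def query_generator(filters):
    """QG: forms an SQL query from parameters derived from the metadata model."""
    where = " AND ".join(f"{column} = ?" for column in filters)
    return (f"SELECT title, link FROM lo_metadata WHERE {where}",
            tuple(filters.values()))

def lo_list_generator(filters):
    """LO LG: represents the query results as a list of LOs with links;
    its inputs come from QG, not from a user meta-interface."""
    sql, args = query_generator(filters)
    return [f"{title} -> {link}" for title, link in conn.execute(sql, args)]

lo_list = lo_list_generator({"lo_type": "GLO", "level": "Beginner"})
```

Here `lo_list` becomes `["Obstacle detection -> repo/obstacle"]`: a one-item LO list whose entries are links into the repositories, from which the learner then extracts the actual content.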
Property 18 The way to understand personalised STEM-driven CS education and learning includes the following activities: (i) analysis of the presented models, (ii) analysis of structural and functional aspects of the distributed system integrating PGLs/repositories, and (iii) analysis of the process-based personalised content management scheme presented in the implementation part.


4.5 Structure and Functionality of PGL

In Fig. 4.1, we present the structural model of the PGL, containing two basic parts. The first part specifies the designer's actions, while the second specifies the user's actions. The CS teacher may act in two roles, i.e., as designer and as user. The designer's responsibility is to create the content, the assessment modules for each kind of content (CB LO, GLO, and SLO), and the constituents of the PGL. The constituents include the Metadata Generator, Query Generator, LO List Generator, and Database (DB) for storing the descriptive part of the content. For definitions of those entities, see Sect. 4.4.

As we apply the model-driven design methodology (see Sect. 4.4), the adequate models are developed first. The metadata model plays the central role in digital library design. In our case, we use the feature-based metadata model (see Fig. 4.2). It is an abstract model created on the basis of the IEEE LOM standard [IEEE19] and enriched with the learner's profile attributes taken from [DBC+10]. Next, the designer needs to concretise this abstract model by specifying concrete values for each abstract terminal node. Finally, having a concrete feature model, the designer can develop the metaprograms that implement the adequate generators. We omit details on the assessment model here because they can be found in [BDŠ18].

The teacher's responsibility is to fill in the DB with descriptive data of the content items. We outline those actions by arrows 1-2-3 (see Fig. 4.1). The user, i.e., teacher or student, starts working with the PGL when the content items are already within the individual repository and the DB already contains the descriptive data of those items. We outline the user's actions by arrows 1'-2'-3'-4' to form a list of LOs. With those resources, it is possible to start personalised learning activities. We describe that later.
Now, we extend the learner's possibilities to contribute to his/her personalised learning at the library level. We argue that a student can act not only as a user of the teacher's PGL but also as a designer of his/her own PGL.

Structure and functionality of PGL constituents. The reader should return to Fig. 4.1 to see the PGL constituents. We represent them by generators. There are three generators, i.e., the Metadata Generator, Query Generator, and LO List Generator (see Definitions 8, 9, and 10 in the background part). We have implemented all generators as metaprograms (see Property 17). Typically, a heterogeneous metaprogram requires parameter values supplied by the user through the meta-interface. However, there are cases where those values are generated by other generators automatically. For example, the Query Generator generates parameter values for the metaprogram that implements the LO List Generator (see Fig. 4.1 and Definition 10). This eliminates the need for a user-facing meta-interface, because the generator that produces the parameter values stands in for it.

4.6 Integration of PGLs into the Framework of Personalised Learning


In Fig. 4.3, we outline a process-based extended framework of personalised learning. We have developed this framework by integrating the teacher's and students' PGLs into the framework proposed in [ŠBD+18]. Therefore, the extended framework includes two kinds of processes, i.e., (a) gaining the needed resources from PGLs/repositories and then (b) using those resources for personalised learning activities. In this chapter, we focus largely on the resource-gaining process because the personalised learning activities were described extensively in [ŠBD+18].

The resource gaining starts with creating the personalised lesson plan (scenario) to solve the given task (see Definition 11 in Sect. 4.4). Among other attributes, this lesson plan includes the list of LOs. Typically, for each lesson, the teacher creates this list automatically using two tools residing within the PGL, i.e., the Query Generator and the LOs' List Generator. This list is the resource part of the teacher's plan to provide personalised teaching and learning. Some students, typically beginners or intermediates, use the resource list residing within the teacher's plan to start forming their own personalised learning paths. However, students who have already achieved advanced-learner status during the course are able either to create their own personalised lesson plan with the list of LOs or, at least, to modify the one given by the teacher. This plan, whoever develops it, includes the list of LOs (in fact, links to the teacher's PGL/repository). Advanced students may also use their own PGLs during the

Fig. 4.3 Content management scheme to support personalised learning using PGLs (adapted from [ŠBD+18])


classroom time; typically, all students use the resources taken from the teacher's PGL. The student's PGL serves for storing the personalised content created or retrieved by the student himself/herself. Of course, a student may clone some content from the teacher's repository and send this material to his/her own library/repository. Therefore, the PGLs/repositories stand for providing learning resources of any given type (CB LO, GLO, and SLO) to support personalised learning.

Now, we describe the teacher's and students' actions in more detail. In Fig. 4.3, we present learning activities and processes by shadowed rectangles and double-line arrows with the text inscription inside. The white rectangles (without shadow) represent the needed resources obtained from the PGLs/repositories. We specify the teacher's and student's actions through adequate paths indicated by numbers (note that numbers specify either actions or the outcome of the adequate action). In our scheme, we define activities by the numbers as follows:

• 1—task formulation.
• 2, 2', and 2''—the queries to the PGLs, adequately: teacher (T) to teacher's PGL (TPGL), student (S) to TPGL, student to student's PGL (SPGL).
• 3, 3', and 3''—the list of LOs derived from a PGL, adequately: by T from TPGL, by S from TPGL, by S from SPGL.
• 4 and 4'—the lesson plan created by T and S, adequately.
• 5 and 5'—the use of the task solution plan created by T and S, adequately.
• 6, 6', and 6''—the process of resource retrieving, adequately: using the T scenario from TPGL, using the S scenario from TPGL, and using the S scenario from SPGL.

The teacher's activities include the sequence 1 → 2 → 3 → 4 (see Fig. 4.3). That is, the teacher applies this scheme in formal learning during classroom lessons. The teacher develops the lesson plan/scenario for all students for solving a given task. The actions for that are the query to the teacher's PGL (TPGL) to obtain the list of LOs for forming the scenario.
The student’s activities are more diverse. A student has three possibilities: (1) To use the teacher’s lesson plan and TPGL entirely. The student’s path includes 1 → 5 → 4 → 6 in this case. Note that at the point 6, student has the links to all needed resources and he/she can extract the content (i.e., CB LO, GLO, or SLO) from the adequate repositories as outlined that in Fig. 4.1 according to the given scenario. (2) To create his/her own scenario using resources from TPGL or/and SPGL. There are three possibilities: (1) student uses TPGL resources only, and the activities include the sequence 1 → 2' → 3' → 4' → 5' → 4' → 6' (see Fig. 4.3); (2) student uses SPGL resources only and the scenario includes the sequence of activities such as: 1 → 2'' → 3'' → 4' → 5' → 4' → 6'' (see Fig. 4.3); (3) student uses resources from TPGL and SPGL. In this case, the learning activities are arranged in this order: 1 → 2' and 2'' → 3' and 3'' → 4' → 5' → 4' → 6' and 6'' .


(3) To use the teacher's plan as a template and modify it by selecting other resources from the TPGL and/or SPGL (the full paths are not marked in Fig. 4.3).

The choice/creation/modification of the learning scenario depends on the student's previous knowledge, preferences, abilities, motivation, etc. We treat all these activities (task analysis, creation of the scenario with resources) as the initial phase of personalisation. Therefore, this gives the possibility to form personalised learning paths dynamically. Next, when the scenario/resources are identified, the personalised learning activities start. They include the resource analysis, i.e., working with the LOs from the list created at the initial phase. This list may contain entities of any type (CB LO, GLO, SLO). Then, the task-solving procedures defined by the scenario start. In addition, for a solitary GLO or a GLO within an SLO, a generation process follows the retrieval. During the generation process (not shown in Fig. 4.3), the user selects parameter values related to his/her profile to produce a personalised instance for further examination. The learning activities using the obtained resources, including multiple knowledge assessments with multiple feedbacks, follow (see Fig. 4.3 and [ŠBD+18]). Note that during the learning activities, the learner can retrieve/create new content, e.g., through modifications and changes or even from scratch, and store it in his/her PGL/repository for future learning. In this case, the student becomes a new content and knowledge creator.

4.7 Case Study and Results

This section includes (i) the description of STEM-oriented real-world tasks (prototypes) integrated into the course "Programming Basics"; (ii) the characteristics of student PGLs; and (iii) results of how students evaluate the personalised content taken from the teacher's PGL.

In Table 4.1, we present three representative real-world tasks (prototypes), explaining what type of STEM-driven knowledge students can gain through solving them. The presented tasks include (1) obstacle detection by a robot, (2) traffic light, and (3) fire alarm system. The full list includes 15 items (robot sensors' testing, robot calibration, baseball batter, wall detection, help system, light metre, line following, ornament drawing, light ruler, smart thermometer, alarm clock, recognition of material property (magnetic or non-magnetic), plus the items in Table 4.1). Note that this list is regularly renewed, and each task may contain a few sub-tasks to form an adequate learning object. For each task, we also indicate what CS topics it covers and the STEM knowledge in Science (in fact, in physics and CS), Technology, Engineering, and Mathematics.

Table 4.1 Real-world STEM-oriented tasks integrated into the course "Programming Basics"

Obstacle detection
  CS topic: Conditional and loop-based algorithms.
  Science: Physics (different forms of movement, motors and ultrasonic sensors, physical properties); CS (ultrasonic sensor's and motors' movement programming principles).
  Technology: LEGO, Arduino, MakeBlock robot platforms.
  Engineering: Designing and constructing a robot vehicle with an ultrasonic sensor.
  Mathematics: Calculating optimal values of robot speed depending on the characteristics of the ultrasonic sensor.

Traffic light
  CS topic: Conditional and loop-based algorithms.
  Science: Physics (LED working principle); CS (different traffic light control algorithms).
  Technology: Different robotic platforms with LEDs.
  Engineering: Designing and constructing a traffic light.
  Mathematics: Traffic light working accuracy measurement.

Fire alarm system
  CS topic: Conditional and loop-based algorithms.
  Science: Physics (flame sensor working principles and properties, buzzer working principles and properties); CS (flame sensor's programming principles).
  Technology: Arduino platform.
  Engineering: Designing and constructing a fire alarm system.
  Mathematics: Fire alarm system accuracy calculation, interdependencies of variables.

In Fig. 4.4, we outline the distribution of PGL items created by students in the course of personalised learning. Data were retrieved from the PGLs of 63 students of the 10th grade (26 girls, 37 boys). Content items were created during one school year (30 academic hours in the classroom) for the course "Programming Basics". Students stored the solutions of programming tasks (computer programs; robot control programs) with comments in their PGLs. On average, during one academic hour, a student was able to create 3-6 library items. Students used items from their PGLs as previous knowledge in studying new topics and as support material for assessment tasks or project-based activities. The distribution trend is practically the same for girls and boys. These data also give insight into the learning pace. We do not examine the similarity of items from different libraries; we suppose that knowing this would make it possible to extract more information on how learners learn.

In Fig. 4.5 and Table 4.2, we provide a survey in which students evaluate the personalised content. The respondents were 37 students (13 girls and 24 boys), most of them beginners in "Programming Basics", with the dominant knowledge assessment mark of 9 in the ten-point evaluation system. The survey consists of ten questions related to the use of LOs and LO sequences taken from the teacher's PGL.

Would you recommend this approach to other students? About 92% of the boys and 54% of the girls recommend this approach to others; however, 8% of the boys and 46% of the girls answered "I don't know".


Fig. 4.4 Distribution of learning object items in students' PGLs created by students themselves

[Figure 4.5 appears here. Its panels summarise the survey answers, split by All/Male/Female: "Your level of experience" (Beginner A1/A2, Intermediate B1/B2); "To which extent did the topic fit your needs and constraints?" (Completely fit, Fit, No opinion); "Your knowledge assessment value in the module 'Programming basics'" (marks 7-10); and "Do you consider this approach useful?" (Very useful, Useful, No opinion).]

Fig. 4.5 Survey of the quality of the teacher's PGL

Table 4.2 Content quality-related questions (possible answers according to the Likert scale: strongly agree, agree, neither agree nor disagree, disagree*, strongly disagree**). Values are percentages of respondents, given as All/Male/Female.

                                                    Strongly agree   Agree      Neither agree
Question                                            All/M/F          All/M/F    nor disagree All/M/F

The sequence of LOs is consistent with
the objectives of the topic                         32/38/23         68/58/77   3/4/0
The size (number of LOs) of the topic
is appropriate                                      32/38/23         68/58/77   3/4/0
The duration of getting the topic
is appropriate                                      30/38/15         59/54/69   11/8/15
The content is adapted to the
student's profile                                   32/38/23         65/58/77   3/4/0

Note: * and ** were not selected by any respondent

Would you suggest some changes in the topic structure? Respondents of the intermediate level suggested extending the resource search possibilities by adding keyword-based search.

Those results were obtained after only one year of learning practice using this approach. Therefore, it is difficult to reason about the impact of the approach on learning outcomes. However, the teacher's current observation is that students use the learning time more efficiently and effectively.

4.8 Discussion and Evaluation

The approach based on using PGLs that we have discussed in this chapter is a specialised solution for digital libraries in education. Specialisation has many advantages in terms of flexibility, simplicity, and efficiency. We have combined specialisation with personalisation in the context of STEM-driven CS education. How does the use of the PGL concept contribute to personalised learning (PL)? Firstly, it is much easier and faster to find the needed material that is indeed personalised, especially for beginners. Secondly, it is possible to organise and manage multiple feedbacks and to make changes in selecting resources on the fly to form learning paths, the main attributes of PL [ŠBD+18], more efficiently. Next, the ownership of the content and the tools


to manage the content makes it possible to systematise information handling and contributes to a better organisation of learning activities by students. In addition, a learner works with a personalised metadata model, which leads to more efficient learning. Finally, an important result of this chapter is the possibility (1) to anticipate the personalisation level of the content in advance through the teacher's pre-searched and pre-designed content and (2) to adapt the content personalisation level dynamically by the learner himself/herself during the process of forming a list of LOs from PGLs.

The contribution of the PGL to STEM-driven CS education is the domain-specific interdisciplinary content represented in different forms, such as tutorials, quizzes, and guides to obtain the solutions of real-world tasks (prototypes). Two representation formats of learning content, i.e., GLOs and SLOs containing GLOs and CB LOs, contribute to the learners' knowledge of generative reuse, i.e., the automated design of programs. From the methodological perspective, we apply the same background for both the content and the tool, i.e., the PGL design. This methodology uses higher-level abstractions and model-driven approaches (feature models, metaprogramming) borrowed from software engineering. Therefore, students have the possibility to develop the computational thinking skills so needed for the twenty-first century [Win06]. In addition, the described processes and student design activities promote gaining integrated STEM thinking skills.

As we focus on personalisation in a broader context here, the student may want to have his/her own PGL for any other course. This is possible because the design and implementation of the PGL structure are practically independent of a course, though the content within the PGLs is quite different.
For example, after accumulating some experience in using the proposed tools and his/her own PGL for the CS course, a student is able to create a system of PGLs for different subjects. In addition, there is a possibility to enrich the interdisciplinary aspects of STEM by taking some content from the PGLs of other subjects (if any). That gives many advantages in terms of flexibility, independence, and an individual mode of using one's own content anywhere at any time, modifying old material or introducing new material.

There are two ways of creating the student's PGL along with his/her repository. First, a student can do that by cloning the initial state of the teacher's library. Second, a student can create his/her PGL from scratch, using the relevant models and tools (Metadata Generator, Query Generator, and LOs' List Generator, see Fig. 4.1) with little or no help from the teacher. The teacher's PGL evolves over time, starting from the initial state. So does the student's PGL; however, its evolution path may differ from the teacher's significantly. That depends on (1) how the library was created and (2) what the student's personal cognitive and social interests are. For example, during the planned activities, a student may rely entirely on using the teacher's PGL, storing in his/her own library only the links to the content newly created/modified or retrieved during the planned or additional activities.

The student's actions with the content and the student's behaviour in using his/her own and the teacher's PGLs are of great importance. It is possible to measure what kind of content was used, how frequently the same content was used, how different the learning paths were, how long a given task takes a particular student to solve, and much more. This opens the way for accumulating data about how the learner learns, what his/her preferences are, how different students' behaviours are,


etc. The student may allow the teacher to read his/her personal content, for example, for gathering the needed statistics. All this opens room for a wider discussion and may serve as a hook on which to hang the concept 'PGLs for personalised learning' in the context of Big Data (BD) and data-driven research in education [YKS17, RIZ18]. More precisely, PGLs and their content, along with the processes of PL, can serve as an important source, though not the only one, to produce BD and provide support for data-driven education. Therefore, this concept and its implementation are helpful not only for personalised learning in STEM-driven CS education but also in a much wider context (we have in mind not only the results of this research but the broad stream of research work in the field of digital libraries in general, such as that given in [BCD+10]). This work is an extension of the proposal presented in [ŠBD+18], where the contribution to engineering education was clearly stated. The concept of the PGL is independent of both the learning and teaching paradigm and the teaching course. Therefore, the discussed ideas, taken together, additionally extend the applicability of our approach to engineering education.

The discussed approach has some limitations and a few open questions for further research too. We have presented some characteristics of the student PGLs. However, we still need a broader vision of the statistical data on using the students' PGLs. The performed survey has shown that students of the intermediate level want searching based not only on the metadata model but also on keywords, to form the LO list more flexibly. We still have little data on measuring performance in personalised STEM-driven CS education. Personalised learning is a dynamic process. Personalised learning and students' PGLs evolve over time, even in the short term, for example, during a few lectures.
After accumulating some knowledge and new resources in the PGL, a learner may want to share this knowledge and these resources with others, i.e., other students, parents, etc. Therefore, the process tends to move to the phase when collaborative learning can start. It is unclear how and when to harmonise those paradigms. This does not deny the importance of personalised learning but rather reinforces it. However, we have not yet been able to reveal the relationship between personalised learning and collaborative learning and their interaction to advance education further using this paradigm. The set of learning resources should be as wide as possible to enable and ensure effective and efficient personalisation processes.

4.9 Conclusion

With the ever-growing types and amount of educational content, its retrieval from conventional digital libraries to meet specific learners' needs, e.g., for personalised learning or course specificity, encounters many issues, such as time, quality, or even no search result at all. One way to overcome those issues and improve learning performance is to apply a specialised approach, such as Personal Digital Libraries with personalised content. In this chapter, we have introduced the concept


of the Personal Generative Library (PGL), describing its automated design and automated content management for personalised learning. Based on this concept, we have built an experimental system that integrates conventional repositories, the teacher's PGL, the students' PGLs, and their individual repositories, along with the personalised learning processes using the previously developed framework. The main contribution of this research is the distributed architecture of the proposed system and the generative capabilities of its constituents, which ensure a great deal of flexibility and effectiveness in managing the delivery of the personalised content as well as its renewal and extension for the needs of personalised learning. It was possible to achieve this result using model-driven approaches borrowed from software engineering and adapted in the context of our research. The basis of this methodology is a deep separation of concepts at the component level (i.e., content items) and the system level (i.e., the student's PGL/repository, the teacher's PGL/repository, and their tools), along with the generative technology applied. The content within the student's PGL/repository is either a direct product of personalised learning obtained during classroom activities or a by-product created during outside activities.

We have presented a survey completed by students to evaluate the personalised content of the teacher's PGL/repository. This survey, constructed on a well-known methodology, yielded a positive evaluation. We have also presented some quantitative characteristics of the students' PGLs/repositories. We have found that a student is able to create 3-5 entities for his/her repository during classroom activities alone. We have tested this approach for STEM-driven CS education in one high school. This approach, however, in terms of the concept itself and the PGL tools proposed, is independent of both the teaching course and the teaching environment.
The possibility to combine the personalised content with the procedures for managing this content using the developed tools enables significantly enhancing personalised learning in terms of higher flexibility, more efficient search, and more efficient procedures to form personalised learning paths. One can treat the student’s PGL/repository as his/her individual portfolio, as evidence of progress, achievements, and competencies. That is a significant contribution to smart education and to STEM-driven CS education as Big Concepts in terms of our vision.

References

[ABB+16] Alur R, Baraniuk R, Bodik R, Drobnis A, Gulwani S, Hartmann B, Kafai Y, Karpicke J, Libeskind-Hadas R, Richardson D, Solar-Lezama A, Thille C, Vardi M (2016) Computer-aided personalized education, www.cis.upenn.edu/~alur/cape16.pdf
[AP-MP16] Arapi P, Paneva-Marinova D, Pavlov R, Christodoulakis S (2016) Techniques to personalized observation and improved learning experience in digital libraries, Proceedings of the International Conference on e-Learning, Vol. 16, pp. 94–100
[AR15] Alaofi M, Rumantir G (2015) Personalisation of Generic Library Search Results Using Student Enrolment, Journal of Educational Data Mining, 7(3), pp. 68–88
[BAN17] Brisebois R, Abran A, Nadembega A (2017) A semantic metadata enrichment software ecosystem (SMESE) based on a multi-platform metadata model for digital libraries, Journal of Software Engineering and Applications, 10(04), pp. 370–405


4 Personal Generative Libraries for Personalised Learning: A Case Study

[BCD+10] Brusilovsky P, Cassel LN, Delcambre LM, Fox EA, Furuta R, Garcia DD, Shipman FM, Yudelson M (2010) Social navigation for educational digital libraries, Procedia Computer Science, 1(2), pp. 2889–2897
[BDŠ18] Burbaitė R, Drąsutė V, Štuikys V (2018) Integration of computational thinking skills in STEM-driven computer science education, 2018 IEEE Global Engineering Education Conference (EDUCON), IEEE, pp. 1824–1832
[BHC+16] Basham JD, Hall TE, Carter Jr. RA, Stahl WM (2016) An operationalized understanding of personalized learning, Journal of Special Education Technology, 31(3), pp. 126–136
[Bra18] Bratt S (2018) Digital library keyword analysis for visualization education research: Issues and recommendations, Journal of Applied Research in Higher Education, 10(4), pp. 595–611
[Byb13] Bybee RW (2013) The case for STEM education: Challenges and opportunities, NSTA Press
[CF15] Chen Y, Fox EA (2015) Extending Ensemble: an education digital library for computer science education, Journal of Computing Sciences in Colleges, 31(2), pp. 201–207
[Che17] Chen Y (2017) A High-quality Digital Library Supporting Computing Education: The Ensemble Approach, Doctoral dissertation, Virginia Tech
[CO14] Cechinel C, Ochoa X (2014) A brief overview of quality inside learning object repositories, Proceedings of the XV International Conference on Human Computer Interaction, ACM, p. 83
[CRS15] Cohen A, Reisman S, Sperling BB (2015) Personal spaces in public repositories as a facilitator for Open Educational Resource usage, The International Review of Research in Open and Distributed Learning, 16(4), pp. 156–176
[DBC+10] Das M, Bhaskar M, Chithralekha T, Sivasathya S (2010) Context Aware E-Learning System with Dynamically Composable Learning Objects, International Journal on Computer Science and Engineering, 2(4), pp. 1245–1253
[DK07] Dagienė V, Kurilovas E (2007) Design of Lithuanian digital library of educational resources and services: the problem of interoperability, Information Technology and Control, 36(4), pp. 402–411
[Doc18] Dockterman D (2018) Insights from 200+ years of personalised learning, NPJ Science of Learning, 3(1), p. 15
[DR07] Deng X, Ruan J (2007) The Personal Digital Library (PDL)-based e-learning: Using the PDL as an e-learning support tool, Integration and Innovation Orient to E-Society Volume 2, pp. 549–555
[DVN+12] Domazet D, Veljković D, Nikolić B, Jovev L (2012) Clustering of learning objects for different knowledge levels as an approach to adaptive e-learning based on SCORM and DITA, The Third International Conference on e-Learning (e-Learning-2012), pp. 27–28
[FKM19] Fournier H, Kop RH, Molyneaux H (2019) New Personal Learning Ecosystems: A Decade of Research in Review, Emerging Technologies in Virtual Learning Environments, pp. 1–19
[FMM05] Ferran N, Mor E, Minguillón J (2005) Towards personalization in digital libraries through ontologies, Library Management, 26(4/5), pp. 206–217
[Fox16] Fox EA (2016) Introduction to digital libraries, Proceedings of the 16th ACM/IEEE-CS Joint Conference on Digital Libraries, ACM, pp. 283–284
[GDS+13] Gillet D, De Jong T, Sotirou S, Salzmann C (2013) Personalised learning spaces and federated online labs for STEM education at school, 2013 IEEE Global Engineering Education Conference (EDUCON), IEEE, pp. 769–773
[GMM17] Gaona-García PA, Martin-Moncunill D, Montenegro-Marin CE (2017) Trends and challenges of visual search interfaces in digital libraries and repositories, The Electronic Library, 35(1), pp. 69–98
[Gre03] Greenberg J (2003) Metadata Generation: Processes, People and Tools, Bulletin of the American Society for Information Science and Technology, December/January


[Gro17] Groff JS (2017) Personalised Learning: The State of the Field & Future Directions, Center for Curriculum Redesign, www.curriculumredesign.org, Accessed 23 November 2018
[HPD+12] Hendrix M, Protopsaltis A, Dunwell I, de Freitas S, Arnab S, Petridis P, Rolland C, Lanas JL (2012) Defining a metadata schema for serious games as learning objects, 4th International Conference on Mobile, Hybrid, and On-Line Learning (eL and mL 2012), 30 January–4 February, Valencia, Spain, pp. 14–19
[HSS19] Halibas AS, Sathyaseelan B, Shahzad M (2019) Learning Analytics: Developing a Data-Centric Teaching-Research Skill, Smart Technologies and Innovation for a Sustainable Future, pp. 213–219
[i3c19] i3community.ed.gov (2019), https://i3community.ed.gov/impact-stories/2231, Accessed 24 June 2019
[IEEE19] IEEE Learning Standards Committee (2019) WG 12: learning object metadata, https://ieee-sa.imeetcentral.com/ltsc/, Accessed 24 June 2019
[KVI11] Klašnja-Milićević A, Vesin B, Ivanović M, Budimac Z (2011) E-Learning personalization based on hybrid recommendation strategy and learning style identification, Computers & Education, 56(3), pp. 885–899
[MSS+12] Miller LD, Soh K, Samal A, Nugent G (2012) iLOG: a framework for automatic annotation of learning objects with empirical usage metadata, International Journal of Artificial Intelligence in Education, 21(3), pp. 215–236
[MW19] McDonald KS, Waite AM (2019) Future Directions: Challenges and Solutions Facing Career Readiness and Development in STEM Fields, Advances in Developing Human Resources, 21(1), pp. 133–138
[Och11] Ochoa X (2011) Learnometrics: Metrics for learning objects, Proceedings of the 1st International Conference on Learning Analytics and Knowledge, ACM, pp. 1–8
[PB15] Park JR, Brenza A (2015) Evaluation of semi-automatic metadata generation tools: A survey of the current state of the art, Information Technology and Libraries, 34(3), pp. 22–42
[PSB+17] Pane JF, Steiner ED, Baird MD, Hamilton LS, Pane JD (2017) Informing Progress: Insights on Personalized Learning Implementation and Effects, RAND Corporation, Santa Monica, CA, https://www.rand.org/pubs/research_reports/RR2042.html
[RIZ18] Rodrigues MW, Isotani S, Zárate LE (2018) Educational Data Mining: A review of evaluation process in the e-learning, Telematics and Informatics, 35(6), pp. 1701–1717
[ROD15] Rodríguez PA, Ovalle DA, Duque ND (2015) A student-centered hybrid recommender system to provide relevant learning objects from repositories, International Conference on Learning and Collaboration Technologies, pp. 291–300
[ŠB18] Štuikys V, Burbaitė R (2018) Smart STEM-Driven Computer Science Education: Theory, Methodology and Robot-based Practices, Springer
[ŠBD+18] Štuikys V, Burbaitė R, Drąsutė V, Ziberkas G, Drąsutis S (2019) A Framework for Introducing Personalisation into STEM-Driven Computer Science Education, International Journal of Engineering Education, 35(4), pp. 1–18
[ŠD12] Štuikys V, Damaševičius R (2012) Meta-programming and model-driven metaprogram development: principles, processes and techniques, Springer Science & Business Media
[SEJ+18] Self JA, Evans M, Jun T, Southee D (2018) Interdisciplinary: challenges and opportunities for design education, International Journal of Technology and Design Education, pp. 1–34
[SM12] Sabitha AS, Mehrotra D (2012) User centric retrieval of learning objects in LMS, 2012 Third International Conference on Computer and Communication Technology, IEEE, pp. 14–19
[SPI+17] Syed TA, Palade V, Iqbal R, Nair SSK (2017) A Personalized Learning Recommendation System Architecture for Learning Management System, Proceedings of the 9th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (KDIR 2017), pp. 275–282


[Štu15] Štuikys V (2015) Smart learning objects for smart education in computer science, Springer, New York
[Win06] Wing JM (2006) Computational thinking, Communications of the ACM, 49(3), pp. 33–35
[YAC+16] Yoshinov R, Arapi P, Christodoulakis S, Kotseva M (2016) Supporting Personalized Learning Experiences on top of Multimedia Digital Libraries, International Journal of Education and Information Technologies, 10, pp. 152–158
[YKS17] Yassine S, Kadry S, Sicilia MA (2017) Learning Analytics and Learning Objects Repositories: overview and future directions, Learning, Design, and Technology: An International Compendium of Theory, Research, Practice, and Policy, pp. 1–30
[ZYR16] Zhu ZT, Yu MH, Riezebos P (2016) A research framework of smart education, Smart Learning Environments, 3(1), pp. 1–17

Chapter 5

Enforcing STEM-Driven CS Education Through Collaborative Learning

5.1 Introduction

The challenges of the twenty-first century require learning environments and new teaching methods that develop critical thinking, open-ended problem solving, teamwork skills, creativity, etc. (see Sect. 1.3). The STEM-driven paradigm combines knowledge of different subjects and creates preconditions for the development of those skills and competencies. The basis for that is the solving of complex real-world tasks within Smart Learning Environments and the application of effective learning methods. Currently, there are evident signs of STEM education shifting from segregated STEM towards integrated STEM (see Sect. 1.4, Chap. 1). Nadelson and Seifert define integrated STEM as “the seamless amalgamation of content and concepts from multiple STEM disciplines” [NS17]. The integration typically focuses on a wider context and more complex task solving that requires the knowledge of different subjects, as well as different visions and approaches. The complexity of tasks is largely due to their structure, context, and the environment in which they are implemented. Often, such tasks are called ill-structured or open-ended problems. Typically, those problems have multiple potential solutions; they require the application of knowledge and practices from multiple STEM disciplines and collective efforts in finding a relevant solution. In this regard, collaborative learning (CL) seems very attractive and promising in combination with the STEM paradigm. Laal and Laal define CL as a learner-centred approach to teaching and learning that involves teams of learners working together to solve a problem, complete a task, or create a product [LL12]. However, effective CL typically relies on complex task solving [ZKS+19]. The term ‘complex task’ is widely exploited in the literature on STEM education. Often, it is possible to understand its meaning intuitively, without explaining what this term means in essence in a given context.
We argue that, in a wider context, we need a precise or, at least, explicit definition and interpretation of complexity aspects. Why? There are multiple reasons for that. Firstly, with the ever-growing technology advancements, the complexity of systems

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 V. Štuikys and R. Burbait˙e, Evolution of STEM-Driven Computer Science Education, https://doi.org/10.1007/978-3-031-48235-9_5


in all domains, including education, grows at the same or a similar rate. The best examples in education are smart learning ecosystems [Gio22]. Secondly, STEM evolves rapidly towards a higher integration that tends to use complex learning environments for solving more complex real-world tasks [RD22]. Thirdly, the complexity issues have direct links with educational robotics, which is used in two modes: either as a conventional facility in formal learning [ABM+19], or for competition-based learning (CBL) in informal learning, such as robot contests like the internationally recognised First Lego League (FLL) [Che18, Har18]. Note that CBL, in fact, is a demonstration of the knowledge and competency gained during the preparatory learning activities done well before the competitions start. Therefore, it is possible to treat CBL as an extension of the CL that takes place in formal learning. Finally, the evolution of smart STEM-driven CS education based on Big Concepts requires looking at the complexity issues with increased attention. Therefore, in this chapter, we focus on two research questions:

RQ1: Task complexity issues in STEM-driven CS education.
RQ2: Collaborative learning (CL) in STEM-driven CS education: Contest-based perspective.

With regard to RQ1, we seek to develop a model to evaluate real-world task complexity in order to define the context for considering RQ2. We will provide a more extended motivation of RQ1 later. For dealing with RQ2, we assume that all three approaches (STEM education, CL, and CBL) have much in common. Therefore, RQ2 is as follows: on what basis, and how, can these approaches (STEM and CL) be combined into a coherent methodology aiming at enforcing and extending the integrated knowledge and competency so needed for learners now and for their future work in our highly technologised age? Therefore, the objectives of this chapter are twofold.
Regarding RQ1, we aim at providing an analysis of the complexity issues, initially to some extent independently of STEM when we analyse the related work, and then at discovering metrics for evaluating the complexity of real-world tasks within the proposed model dedicated to STEM-driven CS education. Regarding RQ2, we aim at (i) proposing a framework describing the capabilities of CL in the context of STEM-driven computer science (CS) education through the preparation for the global robotics competition FLL and (ii) implementing this framework and presenting outcomes of its use. The basis of the methodology we use in describing the proposed framework is a set of active learning methods, such as inquiry-based, project-based, and contest-based learning, modelling, design, and testing, applied through experimental trials and approximations. The contribution of this chapter is (i) the proposed task complexity model; (ii) the proposed framework for RQ2 and its implementation; (iii) the confirmation (at least through accumulated experience) that the synergy of different learning paradigms largely motivates learners to learn more efficiently and brings a variety of competencies. These include higher-order thinking, risk management in complex task solving, collaborative skills, etc. The following parts of this chapter include Related Work (Sect. 5.2), Basic Idea and Methodology (Sect. 5.3), The Concept ‘Real-world task’ in STEM Research and Its Complexity (Sect. 5.4), Framework for STEM-Driven Contest-Based CL and


Its Implementation (Sect. 5.5), Case Study and Results (Sect. 5.6), Discussion and Evaluation (Sect. 5.7), and Conclusion (Sect. 5.8).

5.2 Related Work

Stream A: Complexity issues. Complexity is a cross-disciplinary concept. It is therefore under consideration in multiple domains, such as the design of technical systems [LS09], software engineering [DHN+15], education [RE16], and many more. It is a difficult concept to define precisely, since it highly depends on the context of use and because complexity is an inherent property of a system. As a result, there is no unique understanding of the term “complexity”; it highly depends on the context and the system under consideration. For example, Sussman [Sus00] presents twenty slightly different views on complexity in systems from different fields given by different authors. The two most general follow below. A system is complex when “it is composed of a group of related units (subsystems), for which the degree and nature of the relationships is imperfectly known”. “Complex: composed of a set of interconnected or interwoven parts”.

As stated by Northrop [Nor11], complexity is a subjective measure of the difficulty in describing and modelling a system (thing or process) and thus being able to predict its behaviour. Complexity is also a global characteristic of a system that represents the gap between knowledge of components and parameters and knowledge of the overall behaviour. In addition, complexity generally has three attributes: (1) a complex system has many parts (or items, units, or individuals); (2) there are many relationships/interactions between the parts; (3) the parts produce combined effects (synergies) that are not easily foreseen and may often be novel or surprising (chaotic). Quantitative measures of complexity necessarily must rely on the structure of mathematical models of complex systems, most of which are nonlinear and time-variable (nonstationary) [Nor11]. According to Sinha [Sin14], there are three main dimensions of complexity as they appear in the context of cyber-physical system design and development: (1) structural, (2) dynamic, and (3) organisational complexity. Structural complexity pertains to the underlying system architecture or, more generally, the enabling infrastructure. Dynamic complexity refers to the complexity of the system behaviour or process running on the underlying infrastructure. Organisational complexity relates to the system development process and the organisational structure of the development team. The complexity of technical systems depends on the quantity of different elements and their connectivity, i.e., complexity is a measurable system characteristic. According to Efatmaneshnik and Ryan [ER16], complexity is a compound of objective and subjective complexities. Objective complexity is a measure of the system size and is independent of an observer; however, it may be domain, context, and object/goal dependent. Subjective complexity is relative to an observer. It is a


“measure of the departure from a reference simplicity” and therefore depends on the observer’s selection of a suitable reference model. We are interested in “task complexity”, a term widely exploited in education, and particularly in STEM. Broadly, one can understand the STEM domain as an approach to gaining interdisciplinary knowledge in Science, Technology, Engineering, and Mathematics through complex task solving. Given that, with technology advancements, the complexity of systems and tasks steadily grows, it is important to clarify what, in essence, the phrase “complex task solving” means. This is especially important in the context of collaborative learning when STEM is the focus. Task complexity is, in fact, an umbrella term to express the integrative or structural aspects and the interdependences among those aspects. This term covers a variety of other concepts within, often identified as sub-tasks [EH18], task components [LL12A], or defining characteristics. The common way to understand this term better is to look at definitions of complex tasks, task models, and frameworks for those tasks. Liu and Li define task complexity as “the aggregation of any intrinsic task characteristic that influences the performance of a task” [LL12A]. The following presents a cognitive vision of task complexity. Task complexity is “the determinant of the required cognitive and information processing capacities to perform a task” [BT96], “a representation of the general and specific knowledge required for performing a task” [Woo86], or “an indicator of the required training” [BKK01]. To deal with task/system complexity, Efatmaneshnik and Handley [EH18] introduce a task model that consists of input, processing, constraints, and output, each with a four-dimensional context (task subdomain, scope, resolution, and aspect). Choi and Arguello [CA20] focus on the effect of cognitive task complexity on the types of information considered useful for the task.
They characterise information as: (1) problem information, which helps the task performer understand the task requirements and structure; (2) problem-solving information, which helps the task performer strategize on how to approach the task; and (3) domain information, which helps the task performer learn about the task domain. Fisher et al. [FGF12] portray complex problem-solving as (i) knowledge acquisition and (ii) knowledge application concerning the goal-oriented control of complex systems that contain many highly interrelated elements. Problem-solving abilities are critical components in the context of contemporary STEM education [DSL20]. That paper addresses the cognitive mechanisms of problem conceptualisation that may play a supporting role in reasoning success. Therefore, enhancing understanding of this particular element of problem-solving could have profound benefits for the design of instructional interventions for developing problem-solving skills within STEM education. In addition, it presents the specific role of visuospatial cognition in the conceptualisation of problems by facilitating students’ access to problem schemata in long-term memory and their subsequent manipulation in working memory. Doctor et al. [DTK+22] propose a path to a general, domain-independent measure of domain complexity level in the context of building AI systems. The authors distinguish two aspects of domain complexity: intrinsic and extrinsic. The intrinsic domain complexity is the complexity that exists by itself, without any action or interaction from an AI agent performing a task on that domain. This is an agent-independent


aspect of the domain complexity. The extrinsic domain complexity is agent- and task-dependent. Intrinsic and extrinsic elements combined capture the overall complexity of the domain.

Stream B: STEM and collaborative learning. In this analysis, we seek to characterise each domain (STEM, collaborative learning, and contest-based learning) briefly and therefore rely only on the more recent and more extensive studies. A systematic review [TCL+18] summarises the reviewed papers by presenting a theoretical framework for instructional practices in integrated STEM. The pedagogical part covers social constructivism. The practice includes (i) integration of STEM content, (ii) problem-centred learning, (iii) inquiry-based learning, (iv) design-based learning, and (v) cooperative learning. The paper [NS17] focuses on evolutionary aspects (from segregated STEM to integrated STEM) and outlines two big challenges (teachers’ knowledgeability in STEM and the urgent need for structural changes). The monograph [ŠB18] discusses a broad range of problems identified as smart STEM-driven CS education within a huge learning variability space that includes pedagogy, technology, social aspects, content variability, and the interaction among these variabilities. Multiple case studies rely on using educational robotics, which enables defining the so-called S-, T-, E-, and M-components and adequate knowledge fragments, including I-knowledge (i.e., integrative knowledge). There are indicators of how influential robotics is for future studies in STEM [LB17] or of how they interact with each other [Wan16]. Educational robotics often serves as a platform for achieving the following three main objectives. (1) To teach STEM. (2) To develop broad learning skills (such as problem-solving, scientific inquiry, engineering design, creative thinking, and teamwork).
(3) To foster the students’ engagement in science and technology, and to reduce psychological or cultural barriers in dealing with these subjects, for example, among girls or students coming from underprivileged communities. The paper [HJF+17] (i) indicates the pervasiveness of Computer-Supported Collaborative Learning (CSCL) research in STEM education in recent decades and (ii) provides a preliminary meta-analysis aiming to understand how CSCL penetrates STEM domains. This analysis focuses on three key topics of CSCL: the nature of collaboration, the technologies that are employed, and the pedagogical designs. In the context of problem-solving using CL, the paper [FC89] indicates the following stages: Engagement, Exploration, Modelling, Design, Testing, and Evaluation. Salmons [Sal19] presents a taxonomy of collaboration through six levels, from the lowest to the highest, as follows. (1) Reflection—individuals align their own knowledge, attitudes, and skills with group efforts. (2) Dialogue—participants agree upon and work with the group’s communication expectations, timelines, processes, and tools. (3) Review—participants exchange work for constructive mutual critique and to incorporate other perspectives. (4) Parallel collaboration—participants each work to complete a component of the project; elements are combined into a collective final product, or the process moves to another level of collaboration. (5) Sequential collaboration—participants complete stages of the work, building on each other’s contributions through a series of progressive steps. (6) Synergistic collaboration—participants synthesise their ideas to plan, organise, and complete the creation of


a product that melds all contributions into a collective final product. The paper [CAS+16] introduces and discusses the following collaborative aspects: (i) Exploring and Understanding, (ii) Representing and Formulating, (iii) Planning and Executing, and (iv) Monitoring and Reflecting. We explain their meaning later, because we have incorporated them into our framework. The following publications cover concepts, frameworks, approaches, and techniques used in CBL [ZKS+19, Che18, Har18]. We provide more information on the format of the FLL (in terms of Challenges, Project, and Values) along with the case study and results. One can learn a conceptual definition of FLL activities from [FLL20]. The paper [Jon19] provides an extensive analysis of the inquiry-based approach in STEM disciplines and claims the following: “Inquiry learning and the associated inquiry cycle give a good basis for collaborative learning, because the inquiry cycle offers a number of concrete points at which students need to make common decisions in order to continue (e.g., what hypothesis to test and which experiment to perform)”. Chen et al. [CYG+19] explore the factors that might affect learning performance and collaborative problem-solving (CPS) awareness in STEM education. The collected and analysed data on these factors include learning strategies, learning behaviours, and their interrelationships with learning performance and CPS awareness in STEM education. This review, as presented here, is by no means exhaustive, due to the breadth of research topics. Nonetheless, we have selected the works that best fit our context for achieving the formulated objectives and that are most useful for the RQs. We can conclude the following.

1.
The external driving forces for advancing STEM, collaborative learning (CL), and Contest-Based Learning (CBL) are the same: the economic and social challenges of the twenty-first century in terms of the needed skills and competencies, and the ever-growing need to harness modern technologies for solving the global problems of today and the near future.

2. With technology advancements, the complexity of the systems and tasks we need to tackle grows at the same rate, or even faster. Complexity is a cross-disciplinary issue. Researchers deal with it in multiple contexts and fields, including education. The complexity of tasks is an inseparable part of STEM education. The term ‘complex real-world task’ often appears in the literature on STEM; however, we still know little about what it means in essence. As a starting point for a broader discussion, we have provided an analysis of the complexity issues relying on publications outside STEM.

3. The task complexity and its model highly depend on the domain to which the given task pertains. The task complexity model, as presented by Wood [Woo86] and by Efatmaneshnik and Handley [EH18], is a compound of four components: Input, Constraints, Processing, and Output. Each may have multiple dimensions. Furthermore, complex tasks may have multiple criteria for evaluation, including objective and subjective ones.

4. Within each approach (STEM, CL, CBL), the focus is on complex task solving and project doing. The problems (tasks) typically range from complex


to open-ended ones. Therefore, the broader the context, the wider the content, and the richer the visions, frameworks, models, and processes we need to apply.
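The four-component task model summarised above (Input, Constraints, Processing, and Output, each described along the four context dimensions of subdomain, scope, resolution, and aspect [EH18]) can be sketched as a small data structure. This is only an illustrative sketch: the class and field names, the example robotics entries, and the naive size() count are our assumptions, not notation taken from [Woo86] or [EH18].

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskComponent:
    # One entry of the Input/Constraints/Processing/Output parts,
    # annotated with the four-dimensional context from [EH18].
    subdomain: str   # e.g. "robotics"
    scope: str       # e.g. "line following"
    resolution: str  # e.g. "sensor level"
    aspect: str      # e.g. the STEM constituent it relates to

@dataclass
class TaskModel:
    inputs: List[TaskComponent] = field(default_factory=list)
    constraints: List[TaskComponent] = field(default_factory=list)
    processing: List[TaskComponent] = field(default_factory=list)
    outputs: List[TaskComponent] = field(default_factory=list)

    def size(self) -> int:
        # A naive, purely structural measure: the total number of components.
        return (len(self.inputs) + len(self.constraints)
                + len(self.processing) + len(self.outputs))

# A hypothetical FLL-style robotics task described with the model.
task = TaskModel(
    inputs=[TaskComponent("robotics", "line following", "sensor level", "S")],
    constraints=[TaskComponent("contest rules", "FLL season", "team level", "E")],
    processing=[TaskComponent("control", "motor regulation", "algorithm level", "T")],
    outputs=[TaskComponent("robot run", "score", "task level", "M")],
)
print(task.size())  # 4
```

A fuller evaluation would attach complexity dimensions (size, variability, interaction, etc.) to each component rather than merely counting components.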

5.3 Basic Idea of the Approach and Methodology

We discuss collaborative learning (CL) and competition-based learning (CBL) in the context of STEM-driven CS education. Broadly, one can understand the STEM domain as an approach to gaining interdisciplinary knowledge in Science, Technology, Engineering, and Mathematics through complex task solving. Given that, with technology advancements, the complexity of systems and tasks steadily grows, it is important to clarify what, in essence, the phrase “complex task solving” means. It is especially important in the context of CL or CBL when STEM is under consideration. As is clear from our literature analysis (see Sect. 5.2), task complexity is, in fact, an umbrella term to express the integrative or structural aspects and the interdependences among them. This topic has been studied either as an independent domain (Sect. 5.2 focuses on this aspect only) or as an aspect of complex systems [Nor11]. Here, we interpret the term task complexity as a bridge to combine STEM with CL (CBL). We define the term “complex task” in the context of STEM-driven CS education using the following attributes: (i) domain context; (ii) complexity measures [LL12A]; (iii) learning objectives and learning content; (iv) learner’s previous knowledge regarding the task domain and profile (Beginner, Intermediate, Advanced) among students of K9-K12 classes; and (v) teacher’s research preferences. In our case, tasks come from domains we are interested in (robotics and CS apps, IoT and Big Data apps, AI-based apps). The three domains, i.e., STEM, CL, and CBL, are highly heterogeneous, each in its own rank. On the other hand, they have many common attributes that form a conceptual basis to focus on their synergy effect for enhancing learning. For doing that, we need to rely on some well-established approaches or frameworks (we disclose them in Sect. 5.4). However, before introducing them, we first need to accept the following assumptions.
(i) Here, we consider learning as entirely problem-solving activities, with the focus on complex tasks as a bridge to combine STEM with CL and CBL. (ii) Complex task solving starts from the formulation (refining) of objectives, or even from the analysis of a given challenge, and completes with presenting outcomes that may include quantitative (measurable) and qualitative results. (iii) The complexity of tasks may vary from low to high and may even include open-ended problems. (iv) For managing (reducing) complexity, we need to decompose the STEM real-world task into a set of task components for each STEM constituent (S, which covers physics and CS; T; E; and M). (v) For evaluating the level of complexity in a wider context, we rely on two approaches: (a) cognitive complexity measures defined through Miller’s “magical number seven” [Mil56] and (b) defining values of fuzzy variables through monitoring for the selected complexity dimensions taken from [LL12A]. (vi) We admit that there might be specific requirements and limitations for each constituent we consider. For example, in our STEM, the interdisciplinary aspects (S-knowledge, T-knowledge,


E-knowledge, M-knowledge, and I-knowledge) are integrated within one CS course based on using robotics for practising and exploration [ŠB18]. Regarding CBL, the strict FLL guidelines define the rules and constraints that the participants must follow. In addition, CBL has two clearly expressed phases (preparation and execution), both entailing learning that differs in aims and in physical, emotional, and cognitive loads. Our methodology includes a variety of processes and actions within two parts (to address RQ1 and RQ2 adequately). Part (1) includes (i) the definition of task components for each STEM constituent [S (in our case, S covers physics and CS), T, E, and M] and the development of the STEM-driven generic task model; (ii) the development of the conceptual model to evaluate complexity aspects through the introduced metrics (fuzzy variables); and (iii) the calculation of fuzzy variables for each complexity dimension (size, variability, interaction, etc. [LL12A]). Part (2) includes the proposed framework. It defines the processes of combining STEM with CL and CBL, and the implementation and use of the proposed framework for achieving the prescribed objectives. The implementation also covers testing and evaluating the model and the frameworks themselves. We consider Part (1), i.e., RQ1, in the next section.
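The data model behind Part (1) can be sketched in code. The following is a minimal, hypothetical illustration (the component names, dimension names, and fuzzy values are invented for the example, not taken from the book); it only shows how fuzzy values per complexity dimension could be attached to task components and aggregated per dimension.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch: task components grouped by STEM constituent,
# each carrying fuzzy values in [0, 1] for selected complexity
# dimensions (size, variability, interaction, ...).

@dataclass
class TaskComponent:
    name: str
    constituent: str                       # e.g. "S-physics", "S-cs", "T", "E", "M"
    complexity: Dict[str, float] = field(default_factory=dict)

@dataclass
class GenericTask:
    components: List[TaskComponent]

    def dimension_score(self, dimension: str) -> float:
        """Average a fuzzy complexity dimension over all components."""
        values = [c.complexity.get(dimension, 0.0) for c in self.components]
        return sum(values) / len(values) if values else 0.0

task = GenericTask(components=[
    TaskComponent("Modelling of mechanical movement of robot", "S-physics",
                  {"size": 0.4, "variability": 0.6}),
    TaskComponent("Creating of robot control programs", "S-cs",
                  {"size": 0.7, "variability": 0.5}),
])
print(round(task.dimension_score("variability"), 2))  # 0.55
```

In a real setting, the fuzzy values would come from the teacher's monitoring data rather than being hard-coded as above.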

5.4 The Concept 'Real-World Task' in STEM Research and Its Complexity

STEM is the educational paradigm aiming at bringing interdisciplinary knowledge from four fields (Science, Technology, Engineering, and Mathematics) through solving real-world tasks. Researchers and many other stakeholders use the term real-world task, very often adding the attributes 'authentic' or 'complex'; however, they typically use these terms without explicitly defining, describing, or explaining what they mean in essence, even though these tasks are the drivers that initiate and execute STEM-driven educational processes as a whole. One reason for this is the huge variation across STEM fields, each having specific attributes, a specific context of use, and a variety of different tasks. The other reason is that real-world tasks are indeed complex, and complexity issues, due to their commonality and generality, are a separate topic for investigation. A rational approach, therefore, is not to conceptualise the real-world tasks themselves, but rather to consider STEM problems as a context for real-world tasks. The conceptual work of Pleasant [Ple20] is a relevant example of such an approach. There, the author examines the nature of STEM problems and introduces a typology of STEM problems along with their defining characteristics: (i) Foreground novel technologies; (ii) Foreground knowledge for each S-T-E-M component; (iii) Foreground methods for each S-T-E-M component; (iv) Context-specific; (v) Reductive. We interpret these characteristics as contextual information for inventing the conceptual model for real-world task solving as related to STEM in a real educational setting. Now let us look at how


the real-world task is introduced and how its understanding changes. At the very beginning, we need to formulate (understand) the term more precisely by adding the word 'prototype'. Indeed, typically all tasks dealt with around STEM are prototypes of real tasks. A prototype is a simplification of reality, i.e., a way of reducing complexity. With this understanding in place, we typically omit this word. What we need to do next is to understand the given (reduced) tasks through the lens of STEM. This vision is a decomposition of STEM as a whole domain into its constituents, i.e., the S-, T-, E-, and M-components. The next state of task context understanding is the acceptance of the relevant environment where the task solution takes place. We rely on using robotics (i.e., different types of robots and their components, such as sensors and cameras) to demonstrate the solving of tasks taken from real-world applications. Applying robotics in practice correctly requires a set of multiple actions, such as modelling, constructing, designing, and testing. In general, we identify all these actions as task components or sub-tasks for the possible solving of real-world tasks. At the conceptual level, sub-tasks appear without a concrete content of the task. The concrete content of the real-world task appears at the phase of the case study. The next action (state) is to express the STEM constituents (S, T, E, and M) through the generic actions taking place within the robot-based environment. Next, we summarise the above narrative description through a predefined sequence of states. Each next state gives an extended understanding of the given real-world task and its processes, ending with the skills obtained.

State 0: Building a prototype of the real-world task without the STEM context.
State 1: Introducing the STEM context for dealing with the real-world task.
State 2: Expressing the knowledge required by the task through the STEM constituents (S, T, E, and M).
State 3: Introducing the robot-based environment abstractly through its characteristics.
State 4: Linking the STEM constituents with the characteristics of the environment to specify actions, i.e., task components within the environment for gaining the required knowledge.
State 5: Introducing the concrete content for a given real-world task and playing out the actions through processing task components to obtain the output.
State 6: Evaluating the output by measuring the obtained skills.

In Fig. 5.1, we interpret the states (from 0 to 4) by creating a structural vision of how to understand the real-world task through the lens of STEM-driven CS education. This is our approach. Here, the Science dimension has two constituents (physics and computer science). The first is about modelling and measuring the mechanical characteristics of the robots or their parts. Robots are either moving robots, such as LEGO, or stationary robots, such as Arduino. The second is about programming the robots or their parts. The Technology dimension is about tools (software and hardware) for ensuring the correct functioning of robots or their parts. The Engineering dimension is about designing and constructing robots, from the simplest to more complex ones. Finally, the Mathematics dimension is about the functional dependences among the physical characteristics of robots or their parts. We express each dimension through the possible list of task components (see the right column in


Fig. 5.1). We present the interrelationships among the task components in Fig. 5.8 (see Appendix). What is the semantics of these components? This list is generic and content-free; however, it is context-dependent, i.e., dependent on the characteristics of the robot-based environment. As stated above, this list represents the task components (sub-tasks) covering all STEM constituents. The task components are generic in the following sense. Firstly, they are without a concrete content. Secondly, they may cover many tasks taken from the real world and related to robotics. Finally, they have a common context, i.e., robotics. What is the concrete real-world task? When you add concrete content to some task component, the generic task becomes concrete. For example, when your real-world task is investigating the temperature changes within a classroom, you need to use adequate sensors (parts of robots) with concrete characteristics and capabilities. Note that, for real task concretisation, not all task components may require concrete content. In addition, it is not obligatory that all task components appear in the concrete real-world task. Next, we move to considering complexity issues.
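The distinction between a generic (content-free) task component and a concrete one can be made tangible with a small, purely illustrative sketch. The component names and the sensor description below are our own hypothetical examples built around the classroom-temperature case mentioned above.

```python
# Generic task components are content-free; a real-world task arises
# when concrete content is attached to some of them. All names here
# are illustrative, not prescribed by the model.

generic_components = [
    "Measuring physical characteristics of the robot components used",
    "Selecting of adequate components and software",
    "Creating of robot control programs",
]

def concretise(components, content):
    """Attach concrete content where provided; the rest stay generic (None)."""
    return {name: content.get(name) for name in components}

# Example: investigating temperature changes within a classroom.
concrete_task = concretise(generic_components, {
    "Measuring physical characteristics of the robot components used":
        "log classroom temperature with a robot temperature sensor",
})

# Not every component needs concrete content for a given real-world task:
print(sum(v is not None for v in concrete_task.values()))  # 1
```

The `None` entries reflect the remark above that some components may stay without concrete content, or may not appear in a concrete task at all.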

5.4.1 Complexity Issues of Real-World Tasks

Real-world tasks, and even their prototypes, are complex tasks. There are two common visions of task complexity, i.e., tasks of pre-defined complexity and open-ended tasks. We refer to pre-defined complexity as the one for which the complexity measures, context, and approaches for solving are known and the complexity can be measured in some way. In contrast, open-ended tasks are those for which neither the exact context nor the approaches for solving them are known in advance, and an interactive approximation through trials and experimentation is applied, as we will illustrate later for competition-based learning. Here, we focus on the pre-defined complexity of the task. There are two visions of task complexity (objective and subjective). When combined, they yield the total complexity [EH18]. On the other hand, it is possible to characterise complexity through complexity contributory factors (CCFs) and complexity dimensions. According to [LL12A], there is a difference between a complexity dimension and a CCF. First, a complexity dimension is a more abstract concept compared with a CCF. A CCF is a metric or an indicator of task complexity and externally reflects the complexity level of the task. Complexity dimensions describe the interior structure of the task. Furthermore, a complexity dimension is composed of several related CCFs. In general, a complexity dimension is context-free, while a CCF is context-specific. According to [LL12A], the complexity dimensions are as follows (the highlighted ones are selected in our approach):

• Size (the number of task components).
• Variety (the number of distinguishable and dissimilar task components).


Fig. 5.1 Structural vision of the robot-based generic real-world task as related to STEM-driven CS education. The figure links the central node 'STEM-driven real-world task' to the task components of each dimension:

• Science (Physics): measuring physical characteristics of the robot components used; modelling of mechanical movement of the robot; correcting of the initial model; identifying physical interactions among components; obtaining suitability of physical characteristics of components.
• Computer Science: programming of functionality for measuring physical characteristics of components; creating of the robot virtual model; creating of robot control programs; correcting of robot control programs; obtaining suitability of robot control programs.
• Technology: investigating of relevant robot components and software; selecting of adequate components and software; correcting the component list, searching for alternative solutions; selecting of adequate software and hardware for robot testing; obtaining suitability of components and software used.
• Engineering: creating simple prototypes for measuring the physical characteristics of components; designing of the initial robot; designing of the final robot; correcting of the final robot; obtaining suitability of the mechanical robot design.
• Mathematics: obtaining functional dependencies among component physical characteristics; obtaining functional dependencies among physical characteristics of the initial robot; obtaining functional dependencies among physical characteristics of the final robot; obtaining sub-task performance accuracy through measurements and calculations; obtaining task performance accuracy through measurements and calculations.


• Ambiguity (the degree of unclear, incomplete, or non-specific task components).
• Relationship interdependency (e.g., conflict, redundancy, dependency) among task components.
• Variability (in our case, the number of parameters of a task component).
• Unreliability (inaccurate and misleading information).
• Novelty (the appearance of novel, irregular, and non-routine events (e.g., interruptions), or tasks that are not performed with regularity).
• Incongruity (inconsistency, mismatch, incompatibility, and heterogeneity of task components).
• Action complexity (the cognitive and physical requirements inherent in human actions during the performance of a task).
• Temporal demand (task requirements caused by time pressure, concurrency between tasks and between presentations, or other time-related constraints).

Which complexity dimensions should be selected and applied for evaluating complexity, and how, in a concrete situation, e.g., in the case of STEM? Most likely, this depends on the task's context. Let us return to our STEM-driven generic task model (Fig. 5.1). It contains 25 task components (sub-tasks) in total (see the right column in Fig. 5.1). What is the background for introducing these sub-tasks? The teacher (the second author of the book) has derived this list from the real-world tasks solved by students in practice during the past six years, while they carried out the preparatory work for the First Lego League contest. During those preparatory sessions, the teacher monitored and recorded the students' actions and outcomes (e.g., sub-task solving time, difficulty, etc.). The collected data, though subjective, is a valuable asset for reasoning about sub-task complexity from a pedagogical perspective. The solving time consumed by a student is an objective measure of complexity, though the time estimation itself is subjective. In our vision, the time spent on sub-task solving (see Eq.
5.1) depends on two constituents: T1, the time spent on acquiring new knowledge, and T2, the time needed for applying knowledge and skills in solving an application task:

T = T1 (new knowledge) + T2 (knowledge and skills application).    (5.1)
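As a small worked illustration of Eq. (5.1), suppose a student needed 25 minutes to acquire the new knowledge and 20 minutes to apply it (the minute values are invented for the example):

```python
# Eq. (5.1): total sub-task solving time is the sum of T1 (acquiring new
# knowledge) and T2 (applying knowledge and skills). Values are illustrative.

def total_time(t1_new_knowledge: float, t2_application: float) -> float:
    """T = T1(new knowledge) + T2(knowledge and skills application)."""
    return t1_new_knowledge + t2_application

print(total_time(25.0, 20.0))  # 45.0
```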

Now we are able to motivate the selected items of the complexity dimensions given above (see the items in bold). We found it reasonable to combine the size and temporal demand dimensions into one time dimension due to (5.1). The remaining dimensions (variety, ambiguity, unreliability, and incongruity) are either not significant for our task model or are partially reflected by the selected dimensions (e.g., variability, to some extent, covers variety). What we still need in order to evaluate the complexity level quantitatively is the quantitative measures and their boundaries. The best-known quantitative measure is the cognitive difficulty based on Miller's magic seven problem [Mil56]. Miller states that humans can hold 7 (± 2) chunks of information in their short-term memory at one time. Therefore, if the task N contains n sub-tasks (items, features, etc.) within indicated limits 5