Modeling Programming Competency 9783031471476, 9783031471483

This book covers a qualitative study on the programming competencies of novice learners in higher education.


English | Pages: 291 [170] | Year: 2023


Table of contents:
Foreword
Preface
Acknowledgments
Contents
Acronyms
Part I Background and Context
1 Introduction
1.1 Background and Motivation
1.2 Goal and Research Questions
1.3 Contextualization of This Research
1.4 Structure of the Book
References
2 Approaching the Concept of Competency
2.1 Competency Definition
2.1.1 Psychological Perspective on Competency
2.1.2 Historical Perspective on Competency
2.1.3 Recent Perspectives and Discussions
2.2 Taxonomies and Competency Models for Computing
2.2.1 Bloom's and Anderson-Krathwohl's Taxonomy
2.2.2 Competency Model of the German Informatics Society
2.3 Competency-Based Curricula Recommendations in Computing
2.3.1 Information Technology 2017
2.3.2 Computing Curricula 2020
2.3.3 National Curricula Recommendations
2.4 Related Research in Computing Education
References
3 Research Design
3.1 Summary of Research Desiderata
3.2 Research Goals
3.3 Research Questions
3.4 Study Design
References
Part II Data Gathering and Analysis of University Curricula
4 Data Gathering of University Curricula
4.1 Goals of Gathering and Analyzing University Curricula
4.2 Relevance of Gathering and Analyzing University Curricula
4.3 Expectations and Limitations
4.4 Sampling and Data Gathering
4.4.1 Selection of Bachelor Degree Programs
4.4.2 Selection of Content Area
4.4.3 Selection of Institutions and Study Programs
4.4.4 Selection of Modules
References
5 Data Analysis of University Curricula
5.1 Methodology of the Data Analysis
5.2 Pre-processing of Data
5.2.1 Linguistic Smoothing of Competency Goals
5.2.2 Basic Coding Guidelines
5.2.3 Computer-Assisted Analysis
5.3 Data Analysis
5.3.1 Deductive Category Development
5.3.2 Inductive Category Development
5.3.3 Deductive-Inductive Category Development
5.4 Application of Quality Criteria
References
Part III Data Gathering and Analysis of Expert Interviews
6 Data Gathering of Guided Expert Interviews
6.1 Goals of Conducting and Analyzing Guided Expert Interviews
6.2 Relevance of Conducting and Analyzing Guided Expert Interviews
6.3 Expectations and Limitations
6.4 Developing an Interview Guide and Questions
6.5 Data Gathering and Sampling
6.5.1 Selecting and Contacting Experts
6.5.2 Conducting the Interviews
6.5.3 Recording the Interviews
References
7 Data Analysis of Guided Expert Interviews
7.1 Pre-processing of Data
7.1.1 Transcription Guidelines
7.1.2 Transcription System
7.1.3 Transcription Process
7.2 Data Analysis
7.3 Application of Quality Criteria
References
Part IV Results
8 Results of University Curricula Analysis
8.1 Cognitive Competencies
8.1.1 Cognitive Process Dimension Remembering
8.1.2 Cognitive Process Dimension Understanding
8.1.3 Cognitive Process Dimension Applying
8.1.4 Cognitive Process Dimension Analyzing
8.1.5 Cognitive Process Dimension Evaluating
8.1.6 Cognitive Process Dimension Creating
8.1.7 Knowledge Dimensions
8.2 Other Competencies
8.3 Reliability
8.4 Discussion of Results
References
9 Results of Guided Expert Interviews
9.1 Cognitive Competencies
9.2 Other Competencies
9.3 Factors Preventing Programming Competency
9.4 Factors Contributing to Programming Competency
9.5 Reliability
9.6 Discussion of Results
References
10 Summarizing and Reviewing the Components of Programming Competency
10.1 Summary of Cognitive Programming Competencies
10.2 Summary of Other Programming Competency Components
10.3 Review of the Anderson Krathwohl Taxonomy
References
Part V Wrap Up
11 Conclusion
11.1 Brief Summary of Results
11.1.1 Competencies Expected from Novice Programmers
11.1.2 Adequacy of the Anderson Krathwohl Taxonomy for Programming Education
11.1.3 Factors Influencing Students' Competency Development
11.2 Conclusions
11.3 Future Work
References

Natalie Kiesler

Modeling Programming Competency: A Qualitative Analysis


Natalie Kiesler Educational Computer Science Group DIPF | Leibniz Institute for Research and Information in Education Frankfurt am Main, Germany

ISBN 978-3-031-47147-6
ISBN 978-3-031-47148-3 (eBook)
https://doi.org/10.1007/978-3-031-47148-3

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Paper in this product is recyclable.

Foreword

People experience many examples of quantitative analysis through statistical measures and results in today’s world. Mean, median, mode, and standard deviation are basic quantitative measures. Quantitative research has grown to an accepted level of maturity. However, rarely does one find concrete methods for qualitative analysis. This emerging field of study and application is making inroads in areas such as behavioral science, natural science, and even the computing world. There is much to learn from applying qualitative analysis in the computing sciences. The qualitative approach has penetrated modern institutions such as finance, research, government, and education, and it is now reflected in educational degree programs. Studies using qualitative research are starting to emerge. University teachers and students are beginning to appreciate the importance of qualitative methods, and researchers seek this avenue to obtain scientific results. This new interest in qualitative studies is taking root, and people need to learn more about the approach to understand its benefits. As a result, Dr. Natalie Kiesler has written this research book, aptly titled Modeling Programming Competency: A Qualitative Analysis. In brief, Natalie Kiesler has hit the mark in bringing intellectual and practical thought to this vital subject. Natalie and I met virtually in 2021 as members of a working group that produced a paper on competency. Since then, she and I have developed a close professional bond in our mutual promotion of quality computing education. We both believe competency should be part of every student’s university education. Natalie and I have since produced several papers together. It is no secret that I encouraged her to develop this research work, and I am delighted she decided to do so. Her efforts have created a helpful book in a clear and concise style. The book’s content promotes thought and diligence.
Readers should appreciate this direct approach as they dwell among the elements surrounding the qualitative analysis field. The content style of the work is refreshing. The author uses methods and non-confidential data to form a fundamental basis to present ideas and processes for researchers and students to consider. The author presents this work in five parts: (a) background and context, (b) data gathering and analysis of university curricula, (c) data gathering and analysis of expert interviews, (d) results, and (e) wrap-up. The content is easy to follow. Hence, Natalie Kiesler has addressed the areas of qualitative analysis and has done so convincingly and pragmatically. All readers should benefit from the experience derived from this work.

The speed at which information now flows triggers a need for a greater understanding of qualitative analysis. Students and researchers should keep pace with this expansion by ensuring qualitative tools are in place to meet new trends and challenges. They must be able to create and design methods to study phenomena qualitatively. The work by Natalie Kiesler provides them with a valuable pathway to understanding and addressing new challenges. It uses current, authentic data to offer practical approaches to solving problems on local and global scales. The use of qualitative analysis should grow and become prevalent for decades to come. What is important today may not be necessary for the future; likewise, what is not essential today could be important for tomorrow. Students and computing professionals pragmatically need knowledge and preparation. Therefore, students and researchers should learn much from experiencing Natalie Kiesler’s work because it emphasizes realistic strategies and approaches toward addressing qualitative problems. This book represents a crucial step in the qualitative study of these challenges.

Professor Emeritus, Hofstra University
IEEE Fellow and Life Member
ACM Distinguished Educator
New York, NY, USA
August 2023

John Impagliazzo, Ph.D.

Preface

As a student, I was always eager to learn new things. So I could not wait to get into the newly developed study program Linguistics and Web Technology at Marburg University, where I would dive into the world of databases, servers, and programming. I immediately knew that what I was learning would empower me to create new things, solutions, and perhaps products. So, as a Type A student, I graduated top of my class a few semesters later. Then I switched roles and started teaching. One of my first classes focused on developing multimedia on the web and web programming. The class was full of student teachers and undergraduate students without prior knowledge of computer science. So students started to work on their assignments, wrote their first code snippets, and added more code. However, it was going slowly. The in-class time felt repetitive. I had to show them how to do things multiple times. And still, not everyone in the class got to the desired solution. After a few weeks, I was concerned about the students achieving the learning objectives. And then, at some point, it struck me. What students had to do in my class was nothing like the classes in other disciplines. Students did not have to read a book and memorize its contents. Nor did they have to listen and take notes. They had to do something by themselves and create something new. Once I had noticed this difference, I was more relaxed while instructing students, being aware of the challenging nature of the tasks. But this epiphany has stuck with me ever since. It was not until a few years later that I remembered this experience. I had just started my Ph.D. in Computer Science and was searching for a topic and research questions. Understanding programming competency was just the right choice for me, given these experiences. So this book is rooted in a part of my Ph.D. research, in which I focused on introductory programming courses in higher education.

The goal was to understand how complex it is to become a good programmer. Another question was which competencies students need to develop in order to program. My research and this book are thus guided by several considerations related to teaching and learning. For this reason, this book addresses educators, student teachers of computer science and related fields, instructional designers, and faculty responsible for computing curricula.

As a part of my early research, I soon noticed that the challenges of novice programmers are well known. A great body of research addresses pedagogical approaches or the assessment of novice learners of programming. At the same time, I became more aware of the concept of competency and how it could help model the ability to program and perform successfully. However, a competency model of programming was nowhere to be found. If at all, only some specific areas had been subject to research and competency modeling, such as object-oriented programming. I still wonder whether this is due to the complexity of programming, the novelty of computer science as a scientific discipline, or rapidly changing technologies. In any case, I started to search for curricular recommendations and guidelines from national and international professional societies. In the absence of detailed competency-based objectives, I decided to investigate the educational practices in higher education institutions. This led to the analysis of several introductory programming curricula from 35 German universities. Due to the huge volume of data and courses, I had to limit my research to the first three to four semesters and basic programming education. Nonetheless, I enriched my data with the insights of experienced educators, whom I interviewed for this research. With the curricula data and the experts’ insights, I was able to start modeling programming competency and what it means to program. So this is not just another book on programming education and learners’ challenges. It is the first of its kind to reveal the complexity of programming from a pedagogical perspective. One of the results presented in this book is a synopsis of programming competencies and their classification in terms of the so-called Anderson Krathwohl Taxonomy, a well-known taxonomy for evaluating the complexity of cognitive learning objectives and activities.
The resulting competency model can serve as a guide for educators and curriculum designers who want to scaffold programming competencies or address certain programming competencies more explicitly in their courses and study programs. Introductory programming courses can become the breaking point within a computer science study program. For this reason, educators must recognize their cognitive complexity and other demanding expectations, including knowledge, skills, and dispositions. It seems that introductory programming education moves too quickly to complex tasks and exercises while students do not yet understand basic concepts and processes. With the advent of Large Language Models powering tools such as Copilot and ChatGPT, students’ understanding of (generated) program code becomes even more relevant. So this research is a must-read for everyone trying to understand the challenging nature of learning and teaching programming. It is also a call for more student support in introductory programming. Educators need to take into account students’ individuality and start designing learning environments, learning activities, and assessments that foster competency instead of knowledge. This book will help educators get started on that pathway.

Geisa, Germany
August 2023

Natalie Kiesler

Acknowledgments

I want to thank John for encouraging me to publish this research in the present form. John is an extraordinary mentor, colleague, and friend, and I greatly appreciate his experience and advice. Furthermore, I want to acknowledge Benedikt’s support and understanding during the research for and writing of this book. He is a real LaTeX wizard, and the best friend and partner I could imagine. Thank you!



Acronyms

AAT: Automated Assessment Tools
ACM: Association for Computing Machinery
AKT: Anderson Krathwohl Taxonomy
AKoFOOP: Automated competence measurement and feedback generation for object-oriented programming (German=Automatisierte Kompetenzmessung und Feedbackerzeugung zum objektorientierten Programmieren)
B. Sc.: Bachelor of Science
BMBF: Federal Ministry of Education and Research (German=Bundesministerium für Bildung und Forschung)
CAQDAS: Computer-Assisted Qualitative Data Analysis Software
CaTS: Computerized adaptive testing in study programs (German=Computerisiertes adaptives Testen im Studium)
CC2020: Computing Curricula 2020
CoLeaF: Competency Learning Framework
CS: Computer Science
CSS: Cascading Style Sheets
DIPF: Leibniz Institute for Research and Information in Education (German=Leibniz-Institut für Bildungsforschung und Bildungsinformation)
DFG: German Research Foundation (German=Deutsche Forschungsgemeinschaft)
ECA: European Consortium for Accreditation
ECTS: European Credit Transfer System
EQANIE: Accreditation Committee of the European Quality Assurance Network for Informatics Education
FH: University of Applied Sciences (German=Fachhochschule, deprecated)
GER: Common European Framework of Reference for Languages (German=Gemeinsamer Europäischer Referenzrahmen für Sprachen)
GeRRI: Common Reference Framework for Computer Science (German=Gemeinsamer Referenzrahmen Informatik)
GI: German Informatics Society (German=Gesellschaft für Informatik e.V.)
GU: Goethe University Frankfurt
HAW: University of Applied Sciences (German=Hochschule für Angewandte Wissenschaften)
HIS: University Information System Ltd. (German=Hochschul-Informations-System GmbH)
IDE: Integrated Development Environment
IEEE: Institute of Electrical and Electronics Engineers
IT2017: Information Technology Curricula 2017
ITF: Informative Tutoring Feedback
ITiCSE: Innovation and Technology in Computer Science Education
JSON: JavaScript Object Notation
KETTI: Competence acquisition of tutors in computer science (German=Kompetenzerwerb von Tutorinnen und Tutoren in der Informatik)
KMK: Standing Conference of the Ministers of Education and Cultural Affairs (German=Kultusministerkonferenz)
KoKoHs: Competence models and instruments of competence assessment in the higher education sector - validation and methodological innovation (German=Kompetenzmodelle und Instrumente der Kompetenzerfassung im Hochschulsektor - Validierungen und methodische Innovationen)
KUI: Competencies for teaching computer science (German=Kompetenzen für das Unterrichten in Informatik)
LLM: Large Language Model
LLMs: Large Language Models
LMS: Learning Management System
LOM: Learning Object Metadata
MIT: Massachusetts Institute of Technology
MNU: Association for the Promotion of STEM Education (German=Verband zur Förderung des MINT-Unterrichts)
MoKoM: Development of qualitative and quantitative measurement methods of teaching and learning processes for modeling and systems understanding in computer science (German=Entwicklung von qualitativen und quantitativen Messverfahren zu Lehr-Lern-Prozessen für Modellierung und Systemverständnis in der Informatik)
MOOC: Massive Open Online Course
MSIS: Master of Science in Information Systems
OECD: Organisation for Economic Co-operation and Development
PISA: Programme for International Student Assessment
PPIG: Psychology of Programming Interest Group
QDAS: Qualitative Data Analysis Software
RQ: Research Question
RTF: Rich-Text Format
STEM: Science, Technology, Engineering, Mathematics
SWECOM: Software Engineering Competency Model
SWS: hours of a class per semester (German=Semesterwochenstunden)
TIMSS: Trends in International Mathematics and Science Study
TUM: Technical University of Munich
WAV: Waveform Audio File
XML: Extensible Markup Language

Part I

Background and Context

The first part of this book introduces the background of the conducted study, which is situated in introductory programming education at the university level. The motivation for this research lies in the challenging nature of learning how to program, and in the repeated evidence of students failing CS1 or programming courses in the first and second year. At the same time, our rapidly changing society requires programming literacy and graduates who can shape digital transformation. With the concept of competency in mind, this work aims to define and model programming competency, and to identify factors preventing or fostering its development. By modeling the components of programming competency, their cognitive complexity is also expected to become apparent. This is crucial for both educators and learners, as the challenging nature of programming still seems to be underestimated in teaching, learning, and assessment. In this part, the present work is thus introduced and motivated, but also contextualized by integrating related research and projects on (computing) competencies. The introduction in Chap. 1 also summarizes the goals and research questions and details the structure of the book. Then the concept of competency is approached in Chap. 2 by attempting to define it and presenting taxonomies and frameworks that can serve as a basis for competency modeling. International and national competency-based curricula recommendations in computing further illustrate the educational standards expected in higher education programs. In Chap. 3, the research design is presented as a basis for the other parts of the book. For this reason, the research desiderata are summarized, goals and research questions are specified, and the general design of the study is introduced to the reader.
This entails the description of study instruments to gather and analyze data, and how the various data sources and methods are going to be integrated to answer the overall research question.

Chapter 1

Introduction

1.1 Background and Motivation

Computer Science (CS) is a relatively young and dynamic scientific discipline that is subject to constant change. The internet, digital mass media, smartphones, apps, and web applications, for example, have become an integral part of our everyday lives. These developments are accompanied by changes in ways and means of communication, values, and job profiles, as pointed out by the Organisation for Economic Co-operation and Development (OECD):

Technological progress is transforming societies, economies, and people's lives as never before. The way we work, learn, communicate, and consume is being turned upside down by digitalization. As artificial intelligence and machine learning begin to reveal their potential, the new wave of change seems set to roll on for decades. (OECD 2019, p. 5)

A recent example is the trained language model known as GPT, which interacts with users in conversations and has the potential to change our lives and education (Prather et al. 2023b; Kiesler et al. 2023a; Kiesler and Schiffner 2023). Today, there is a consensus that schools and academic institutions must address the requirements of our changing society by, for example, offering contemporary, online teaching and learning opportunities to prepare students for life and their careers (KMK 2016). In addition, there is a need for an increasing number of students in Science, Technology, Engineering, and Mathematics (STEM) disciplines, as the demand for qualified employees with technology-related competencies is growing rapidly (OECD 2019).

At the same time, about two out of five CS students fail due to excessively high performance requirements, among other reasons. This affects 45% of CS students at German universities and 41% at universities of applied sciences (Heublein et al. 2017). In a consecutive study by Heublein et al. (2018), the study trajectories of freshmen from the class of 2012/2013 were examined and evaluated by looking at the actual graduates in 2016. Dropout rates slightly shifted to 46% at universities and 39% at universities of applied sciences (Heublein et al. 2018). More recent figures refer to CS students graduating in 2018. These reflect a dropout rate of 44% in university programs and 37% at universities of applied sciences (Heublein et al. 2020).

Introductory programming education, as part of the core curriculum of CS, reflects the universal problem of high failure rates and dropouts in German CS study programs (Gomes and Mendes 2007). General reasons presented in the literature refer to (Heublein et al. 2017; Heublein and Wolter 2011; Neugebauer et al. 2019):

• Challenges in the study entry phase,
• High study requirements in mathematics,
• A lack of academic integration,
• Students' attitudes and motivation,
• A lack of identification with the discipline.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024. N. Kiesler, Modeling Programming Competency, https://doi.org/10.1007/978-3-031-47148-3_1

In introductory programming, domain-specific challenges add to the list. Among them are problem comprehension, the conceptual development of algorithms, understanding program structures, translating algorithms into a programming language, partitioning programs, testing, debugging, understanding error messages, and many others (Du Boulay 1986; Luxton-Reilly et al. 2018; Prather et al. 2023a; Robins et al. 2003; Spohrer and Soloway 1986; Winslow 1996; Xinogalos 2014). Learners' self-confidence in their abilities, their self-efficacy, and the active decision of students to continue despite difficulties can nonetheless have positive effects on the learning progress. Therefore, motivation is considered highly important (Neugebauer et al. 2019; Petersen et al. 2016). The same applies to self-regulation (Loksa et al. 2022).

In addition, neither programming nor CS as a whole is integrated into the curricula of Germany's federal education system. As a result of this structure, CS education in schools differs in every one of the 16 states, not to mention deviations between school types and levels. Hence, the individual prior knowledge of first-year students greatly diverges.

It is therefore not surprising that the literature is full of demands for the transformation of programming education. These include, for example, addressing problem-solving competencies before or while learning how to program (Gomes and Mendes 2007). Jenkins (2002) adds more specific measures:

1. Programming should not be taught before the second year of study (and not in the first semester),
2. The programming language should be evaluated and selected based on pedagogical criteria and suitability for the target group (and not because of its use in the economy),
3. Programming should be taught by pedagogically trained instructors (and not by the best programmers),
4. Programming courses should be designed in a way that allows individual learning,
5. Continuous and summative assessments should be avoided to lighten the pressure put on students, and


6. Departments, faculty, and staff must acknowledge the difficulties in introductory programming education and provide supporting measures to students.

Very recent approaches include Large Language Models (LLMs) and applications like GitHub's Copilot or OpenAI's ChatGPT as tools to teach and learn programming with artificial intelligence (AI) assistance (Porter and Zingaro 2023).

According to Luxton-Reilly (2016), challenges for learners of programming primarily arise from overly high, unrealistic expectations and, most importantly, from too much content in programming courses. Similarly, Whalley et al. (2007) suggest that the difficulty of assignments given to first-year CS students is continuously underestimated. Indeed, programming primarily comprises cognitively complex tasks (Kiesler 2020b,c,d, 2022b), while study after study concludes that novice programmers cannot perform as expected of them (Tew et al. 2005; Whalley and Lister 2009; Luxton-Reilly et al. 2018). Furthermore, programming requires mathematical-analytical thinking (Gomes and Mendes 2007), and the ability to abstract and solve complex problems (Xinogalos 2014). The latter usually requires restructuring already acquired knowledge and the environment, as problems need to be decomposed into their parts, and knowledge needs to be reorganized. In many cases, multiple solutions may lead to the desired result. Thus, learning via understanding (Köhler 1963; Wertheimer 1964) is anticipated, which demands sophisticated cognitive processes from students.

In this context, it is crucial to look into educators' expectations and the pedagogical design of learning objectives, course activities, and assessments. According to the concept of Constructive Alignment (Biggs 1996; Biggs and Tang 2011), these three components should be aligned and resemble each other.
However, educators' classification of the cognitive requirements demanded of learners into levels of cognitive complexity is not always correct, as the following quote suggests:

Most teachers have had students come to them and say things like: "I follow what you do in class, but when I try to apply what you did, I am unable to do it." Bloom's taxonomy may explain what is going on with these comments. Following the instructor is the Bloom comprehension category, but applying that to a programming assignment is in the Bloom synthesis category. (Scott 2003)

This exemplary excerpt of a CS classroom interaction illustrates a particular challenge in learning and teaching how to program. Instructors assume they are teaching and assessing competencies at low-level cognitive process dimensions, such as knowledge, comprehension, and application, when in fact they demand cognitively complex actions, such as analyzing, evaluating, and creating (Scott 2003). Due to the large cognitive gap between these activities, learners cannot follow or simply imitate an instructor's actions. In many cases, exercises in CS classes involve more cognitively complex competencies than assumed by the instructor. The literature is full of examples where computing educators have been unable to correctly apply common taxonomies, such as those of Bloom (1956) or Anderson et al. (2001), to classify teaching and learning objectives (Gluga et al. 2012; Johnson and Fuller 2006; Masapanta-Carrión and Velázquez-Iturbide 2018; Whalley et al. 2006).
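To make this cognitive gap concrete, consider a hypothetical pair of CS1 tasks (an illustration of my own, not taken from the study): Task A reproduces a pattern an instructor might demonstrate in class, while Task B, though superficially similar, requires learners to decompose a new problem and design their own loop logic, including edge cases the lecture never covered.

```python
# Task A (lower cognitive complexity): follow/reproduce code shown in class,
# e.g., summing a list with a loop as demonstrated by the instructor.
def sum_of_list(numbers):
    total = 0
    for n in numbers:
        total += n
    return total

# Task B (higher cognitive complexity): a seemingly similar assignment that
# actually demands analysis and creation. Students must design new loop logic
# to find the second-largest distinct value, and handle edge cases themselves
# (duplicates, lists with fewer than two distinct values).
def second_largest(numbers):
    largest = second = None
    for n in numbers:
        if largest is None or n > largest:
            if largest is not None and n != largest:
                second = largest  # previous maximum becomes runner-up
            largest = n
        elif n != largest and (second is None or n > second):
            second = n
    return second  # None if no second distinct value exists
```

Both functions are short, yet classifying the assignments at the same taxonomy level would hide the step from imitating a demonstrated procedure to creating a new one.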


Both Bloom’s (1956) taxonomy and the revised version of Anderson et al. (2001) are limited to cognitive aspects and have a relatively high degree of abstraction (Baumgartner 2011). Furthermore, the knowledge dimensions and cognitive process dimensions of competencies are context-specific (Clear et al. 2020). Thus, the abstract cognitive process and knowledge dimensions defined by, for example, Anderson et al. (2001) take on different meanings in the CS domain (Kiesler 2020a,b,d). Competency-based learning has recently started to gain traction in computing education (Raj et al. 2021a,b). Early in the 2000s, the political and educational discourse in Germany started to shift towards competencies and competency models. Weinert (2001) defines competency as the sum of individual dispositions of problem-solving skills that can be acquired to master new, previously unknown situations. Cognitive and meta-cognitive competencies are added to the formula, whereas competencies are always considered to be context-specific. Weinert (2001) also refers to the importance of motivational and volitional aspects, which contribute to actual performance. As individual performance is crucial (Klieme and Hartig 2007), competency research usually aims at the investigation of individual characteristics of competencies by clearly defining them. This entails an operationalization, i.e. making competencies visible and measurable by relating them to domain-specific knowledge (Klieme and Hartig 2007). Competency models must not be confused with lists of contents or knowledge. Competencies rather build up on one another within a certain domain, which can be visualized in the form of a matrix containing different dimensions of content-related competencies (Klieme 2004). However, the development of context-specific competency models for CS education has received little attention so far. This is due to several reasons. CS is a constantly evolving discipline. 
Moreover, valid instruments for the measurement or assessment of competencies require empirical data as a basis (Koeppen et al. 2008; Kramer et al. 2016a). In recent years, educational standards for CS primary and secondary education were developed (Working Group Educational Standards in Primary Education 2019; Working Group Educational Standards SI 2008; Working Group Educational Standards SII 2016). Empirically derived competency models for CS higher education programs have only been subject to research within special content areas, but not CS as a whole (Linck et al. 2013; Kramer et al. 2016b; Schäfer 2012). The computer science-specific model for the classification of cognitive competencies suggested by the German Informatics Society (GI) reduces the more complex model of Anderson et al. (2001). It does not reflect all content areas and competencies, reduces cognitive process dimensions (e.g., remembering and creating are eliminated), and summarizes knowledge dimensions (Kiesler 2020a,d). As the GI’s model does not have an empirical basis (GI, German Informatics Society 2016), it must be critically examined and, if necessary, revised to mirror competencies and their complexity instead of further summarizing competency levels and limiting their informational value.


1.2 Goal and Research Questions

The overall goal of this research study is to help foster learners' competency development in introductory programming education, which is one of the core pillars of any computer science degree program. To achieve this goal, the study aims at modeling the competencies expected in the first semesters of CS study programs at German higher education institutions. The research questions (RQs) are as follows.

Research Questions

1. Which programming competencies are currently expected in the introductory phase of German computer science study programs?
   a. Which cognitive process and knowledge dimensions according to the Anderson Krathwohl Taxonomy are expected as part of the teaching and learning objectives of introductory programming education at German universities?
   b. What other teaching and learning objectives are explicitly expected as part of introductory programming education at German universities?
   c. To what extent can the Anderson Krathwohl Taxonomy be applied for the classification of programming competencies in the context of introductory programming education?
   d. Which factors prevent the acquisition of competencies in introductory programming education at German universities?
   e. Which factors help students succeed in introductory programming education at German universities?

The first step is to operationalize programming competency in the first semesters of selected CS study programs, so that the competencies expected of novice learners become observable, which allows for their classification in a consecutive step. A core content area comprising, for example, basic algorithms and data structures in conjunction with programming language-specific knowledge and skills allows for the investigation of competencies expected across different institutions and CS study programs. In particular, basic programming courses of Bachelor's degree programs will be investigated. These are likely to be part of every CS study program, so that the results of this study can be transferred to other institutions and countries.

The operationalization of teaching and learning objectives helps to make student performance measurable, and competencies observable. Modeling programming competency can thus contribute to the conceptualization and implementation of valid assessments, adaptive learning environments including learner models, and educational standards for CS education. So far, educational standards have only been published for German primary and secondary education. The development


of an empirically validated competency model for the context of CS is still pending (Working Group Educational Standards in Primary Education 2019; Working Group Educational Standards SI 2008; Working Group Educational Standards SII 2016). As a first step in that direction, the taxonomy for learning, teaching, and assessing proposed by Anderson et al. (2001) is applied to the defined content area of introductory programming education, and its applicability is examined.

The modeling of programming competencies will further be useful for the classification of exercises, tasks, and (formative) assessments in introductory programming education. The resulting model can, for example, support educators in selecting and developing tasks, weighing them in assessments, or sequencing them for classroom practice and learning activities. In the long run, the competency model will help develop instruments for competency diagnostics, self-assessments, and the evaluation of students' learning processes in introductory programming education. One concrete application may be a pre-test for first-year CS students to diagnose their programming competencies, which could help educators address their learners' needs in alignment with their prior knowledge and experience.

As any assessment can be evaluated through user data, learning analytics (Shacklock 2016) in the form of dashboards providing personalized feedback based on students' individual progress in learning environments is considered future work. Related research already provides insight into state-of-the-art learning environments offering feedback to novice learners of programming (Jeuring et al. 2022a,b; Kiesler 2016a,b, 2022a, 2023a), and into students' mental models (Kiesler 2023b). Matching the identified feedback types to students' individual informational needs, cognition, and programming performance is a consecutive step.
The qualitative modeling of programming competency constitutes the groundwork for any approach classifying students’ abilities and offering supportive measures.
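As a simplified illustration of what such a classification could look like in practice (a sketch of my own, not the study's actual instrument), learning objectives can be tagged with a cell of the Anderson Krathwohl Taxonomy, i.e., a pair of one cognitive process dimension and one knowledge dimension; tagged objectives can then, for example, be sequenced by cognitive complexity:

```python
# Hedged sketch: tagging learning objectives with AKT cells
# (cognitive process dimension x knowledge dimension). The dimension names
# follow Anderson et al. (2001); the example objectives are invented.
COGNITIVE_PROCESSES = ["remember", "understand", "apply",
                       "analyze", "evaluate", "create"]
KNOWLEDGE_DIMENSIONS = ["factual", "conceptual", "procedural", "metacognitive"]

def classify(objective, process, knowledge):
    """Attach an AKT cell to a learning objective, validating both dimensions."""
    if process not in COGNITIVE_PROCESSES:
        raise ValueError(f"unknown cognitive process: {process}")
    if knowledge not in KNOWLEDGE_DIMENSIONS:
        raise ValueError(f"unknown knowledge dimension: {knowledge}")
    return {"objective": objective, "process": process, "knowledge": knowledge}

objectives = [
    classify("Name the primitive data types of Java", "remember", "factual"),
    classify("Implement a linear search over an array", "apply", "procedural"),
    classify("Design a program for a new, unseen problem", "create", "conceptual"),
]

# Order objectives by cognitive complexity, e.g., to sequence exercises.
by_complexity = sorted(objectives,
                       key=lambda o: COGNITIVE_PROCESSES.index(o["process"]))
```

The point of such an operationalization is not the data structure itself, but that each objective is forced into an explicit, comparable cell, which makes mismatches between intended and actually demanded complexity visible.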

1.3 Contextualization of This Research

In this book, current competency requirements in German higher education, as well as associated challenges in the area of basic programming education, are identified. Moreover, the programming competencies are classified by using the Anderson Krathwohl Taxonomy (AKT) (Anderson et al. 2001), which allows for the estimation of cognitive complexity. This is how the research gap of an empirically validated competency model for computer science is addressed, and programming competency is modeled. The analysis of both competency requirements and obstacles for novice learners of programming further constitutes a starting point for the development of supportive measures for educators and students.

Research projects and funding lines with similar research questions and foci include, for example, the project Development of qualitative and quantitative measurement methods of teaching and learning processes for modeling and systems understanding in computer science (MoKoM) at the universities of Siegen and


Paderborn. In this project, the competencies of secondary school students and their understanding of information systems and object-oriented modeling were investigated (Linck et al. 2013; Magenheim et al. 2012; Bröker et al. 2014; Rhode 2013).

Another example, from the funding line Competence models and instruments of competence assessment in the higher education sector - validation and methodological innovation (KoKoHS), is the project Competencies for teaching computer science (KUI). The KUI project was concerned with the development of models for the assessment of generic competencies and the development of respective instruments for object-oriented programming, with a focus on the training of student teachers at elementary schools (BMBF 2020; JGU 2012; Lautenbach et al. 2018).

Yet another example, from the research accompanying the German Quality Pact for Teaching, is the project Competence acquisition of tutors in computer science (KETTI). KETTI attempted to capture what students need for successful learning and to translate those needs into an empirically developed competence model. Its goal was to support the training of tutors and students (Danielsiek et al. 2017; Krugel and Hubwieser 2017).

Another related project at the Technical University of Munich (TUM) investigated Automated competence measurement and feedback generation for object-oriented programming (AKoFOOP) between 2019 and 2021. Its goal was to analyze students' solutions with respect to competency, and the effects of automated feedback (TUM 2020).

In the international context, a study of non-technical expectations towards first-year CS students was published in 2020 (Groeneveld et al. 2020). Sultana (2016) pursued the goal of identifying the most relevant and appropriate competencies, programming languages, and assessments for introductory computer science courses. The study used the Delphi method to summarize the expert views of two stakeholder groups.
Moreover, several working groups at the Innovation and Technology in Computer Science Education (ITiCSE) conference focused on competency. One of them investigated the competencies expected of students in the first and fourth year of study, from a student perspective (Frezza et al. 2018). In addition, a 2021 ITiCSE working group focused on the pedagogy and assessment of competencies in CS study programs (Raj et al. 2021a,b). Other recent discussion formats and panels in the context of ITiCSE, the Special Interest Group on Computer Science Education (SIGCSE), and Frontiers in Education (FIE) focus on dispositions as a component of competency (Impagliazzo et al. 2022; Kiesler et al. 2023b; Kiesler and Impagliazzo 2023; Kumar et al. 2023; MacKellar et al. 2023; McCauley et al. 2023; Sabin et al. 2023).

To conclude, this research monograph is a step towards the development of an empirically validated competency model that classifies competencies into dimensions of cognitive processes and knowledge. It thus represents an example of empirical research applying the qualitative paradigm in the CS domain. Based on the data gathering and analysis, the actual expectations towards novice programmers in introductory programming courses at German universities will be captured. The results can be useful for other (interdisciplinary) research projects at the intersection of computer science, educational psychology, and, for example,


pedagogy. Furthermore, the modeling of programming competencies can contribute to the improvement of introductory programming education not only in Germany but worldwide.

1.4 Structure of the Book

The present chapter briefly introduced the background of computer science education and the challenges for learners and educators in basic programming education, which motivate this study. Moreover, the goal of this research was presented along with the research questions, and the key characteristics of this investigation were outlined. This section presents the structure of the book chapter by chapter:

Chapter 2 addresses central themes, as well as related work and projects on competency modeling in the context of CS study programs and higher education. Moreover, the term competency is defined, before competency models, competency-based curricula recommendations, and related research in computing are presented.

Chapter 3 introduces the research design of the present work. For this purpose, the research desiderata are summarized, and the goals of this work are presented. This is accompanied by the research questions and the methodology applied to answer them.

Chapter 4 focuses on the process of data gathering. As this research utilizes different data sources and methods, the gathering of university curricula is presented first. All steps of this qualitative process are summarized, including the sampling method and the selection of study programs and courses.

Chapter 5 analyzes the data of the university curricula. The presented steps include the pre-processing of the data, the data analysis via qualitative content analysis, and the application of quality criteria to the analysis results.

Chapter 6 covers the data gathering of the guided expert interviews by illustrating the development of the interview guide and questions. Moreover, the sampling method and selection of experts are described. In addition, the process of conducting and recording the interviews is elaborated.

Chapter 7 presents the data analysis process of the guided expert interviews, in analogy to the previous chapters. This involves the transcription process and the application of the summarizing content analysis according to Mayring. Likewise, the application of quality criteria to the results is explained.

Chapter 8 presents the results of the curricula analysis. The identified and classified programming competencies are presented according to their cognitive process dimensions.

Chapter 9 presents the results of the guided expert interviews. They follow the logic of the previous chapter by identifying and classifying programming competencies according to their cognitive process and knowledge dimensions. In addition, factors preventing the acquisition of programming competencies, and factors helping students succeed in introductory programming courses are summarized.

Chapter 10 summarizes the results of the two research methods, which yields a competency model of basic programming competencies. It further presents a summary of other components of competency expected from novice learners of programming. Furthermore, the Anderson Krathwohl Taxonomy is revisited and adapted to the context of programming education.

Chapter 11 wraps up the research study as well as the most important findings and derives implications for introductory programming education. Finally, perspectives for future work are presented.

References L.W. Anderson, D.R. Krathwohl, P.W. Airasian, K.A. Cruikshank, R.E. Mayer, P.R. Pintrich, J. Raths, M.C. Wittrock, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives (Addison Wesley Longman, New York, 2001) P. Baumgartner, Taxonomie von Unterrichtsmethoden: ein Plädoyer für didaktische Vielfalt (Waxmann Verlag, Münster, 2011) J. Biggs, Enhancing teaching through constructive alignment. High. Educ. 32(3), 347–364 (1996) J. Biggs, C. Tang, Teaching for Quality Learning at University (McGraw-Hill Education, Maidenhead, 2011) B.S. Bloom, Taxonomy of educational objectives: the classification of educational goals. Cogn. Domain (1956) BMBF, Kompetenzmodellierung und Instrumente der Kompetenzerfassung im Hochschulsektor (2020). Online Publication K. Bröker, U. Kastens, J. Magenheim, Competences of undergraduate computer science students. KEYCIT 2014 Key Competencies Informatics ICT, 7:77 (2014) A. Clear, A. Parrish, P. Ciancarini, S. Frezza, J. Gal-Ezer, J. Impagliazzo, A. Pears, S. Takada, H. Topi, G. van der Veer, A. Vichare, L. Waguespack, P. Wang, M. Zhang, Computing Curricula 2020 (CC2020): Paradigms for Future Computing Curricula. Technical report (Association for Computing Machinery/IEEE Computer Society, New York, 2020). http://www.cc2020.net/ H. Danielsiek, P. Hubwieser, J. Krugel, J. Magenheim, L. Ohrndorf, D. Ossenschmidt, N. Schaper, J. Vahrenhold, Kompetenzbasierte Gestaltungsempfehlungen für Informatik-Tutorenschulungen. INFORMATIK 2017 (2017) B. Du Boulay, Some difficulties of learning to program. J. Educ. Comput. Res. 2(1), 57–73 (1986) S. Frezza, M. Daniels, A. Pears, r. Cajander, V. Kann, A. Kapoor, R. McDermott, A.-K. Peters, M. Sabin, C. 
Wallace, Modelling competencies for computing education beyond 2020: a research based approach to defining competencies in the computing disciplines, in Proceedings Companion of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE 2018 Companion (Association for Computing Machinery, New York, 2018), pp. 148–174

12

1 Introduction

GI, German Informatics Society, Empfehlungen für Bachelor- und Masterprogramme im Studienfach Informatik an Hochschulen (2016). Online Publication R. Gluga, J. Kay, R. Lister, S. Kleitman, T. Lever, Coming to terms with bloom: an online tutorial for teachers of programming fundamentals, in Proceedings of the Fourteenth Australasian Computing Education Conference - Volume 123, ACE ’12 (AUS. Australian Computer Society, Inc., Darlinghurst, 2012), pp. 147–156 A. Gomes, A.J. Mendes, Learning to program-difficulties and solutions, in International Conference on Engineering Education (ICEE) (2007) W. Groeneveld, B.A. Becker, J. Vennekens, Soft skills: What do computing program syllabi reveal about non-technical expectations of undergraduate students? in Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE ’20 (Association for Computing Machinery, New York, 2020), pp. 287–293 U. Heublein, A. Wolter, Studienabbruch in Deutschland. Definition, Häufigkeit, Ursachen, Maßnahmen. Z. Pädagogik 57(2), 214–236 (2011) U. Heublein, J. Ebert, C. Hutzsch, S. Isleib, R. König, J. Richter, A. Woisch, Zwischen Studienerwartungen und Studienwirklichkeit, in Forum Hochschule, vol. 1 (2017), pp. 134– 136 U. Heublein, R. Schmelzer, D. Sommer, Die Entwicklung der Studienabbruchquote an den deutschen Hochschulen. Technical report, Deutsches Zentrum für Hochschul- und Wissenschaftsforschung (DZHW), Hannover (2018) U. Heublein, J. Richter, R. Schmelzer, Die Entwicklung der Studienabbruchquoten in Deutschland. DZHW Brief (3, 2020) H. Horz, D. Krömker, F. Goldhammer, D. Bengs, S. Fabriz, F. Horn, U. Kröhne, P. Libbrecht, J. Niemeyer, D. Schiffner, A. Tillmann, F.C. Wenzel, Computerbasiertes adaptives Testen im Studium–CaTS. Fachtagung Hochschulen im digitalen Zeitalter. 3.–4. Juli 2017, Berlin (2017) J. Impagliazzo, N. Kiesler, A.N. Kumar, B. Mackellar, R.K. Raj, M. 
Sabin, Perspectives on dispositions in computing competencies, in Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education Vol. 2, ITiCSE ’22 (ACM, New York, 2022), pp. 662–663 T. Jenkins, On the difficulty of learning to program, in Proceedings of the 3rd Annual Conference of the LTSN Centre for Information and Computer Sciences, vol. 4 (2002), pp. 53–58. Citeseer J. Jeuring, H. Keuning, S. Marwan, D. Bouvier, C. Izu, N. Kiesler, T. Lehtinen, D. Lohr, A. Petersen, S. Sarsa, Steps learners take when solving programming tasks, and how learning environments (should) respond to them, in Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education Vol. 2, ITiCSE ’22 (Association for Computing Machinery, New York, 2022a), pp. 570–571 J. Jeuring, H. Keuning, S. Marwan, D. Bouvier, C. Izu, N. Kiesler, T. Lehtinen, D. Lohr, A. Peterson, S. Sarsa, Towards giving timely formative feedback and hints to novice programmers, in Proceedings of the 2022 Working Group Reports on Innovation and Technology in Computer Science Education, ITiCSE-WGR ’22 (Association for Computing Machinery, New York, 2022b), pp. 95–115 JGU, Kompetenzen für das Unterrichten in Informatik. Förderkennzeichen: 01PK11019A (2012). Online Publication C.G. Johnson, U. Fuller, Is Bloom’s taxonomy appropriate for computer science? in Proceedings of the 6th Baltic Sea Conference on Computing Education Research: Koli Calling 2006, Baltic Sea ’06 (Association for Computing Machinery, New York, 2006), pp. 120–123 N. Kiesler, Ein Bild sagt mehr als tausend Worte–interaktive Visualisierungen in webbasierten Programmieraufgaben, in DeLFI 2016–Die 14. E-Learning Fachtagung Informatik, 11.–14. September 2016, Potsdam, ed. by U. Lucke, A. Schwill, R. Zender, vol. P-262. LNI. GI (2016a), pp. 335–337 N. 
Kiesler, Teaching programming 201 with visual code blocks instead of VI, eclipse or visual studio–experiences and potential use cases for higher education, in EDULEARN16 Proceedings, 8th International Conference on Education and New Learning Technologies. IATED (2016b), pp. 3171–3179

References

13

N. Kiesler, Kompetenzmodellierung für die grundlegende Programmierausbildung – Eine kritische Diskussion zu Vorzügen und Anwendbarkeit der Anderson Krathwohl Taxonomie im Vergleich zum Kompetenzmodell der GI, in DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., online, 14.–18. September 2020, ed. by R. Zender, D. Ifenthaler, T. Leonhardt, C. Schumacher, vol. P-308. LNI. Gesellschaft für Informatik e.V. (2020a), pp. 187–192
N. Kiesler, On programming competence and its classification, in Koli Calling '20: Proceedings of the 20th Koli Calling International Conference on Computing Education Research (Association for Computing Machinery, New York, 2020b)
N. Kiesler, Towards a competence model for the novice programmer using Bloom's revised taxonomy – an empirical approach, in Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE '20 (Association for Computing Machinery, New York, 2020c), pp. 459–465
N. Kiesler, Zur Modellierung und Klassifizierung von Kompetenzen in der grundlegenden Programmierausbildung anhand der Anderson Krathwohl Taxonomie (2020d). CoRR abs/2006.16922. arXiv: 2006.16922. https://arxiv.org/abs/2006.16922
N. Kiesler, An exploratory analysis of feedback types used in online coding exercises (2022a). CoRR abs/2206.03077v2. arXiv: 2206.03077v2. https://doi.org/10.48550/arXiv.2206.03077
N. Kiesler, Kompetenzförderung in der Programmierausbildung durch Modellierung von Kompetenzen und informativem Feedback. Dissertation, Johann Wolfgang Goethe-Universität, Frankfurt am Main, Fachbereich Informatik und Mathematik (2022b)
N. Kiesler, Investigating the use and effects of feedback in CodingBat exercises: an exploratory thinking aloud study, in 2023 Future of Educational Innovation-Workshop Series Data in Action (2023a), pp. 1–12
N. Kiesler, Mental models of recursion: a secondary analysis of novice learners' steps and errors in Java exercises, in Psychology of Programming Interest Group 2022 – 33rd Annual Workshop, ed. by S. Holland, M. Petre, L. Church, M. Marasoiu (2023b), pp. 226–240
N. Kiesler, J. Impagliazzo, Industry's expectations of graduate dispositions, in 2023 IEEE Frontiers in Education Conference (FIE) (2023), pp. 1–5
N. Kiesler, D. Schiffner, Large language models in introductory programming education: ChatGPT's performance and implications for assessments (2023). CoRR abs/2308.08572. arXiv: 2308.08572. https://doi.org/10.48550/arXiv.2308.08572
N. Kiesler, D. Lohr, H. Keuning, Exploring the potential of large language models to generate formative programming feedback, in 2023 IEEE Frontiers in Education Conference (FIE) (2023a), pp. 1–5
N. Kiesler, B.K. MacKellar, A.N. Kumar, R. McCauley, R.K. Raj, M. Sabin, J. Impagliazzo, Computing students' understanding of dispositions: a qualitative study, in Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education Vol. 1, ITiCSE 2023 (Association for Computing Machinery, New York, 2023b)
E. Klieme, Was sind Kompetenzen und wie lassen sie sich messen? Pädagogik 6, 10–13 (2004)
E. Klieme, J. Hartig, Kompetenzkonzepte in den Sozialwissenschaften und im erziehungswissenschaftlichen Diskurs, in Kompetenzdiagnostik, Zeitschrift für Erziehungswissenschaft, ed. by M. Prenzel, I. Gogolin, H.-H. Krüger, vol. Sonderheft 8 (Springer, Berlin, 2007), pp. 11–29
KMK, Bildung in der digitalen Welt. Strategie der Kultusministerkonferenz. Technical report, Beschluss der Kultusministerkonferenz vom 08.12.2016 (2016)
K. Koeppen, J. Hartig, E. Klieme, D. Leutner, Current issues in competence modeling and assessment. Z. Psychol./J. Psychol. 216(2), 61–73 (2008)
W. Köhler, Intelligenzprüfungen am Menschenaffen (Springer, Berlin, 1963)
M. Kramer, P. Hubwieser, T. Brinda, A competency structure model of object-oriented programming, in Proceedings of the 4th International Conference on Learning and Teaching in Computing and Engineering (LATICE) (IEEE, 2016a), pp. 1–8
M. Kramer, D.A. Tobinski, T. Brinda, Modelling competency in the field of OOP: from investigating computer science curricula to developing test items, in Stakeholders and Information Technology in Education (SaITE) (IEEE, 2016b), pp. 1–8


J. Krugel, P. Hubwieser, Kompetenzerwerb von Tutorinnen und Tutoren in der Informatik: Schlussbericht der Technischen Universität München. Technical report, Technische Universität München (2017)
A.N. Kumar, R. McCauley, B. MacKellar, M. Sabin, N. Kiesler, R.K. Raj, J. Impagliazzo, Quantitative results from a study of professional dispositions, in Proceedings of the 54th ACM Technical Symposium on Computer Science Education, SIGCSE 2023 (Association for Computing Machinery, New York, 2023)
C. Lautenbach, M. Toepper, O. Zlatkin-Troitschanskaia, H.A. Pant, D. Molerov, Kompetenzen von Studierenden – Ergebnisse des "KoKoHs"-Programms im Kontext der nationalen und internationalen Assessmentpraxis, in Hochschulen im Spannungsfeld der Bologna-Reform, ed. by N. Hericks (Springer, Wiesbaden, 2018), pp. 199–216
B. Linck, L. Ohrndorf, S. Schubert, P. Stechert, J. Magenheim, W. Nelles, J. Neugebauer, N. Schaper, Competence model for informatics modelling and system comprehension, in 2013 IEEE Global Engineering Education Conference (EDUCON) (IEEE, 2013), pp. 85–93
D. Loksa, L. Margulieux, B.A. Becker, M. Craig, P. Denny, R. Pettit, J. Prather, Metacognition and self-regulation in programming education: theories and exemplars of use. ACM Trans. Comput. Educ. 22(4), 1–31 (2022)
A. Luxton-Reilly, Learning to program is easy, in Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE '16 (Association for Computing Machinery, New York, 2016), pp. 284–289
A. Luxton-Reilly, Simon, I. Albluwi, B.A. Becker, M. Giannakos, A.N. Kumar, L. Ott, J. Paterson, M.J. Scott, J. Sheard, C. Szabo, Introductory programming: a systematic literature review, in Proceedings Companion of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education (ACM, New York, 2018), pp. 55–106
B.K. MacKellar, N. Kiesler, R.K. Raj, M. Sabin, R. McCauley, A.N. Kumar, Promoting the dispositional dimension of competency in undergraduate computing programs, in 2023 ASEE Annual Conference & Exposition. ASEE Conferences (2023). https://peer.asee.org/43018
J. Magenheim, S. Schubert, N. Schaper, Entwicklung von qualitativen und quantitativen Messverfahren zu Lehr-Lern-Prozessen für Modellierung und Systemverständnis in der Informatik (MoKoM), in Formate Fachdidaktischer Forschung: Empirische Projekte – historische Analysen – theoretische Grundlagen, ed. by H. Bayrhuber, U. Harms, B. Muszynski, B. Ralle, M. Rothgangel, L.-H. Schön, H.J. Vollmer, H.-G. Weigand (Waxmann, Münster, 2012), pp. 109–128
S. Masapanta-Carrión, J.A. Velázquez-Iturbide, A systematic review of the use of Bloom's taxonomy in computer science education, in Proceedings of the 49th ACM Technical Symposium on Computer Science Education, SIGCSE '18 (Association for Computing Machinery, New York, 2018), pp. 441–446
R. McCauley, M. Sabin, A.N. Kumar, N. Kiesler, B. MacKellar, R.K. Raj, J. Impagliazzo, Using vignettes to elicit students' understanding of dispositions in computing education, in 2023 IEEE Frontiers in Education Conference (FIE) (2023), pp. 1–5
M. Neugebauer, U. Heublein, A. Daniel, Studienabbruch in Deutschland: Ausmaß, Ursachen, Folgen, Präventionsmöglichkeiten. Z. Erzieh. 22(5), 1025–1046 (2019)
OECD, OECD Skills Outlook 2019: Thriving in a Digital World (Organisation for Economic Co-operation & Development, Paris, 2019)
A. Petersen, M. Craig, J. Campbell, A. Tafliovich, Revisiting why students drop CS1, in Proceedings of the 16th Koli Calling International Conference on Computing Education Research, Koli Calling '16 (ACM, New York, 2016), pp. 71–80
L. Porter, D. Zingaro, Learn AI-assisted Python Programming with GitHub Copilot and ChatGPT. Manning Early Access Program (MEAP) (2023)
J. Prather, P. Denny, B.A. Becker, R. Nix, B.N. Reeves, A.S. Randrianasolo, G. Powell, First steps towards predicting the readability of programming error messages, in Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1, SIGCSE 2023 (ACM, New York, 2023a), pp. 549–555


J. Prather, P. Denny, J. Leinonen, B.A. Becker, I. Albluwi, M.E. Caspersen, M. Craig, H. Keuning, N. Kiesler, T. Kohn, A. Luxton-Reilly, S. MacNeil, A. Petersen, R. Pettit, B.N. Reeves, J. Savelka, Transformed by transformers: navigating the AI coding revolution for computing education: an ITiCSE working group conducted by humans, in Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 2, ITiCSE 2023 (Association for Computing Machinery, New York, 2023b), pp. 561–562
R. Raj, M. Sabin, J. Impagliazzo, D. Bowers, M. Daniels, F. Hermans, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, S.W. Nabi, M. Oudshoorn, Professional competencies in computing education: pedagogies and assessment, in Proceedings of the 2021 Working Group Reports on Innovation and Technology in Computer Science Education, ITiCSE-WGR '21 (Association for Computing Machinery, New York, 2021a), pp. 133–161
R.K. Raj, M. Sabin, J. Impagliazzo, D. Bowers, M. Daniels, F. Hermans, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, S.W. Nabi, M. Oudshoorn, Toward practical computing competencies, in Proceedings of the 26th ACM Conference on Innovation and Technology in Computer Science Education V. 2, ITiCSE '21 (Association for Computing Machinery, New York, 2021b), pp. 603–604
T. Rhode, Entwicklung und Erprobung eines Instruments zur Messung informatischer Modellierungskompetenz im fachdidaktischen Kontext. PhD thesis, Universität Paderborn (2013)
A. Robins, J. Rountree, N. Rountree, Learning and teaching programming: a review and discussion. Comput. Sci. Educ. 13(2), 137–172 (2003)
M. Sabin, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, R.K. Raj, J. Impagliazzo, Fostering dispositions and engaging computing educators, in Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 2, SIGCSE 2023 (Association for Computing Machinery, New York, 2023)
A. Schäfer et al., The empirically refined competence structure model for embedded micro- and nanosystems, in Proceedings of the 17th ACM Annual Conference on Innovation and Technology in Computer Science Education, ITiCSE '12 (ACM, New York, 2012), pp. 57–62
T. Scott, Bloom's taxonomy applied to testing in computer science classes. J. Comput. Sci. Coll. 19(1), 267–274 (2003)
X. Shacklock, From bricks to clicks: the potential of data and analytics in higher education. Technical report, The Higher Education Commission's (HEC) report (2016)
J.C. Spohrer, E. Soloway, Novice mistakes: are the folk wisdoms correct? Commun. ACM 29(7), 624–632 (1986)
S. Sultana, Defining the competencies, programming languages, and assessments for an introductory computer science course. PhD thesis, Old Dominion University (2016)
A.E. Tew, W.M. McCracken, M. Guzdial, Impact of alternative introductory courses on programming concept understanding, in Proceedings of the First International Workshop on Computing Education Research (2005), pp. 25–35
TUM, Forschung: Laufende Projekte: Messung und Bewertung informatischer Kompetenzen (2020). Online publication
F.E. Weinert, Leistungsmessungen in Schulen. Druck nach Typoskript (Beltz (Beltz Pädagogik), Weinheim, Basel, 2001)
S.F.C. Wenzel, S. Fabriz, H. Horz, Paper presented at the Annual Meeting of the American Educational Research Association (AERA), New York (2018)
S.F.C. Wenzel, C. Krille, S. Fabriz, U. Kröhne, F. Goldhammer, D. Bengs, P. Libbrecht, H. Horz, Paper at the 2019 Meeting of the American Educational Research Association (AERA), Toronto (2019)
M. Wertheimer, Produktives Denken (Harper and Brothers Publishers, New York, 1964)
J.L. Whalley, R. Lister, The BRACElet 2009.1 (Wellington) specification, in Conferences in Research and Practice in Information Technology Series (2009)
J.L. Whalley, R. Lister, E. Thompson, T. Clear, P. Robbins, P.K.A. Kumar, C. Prasad, An Australasian study of reading and comprehension skills in novice programmers, using the Bloom and SOLO taxonomies, in Proceedings of the 8th Australasian Conference on Computing Education – Volume 52, ACE '06 (Australian Computer Society, Inc., Darlinghurst, 2006), pp. 243–252


J. Whalley, T. Clear, R. Lister, The many ways of the BRACElet project. Bull. Appl. Comput. Inf. Technol. 5(1), 1–16 (2007)
L.E. Winslow, Programming pedagogy – a psychological overview. ACM SIGCSE Bull. 28(3), 17–22 (1996)
Working Group Educational Standards in Primary Education, Kompetenzen für informatische Bildung im Primarbereich. Beilage zu LOG IN, 39. Jahrgang (191/192) (2019)
Working Group Educational Standards SI, Grundsätze und Standards für die Informatik in der Schule: Bildungsstandards Informatik für die Sekundarstufe I. Beilage zu LOG IN, 28. Jahrgang (150/151) (2008)
Working Group Educational Standards SII, Bildungsstandards Informatik für die Sekundarstufe II. Beilage zu LOG IN, 36. Jahrgang (183/184) (2016)
S. Xinogalos, Designing and deploying programming courses: strategies, tools, difficulties and pedagogy. Educ. Inf. Technol. 21(3), 559–588 (2014)

Chapter 2

Approaching the Concept of Competency

2.1 Competency Definition

Due to Germany's poor results in the large-scale Programme for International Student Assessment (PISA), educational standards became a common and well-accepted instrument for the development of curricula and student assessment. Moreover, the concept of competency was increasingly recognized (Waldow 2009). In pedagogy and education, the concept of competency has been discussed for several years now. There is consensus that it has to be distinguished from other concepts, such as qualification or performance. This section attempts to define the term competency from a psychological perspective according to Weinert (2001a,b). In addition, the historical development of the concept of competency is illustrated. This section closes with some key elements of the recent discourse related to competency-based education in computing.

2.1.1 Psychological Perspective on Competency

Competency is a relatively young term in pedagogical psychology that has been subject to debate from the 1950s onward. The attempt to define competency thus has to consider its evolution and elaboration from earlier concepts commonly used in pedagogical psychology, such as qualification, key qualification, or achievement (Mertens 1974; Weinert 2001b,a).

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 N. Kiesler, Modeling Programming Competency, https://doi.org/10.1007/978-3-031-47148-3_2


Weinert (2001a) defines competency as the set of cognitive abilities and skills that can be learned by individuals to solve problems. Motivational capacities and volitional and social readiness are crucial to successfully using one’s dispositions in new environments and situations in the real world.

Thus, competence becomes visible through performance and how cognitive demands are met in actual contexts (Connell et al. 2003; Klieme and Hartig 2007). For this reason, Klieme also argues for the use of cognitive competency domains that are context-specific (Klieme 2004; Kiesler 2020c). Contrary to the terms qualification and performance, competency describes the individual learning process and evolution of a person. It thus reflects the extent to which a subject has acquired skills and abilities to deal with the situational demands of a constantly changing society. The performance in a particular context makes competency visible and observable (Connell et al. 2003).

In the German educational discourse, competencies are often referred to as dispositions to emphasize the role of the active application of knowledge and skills in a specific context (Rychen and Salganik 2003; Klieme and Hartig 2007; Weinert 2001a,b). Knowledge alone is not sufficient, as cognitive and psycho-social skills, a subject's attitude, values, and motivation are crucial for competency. The concept of competency, therefore, is a holistic concept referring to the whole person. Other concepts, such as performance, specialized knowledge, or skills alone, play a minor role in competency compared to a learner's ability for self-reflection (Rychen and Salganik 2003; Klieme and Hartig 2007). Because competency depends on the individual, the term is strongly characterized by individuality. This is likely one of the reasons for constantly changing definitions within the social and educational sciences (Klieme and Hartig 2007).

The increasing focus on the learner allows for a connection of competency to constructivist theories of educational psychology (Merrill 1992; Maturana and Varela 1987). Such theories understand learning as an individual, subjective construction process of meaning. They consider the active engagement of an individual with the environment as essential for learning (Maturana 1985; Maturana and Varela 1987; Reinmann et al. 2013).

2.1.2 Historical Perspective on Competency

Competency, derived from the Latin word "competentia", literally means "gathering together". Whereas it referred to the right to taxes, income, and livelihood in the German-speaking area in the sixteenth century, its meaning changed over time. In


the nineteenth century, it was used to express jurisdiction, ability, and expertise (DWDS 2020). According to Grebe and Drosdowski (1963), the adjective competent started to enter common language use in the eighteenth century. Being competent at the time meant that someone would be responsible, decisive, and empowered to do something (Grebe and Drosdowski 1963). These characteristics were transferred to the educational context and the humanities (Pikkarainen 2014).

In the U.S., the concept of competency was introduced into the behaviorist branch of educational psychology in the mid-twentieth century by Robert White and subsequently David McClelland (Grzeda 2005; Mulder et al. 2009). This development was followed by psychological studies exploring above-average performance of learners and personality traits. Central to the discourse at the time was the perception of competence as the sum of behavioral and task-based skills (Winterton 2009; Le Deist and Winterton 2005; Pikkarainen 2014).

Heinrich Roth provided a more systematic approach toward competency in the 1970s. He distinguishes the maturity of individuals according to their self-competency, technical competency, and social competency. Self-competency refers to the disposition to self-regulation. Moreover, it subsumes learning strategies, but also meta-cognitive strategies for planning, monitoring, and regulating one's behavior. Technical competency describes skills that are related to a specific domain. For this reason, Roth adds subject and methodological competency to this realm. Social competency describes the ability to act appropriately in social, political, and societal contexts. In addition to these cognitive areas, Roth distinguishes affective-motivational competencies as a component of competence (Roth 1971). Bader (1989) transferred Roth's triad into vocational training and its pedagogy. Heidack (1993) later subdivided the concept of action competency into components.
One of those components is technical competency as the ability to implement professional knowledge. Social competency as another component describes the perception of thoughts, feelings, and attitudes of others, resulting in socially acceptable behavior. Methodological competency refers to the process of finding solutions to problems. Thus, learners must be able to identify a problem, know problem-solving strategies, select one (or more), and implement the solution. Heidack (1993) further distinguishes personality competency and self-competency. These abilities denote personal values, awareness of one's competencies, and their implementation in decision-making processes (Heidack 1993). According to Bader (1989) and Heidack (1993), cooperative self-competency plays a crucial role in vocational training.

The tendency to replace technical learning objectives with more general abilities and skills continued, and this development soon expanded to educational settings other than vocational training. Ever since the OECD conducted its large-scale assessments, the acquisition of certain skills to adapt to economic and social changes has become increasingly important. Likewise, the Memorandum of the European Commission highlighted the importance of lifelong learning and further training for active participation in shaping the future (European Commission 2000).


2.1.3 Recent Perspectives and Discussions

Following the discussion of Germany's poor results in the OECD's large-scale international student assessments (Baumert et al. 2001), Weinert (2001a,b, 2014) pointed out the need to distinguish technical knowledge from cognitive competencies and action. Ever since, educational standards in Germany have been improved to focus more on outcomes and to allow for centralized assessments of student performance (Klieme et al. 2003). Competency is not limited to technical knowledge or cognitive processes, though. It also includes the affective domain and the readiness to apply acquired knowledge and skills. This is how lifelong learning is supposed to be achieved (Weinert 2001a, 2014; Klieme and Hartig 2007).

In 2003, the OECD listed three competency areas in its final report on key competencies (Rychen and Salganik 2003). These include:

1. interaction with heterogeneous social groups,
2. independent or autonomous action, and
3. the interactive use of tools.

The first of these three key competency areas describes socially acceptable interaction with others, especially in the context of a diverse and multicultural society. Independent and autonomous action as the second area concerns the work environment, private life, and engagement in civil society and politics. Exercising control over these areas of life, asserting and realizing personal goals, and knowing one's abilities and limits are subsumed here. The last key competency area represents dealing with the demands of today's information society, which highly depends on digital tools. These tools include language, information, and knowledge, but also hard- and software. What distinguishes key competencies from other competencies is their potentially immense contribution to the development of an individual and society as a whole. Furthermore, key competencies can be defined as tools for overcoming complex challenges in a variety of contexts (Rychen and Salganik 2003).
The development of competence-oriented educational goals and standards has evolved in almost all domains, scientific disciplines, and educational institutions ever since. The distinction of self-competency, technical competency, and social competency was no longer limited to vocational training. Instead, the development of a uniform conceptual framework for describing successful performance in a constantly changing world became the goal. This goal was addressed by both academia and professional environments (Klieme and Hartig 2007). For example, the guidelines for the development of framework curricula of the Standing Conference of the Ministers of Education and Cultural Affairs (KMK) started to integrate the concept of competency into German education. As a result, action competencies were developed and assigned to knowledge areas (KMK 2007). However, the concept of competency is not without criticism. Critical voices and concerns are raised particularly in relation to international comparative assessments, such as the Trends in International Mathematics and Science Study (TIMSS)


and PISA. Referring to competency appears to be a pragmatic choice. In contrast to competency, the term education is strongly shaped by German representatives and the German discourse, where it has a long and well-documented history. As a result, the term education is challenging to apply in international comparisons. Nevertheless, competency, with its broader and holistic perspective, adds value to the educational discourse, as it does not just focus on specific jobs or specialized skills (Clarke and Winch 2006; Pikkarainen 2014; Wheelahan and Moodie 2011). Professional disagreement over specific terms and micro-aspects of various definitions is recurrent in the literature (Arnold 2002; Frezza et al. 2018; Kiesler 2020b,d).

The extent to which competencies are subject-specific is another subject of debate. The rationale behind this assumption is a pragmatic one, such as the composition of curricula recommendations. Similarly, principles of pedagogical psychology tend to argue for the close relationship of competency to a specific context, and thus against the transfer of competencies across domain boundaries (Klieme 2004). Another crucial aspect is that competency and performance must not be confused with each other, since the concept of competency comprises the disposition to perform in a given context or task. For this reason, there is criticism of empirical procedures for competency measurement and modeling. This is also why the operationalization of competencies must be unambiguous and closely aligned with the content's complexity (Klieme and Hartig 2007).

2.2 Taxonomies and Competency Models for Computing

The goal of competency modeling and research is to investigate the individual characteristics of competencies by clearly defining, i.e., operationalizing, them and making them measurable (Klieme and Hartig 2007). This process requires competencies to be concrete and subject-specific. Competency models must not be confused with mere lists of topics or knowledge areas. Instead, the competencies of a subject area are systematically structured and mapped while considering knowledge domains, which can result, for example, in a matrix with different dimensions (Klieme 2004). This section provides an overview of the cross-disciplinary taxonomy for the classification of cognitive competencies developed by B. Bloom, and its revision by Anderson et al. (2001). Moreover, a computer science-specific competency model proposed for higher education (GI 2016) is introduced.

2.2.1 Bloom’s and Anderson-Krathwohl’s Taxonomy Bloom’s taxonomy of educational objectives (Bloom 1956) is increasingly being used in CS education and introductory programming courses to express and classify cognitive learning goals and competencies (Lister 2000; Lister and Leaney 2003;


Thompson et al. 2008). It distinguishes six dimensions of cognitive complexity, whereby the complexity increases level by level. The different dimensions are assumed to build upon each other, so that higher levels can only be achieved by successfully mastering lower-level tasks. Bloom's original taxonomy comprises the following six dimensions, starting with the simplest one: (1) Knowledge, (2) Comprehension, (3) Application, (4) Analysis, (5) Synthesis, and (6) Evaluation. Bloom's taxonomy has been applied to plenty of courses and study programs. However, CS educators experience several difficulties when applying the dimensions of cognitive elaboration to learning objectives, learning activities, or assessments (Gluga et al. 2012; Johnson and Fuller 2006; Masapanta-Carrión and Velázquez-Iturbide 2018; Whalley et al. 2006).

Due to the criticism of Bloom's taxonomy regarding the hierarchy of dimensions, challenges in their application, and the sequence of dimensions, Anderson et al. (2001) proposed a revised version of the taxonomy. The Anderson Krathwohl Taxonomy, also known as AKT, exchanged the fifth and sixth dimensions of the original taxonomy, introduced action verbs instead of nouns, and added subdimensions to the levels of cognitive complexity. Similar to Bloom's taxonomy, the AKT describes categories for the classification of cognitive learning objectives. Anderson et al. (2001) define the six cognitive process dimensions as follows:

1. Remembering - describes the recalling, repeating, verbalizing, or reciting of previously heard information or knowledge by memorizing it.
2. Understanding - describes the explanation of facts and their meaning in one's own words through, for example, interpretation, examples, classification, summarizing, or comparisons.
3. Applying - describes the execution, application, or implementation of processes, operations, or strategies on previously unknown examples.
4. Analyzing - describes the decomposition of a problem or concept into its components and their explanation by, for example, outlining their relationship, organization, characteristics, or their meaning in the overall context.
5. Evaluating - describes the reasonable selection of a solution among multiple alternatives via the careful review of criteria or standards.
6. Creating - describes the combination of elements or components into a new, functioning whole through planning, design, or implementation.

Yet another addition by Anderson et al. (2001) is the four knowledge dimensions of the taxonomy. They distinguish between (A) Factual knowledge, (B) Conceptual knowledge, (C) Procedural knowledge, and (D) Meta-cognitive knowledge to better reflect individual knowledge construction and the role of prior knowledge learners may have. The four knowledge dimensions are assumed to form a continuum, with the level of abstraction increasing significantly from factual knowledge to meta-cognitive knowledge. According to Anderson et al. (2001), conceptual and procedural knowledge may overlap in some cases, because procedural knowledge can be more concrete than very abstract conceptual knowledge.


Anderson et al. (2001) illustrate each of the four knowledge dimensions by using a persona for each dimension:

A. Factual knowledge (Mrs. Patterson) - In-depth knowledge of contents and their details and contexts. Statements, facts, or contents are repeated or cited by learners. This dimension is mainly used to describe the reproduction of facts or the knowledge of isolated concepts. Its level of abstraction is very low.
B. Conceptual knowledge (Ms. Chang) - Knowledge of higher-order ideas and concepts. Connections between contents, as well as their relation to more general concepts, principles, models, and theories, become clear by forming mental models. The category describes more complex knowledge that is somehow organized and can be related to other disciplines.
C. Procedural knowledge (Mr. Jefferson) - Content is used as a starting point to convey strategies, processes, algorithms, procedures, and methods. Procedural knowledge can be used, for example, for comparisons. It focuses on how to do something.
D. Meta-cognitive knowledge (Mrs. Weinberg) - Learners should also use the content as a starting point to develop methods, procedures, and strategies for understanding, analyzing, and elaborating knowledge. Learners should reflect upon these strategies by, for example, questioning procedures, learning from mistakes, and planning and regulating their learning activities. Furthermore, the consciousness of knowledge and one's cognition are central. The meta-cognitive dimension further represents self-reflection, self-control, and self-management.

The six cognitive process dimensions and the four knowledge dimensions can be illustrated as a two-dimensional matrix. Table 2.1 represents this continuum of cognitive complexity. Masapanta-Carrión and Velázquez-Iturbide (2019) were able to show that educators classifying programming tasks based on the AKT performed better than educators using the original taxonomy by Bloom.
Nonetheless, they recommend more training for teachers, especially for those who lack basic pedagogical training. Alaoutinen and Smolander (2010) further emphasize the depth of knowledge that can be represented by the AKT, especially when it is used as a benchmark for self-assessments in programming courses.

Table 2.1 Anderson Krathwohl Taxonomy (AKT) (adapted from Kiesler (2022, 2020c))

Knowledge dimension       | Cognitive process dimension
                          | Remember | Understand | Apply | Analyze | Evaluate | Create
Factual knowledge         |          |            |       |         |          |
Conceptual knowledge      |          |            |       |         |          |
Procedural knowledge      |          |            |       |         |          |
Meta-cognitive knowledge  |          |            |       |         |          |
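The two-dimensional structure of the taxonomy table can also be sketched programmatically. The following Python snippet is a minimal illustration (not part of the study reported in this book): it models the AKT as a grid of cognitive process and knowledge dimensions and tags a few programming-related learning objectives with a cell of that grid. The example objectives and their classifications are hypothetical and chosen here only for illustration.

```python
# Illustrative sketch of the AKT as a two-dimensional grid.
# The six cognitive process dimensions and four knowledge dimensions
# follow Anderson et al. (2001); the objectives below are hypothetical.

PROCESS_DIMENSIONS = ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"]
KNOWLEDGE_DIMENSIONS = ["Factual", "Conceptual", "Procedural", "Meta-cognitive"]

def classify(process: str, knowledge: str) -> tuple[int, int]:
    """Return the (row, column) cell of the AKT matrix for a classification."""
    return (KNOWLEDGE_DIMENSIONS.index(knowledge), PROCESS_DIMENSIONS.index(process))

# Hypothetical objectives mapped to cells of the taxonomy table:
objectives = {
    "state the syntax of a for loop": ("Remember", "Factual"),
    "trace a given loop and predict its output": ("Apply", "Procedural"),
    "reflect on one's own debugging strategy": ("Evaluate", "Meta-cognitive"),
}

for text, (process, knowledge) in objectives.items():
    row, col = classify(process, knowledge)
    print(f"{text!r} -> cell ({row}, {col}): {knowledge} x {process}")
```

Such a representation makes explicit that a single objective occupies one cell of the matrix, combining exactly one cognitive process with one knowledge dimension.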


Table 2.2 Competency model of the GI published in the curricular recommendations for bachelor's and master's programs (GI 2016) (adapted from Kiesler (2020b, 2022))

Cognitive processes: Level 1 Understand | Level 2 Apply | Level 3 Analyze | Level 4 Create
Contextualization:   low contextualization and low complexity → high contextualization and high complexity
Subcategories:       Level 2a Transfer (of Level 2 Apply) | Level 3a Evaluate (of Level 3 Analyze)

2.2.2 Competency Model of the German Informatics Society

The GI is a professional organization proposing educational standards and curricula recommendations for computing education in Germany. It published a so-called competency model as part of its curricula recommendation for baccalaureate and master programs in 2016 (GI 2016). It can be described as a reduced version of the AKT. The competency model aims at the description and classification of competencies in various content areas of CS study programs. Table 2.2 illustrates the GI's competency model for CS education (Kiesler 2020b).

The most obvious alteration of the AKT is the reduction of cognitive process dimensions in the GI model, such as remembering (see Table 2.2). The GI argues that remembering is not relevant to CS education, as the latter only strives for understanding (GI 2016). Another claim is that the most complex cognitive process dimension, create, is only, if at all, expected from graduates who write their final theses. Instead, the GI introduces a new category as a subcategory of Level 2 Apply, and refers to it as Level 2a Transfer. This is supposed to express a higher level of contextualization. Similarly, Level 3a Evaluate is a specification of Level 3 Analyze to show greater contextualization. Contextualization further replaces the knowledge dimensions of the AKT. The GI taxonomy table thus represents a high or low degree of contextualization (K1 to K5). Other sub-types, i.e., types of scientific work (T1 to T6) and knowledge dimensions (W1 to W4), are supposed to be represented by annotations (GI 2016). However, the knowledge dimensions (W1 to W4) are not specified in the examples provided in the appendix (GI 2016), which contributes to the lack of clarity. According to the GI's model, acquiring meta-cognitive knowledge seems to be exclusively limited to the completion of doctoral dissertations.
Moreover, the GI’s failure to include the dimensions remember and create needs to be questioned (Kiesler 2020a,b,d).

2.3 Competency-Based Curricula Recommendations in Computing

In the context of computing and related disciplines, the concept of competency only recently started to gain traction. It has received increasing recognition ever since curricula recommendations for computing programs started to transform from


knowledge-based to competency-based learning. Several examples highlight the developments within the past years.

In 2017, the Accreditation Committee of the European Quality Assurance Network for Informatics Education (EQANIE) published new recommendations for the accreditation of business informatics or information systems study programs. Their program outcomes are defined as “quality standards for knowledge, skills, and competencies that graduates of an accredited course should have achieved” (EQANIE, European Quality Assurance Network for Informatics Education 2017). Similarly, the Digital Competence Framework 2.0 proposed by the European Commission identified the components of digital competency, which include (Carretero et al. 2017):

1. information and data literacy,
2. communication and collaboration,
3. digital content creation,
4. safety, and
5. problem-solving.

The ACM and IEEE as professional societies began to transform their baccalaureate curricula with the IT2017 project (Sabin et al. 2017). At the time, they started to embrace the concept of competency and integrated it into the curriculum definition. As a major change, they shifted from knowledge areas, knowledge units, learning outcomes, and topics towards competency and performance: “competence refers to the performance standards associated with a profession” (Sabin et al. 2017). Besides this development, the 2016 Master of Science in Information Systems (MSIS) report introduced competency-based learning at the level of master’s programs: “competencies represent a dynamic combination of cognitive and metacognitive skills, demonstration of knowledge and understanding, interpersonal, intellectual and practical skills, and ethical values” (Topi et al. 2017). Before these recommendations, the Software Engineering Competency Model (SWECOM) had defined competency as “demonstrated ability to perform work activities at a stated competency level” (IEEE, IEEE Computer Society 2014).

The transformation of these curricula towards competencies indicates a major shift in computing education and professional societies. Therefore, two examples of international competency-based curricula recommendations are introduced: the IT2017 report (Sabin et al. 2017) as the first competency-based baccalaureate curriculum, and the CC2020 report as a more recent example (Clear et al. 2020). Curricula recommendations for bachelor’s and master’s degree programs by the German Informatics Society (GI) further serve as a national example.

2.3.1 Information Technology 2017

The Information Technology IT2017 report was among the first in the computing context to explicitly address competency-based education. Its authors thus moved away from knowledge areas, knowledge units, learning outcomes, and topic models to


2 Approaching the Concept of Competency

reflect on competency and students’ readiness for their careers. One of the reasons for this shift is that the majority of computing graduates enter the workplace rather than academia. The goal is thus to have graduates who are ready to perform in an industry job.

The IT2017 report defines competency as the sum of knowledge, skills, and dispositions applied in a professional context (Sabin et al. 2017).

The three dimensions mentioned in the definition are interrelated and defined in greater detail in the report:

1. Knowledge is the know-what dimension, which reflects core concepts and contents. In the past, knowledge has received the greatest attention from teachers, departments, and accreditation bodies. Educators usually do not have any difficulties recognizing this dimension in their syllabi and curricula, where knowledge is represented as a list of topics.

2. The second component refers to skills as the know-how dimension of competency. Skills comprise strategies, methods, and capabilities which learners develop over time, for example, via practice, taking action, or interacting with other people. They are considered higher-order cognitive abilities that should be addressed in the curricula. A recommendation is to integrate problem-based teaching, learning, and instruction, to practice via authentic problems, and to utilize laboratory assignments that are relevant to the workplace.

3. The third component, dispositions, “encompass socio-emotional skills, behaviors, and attitudes that characterize the inclination to carry out tasks and the sensitivity to know when and how to engage in those tasks” (Sabin et al. 2017; Perkins et al. 1993). The IT2017 report notes the origin of dispositions in the field of vocational training, and further notes their lack of recognition within computing education. Nonetheless, dispositions are defined via the notion of a learner’s “values, motivation, feelings, stereotypes, and attitudes such as confidence in dealing with complexity, tolerance to ambiguity, persistence in working with difficult problems, knowing one’s strengths and weaknesses, and setting aside differences when working with others” (Sabin et al. 2017). Dispositions are also referred to as the know-why dimension of competency (Schussler 2006), and as the basis for hiring decisions in industry. However, the concept is still perceived as new in computing education.
Readiness for a career usually requires students to acquire the qualities arranged in these three dimensions, instead of merely remembering a body of knowledge. The professional context is defined by employer involvement, expert mentorship, authentic problems, diverse teams, and other aspects. The IT2017 report thus reflects a competency-based curricular framework, wherein knowledge, skills, and


dispositions blend together for success in authentic tasks, and eventually in the workplace (Wiggins and McTighe 2005).

2.3.2 Computing Curricula 2020

Another example of curricula recommendations addressing competency is the Computing Curricula 2020 (CC2020) project, an initiative of several professional computing societies (Clear et al. 2020). It encompasses curricular guidelines for computing study programs aiming at baccalaureate-level degrees (e.g., Computer Engineering, Computer Science, Cybersecurity, Information Systems, Information Technology, and Software Engineering, along with data science). The CC2020 project has adopted a definition of competency similar to that of the IT2017 report. It thus highlights the role of skills and dispositions, rather than knowledge alone. Moreover, the CC2020 project notes the pragmatism of using the concept of competency.

The CC2020 project report defines competency as a formula of the three dimensions within the performance of a task, where: Competency = [Knowledge + Skills + Dispositions] in Task.
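To make this formula concrete, the triad can be sketched as a minimal data structure. This is purely an illustration of how the CC2020 reading of competency could be represented, not part of the report itself; all names and example values below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Competency:
    """Illustrative sketch of Competency = [Knowledge + Skills + Dispositions] in Task."""
    task: str  # the setting in which competency manifests and becomes observable
    knowledge: list[str] = field(default_factory=list)     # know-what dimension
    skills: list[str] = field(default_factory=list)        # know-how dimension
    dispositions: list[str] = field(default_factory=list)  # know-why dimension

# Hypothetical example of a competency statement framed by a task
c = Competency(
    task="design a normalized relational schema for a client project",
    knowledge=["relational model", "normal forms"],
    skills=["schema design", "SQL DDL"],
    dispositions=["meticulous", "collaborative"],
)
```

Note that the task is the only required field in this sketch, mirroring the CC2020 point that knowledge, skills, and dispositions only become concrete and observable within a task.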

In the CC2020 curricula report, competency is defined as closely related to the behaviors required for a successful job performance and career. It is a concept that focuses on individuals, and the whole person, as it considers the triad of knowledge, technical skills, and human behavior (Clear et al. 2020). As in the IT2017 report, knowledge is defined as the know-what dimension, skills are the know-how dimension, and dispositions are referred to as the know-why dimension.

The CC2020 report adds the task to the formula instead of the professional context in the IT2017 report. A task is the construct that serves as a frame for the application of knowledge and skills, making dispositions concrete and observable. It is the setting in which competency manifests, as tasks provide the context for action (Clear et al. 2020). The CC2020 report shows that a skill couples technical knowledge with performance levels in doing a task. Dispositions differ from knowledge and skills because they refer to the intent and willingness to perform in a context (Perkins et al. 1993; Schussler 2006; Freeman 2007). Dispositions “control whether and how an individual is inclined to use their skills” (Clear et al. 2020); they reflect an individual’s habits, attitudes, and socio-emotional tendencies. The CC2020 report gives 11 examples of prospective dispositions comprising proactive, self-directed,


passionate, purpose-driven, professional, responsible, adaptable, collaborative, responsive, meticulous, and inventive.

The report as a whole further provides plenty of resources for students, educational institutions, government, industry, and the public globally. It summarizes and synthesizes the current state of curricular guidelines for baccalaureate study programs in computing, and provides insights into the future of computing education for the 2020s and beyond.

2.3.3 National Curricula Recommendations

As a consequence of the poor performance of German students in international assessments 20 years ago (Baumert et al. 2001; Waldow 2009), standards for K-12 CS education have been established (Brinda et al. 2009; Seehorn et al. 2011). In higher education, however, descriptive competency models (Schecker and Parchmann 2006) are rare in the context of computing (Linck et al. 2013; Kramer et al. 2016b; Schäfer 2012). Domain-specific, cognitive competency models should also have an empirical basis (Koeppen et al. 2008; Kramer et al. 2016b). Thus, the development of a competency model for higher education computer science, and particularly programming, requires more research (Kiesler 2020a,b,d).

Nonetheless, the GI published a prescriptive competency model as part of its curricula recommendations for bachelor’s and master’s programs in 2016. The GI as a professional computing organization, for example, develops and publishes educational standards and curricular recommendations. Its latest recommendation includes a model aiming at the description of competencies in various content areas of CS bachelor’s and master’s programs (see Table 2.2). The model is supposed to be a guide for the design, development, and assessment of recent CS curricula (GI 2016).

Competency is the sum of cognitive skills and abilities that enable an individual to solve a given task or problem in a context. Motivational, volitional, and social dispositions contribute to such skills and abilities, meaning that cognitive and non-cognitive competencies are interconnected.

In the GI’s report, competency is defined by adopting Weinert’s (2001a) position. The GI further highlights its orientation towards outcomes from a workplace perspective (GI 2016). It is thus the main goal of the professional society to prepare students for the profession and for joining the industry workforce. The curricula recommendations further categorize CS knowledge areas and add the expected cognitive competencies to them. The knowledge areas build upon the ACM’s and IEEE’s curricula recommendations (ACM, Joint Task Force on


Computing Curricula 2013). Competencies other than those within the cognitive domain (so-called non-cognitive competencies) are listed without referring to knowledge areas, resulting in the following short list of competencies:

• Formal, algorithmic, and mathematical competency
• Analysis, design, implementation, and project management competency
• Technological competency
• Interdisciplinary competency
• Methodological and transfer competency
• Social competency and self-competency

Even though these content areas mainly represent cognitive competencies, it is the GI’s position that “non-cognitive skills” are also addressed as part of these areas. It is claimed that these are acquired in connection with problem-solving and cognitive tasks, albeit implicitly rather than explicitly. This is why they are usually not listed as explicit learning objectives. The acquisition of non-cognitive competencies is further assumed to be part of students’ personal development and therefore only part of promoting graduates’ professionalism. The GI sees only some overlap concerning elements of project and team competency (GI 2016). The following list summarizes the GI’s non-cognitive competencies that can be acquired in CS study programs:

• Self-management competency
• Cooperation competency
• Learning competency
• Media literacy
• Literacy (writing and reading competency)
• Attitude and mindset
• Empathy
• Motivational and volitional competency
• Motivation for learning
• Commitment and dedication

Another national recommendation comprises standards for general computing competencies, which include programming, but also the critical-reflective application of technologies (Working Group GeRRI 2020). Even though the GeRRI framework aims at the description of basic computing competencies, it lacks several cognitive process dimensions of the AKT as well as an empirical basis, so it cannot be considered in the present work (Kiesler 2021).

2.4 Related Research in Computing Education

Modeling computing competencies has not yet been the subject of extensive research. Next, a few examples of international and national studies on the modeling of competencies are introduced.


In a 2018 ITiCSE working group, Frezza et al. (2018) defined competency as a basis for competency modeling in computing. In their report, the components of knowledge, skills, dispositions, and context form the so-called Competency Learning Framework (CoLeaF). The CoLeaF framework was then applied in a case study, which analyzed students’ view of the competencies expected from them during their studies. As a result, a hierarchy of competencies was developed. The working group further analyzed the competencies expected in a software engineering course (Frezza et al. 2018). Their example, however, was of small scope and did not clearly distinguish skills and dispositions. Another weakness was the broad and arbitrary language used in the competency statements.

Competency was also the focus of an ITiCSE working group in 2021 (Raj et al. 2021b,a). Their report highlights the integrative nature of the three components of competency, namely knowledge, skills, and professional dispositions situated in a context. Competency is characterized by the human aspect in professional development, as affective and social aspects play a role in learning:

Attributing competencies to the “whole person” requires that educators recognize the presence of all three dimensions of competencies (cognitive, affective, and social) and their intrinsic interdependence and dynamics. (Raj et al. 2021a)

Moreover, the working group presents pedagogical approaches and forms of assessment, which are discussed along with challenges and opportunities related to the shift towards competency-based education. One of them is that:

Learning, developing, and practicing professional competencies reveal dependency relationships among learners. Competencies do not exist in isolation, and different individuals will experience different progressions. (Raj et al. 2021a)

This aspect requires educators’ attention, as it has implications for teaching and assessment. Dispositions, as the most complex component of competency, are especially challenging. At the same time, dispositions are known to be expected by industry (Chen et al. 2022; Kiesler and Impagliazzo 2023; Kiesler and Thorbrügge 2023). Yet, dispositions may require new pedagogical approaches and instruments adequate for a course’s objectives, its setting, course size, and students’ prior competencies (Raj et al. 2021a; Impagliazzo et al. 2022; Kumar et al. 2023; MacKellar et al. 2023; McCauley et al. 2023; Sabin et al. 2023). Moreover, students’ understanding of dispositions must be ensured, which has been the subject of recent research (Kiesler et al. 2023).

A few projects and funding lines in Germany have been concerned with the modeling of competency. Within the priority program “Competency Models”, funded by the German Research Foundation, German researchers from the fields of education and psychology started to develop competency models for several disciplines, such as science, technology, engineering, and mathematics (STEM) (Klieme et al. 2010; Kramer et al. 2016a). Havenga et al. (2008) constructed the first descriptive learning repertoire for third-year CS students’ abilities in Object-Oriented Programming (OOP) by using qualitative and quantitative methodologies. Their model presents knowledge, skills, and abilities in four categories: construction, reflection, selection,


and application (Havenga et al. 2008). Cognitive, metacognitive, and problem-solving knowledge and skills are represented as part of each category, but the repertoire lacks subcategories distinguishing levels of complexity.

As part of the project Competence development with embedded micro- and nanosystems (KOMINA) (Schäfer 2012), a previously developed normative competence structure model was refined through a survey with experts. The advanced empirical model presents three cognitive dimensions (Precondition, Development C, Multi-level Development) and one dimension including non-cognitive competencies for the chosen content area. Linck et al. (2013) developed a competence model for informatics modeling and system comprehension in secondary education using a similar approach combining normative and empirical methodologies. The final model describes four content-related categories (System Application, System Comprehension, System Development, and Dealing with System Complexity) including learning objectives on several levels of Bloom’s revised taxonomy (Anderson et al. 2001). The fifth dimension presents competencies from domains other than the cognitive one (Linck et al. 2013).

Few other projects address the need for an empirically proven competence model for CS education. However, these projects rather focus on teaching concepts related to selected programming languages and paradigms, like OOP (Kramer et al. 2016a,b,c). Others address the qualification of teaching assistants (Danielsiek et al. 2017) and pre-service teachers (Lautenbach et al. 2018). Thus, a transferable competency model or framework for programming that is independent of a certain programming language or paradigm has not yet been developed.

References

ACM, Joint Task Force on Computing Curricula, Computer Science Curricula 2013: Curriculum Guidelines for Undergraduate Degree Programs in Computer Science (Association for Computing Machinery, New York, 2013)
S. Alaoutinen, K. Smolander, Student self-assessment in a programming course using Bloom’s revised taxonomy, in Proceedings of the Fifteenth Annual Conference on Innovation and Technology in Computer Science Education, ITiCSE ’10 (Association for Computing Machinery, New York, 2010), pp. 155–159
L.W. Anderson, D.R. Krathwohl, P.W. Airasian, K.A. Cruikshank, R.E. Mayer, P.R. Pintrich, J. Raths, M.C. Wittrock, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives (Addison Wesley Longman, New York, 2001)
R. Arnold, Von der Bildung zur Kompetenzentwicklung: Anmerkungen zu einem erwachsenenpädagogischen Perspektivwechsel, in Literatur- und Forschungsreport Weiterbildung, ed. by E. Nuissl, C. Schiersmann, H. Siebert, Deutsches Institut für Erwachsenenbildung e.V. (DIE), vol. 49 (Bertelsmann Verlag, Bielefeld, 2002), pp. 26–38
R. Bader, Berufliche Handlungskompetenz. Die berufsbildende Schule 41(2), 73–77 (1989)
J. Baumert, P. Stanat, A. Demmrich, PISA 2000: Untersuchungsgegenstand, theoretische Grundlagen und Durchführung der Studie, in PISA 2000 (Springer, Berlin, 2001), pp. 15–68
B.S. Bloom, Taxonomy of Educational Objectives: The Classification of Educational Goals. Cognitive Domain (1956)


T. Brinda, H. Puhlmann, C. Schulte, Bridging ICT and CS: educational standards for computer science in lower secondary education. SIGCSE Bull. 41(3), 288–292 (2009)
S. Carretero, R. Vuorikari, Y. Punie, DigComp 2.1: The Digital Competence Framework for Citizens with Eight Proficiency Levels and Examples of Use. Technical Report JRC Working Papers JRC106281, Joint Research Centre (Seville site) (2017)
J. Chen, S. Ghafoor, J. Impagliazzo, Producing competent HPC graduates. Commun. ACM 65(12), 56–65 (2022)
L. Clarke, C. Winch, A European skills framework?—but what are skills? Anglo-Saxon versus German concepts. J. Educ. Work 19(3), 255–269 (2006)
A. Clear, A. Parrish, P. Ciancarini, S. Frezza, J. Gal-Ezer, J. Impagliazzo, A. Pears, S. Takada, H. Topi, G. van der Veer, A. Vichare, L. Waguespack, P. Wang, M. Zhang, Computing Curricula 2020 (CC2020): Paradigms for Future Computing Curricula. Technical report (Association for Computing Machinery/IEEE Computer Society, New York, 2020). http://www.cc2020.net/
M.W. Connell, K. Sheridan, H. Gardner, On abilities and domains, in The Psychology of Abilities, Competencies, and Expertise, ed. by R.J. Sternberg, E.L. Grigorenko (Cambridge University Press, Cambridge, 2003), pp. 126–155
H. Danielsiek, P. Hubwieser, J. Krugel, J. Magenheim, L. Ohrndorf, D. Ossenschmidt, N. Schaper, J. Vahrenhold, Kompetenzbasierte Gestaltungsempfehlungen für Informatik-Tutorenschulungen, in INFORMATIK 2017 (2017)
DWDS, Kompetenz (2020). Online Publication
EQANIE, European Quality Assurance Network for Informatics Education, EURO-INF Framework Standards and Accreditation Criteria for Informatics Degree Programmes. Technical report, European Quality Assurance Network for Informatics Education (EQANIE) (2017)
European Commission, Memorandum über Lebenslanges Lernen, Brüssel (2000)
L. Freeman, An overview of dispositions in teacher education, in Dispositions in Teacher Education, ed. by M.E. Diez, J. Raths (Information Age Publishing, Inc., Charlotte, 2007)
S. Frezza, M. Daniels, A. Pears, Å. Cajander, V. Kann, A. Kapoor, R. McDermott, A.-K. Peters, M. Sabin, C. Wallace, Modelling competencies for computing education beyond 2020: a research based approach to defining competencies in the computing disciplines, in Proceedings Companion of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE 2018 Companion (Association for Computing Machinery, New York, 2018), pp. 148–174
GI, German Informatics Society, Empfehlungen für Bachelor- und Masterprogramme im Studienfach Informatik an Hochschulen (2016). Online Publication
GI, German Informatics Society, Bildungsstandards Informatik. Kompetenzmodell (2019). Online Publication
R. Gluga, J. Kay, R. Lister, S. Kleitman, T. Lever, Coming to terms with Bloom: an online tutorial for teachers of programming fundamentals, in Proceedings of the Fourteenth Australasian Computing Education Conference - Volume 123, ACE ’12 (Australian Computer Society, Inc., Darlinghurst, 2012), pp. 147–156
P. Grebe, G. Drosdowski, Duden “Etymologie”: Herkunftswörterbuch der deutschen Sprache (Bibliographisches Institut, Mannheim, 1963)
M.M. Grzeda, In competence we trust? Addressing conceptual ambiguity. J. Manag. Dev. 24, 530–545 (2005)
M. Havenga, E. Mentz, R. De Villiers, Knowledge, skills and strategies for successful object-oriented programming: a proposed learning repertoire. S. Afr. Comput. J. 12, 1–8 (2008)
C. Heidack, Kooperative Selbstqualifikation als geistige Wertschöpfungskette im Prozeß eines ganzheitlichen, wechselseitigen Lernprozesses in der Organisationsentwicklung, in Lernen der Zukunft. Kooperative Selbstqualifikation—die effektivste Form der Aus- und Weiterbildung im Betrieb, ed. by C. Heidack, vol. 2 (Lexika, München, 1993), pp. 21–42
IEEE, IEEE Computer Society, Software Engineering Competency Model. Version 1.0, SWECOM. Technical report, IEEE Computer Society (2014)


J. Impagliazzo, N. Kiesler, A.N. Kumar, B. MacKellar, R.K. Raj, M. Sabin, Perspectives on dispositions in computing competencies, in Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education Vol. 2, ITiCSE ’22 (ACM, New York, 2022), pp. 662–663
C.G. Johnson, U. Fuller, Is Bloom’s taxonomy appropriate for computer science?, in Proceedings of the 6th Baltic Sea Conference on Computing Education Research: Koli Calling 2006, Baltic Sea ’06 (Association for Computing Machinery, New York, 2006), pp. 120–123
N. Kiesler, Kompetenzmodellierung für die grundlegende Programmierausbildung – Eine kritische Diskussion zu Vorzügen und Anwendbarkeit der Anderson Krathwohl Taxonomie im Vergleich zum Kompetenzmodell der GI, in DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., online, 14.–18. September 2020, ed. by R. Zender, D. Ifenthaler, T. Leonhardt, C. Schumacher, vol. P-308, LNI (Gesellschaft für Informatik e.V., 2020a), pp. 187–192
N. Kiesler, On programming competence and its classification, in Koli Calling ’20: Proceedings of the 20th Koli Calling International Conference on Computing Education Research (Association for Computing Machinery, New York, 2020b)
N. Kiesler, Towards a competence model for the novice programmer using Bloom’s revised taxonomy – an empirical approach, in Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE ’20 (Association for Computing Machinery, New York, 2020c), pp. 459–465
N. Kiesler, Zur Modellierung und Klassifizierung von Kompetenzen in der grundlegenden Programmierausbildung anhand der Anderson Krathwohl Taxonomie (2020d). CoRR abs/2006.16922. arXiv: 2006.16922. https://arxiv.org/abs/2006.16922
N. Kiesler, Wer ist GeRRI? Eine kritische Diskussion des Gemeinsamen Referenzrahmens Informatik, in DeLFI 2021 – Die 19. E-Learning Fachtagung Informatik, 15.–16. September 2021, Dortmund, ed. by A. Kienle, A. Harrer, J.M. Haake, A. Lingnau, LNI (GI, 2021), pp. 343–348
N. Kiesler, Kompetenzförderung in der Programmierausbildung durch Modellierung von Kompetenzen und informativem Feedback. Dissertation, Johann Wolfgang Goethe-Universität, Frankfurt am Main, Fachbereich Informatik und Mathematik (2022)
N. Kiesler, J. Impagliazzo, Industry’s expectations of graduate dispositions, in 2023 IEEE Frontiers in Education Conference (FIE) (2023), pp. 1–5
N. Kiesler, C. Thorbrügge, Socially responsible programming in computing education and expectations in the profession, in Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education Vol. 1, ITiCSE 2023 (Association for Computing Machinery, New York, 2023)
N. Kiesler, B.K. MacKellar, A.N. Kumar, R. McCauley, R.K. Raj, M. Sabin, J. Impagliazzo, Computing students’ understanding of dispositions: a qualitative study, in Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education Vol. 1, ITiCSE 2023 (Association for Computing Machinery, New York, 2023)
E. Klieme, Was sind Kompetenzen und wie lassen sie sich messen? Pädagogik 6, 10–13 (2004)
E. Klieme, J. Hartig, Kompetenzkonzepte in den Sozialwissenschaften und im erziehungswissenschaftlichen Diskurs, in Kompetenzdiagnostik, Zeitschrift für Erziehungswissenschaft, ed. by M. Prenzel, I. Gogolin, H.-H. Krüger, Sonderheft 8 (Springer, Berlin, 2007), pp. 11–29
E. Klieme, H. Avenarius, W. Blum, P. Döbrich, H. Gruber, M. Prenzel, K. Reiss, K. Riquarts, J. Rost, H.-E. Tenorth, H.J. Vollmer, The Development of National Educational Standards: An Expertise. Bundesministerium für Bildung und Forschung, Referat Bildungsforschung (2003)
E. Klieme, D. Leutner, M. Kenk, et al., Kompetenzmodellierung: Zwischenbilanz des DFG-Schwerpunktprogramms und Perspektiven des Forschungsansatzes. Zeitschrift für Pädagogik, 56. Beiheft (2010)
KMK, Kultusministerkonferenz, Handreichung für die Erarbeitung von Rahmenlehrplänen der Kultusministerkonferenz für den berufsbezogenen Unterricht in der Berufsschule und ihre Abstimmung mit Ausbildungsordnungen des Bundes für anerkannte Ausbildungsberufe. Technical report, Ständige Konferenz der Kultusminister der Länder (2007)


K. Koeppen, J. Hartig, E. Klieme, D. Leutner, Current issues in competence modeling and assessment. Z. Psychol./J. Psychol. 216(2), 61–73 (2008)
M. Kramer, P. Hubwieser, T. Brinda, A competency structure model of object-oriented programming, in Proceedings of the 4th International Conference on Learning and Teaching in Computing and Engineering (LATICE) (IEEE, 2016a), pp. 1–8
M. Kramer, D.A. Tobinski, T. Brinda, Modelling competency in the field of OOP: from investigating computer science curricula to developing test items, in Stakeholders and Information Technology in Education (SaITE) (IEEE, 2016b), pp. 1–8
M. Kramer, D.A. Tobinski, T. Brinda, On the way to a test instrument for object-oriented programming competencies, in Proceedings of the 16th Koli Calling International Conference on Computing Education Research, Koli Calling ’16 (ACM, New York, 2016c), pp. 145–149
A.N. Kumar, R. McCauley, B. MacKellar, M. Sabin, N. Kiesler, R.K. Raj, J. Impagliazzo, Quantitative results from a study of professional dispositions, in Proceedings of the 54th ACM Technical Symposium on Computer Science Education, SIGCSE 2023 (Association for Computing Machinery, New York, 2023)
C. Lautenbach, M. Toepper, O. Zlatkin-Troitschanskaia, H.A. Pant, D. Molerov, Kompetenzen von Studierenden – Ergebnisse des “KoKoHs”-Programms im Kontext der nationalen und internationalen Assessmentpraxis, in Hochschulen im Spannungsfeld der Bologna-Reform, ed. by N. Hericks (Springer, Wiesbaden, 2018), pp. 199–216
F.D. Le Deist, J. Winterton, What is competence? Hum. Resour. Dev. Int. 8(1), 27–46 (2005)
B. Linck, L. Ohrndorf, S. Schubert, P. Stechert, J. Magenheim, W. Nelles, J. Neugebauer, N. Schaper, Competence model for informatics modelling and system comprehension, in 2013 IEEE Global Engineering Education Conference (EDUCON) (IEEE, 2013), pp. 85–93
R. Lister, On blooming first year programming, and its blooming assessment, in Proceedings of the Australasian Conference on Computing Education, ACSE ’00 (Association for Computing Machinery, New York, 2000), pp. 158–162
R. Lister, J. Leaney, First year programming: let all the flowers bloom, in Proceedings of the Fifth Australasian Conference on Computing Education - Volume 20, ACE ’03 (Australian Computer Society, Inc., Darlinghurst, 2003), pp. 221–230
B.K. MacKellar, N. Kiesler, R.K. Raj, M. Sabin, R. McCauley, A.N. Kumar, Promoting the dispositional dimension of competency in undergraduate computing programs, in 2023 ASEE Annual Conference & Exposition, ASEE Conferences (2023). https://peer.asee.org/43018
S. Masapanta-Carrión, J.A. Velázquez-Iturbide, A systematic review of the use of Bloom’s taxonomy in computer science education, in Proceedings of the 49th ACM Technical Symposium on Computer Science Education, SIGCSE ’18 (Association for Computing Machinery, New York, 2018), pp. 441–446
S. Masapanta-Carrión, J.A. Velázquez-Iturbide, Evaluating instructors’ classification of programming exercises using the revised Bloom’s taxonomy, in Proceedings of the 2019 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE ’19 (Association for Computing Machinery, New York, 2019), pp. 541–547
H.R. Maturana, Erkennen: Die Organisation und Verkörperung von Wirklichkeit, 2. durchgesehene Auflage (autorisierte deutsche Fassung von Wolfram K. Köck) (Friedrich Vieweg & Sohn, Braunschweig, 1985)
H.R. Maturana, F.J. Varela, Der Baum der Erkenntnis. Die biologischen Wurzeln des menschlichen Erkennens (Scherz Verlag, Bern, 1987)
R. McCauley, M. Sabin, A.N. Kumar, N. Kiesler, B. MacKellar, R.K. Raj, J. Impagliazzo, Using vignettes to elicit students’ understanding of dispositions in computing education, in 2023 IEEE Frontiers in Education Conference (FIE) (2023), pp. 1–5
D.C. Merrill, Constructivism and instructional design: epistemology and the construction of meaning, in Constructivism and the Technology of Instruction, ed. by T. Duffy, D. Jonassen (Lawrence Erlbaum, Hillsdale, 1992), pp. 17–34
D. Mertens, Schlüsselqualifikationen – Thesen zur Schulung für eine moderne Gesellschaft. Mitt. Arbeitsmarkt Berufsforsch. 7, 36–43 (1974)

References

35

M. Mulder, J. Gulikers, H. Biemans, R. Wesselink, The new competence concept in higher education: error or enrichment. J. Eur. Ind. Train. 33, 755–770 (2009) D.N. Perkins, E. Jay, S. Tishman, Beyond abilities: a dispositional theory of thinking. MerrillPalmer Q. 39(1), 1–21 (1993) E. Pikkarainen, Competence as a key concept of educational theory: a semiotic point of view. J. Philos. Educ. 48(4), 621–636 (2014) R.K. Raj, M. Sabin, J. Impagliazzo, D. Bowers, M. Daniels, F. Hermans, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, S.W. Nabi, M. Oudshoorn, Toward practical computing competencies, in Proceedings of the 26th ACM Conference on Innovation and Technology in Computer Science Education V. 2, ITiCSE ’21 (Association for Computing Machinery, New York, 2021b), pp. 603–604 R. Raj, M. Sabin, J. Impagliazzo, D. Bowers, M. Daniels, F. Hermans, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, S.W. Nabi, M. Oudshoorn, Professional competencies in computing education: pedagogies and assessment, in Proceedings of the 2021 Working Group Reports on Innovation and Technology in Computer Science Education, ITiCSE-WGR ’21 (Association for Computing Machinery, New York, 2021a), pp. 133–161 G. Reinmann, S. Hartung, A. Florian, Akademische Medienkompetenz im Schnittfeld von Lehren, Lernen, Forschen und Verwalten. Grundbildung Medien in pädagogischen Studiengängen (2013), pp. 1–11 H. Roth, Pädagogische Anthropologie/Bd. 2 Entwicklung und Erziehung: Grundlagen einer Entwicklungspädagogik, in Pädagogische Anthropologie, 2 (1971) D.S. Rychen, L.H. Salganik, Definition and selection of key competencies: theoretical and conceptual foundations. Summary of the final report „Key competencies for a successful life and a well-functioning society. Technical report, OECD (2003) M. Sabin, H. Alrumaih, J. Impagliazzo, B. Lunt, M. Zhang, B. Byers, W. Newhouse, B. Paterson, C. Tang, G. van der Veer, B. 
Viola, Information Technology Curricula 2017: Curriculum Guidelines for Baccalaureate Degree Programs in Information Technology (ACM, New York, 2017) M. Sabin, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, R.K. Raj, J. Impagliazzo, Fostering dispositions and engaging computing educators, in Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 2, SIGCSE 2023 (Association for Computing Machinery, New York, 2023) A.e.a. Schäfer, The empirically refined competence structure model for embedded microand nanosystems, in Proceedings of the 17th ACM Annual Conference on Innovation and Technology in Computer Science Education, ITiCSE ’12 (ACM, New York, 2012), pp. 57–62 H. Schecker, I. Parchmann, Modellierung naturwissenschaftlicher Kompetenz. Z. Didakt. Naturwiss. 12, 45–66 (2006) D.L. Schussler, Defining dispositions: wading through murky waters. Teacher Educ. 41(4), 251– 268 (2006) D. Seehorn, S. Carey, B. Fuschetto, I. Lee, D. Moix, D. O’Grady-Cunniff, B.B. Owens, C. Stephenson, A. Verno, CSTA K–12 Computer Science Standards: Revised 2011 (ACM, New York, 2011) E. Thompson, A. Luxton-Reilly, J.L. Whalley, M. Hu, P. Robbins, Bloom’s taxonomy for CS assessment, in Proceedings of the Tenth Conference on Australasian Computing Education Volume 78, ACE ’08 (AUS. Australian Computer Society, Inc., Darlinghurst, 2008), pp. 155– 161 H. Topi, H. Karsten, S.A. Brown, J.a.A. Carvalho, B. Donnellan, J. Shen, B.C.Y. Tan, M.F. Thouin, MSIS 2016: Global Competency Model for Graduate Degree Programs in Information Systems. Technical report (ACM, New York, 2017) F. Waldow, What PISA did and did not do: Germany after the ‘PISA-shock’. Eur. Educ. Res. J. 8(3), 476–483 (2009) F.E. Weinert, Concept of competence: a conceptual clarification, in Defining and Selecting Key Competencies (Hogrefe & Huber, Seattle, 2001a), pp. 45–65

36

2 Approaching the Concept of Competency

F.E. Weinert, Leistungsmessungen in Schulen. Druck nach Typoskript (Beltz (Beltz Pädagogik), Weinheim, Basel, 2001b) F.E. Weinert, Leistungsmessungen in Schulen, 3. auflage edn. (Beltz, Weinheim, Basel, 2014) J.L. Whalley, R. Lister, E. Thompson, T. Clear, P. Robbins, P.K.A. Kumar, C. Prasad, An Australasian study of reading and comprehension skills in novice programmers, using the Bloom and SOLO taxonomies, in Proceedings of the 8th Australasian Conference on Computing Education - Volume 52, ACE ’06 (AUS. Australian Computer Society, Inc., Darlinghurst, 2006), pp. 243–252 L. Wheelahan, G. Moodie, Rethinking skills in vocational education and training: from competencies to capabilities. NSW Department of Education and Communities, 13 (2011) G.P. Wiggins, J. McTighe, Understanding by Design (Association for Supervision and Curriculum Development, Alexandria, 2005) J. Winterton, Competence across Europe: highest common factor or lowest common denominator? J. Eur. Ind. Train. 33, 681–700 (2009) Working Group GeRRI, Gemeinsamer Referenzrahmen Informatik (GeRRI) (2020). Online Publication

Chapter 3

Research Design

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
N. Kiesler, Modeling Programming Competency, https://doi.org/10.1007/978-3-031-47148-3_3

3.1 Summary of Research Desiderata

The motivation for this research was the author's personal experience at different higher education institutions, where programming was taught as part of the core curriculum of computer science and related study programs. These experiences include the challenges from a learner's perspective in web programming courses as well as the educator's perspective in the same context. At the same time, this research was encouraged by recent studies on student dropout, which suggest that computer science degrees suffer from particularly high dropout rates (Heublein et al. 2017, 2018, 2020). Computer science departments in higher education thus appear to face many challenges related to introductory programming courses.

Cognitive challenges of first-year students are documented in numerous studies (Spohrer and Soloway 1986; Winslow 1996; Gomes and Mendes 2007; Xinogalos 2014). Such challenges may be due to extensive new content and demanding expectations that students are unable to meet. While programming is not necessarily difficult, the expectations placed on novice learners may be too high, and therefore unrealistic (Luxton-Reilly 2016; McCracken et al. 2001). In addition, individual factors related to the whole person, as well as structural conditions, can pose challenges to learners. Despite recent developments and the rise of new programming languages and tools, students' challenges remain an issue. Failure rates in introductory programming courses are known to be high. Moreover, it takes about ten years of experience to become a programming expert (Gomes and Mendes 2007; Winslow 1996). The question therefore arises of how well students can and should be able to program after a few semesters, i.e., the first and second year of their studies.

Programming is part of every computer science study program and, therefore, a common core tier among different educational institutions. Introductory programming courses also provide an entry point into the entire study program, as they are often a precondition for more advanced courses, and they convey the culture and implicit expectations within the profession (Sultana 2016; Shulman 2005). Given this perspective, introductory programming education represents the starting point for this research and the investigation of programming competency.

According to Klieme and Hartig (2007), competencies must be well-defined without ambiguity, and they should reflect the complexity of the conveyed contents. It is thus crucial to systematically structure subject-specific competencies and map them to the knowledge domain and their complexity (Klieme 2004). However, an empirically grounded competency model for basic programming in higher education institutions does not yet exist, as stated by professional societies like the German Informatics Society (GI 2016). Despite the publication of some competency-based curricula recommendations for computing education (Sabin et al. 2017; Clear et al. 2020), programming competency is not yet operationalized. As a consequence, it is challenging for educators, for example, to design learning activities with gradually increasing complexity, to scaffold them, or to design well-structured curricula and assessments for novice learners.

3.2 Research Goals

The overall goal of the present work is to (1) contribute to an increased understanding of programming competency and the competencies required for successful performance in introductory programming education at the university level. For this reason, a competency model based on the Anderson Krathwohl Taxonomy is developed. It can help illustrate the cognitive complexity and knowledge dimensions of the programming competencies expected from novice learners. An improved understanding of programming competency, its components, and its complexity will contribute to fostering learner competency development in introductory programming education, which is one of the core tiers of computer science study programs. Moreover, the goal is to (2) explore opportunities for promoting competency-based teaching, learning, and assessment, and to identify further challenges related to attaining programming competency. These opportunities and challenges will lead to recommendations for educators, curriculum designers, and educational institutions in general.

3.3 Research Questions

The main research question of this work addresses the need to identify programming competencies by investigating and classifying the competencies expected of first- and second-year computer science students. The main research question is specified by five subordinate questions. Thus, the research questions (RQs) of this study are as follows:


Research Questions

1. Which programming competencies are currently expected in the introductory phase of German computer science study programs?

   a. Which cognitive process and knowledge dimensions according to the Anderson Krathwohl Taxonomy are expected as part of the teaching and learning objectives of introductory programming education at German universities?
   b. What other teaching and learning objectives are explicitly expected as part of introductory programming education at German universities?
   c. To what extent can the Anderson Krathwohl Taxonomy be applied for the classification of programming competencies in the context of introductory programming education?
   d. Which factors prevent the acquisition of competencies in introductory programming education at German universities?
   e. Which factors help students succeed in introductory programming education at German universities?

3.4 Study Design

Learning a programming language, its syntax and rules, and understanding the structure and logic of algorithms and data are integral parts of every computer science curriculum, not just in Germany but around the globe. Usually, these elements are a key part of basic courses within the first three to four semesters of a CS bachelor's degree. However, students often do not learn how to write programs as a solution to a given problem (McCracken et al. 2001). Programming competency is not achievable within a few semesters or in the context of a project. It takes much more time and practice, meaning the course of a bachelor's or even master's degree does not suffice to become an expert programmer (Winslow 1996). Despite the long-lasting nature of the learning process and the sum of requirements for novice learners of programming, it is crucial to investigate the reality of higher education and what is expected of CS students. This is especially true as curricula are constantly changing (Becker and Quille 2019), and respective studies are rare. Thus, a study of the state of the art in programming education and curricular expectations is required to gain these insights. Such a study should incorporate the concept of competency (Clear et al. 2020; Weinert 2001; Raj et al. 2021). To approach a definition of programming competency, one has to analyze the competencies expected in introductory programming courses, where students are supposed to acquire all the basic knowledge, skill, and disposition components of


programming competency. The competencies are then classified into the cognitive process and knowledge dimensions to reflect their complexity. Moreover, the study aims to identify supportive measures and challenges related to learning how to program. Computing education experts will help gain such insights into actual practices in programming courses.

RQ 1.a, 1.b, and 1.c A first step towards answering RQ 1.a and 1.b is a thorough literature review and the identification of a subject area that can serve as a basis for the selection of exemplary German universities, study programs, modules, and courses. As a next step, the learning outcomes and competency-based objectives of the selected modules are extracted from the material. The subsequent qualitative analysis of the cognitive objectives is guided by the cognitive process and knowledge dimensions according to Anderson et al. (2001), which will result in an answer to RQ 1.a. Additional competencies are identified to answer RQ 1.b. During the qualitative analysis according to Mayring (2000), with the AKT dimensions as deductive categories, the components of programming competency will be developed as inductive categories based on the material. This is how the so-called subject or knowledge areas can be replaced and filled with competency-based objectives. At the same time, the extent to which the AKT and its dimensions can be applied to the context of introductory programming education is evaluated to answer RQ 1.c. As a result, an adaptation of the AKT towards the representation of programming concepts and tasks may be developed.

RQ 1.d and 1.e In addition to the gathering of curricula documents and their qualitative analysis, guided interviews with experts in higher education programming courses are conducted and qualitatively analyzed (Mayring 2000). These interviews are supposed to help identify factors promoting or preventing the acquisition of competencies.
Professors with years of experience in teaching programming courses are assumed to know typical challenges and success factors for student learning. The experts have the potential to provide insights into the specialized knowledge and experiences at a specific institution. Despite the notion of subjectivity, these insights are likely transferable to other higher education institutions. By using open-ended, guided interview questions, it is possible to elicit more details on the programming competencies expected of novice learners from the educator's perspective. Applying this method may help describe the competency to program in greater detail. At the same time, it becomes clear which aspects or components of programming competency are particularly challenging for learners, and where they may need more or less support.

Integration of Data, Methods, and Results As this study uses different data sources and methods for data gathering, the results and answers to the subordinate research questions need to be merged to answer the overall research question: Which programming competencies are currently expected in the introductory phase of German computer science study programs?


The inductively built competency categories formed during the qualitative content analysis of programming modules will be combined with the inductively built categories of competencies stressed in the expert interviews to answer this question. The programming competencies identified in both qualitative data analyses will be merged and aligned within the cognitive process and knowledge dimensions of the AKT to help build a competency model of basic programming.

Hypotheses The review of prior and related work on programming competencies and corresponding models (see Chap. 2), along with a search for module handbooks of German CS study programs, led to the hypothesis that programming competency consists of different components. In formal educational settings such as higher education, these components of programming competency can be acquired and assessed in dedicated courses, or in an integrated manner. What educators expect of first-year students is the elaboration of cognitively complex concepts, procedures, and tasks, such as developing and implementing programs. According to Anderson et al. (2001), this is the composition of individual elements into a new, functioning whole by planning, designing, or implementing them. The AKT's cognitive process and knowledge dimensions thus help make the cognitive complexity of programming tasks visible. The following hypotheses serve as an addition to the research questions, as they help structure the data gathering and analysis of the present work:

• Programming competency consists of various competencies or components which likely influence each other.
• Programming competency primarily includes tasks in cognitively complex process dimensions, i.e., analyzing, evaluating, and creating.
• In particular, the cognitive process dimension of "creating" requires the distinction of competencies and learning activities.
• The Anderson Krathwohl Taxonomy has the potential to help operationalize and structure programming competency and its components.
• Challenges related to learning programming entail cognitive tasks, and inter- or intrapersonal aspects of learning.
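The hypothesized use of the AKT can be illustrated with a minimal, hypothetical sketch: each competency-based objective is assigned to one cell of the grid spanned by the six cognitive process dimensions and the four knowledge dimensions (Anderson et al. 2001). The objectives and cell assignments below are invented for illustration; they are not codings from the present study.

```python
# Minimal sketch of classifying learning objectives into the two AKT
# dimensions (Anderson et al. 2001). The example objectives and their
# cell assignments are hypothetical, not results of the study.

COGNITIVE_PROCESSES = ["remember", "understand", "apply",
                       "analyze", "evaluate", "create"]
KNOWLEDGE_DIMENSIONS = ["factual", "conceptual", "procedural", "metacognitive"]

def classify(objective: str, process: str, knowledge: str) -> tuple:
    """Validate a manual coding decision against the AKT grid."""
    if process not in COGNITIVE_PROCESSES:
        raise ValueError(f"unknown cognitive process: {process}")
    if knowledge not in KNOWLEDGE_DIMENSIONS:
        raise ValueError(f"unknown knowledge dimension: {knowledge}")
    return (objective, process, knowledge)

# Hypothetical codings of competency-based objectives:
codings = [
    classify("Students can name the basic data types of Java",
             "remember", "factual"),
    classify("Students can implement a sorting algorithm",
             "create", "procedural"),
    classify("Students can assess the readability of a given program",
             "evaluate", "conceptual"),
]

# The position within the ordered process dimension makes the cognitive
# complexity of each objective visible:
for objective, process, knowledge in codings:
    level = COGNITIVE_PROCESSES.index(process) + 1
    print(f"[{level}/6] {process}/{knowledge}: {objective}")
```

The ordering of the process dimension is what allows statements such as "creating is more cognitively complex than remembering" to be expressed directly in the classification.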

References

L.W. Anderson, D.R. Krathwohl, P.W. Airasian, K.A. Cruikshank, R.E. Mayer, P.R. Pintrich, J. Raths, M.C. Wittrock, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives (Addison Wesley Longman, New York, 2001)
B.A. Becker, K. Quille, 50 years of CS1 at SIGCSE: a review of the evolution of introductory programming education research, in Proceedings of the 50th ACM Technical Symposium on Computer Science Education, SIGCSE '19 (Association for Computing Machinery, New York, 2019), pp. 338–344
A. Clear, A. Parrish, P. Ciancarini, S. Frezza, J. Gal-Ezer, J. Impagliazzo, A. Pears, S. Takada, H. Topi, G. van der Veer, A. Vichare, L. Waguespack, P. Wang, M. Zhang, Computing Curricula 2020 (CC2020): Paradigms for Future Computing Curricula. Technical report (Association for Computing Machinery/IEEE Computer Society, New York, 2020). http://www.cc2020.net/
GI, Empfehlungen für Bachelor- und Masterprogramme im Studienfach Informatik an Hochschulen (2016). Online Publication
A. Gomes, A.J. Mendes, Learning to program: difficulties and solutions, in International Conference on Engineering Education (ICEE) (2007)
U. Heublein, J. Ebert, C. Hutzsch, S. Isleib, R. König, J. Richter, A. Woisch, Zwischen Studienerwartungen und Studienwirklichkeit, in Forum Hochschule, vol. 1 (2017), pp. 134–136
U. Heublein, R. Schmelzer, D. Sommer, Die Entwicklung der Studienabbruchquote an den deutschen Hochschulen. Technical report, Deutsches Zentrum für Hochschul- und Wissenschaftsforschung (DZHW), Hannover (2018)
U. Heublein, J. Richter, R. Schmelzer, Die Entwicklung der Studienabbruchquoten in Deutschland. DZHW Brief (3, 2020)
E. Klieme, Was sind Kompetenzen und wie lassen sie sich messen? Pädagogik 6, 10–13 (2004)
E. Klieme, J. Hartig, Kompetenzkonzepte in den Sozialwissenschaften und im erziehungswissenschaftlichen Diskurs, in Kompetenzdiagnostik, Zeitschrift für Erziehungswissenschaft, ed. by M. Prenzel, I. Gogolin, H.-H. Krüger, vol. Sonderheft 8 (Springer, Berlin, 2007), pp. 11–29
A. Luxton-Reilly, Learning to program is easy, in Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE '16 (Association for Computing Machinery, New York, 2016), pp. 284–289
P. Mayring, Qualitative content analysis. Forum Qual. Soc. Res. 1(2) (2000)
M. McCracken, V. Almstrum, D. Diaz, M. Guzdial, D. Hagan, Y.B.-D. Kolikant, C. Laxer, L. Thomas, I. Utting, T. Wilusz, A multi-national, multi-institutional study of assessment of programming skills of first-year CS students, in Working Group Reports from ITiCSE on Innovation and Technology in Computer Science Education, ITiCSE-WGR '01 (ACM, New York, 2001), pp. 125–180
R. Raj, M. Sabin, J. Impagliazzo, D. Bowers, M. Daniels, F. Hermans, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, S.W. Nabi, M. Oudshoorn, Professional competencies in computing education: pedagogies and assessment, in Proceedings of the 2021 Working Group Reports on Innovation and Technology in Computer Science Education, ITiCSE-WGR '21 (Association for Computing Machinery, New York, 2021), pp. 133–161
M. Sabin, H. Alrumaih, J. Impagliazzo, B. Lunt, M. Zhang, B. Byers, W. Newhouse, B. Paterson, C. Tang, G. van der Veer, B. Viola, Information Technology Curricula 2017: Curriculum Guidelines for Baccalaureate Degree Programs in Information Technology (ACM, New York, 2017)
L.S. Shulman, Signature pedagogies in the professions. Daedalus 134(3), 52–59 (2005)
J.C. Spohrer, E. Soloway, Novice mistakes: are the folk wisdoms correct? Commun. ACM 29(7), 624–632 (1986)
S. Sultana, Defining the Competencies, Programming Languages, and Assessments for an Introductory Computer Science Course. PhD thesis, Old Dominion University (2016)
F.E. Weinert, Concept of competence: a conceptual clarification, in Defining and Selecting Key Competencies (Hogrefe & Huber, Seattle, 2001), pp. 45–65
L.E. Winslow, Programming pedagogy: a psychological overview. ACM SIGCSE Bull. 28(3), 17–22 (1996)
S. Xinogalos, Designing and deploying programming courses: strategies, tools, difficulties and pedagogy. Educ. Inf. Technol. 21(3), 559–588 (2014)

Part II

Data Gathering and Analysis of University Curricula

The second part of this book addresses one of the two main methods used to gather and analyze data and derive programming competencies. In this context, university curricula of computer science study programs serve as the data basis. Hence, the sampling process is described in great detail in Chap. 4. Selecting degree programs, contents, universities, study programs, modules, and courses resulted in a sample of 129 introductory programming courses from 35 German universities. The analysis of the curricula data is the subject of Chap. 5. It summarizes the methodology used for the qualitative content analysis of course objectives. One important step described here is the pre-processing of the data, as learning objectives varied greatly with regard to sentence structure and expressions. Hence, the linguistic smoothing is presented along with the basic coding guidelines and the support through qualitative data analysis software. Moreover, the rule-guided formation of deductive and inductive categories, and their application to the material, is described. The quality criteria for the applied method conclude this chapter.

Chapter 4

Data Gathering of University Curricula

4.1 Goals of Gathering and Analyzing University Curricula

The goal of gathering and analyzing university curricula is to answer RQ 1.a, 1.b, and 1.c, which relate to the competencies expected in introductory programming education at German universities, and the application of the AKT for competency modeling. The analysis of curricula data can help outline the teaching and learning goals within introductory programming. These objectives are then clustered and classified in terms of their domain (e.g., cognitive), knowledge dimension, and cognitive process dimension according to Anderson et al. (2001). This methodology answers RQ 1.a and 1.b. As part of the classification into AKT dimensions, it will become clear to what extent the AKT can be applied to the context of programming competency and introductory programming education, thereby answering RQ 1.c.

Gathering and analyzing curricula data will thus contribute to developing a competency profile or, respectively, a competency model including levels of cognitive complexity. The deductively and inductively built categories help classify teaching and learning objectives into the cognitive process and knowledge dimensions. Findings of the expert interviews will later supplement these categories. The model resulting from the curricula data analysis will thus be extended or modified by incorporating the expert perspective. At the same time, this triangulation at the data level will help validate the model of programming competency.

To conclude, the resulting model is supposed to help identify the components of programming competency as addressed by the overall research question. It will help make the expected competencies explicit, transparent, and observable. The classification of programming competencies according to their complexity further has implications for the design of (formative and summative) assessments, curricula, and study programs.
Identifying and denoting observable competency-based learning objectives will also help clarify and align expectations among educators and learners.


4.2 Relevance of Gathering and Analyzing University Curricula

Gathering university curricula will lead to an authentic, normative dataset containing real-world expectations toward novice learners of programming (Kiesler 2020; Kiesler and Pfülb 2023). The data will be analyzed using the qualitative content analysis method developed by Philipp Mayring (2014). Understanding and interpreting a text are central elements of qualitative content analysis (Kuckartz 2014). Moreover, the method is adequate for analyzing large amounts of data in response to a research question. In the present work, the module descriptions serve as fixed communication. The data is thus evaluated in a methodologically controlled, systematic, and rule-governed manner by applying a system of categories (Mayring 1995, 2015). In addition to the qualitative content analysis, quantitative analysis steps are applied in the form of frequency evaluations of the identified categories (Gläser-Zikuda 2013; Krüger and Riemeier 2014; Mayring 1995, 2000, 2015).

The strength of the qualitative approach to research in general, and qualitative content analysis in particular, is its qualitative-interpretative approach. It allows for the identification of (new) aspects in the material (Mayring 2000). This method helps gain insights into the structure and key competency goals at the selected educational institutions. The classification of programming competencies into the respective levels of the cognitive process and knowledge dimensions of the AKT (Anderson et al. 2001) will make their complexity transparent. The inductive-deductive construction of categories contributes to an increased understanding of learning programming and the entailed competency components. The evaluation of the categories' frequencies will further indicate the relevance and weight of each competency identified in the material. Another contribution of this research method is the identification of the extent to which the AKT can be applied to programming competency and the context of computing education.
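The frequency evaluations mentioned above amount to tallying how often each category was assigned across all coded segments. A minimal sketch, with invented category labels and module identifiers rather than the study's actual category system:

```python
from collections import Counter

# Hypothetical coded segments: (module_id, assigned_category).
# The labels are invented for illustration; they are not the
# categories developed in the study.
coded_segments = [
    ("uni01-prog1", "apply/procedural"),
    ("uni01-prog1", "create/procedural"),
    ("uni02-prog1", "apply/procedural"),
    ("uni02-prog2", "understand/conceptual"),
    ("uni03-prog1", "create/procedural"),
]

# Absolute frequencies of each category across all segments:
frequencies = Counter(category for _, category in coded_segments)

# Relative frequencies indicate the weight of each identified competency:
total = sum(frequencies.values())
for category, count in frequencies.most_common():
    print(f"{category}: {count} ({count / total:.0%})")
```

Such a tally is the quantitative supplement to the qualitative coding: it does not replace interpretation, but indicates which categories dominate the material.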

4.3 Expectations and Limitations

The gathering and analysis of recent introductory programming module descriptions will help survey the state of the art of competency-based educational goals at German higher education institutions. The data collection and analysis method is adequate for such large amounts of data and will contribute to gaining an overview of introductory programming education. Via this method, conclusions on the addressed programming competencies will become possible. Furthermore, competency goals can be classified into cognitive process and knowledge dimensions and quantified via frequency evaluations. The material will be structured based on the deductively built categories (i.e., along the AKT cognitive process and knowledge dimensions) and competencies


from domains other than the cognitive one. With this method, the aim is to capture the full spectrum of expected competencies without losing meaningful components. Constructing inductive categories and assigning them to competency components are consecutive steps. The available deductive categories from the AKT and the literature will complement the concrete, inductive categories from the material. A degree of subjectivity is unavoidable within the inductive category development. This is due to a coder's natural language habits, prior knowledge, inferences, and elaboration of concepts. Yet, the rule-governed assignment of categories based on a so-called coding guide with definitions and anchor examples supports an intersubjective understanding of the coding process. The subsequent guided expert interviews will help to explore the field further, as they are more open by design and can help generate new material.

One limitation of the document gathering and analysis process is its size and composition: some universities and programming modules are not analyzed. Moreover, discrepancies between the written, formal module descriptions and actual teaching practices may exist. This limits the validity and meaningfulness of the findings to some extent. However, linking the curricula document analysis to the expert interviews will reduce this limitation. One last aspect is that the underlying research questions determine the data sample, the qualitative content analysis process, and the interpretation of results. Therefore, findings from the material always allow conclusions about the intentions and views of those analyzing it (Mayring 2000, 2015; Mayring and Fenzl 2014; Stamann et al. 2016).
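The coding guide mentioned above can be thought of as a structured collection of category entries, each with a definition, anchor examples, and coding rules. A minimal sketch of one such entry; the category, definition, examples, and rules shown are invented for illustration and are not taken from the study's actual coding guide:

```python
from dataclasses import dataclass, field

@dataclass
class CodingGuideEntry:
    """One category of a coding guide: a definition plus anchor examples
    and coding rules that support an intersubjective coding process."""
    category: str
    definition: str
    anchor_examples: list = field(default_factory=list)
    coding_rules: list = field(default_factory=list)

# Hypothetical entry for illustration:
entry = CodingGuideEntry(
    category="apply/procedural",
    definition="The objective requires executing or implementing a known "
               "procedure in a familiar task context.",
    anchor_examples=[
        "Students can use loops to traverse an array.",
        "Students can apply a given design pattern.",
    ],
    coding_rules=[
        "Code here only if the procedure itself is given or well-known.",
        "If the objective requires designing a new procedure, code 'create'.",
    ],
)

print(entry.category, "with", len(entry.anchor_examples), "anchor examples")
```

Explicit rules of this kind are what make the assignment of categories rule-governed rather than purely intuitive, reducing (though not eliminating) coder subjectivity.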

4.4 Sampling and Data Gathering

Next, the sampling and gathering process of module descriptions from computer science curricula of German higher education institutions is presented. The first step in the sampling process is the selection of study programs and the definition of a content area to collect a common core of comparable introductory programming modules. Then, further requirements for inclusion and exclusion are justified, leading to the selection of specific institutions and modules.

4.4.1 Selection of Bachelor Degree Programs

As a part of the Bologna Process, the focus was on a structural agreement on bachelor's and master's degrees, their scope, and objectives. In this context, the so-called "Qualifications Framework for German Higher Education Qualifications" (KMK 2005) provides a framework for the uniform description of these


degrees and qualification profiles. The goal of this standardization is increased transparency, comprehensibility, and comparability of study programs. International initiatives such as the "European Consortium for Accreditation" or the "Joint Quality Initiative" share the same goals. The compatibility with European qualification frameworks, and the subject-unspecific, university-type-independent description of skills and abilities, contribute to the assumption that the qualification framework is a suitable starting point for the definition of programming competency. The qualification framework at the bachelor's level specifies formal aspects, such as the duration of study programs (three to four years) and the number of ECTS (European Credit Transfer System) points to be acquired (180 to 240 ECTS). The following categorization is used to describe skills and abilities (KMK 2005):

• Knowledge and Understanding
  – Expansion of Knowledge
  – Deepening of Knowledge
• Knowledge Development
  – Instrumental Competency
  – Systemic Competency
• Formal Aspects
  – Access Requirements
  – Duration
  – Follow-Up Possibilities
  – Transitions from Vocational Education

Within the area of knowledge and understanding, the scope and depth of knowledge are described, as well as a critical understanding of the current state of research. Knowledge development, the second category, distinguishes instrumental and systemic competency. Instrumental competencies refer to the application of skills relevant to the profession and the elaboration of problem-solving strategies. Systemic competencies denote abilities that go beyond a study program, e.g., the ability to obtain additional information and to reflect on it. In this context, planning, monitoring, and regulating one’s learning processes should further be noted as relevant. Formal aspects, however, do not play a role in defining programming competencies. They nonetheless serve as a basis for selecting study programs for the sample, as the goal is to find comparable study programs with a similar modular structure (KMK 2005). Hence, the sample should consist of bachelor’s degree programs following the logic of the qualification framework and its formal aspects, such as study duration and number of ECTS points.


4.4.2 Selection of Content Area

A common core content area has to be defined to gather a set of comparable module descriptions and competency-based objectives of bachelor’s degree programs. This content area should be common among computing study programs, regardless of their specializations or specific profiles. A content area related to basic programming knowledge, skills, and dispositions is considered a starting point. Unlike the 2013 curricular recommendations of the ACM (ACM, Joint Task Force on Computing Curricula 2013), the present work specifies competencies based on specific courses of existing curricula related to the development of programming competency. Accordingly, the descriptive approach analyzes current practices at German universities and universities of applied sciences offering bachelor’s degrees in computing. The ACM’s curricular recommendations provide orientation within the broad range of content related to programming. However, the ACM’s defined content areas need to be resolved and replaced with actual competency-based learning objectives addressed in introductory programming education. The research questions further help narrow the content area to the introductory phase of programming education, which usually comprises the first three to four semesters of a bachelor’s degree. In this context, it is important to include not only third- but also fourth-semester courses, as the timing of enrollment in such a course may depend on the semester in which a student entered the study program. If a student enters a study program in the summer term, for example, it may not be possible to attend every basic programming course before the fourth semester. Including the fourth semester is thus due to organizational aspects of study programs, as not every module takes place every term. The next step is to identify relevant modules and courses. Only a few selected modules and courses focus on basic programming competency.
A working definition of programming competency is the combination of knowledge, skills, and dispositions related to a specific programming language, basic knowledge of algorithms, data structures, and their implementation. Following the ACM’s recommendations (ACM, Joint Task Force on Computing Curricula 2013), the core content area related to common objectives in basic programming education (1st–3rd or 1st–4th semester) of every computing bachelor’s degree is composed of:

• Basic knowledge of programming
  – Compiler, Interpreter
• Basic knowledge of a programming language and its paradigm and constructs (e.g., keywords, syntax, expressions, functions, variables, lists, control structures, recursion)
• Tools and utilities, IDEs, development environments
• Knowledge of basic data structures and their representation, properties, and application
  – Stacks, queues, heaps, etc.
• Knowledge of basic, simple algorithms and design patterns, as well as their properties as solutions for well-known problems (e.g., sorting)
  – Greedy algorithms, divide-and-conquer, backtracking
  – Best, average, and worst cases to measure an algorithm’s performance
  – O-notation and its formal definition
  – Memory, runtime
• Solving problems, developing and implementing own programs
  – Object orientation (e.g., classes, inheritance, polymorphism, etc.)
  – Functional programming
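To make the scope of this content area concrete, the following sketch shows the kind of program introductory students are expected to understand and write: a divide-and-conquer sorting algorithm with O(n log n) runtime. The example is purely illustrative and not taken from any of the analyzed modules; the language choice (Python) is likewise an assumption.

```python
def merge_sort(values):
    """Divide-and-conquer sorting: split, sort both halves recursively, merge.

    Runtime is in O(n log n). Illustrative example only -- not taken
    from the analyzed module descriptions.
    """
    if len(values) <= 1:              # base case of the recursion
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])   # divide ...
    right = merge_sort(values[mid:])
    merged = []                       # ... and conquer (merge step)
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])           # append the remaining elements
    merged.extend(right[j:])
    return merged

merge_sort([5, 2, 4, 1, 3])  # returns [1, 2, 3, 4, 5]
```

Such an exercise touches several of the enlisted items at once: recursion, a basic algorithm and its design pattern (divide-and-conquer), and its runtime behavior.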

Usually, this list of basic knowledge is addressed in courses entitled Programming 1 and 2, CS1 and CS2, or Algorithms and Data Structures. Such courses are part of every CS bachelor’s degree in German higher education. The enlisted content area also coincides with the competency areas for CS bachelor’s degrees defined by the GI. Their selection of relevant content areas is thus in line with the ACM recommendations and hints at course titles of interest. Examples of the GI’s competency areas are as follows (GI, German Informatics Society 2016):

• Formal, algorithmic, and mathematical competency, to be acquired in the following content areas:
  – ...
  – Modeling
  – Algorithms and Data Structures
• Analysis, design, implementation, and project management competency, to be acquired in the following content areas:
  – Programming Languages and Methodology
  – Software Engineering
  – Human-Computer-Interaction
  – Project and Team Competency

It is common to integrate software development principles within each of these content areas. However, the ACM describes software development and the required fundamental skills and abilities as a category of its own (ACM, Joint Task Force on Computing Curricula 2013). As a result, the ACM’s content area relating to software development and software engineering covers a broad spectrum of content from the
entire software development process, which goes beyond introductory programming education. Hence, the present research only uses the basic content areas highlighted above for selecting sample modules. Software engineering modules build upon basic programming knowledge. For this reason, corresponding modules are usually located in the main study phase, for example, in the fourth semester of a bachelor’s program or later. Thus, basic programming education is needed before accomplishing software projects. Software engineering and respective projects require even more competencies than just being able to program, including, for example, competencies in mathematics, theory, tools, technologies, software quality, project management, and many more. At the same time, programming competency is a prerequisite for successful software engineering. Since this study focuses on programming competency only, the broader field of software engineering is not considered, and respective content areas and modules are not integrated into the sample. Basic courses related to software development or software engineering are only taken into account if they reflect the defined content area and if they are part of the program’s introductory phase.

4.4.3 Selection of Institutions and Study Programs

After the definition of the content area, suitable modules or courses covering this content area have to be selected. To do this, a selection of study programs with corresponding curricula and coverage of these contents has to come first. For this purpose, the freely available online “University Compass” was used (HRK and Hippler 2018), a database representing German study programs. The University Compass was used to search for the study programs “Computer Science” and “Applied Computer Science”. The search for “Applied Computer Science” yielded 65 results, while the search for “Computer Science” yielded 216 study programs, with a few overlaps between the two. This high number of available German CS study programs required further narrowing before conducting the curricula analysis. Master’s programs, part-time studies, and dual studies were excluded as search criteria from the start to obtain more comparable study programs. The study was thus limited to full universities and universities of applied sciences. Furthermore, only public institutions are relevant for this research, as they have comparable funding and conditions for their teaching and instruction. Privately funded universities are therefore not included in the sample. Moreover, theological colleges as well as art and music colleges were excluded. Dual CS study programs had to be ignored due to their characteristically high amount of practice and professional experience. Study programs leading to a Bachelor of Education were not considered. Study programs leading to a Bachelor of Science (B.Sc.) were recognized for the sample. Programs awarding a Bachelor of Engineering degree in computer science were also considered, as long as they fulfilled the other criteria of the selection process.


After this first screening and selection, a great number of CS programs remained. The problem was that the remaining study programs were still not comparable in terms of their introductory programming education, curriculum, and other parameters. At this point, identifying relevant introductory programming modules and courses required an examination of the study programs and their curricula, and the sampling criteria had to be limited further to identify comparable CS study programs and modules addressing the defined content area. To achieve this goal, computing degrees such as “Business CS”, “Information Technology”, “Health Technology”, or “Media CS” were excluded, as they did not offer a well-grounded introductory programming education. This was evaluated by counting the ECTS credit points awarded for programming courses. In peripheral CS programs, the ECTS credits for introductory programming were less than 30, which indicates a decreased focus on programming. This aspect is not necessarily a problem, but it would limit the comparability of the selected CS study programs. Hence, peripheral CS programs were excluded from the sample. Moreover, relevant modules for the sample had to be scheduled in the first, second, third, or fourth semester according to the standard study plan of the selected CS bachelor’s degree programs, and they had to reflect the defined content area. The selected study programs should be full-time studies, and the time required to complete them should range from six to eight semesters. The initial review of the study programs revealed that modules such as Programming 1 or 2, and Algorithms and Data Structures, are available at almost all academic institutions within the first and second year. Hence, the selection of these modules is likely to yield comparable programming competencies.
Furthermore, it confirms that the defined content area is a common core of introductory programming education. In contrast, software engineering or development courses are not integrated into every CS study program, and if so, they are offered in higher semesters. This observation confirms the decision not to include such modules in the sample of introductory programming modules. For the qualitative content analysis of introductory programming modules, finding at least 30 institutions was the target. One full university and one university of applied sciences per German federal state were included in the sample to obtain an overview of the CS programs in the country. For this purpose, the remaining CS study programs from the University Compass were each assigned a number, a random number generator was used to make the random selection, and the institutions and degree programs associated with the drawn numbers were included in the sample. As a result, the sample contains computer science degree programs offered at full universities and applied CS bachelor’s degrees offered at universities of applied sciences.
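The random selection described above — numbering the remaining programs and drawing one full university and one university of applied sciences per federal state — amounts to stratified random sampling and can be sketched as follows. All program names and the example data are hypothetical; only the procedure mirrors the study.

```python
import random

# Hypothetical example data: (federal state, institution type, program).
programs = [
    ("Hesse", "university", "CS at University A"),
    ("Hesse", "university", "CS at University B"),
    ("Hesse", "applied sciences", "Applied CS at UAS C"),
    ("Bavaria", "university", "CS at University D"),
    ("Bavaria", "applied sciences", "Applied CS at UAS E"),
]

def draw_sample(programs, seed=None):
    """Draw one program per (state, institution type) stratum at random."""
    rng = random.Random(seed)
    strata = {}
    for state, inst_type, name in programs:
        strata.setdefault((state, inst_type), []).append(name)
    # Numbering the programs and drawing random numbers is equivalent
    # to choosing one random element per stratum.
    return {key: rng.choice(candidates) for key, candidates in strata.items()}

sample = draw_sample(programs, seed=42)
# 'sample' now holds one university program and one applied-sciences
# program per federal state in the example data.
```

The stratification by federal state and institution type is what guarantees the nationwide coverage described above, independent of how many programs each state offers.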


The experts’ institutions (i.e., their affiliation) supplement the sample. For this reason, more than two institutions are listed for the state of Hesse. The intended overlap at the data, person, and design level is supposed to integrate and validate the applied qualitative methodologies. Finally, 35 programs from 18 universities and 17 universities of applied sciences were in the sample. Table 4.1 lists all institutions per German federal state.

Table 4.1 Overview of the selected institutions as a basis for the sample (Kiesler 2022)

Baden-Württemberg
1 Karlsruhe University of Applied Sciences
2 University of Stuttgart
Bavaria
3 Ludwig-Maximilians-Universität München
4 Technical University of Applied Sciences Würzburg-Schweinfurt
Berlin
5 Freie Universität Berlin
6 Hochschule für Technik und Wirtschaft Berlin University of Applied Sciences
Brandenburg
7 Technische Hochschule Brandenburg University of Applied Sciences
8 University of Potsdam
Bremen
9 Hochschule Bremen City University of Applied Sciences
10 University of Bremen
Hamburg
11 Hamburg University of Applied Sciences
12 University of Hamburg
Hesse
13 Fulda University of Applied Sciences
14 University of Kassel
15 RheinMain University of Applied Sciences
16 Goethe University Frankfurt
17 Technical University of Darmstadt
Mecklenburg-West Pomerania
18 Wismar University of Applied Sciences
19 University of Rostock
Lower Saxony
20 Hochschule Hannover University of Applied Sciences
21 University of Oldenburg
North Rhine-Westphalia
22 RWTH Aachen University
23 University of Düsseldorf
Rhineland-Palatinate
24 Trier University of Applied Sciences
25 Johannes Gutenberg University Mainz
Saarland
26 Hochschule für Technik und Wirtschaft des Saarlandes University of Applied Sciences
27 Saarland University
Saxony
28 Hochschule Mittweida University of Applied Sciences
29 Technische Universität Dresden
Saxony-Anhalt
30 Harz University of Applied Sciences
31 Otto von Guericke University Magdeburg
Schleswig-Holstein
32 Technische Hochschule Lübeck
33 Universität zu Lübeck
Thuringia
34 University of Applied Sciences Erfurt
35 Technische Universität Ilmenau


4.4.4 Selection of Modules

For the selection of programming modules, the core curricula during the first and second years had to be analyzed. Both module handbooks and study plans proved helpful for the identification of respective modules. Published study plans were thus used to identify the courses for students in the first and second years. Elective modules were not considered, as only the core modules in which all students participate are of interest. Moreover, elective modules usually occur later in a study program. It should be noted that at some educational institutions, it is possible to enroll in a degree program in either the winter or the summer semester. As a result, introductory programming courses may still occur in the fourth semester. Modules from the first, second, third, and fourth semesters were included in the sample to account for this aspect. However, fourth-semester courses were only integrated from a few institutions, and only if they aligned with the defined content area. In general, only the first three semesters of a CS study program are considered when referring to basic programming education. Modules focusing on technology, databases, hardware, and digital technology basics (e.g., signal processing) are not included in the sample. Similarly, mathematics modules are ignored, although overlaps and parallels between the competency areas in mathematics and programming do exist. Modules such as project management (e.g., at the University of Magdeburg) were also not integrated into the sample, as the content area did not match, and programming competencies were not addressed in the learning objectives.
Nonetheless, it is interesting to note that some universities offer modules addressing specific competencies, such as personal, social, or scientific competencies, but also domain-specific, methodological competencies for problem-solving or communication (e.g., at LMU Munich, Hannover University of Applied Sciences, Trier University of Applied Sciences, or RWTH Aachen University). As a result of the sampling process, 129 introductory programming modules were selected from the 35 institutions listed in Table 4.1. Due to the great variety in the nomenclature of the modules and courses, the following list only represents a few examples of course titles (and their frequency) selected for the present study (Kiesler 2022):

• Algorithms and Data Structures (14)
• Programming 1 (7)
• Programming 2 (7)
• Introduction to Programming (6)
• Object-oriented Programming (4)
• Programming Paradigms (3)
• Programming Methods (2)
• Programming (2)
• ...


References

ACM, Joint Task Force on Computing Curricula, Computer Science Curricula 2013: Curriculum Guidelines for Undergraduate Degree Programs in Computer Science (Association for Computing Machinery, New York, 2013)

L.W. Anderson, D.R. Krathwohl, P.W. Airasian, K.A. Cruikshank, R.E. Mayer, P.R. Pintrich, J. Raths, M.C. Wittrock, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives (Addison Wesley Longman, New York, 2001)

GI, German Informatics Society, Empfehlungen für Bachelor- und Masterprogramme im Studienfach Informatik an Hochschulen (2016). Online Publication

M. Gläser-Zikuda, Qualitative Inhaltsanalyse in der Bildungsforschung – Beispiele aus diversen Studien, in Introspektive Verfahren und qualitative Inhaltsanalyse in der Fremdsprachenforschung, ed. by K. Aguado, L. Heine, K. Schramm (Lang, Frankfurt, 2013), pp. 136–159

HRK, H. Hippler, Hochschulkompass (2018). Online Publication

N. Kiesler, Kompetenzmodellierung für die grundlegende Programmierausbildung – Eine kritische Diskussion zu Vorzügen und Anwendbarkeit der Anderson Krathwohl Taxonomie im Vergleich zum Kompetenzmodell der GI, in DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., online, 14.–18. September 2020, ed. by R. Zender, D. Ifenthaler, T. Leonhardt, C. Schumacher, vol. P-308. LNI (Gesellschaft für Informatik e.V., 2020), pp. 187–192

N. Kiesler, Kompetenzförderung in der Programmierausbildung durch Modellierung von Kompetenzen und informativem Feedback. Dissertation, Johann Wolfgang Goethe-Universität, Frankfurt am Main, Fachbereich Informatik und Mathematik (2022)

N. Kiesler, B. Pfülb, Higher education programming competencies: a novel dataset, in Artificial Neural Networks and Machine Learning – ICANN 2023. Lecture Notes in Computer Science (Springer, Cham, 2023)

KMK, Kultusministerkonferenz, Qualifikationsrahmen für deutsche Hochschulabschlüsse. Im Zusammenwirken von Hochschulrektorenkonferenz, Kultusministerkonferenz und Bundesministerium für Bildung und Forschung erarbeitet und von der Kultusministerkonferenz am 21.04.2005 beschlossen, 21:2005 (2005)

D. Krüger, T. Riemeier, Die Qualitative Inhaltsanalyse – eine Methode zur Auswertung von Interviews, in Methoden in der naturwissenschaftsdidaktischen Forschung, ed. by D. Krüger, I. Parchmann, H. Schecker (Springer Spektrum, Berlin, 2014), pp. 133–145

U. Kuckartz, Qualitative Inhaltsanalyse. Methoden, Praxis, Computerunterstützung (2014)

P. Mayring, Qualitative Inhaltsanalyse, in Handbuch Qualitative Sozialforschung. Grundlagen, Konzepte, Methoden und Anwendungen, ed. by U. Flick, E. von Kardorff, H. Keupp, L. von Rosenstiel, S. Wolff, 2nd edn. (Beltz Psychologie Verlagsunion, Weinheim, 1995), pp. 209–212

P. Mayring, Qualitative content analysis. Forum Qualitative Sozialforschung/Forum Qual. Soc. Res. 1(2) (2000)

P. Mayring, Qualitative Content Analysis: Theoretical Foundation, Basic Procedures and Software Solution. Technical report, Leibniz Institute for Psychology, Klagenfurt, Austria (2014)

P. Mayring, Qualitative Inhaltsanalyse: Grundlagen und Techniken, 12th edn. (Beltz, Weinheim, 2015)

P. Mayring, T. Fenzl, Qualitative Inhaltsanalyse, in Handbuch Methoden der empirischen Sozialforschung, ed. by N. Baur, J. Blasius (Springer, Wiesbaden, 2014), pp. 543–556

C. Stamann, M. Janssen, M. Schreier, Qualitative Inhaltsanalyse – Versuch einer Begriffsbestimmung und Systematisierung. Forum Qualitative Sozialforschung/Forum Qual. Soc. Res. 17(3), 16 (2016)

Chapter 5

Data Analysis of University Curricula

5.1 Methodology of the Data Analysis

The data analysis consists of two crucial steps involving deductive and inductive category development and coding. Both steps are illustrated in Fig. 5.1. The final step is the classification of categories representing programming competencies into the AKT’s dimensions (knowledge and cognitive processes). Figure 5.1 displays how these steps are realized both for cognitive competencies and for competencies other than cognitive. The first step of the qualitative content analysis is the application of deductive categories to the material. Deductive categories comprise the AKT’s six cognitive process dimensions and the four knowledge dimensions (Anderson et al. 2001). As these dimensions apply to cognitive learning objectives exclusively, other competencies remain uncategorized in the data during this step. This process is depicted at the bottom of the box representing the first step in Fig. 5.1. As a second step, the inductive development of categories follows. For the cognitive competencies, subcategories are built based on the material to reflect the actual competency (e.g., remembering keywords of a programming language). Similarly, categories are constructed for the remaining competencies other than cognitive. As the competencies from the literature serve as a basis for category formation, this process is deductive-inductive. The last action, displayed on the right-hand side of Fig. 5.1, shows that the inductive categories representing cognitive competencies are reevaluated and classified into the AKT dimensions (cognitive process and knowledge). Furthermore, the deductive-inductive competencies other than cognitive are classified where possible based on prior work. This development of categories reflecting competencies and their classification will answer RQ 1.a and RQ 1.b.
As an additional result of the qualitative content analysis and the classification within AKT dimensions, RQ 1.c will be answered, as the process indicates to what extent the AKT is applicable within the context of introductory programming education.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
N. Kiesler, Modeling Programming Competency, https://doi.org/10.1007/978-3-031-47148-3_5

[Fig. 5.1 (not reproduced): Step 1 (deductive) — coding the cognitive process dimensions and the knowledge dimensions, leaving the remaining competencies other than cognitive uncategorized. Step 2 (inductive) — inductive development of subcategories of the cognitive process dimensions, plus deductive-inductive development of categories for competencies other than cognitive, answering RQ 1.a and RQ 1.b. Final step — classification of competencies into AKT dimensions, plus classification of competencies other than cognitive.]

Fig. 5.1 Steps of the category development as part of the qualitative content analyses of programming modules (Kiesler 2022)
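The two coding steps described above can be mirrored in a small data model: a coding unit receives a cognitive process and a knowledge dimension in step 1, an inductive subcategory in step 2, and is then placed into an AKT matrix cell. This is a hypothetical illustration only; the class and function names are not artifacts of the study, and the example sentence is adapted from the subcategory example given in the text.

```python
from dataclasses import dataclass
from typing import Optional

# The deductive categories of the AKT (Anderson et al. 2001).
COGNITIVE_PROCESSES = ["remember", "understand", "apply",
                       "analyze", "evaluate", "create"]
KNOWLEDGE_DIMENSIONS = ["factual", "conceptual", "procedural",
                        "metacognitive"]

@dataclass
class CodingUnit:
    """One pre-processed sentence containing a single learning objective."""
    text: str
    cognitive_process: Optional[str] = None    # step 1: deductive code
    knowledge_dimension: Optional[str] = None  # step 1: deductive code
    subcategory: Optional[str] = None          # step 2: inductive category

def classify(unit):
    """Place a fully coded cognitive unit into its AKT matrix cell.

    Units without both deductive codes (non-cognitive or
    non-operationalized objectives) have no cell and yield None.
    """
    if unit.cognitive_process is None or unit.knowledge_dimension is None:
        return None
    return (COGNITIVE_PROCESSES.index(unit.cognitive_process),
            KNOWLEDGE_DIMENSIONS.index(unit.knowledge_dimension))

unit = CodingUnit(
    "Students can name the keywords of a programming language.",
    cognitive_process="remember",
    knowledge_dimension="factual",
    subcategory="remembering keywords of a programming language",
)
classify(unit)  # returns (0, 0): the remember/factual cell of the AKT matrix
```

The double coding of step 1 is what makes the later placement into the two-dimensional AKT matrix possible: each cell is simply the pair of indices of the two deductive codes.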

5.2 Pre-processing of Data

The gathered data from university curricula comprise the learning objectives addressed in introductory programming modules. These are usually listed or elaborated within a dedicated section of the module description. Since the format of the targeted competencies varied greatly, the data had to be pre-processed into a common form that is easy to code. Therefore, the linguistic smoothing of the competency-based learning goals is introduced next. Moreover, the basic coding guidelines are presented. In addition, the role of specialized software in the analysis is described.

5.2.1 Linguistic Smoothing of Competency Goals

For several reasons, the qualitative content analysis required pre-processing of the gathered competency-based learning objectives expressed in the 129 module descriptions. Among these reasons are varying formats, incomplete syntax, and long sentences with enumerations of expected competencies. The content of the module descriptions and their meaning, however, are not altered by the transformation of the data. Instead, spelling errors are corrected. Empty lines and redundant paragraphs (e.g., lists of nouns representing contents, or introductory passages without competency expectations) are omitted. These actions not only increase the readability of the material, they further help clarify which elements to code and how. The goal of the pre-processing is thus to remove items that do not represent competencies expected from students, and to form sentences that contain one competency-based learning objective each. The latter eases the coding process, as one sentence can be treated as one coding unit. The material was transformed from long lists of enumerated learning objectives to single sentences with one competency goal each, based on several guidelines. The
linguistic smoothing is described in the following, and examples illustrate the process where appropriate.

1. Subheadings such as “Competencies”, “Learning Objectives”, or “Qualification Goals” are removed from the sample because they carry no meaning.

2. If learning objectives are enumerated as main clauses within a sentence, each main clause is considered a representation of one competency-based objective. Such enumerations of main clauses are therefore separated between the last word of the first main clause (usually a verb) and the following conjunction (e.g., and, or, as well as). The conjunction itself is removed. If necessary, constituents of the original sentence are added in square brackets so that each sentence remains legible and comprehensible. If a sentence contains an enumeration of more than two main clauses separated by commas, the commas are removed. The sentences are then grammatically completed and supplemented with punctuation marks.

3. If one sentence contains two or more competencies of the same AKT dimension, it is split into two sentences so as not to lose meaning. By following this procedure, every learning objective can be coded once, as is the regular process for this research.

4. In case of a simple enumeration of main clauses or operationalized verbs, the sentence’s subject is usually omitted from the second and each subsequent main clause of the original module description. Grammatically, this is fine, but for the data analysis, it results in incomplete and hardly understandable objectives. The omitted constituents (e.g., the subject) have to be inserted back into the second, third, and any subsequent clause to improve the readability of such enumerations. Square brackets indicate such insertions. As a result, every objective can be coded once in a fully legible sentence. The following example shows such a sentence with two learning objectives before and after the transformation:

   • Before: In addition, you will learn to evaluate and apply existing algorithms.
   • After: (1) In addition, you will learn to evaluate existing algorithms. (2) [In addition, you will learn to] apply existing algorithms.

5. A similar procedure is applied to sentence constructions in which two learning objectives are included and in which the operationalized verb is part of the main clause. In this case, the object has to be inserted again in square brackets in the second and each subsequent main clause. The omission of constituents is indicated by square brackets with three dots [. . .] in the original sentence.

   • Before: They know the definition and understand the practical importance of NP-completeness for the efficient solution of problems.
   • After: (1) They know the definition [. . .] [of NP-completeness]. (2) [They] understand the practical importance of NP-completeness for the efficient solution of problems.

6. Any duplication of personal pronouns or other deictic elements in the original data is removed. The constituent is then inserted into the transformed sentences in square brackets. Main clauses are separated according to the previously defined guidelines. Thereby, the original learning objective can be reconstructed.

   • Before: Students learn to analyze small problems and how to solve them with the help of programs.
   • After: (1) Students learn to analyze small problems. (2) [Students learn] [. . .] how to solve small problems with the help of programs.

7. The same procedure as in 6. is applied when operationalized verbs or learning objectives are enumerated. If the enumeration is preceded by an introductory sentence required for understanding the learning objective, the introductory sentence is added in square brackets before each of the enumerated learning objectives. Colons or commas are removed, and a punctuation mark is added to complete each transformed sentence.

   • Before: Upon successful completion of the module, participants should have acquired knowledge of the following topics: Basic properties and design methods for algorithms, Efficient algorithms and data structures for basic problems, Basic complexity classes for the runtime behavior and memory requirements of algorithms
   • After: (1) Upon successful completion of the module, participants should have acquired knowledge of the following topics: Basic properties and design methods for algorithms. (2) [Upon successful completion of the module, participants should have acquired knowledge of the following topics:] Efficient algorithms and data structures for basic problems. (3) [Upon successful completion of the module, participants should have acquired knowledge of the following topics:] Basic complexity classes for the runtime behavior and memory requirements of algorithms.

8. Introductory sentences or statements preceding the actual competencies expected in the module are removed from the data due to the lack of an operationalized learning objective.

   • Before: The learning objectives can be summarized as follows: Knowledge of the characteristics of elementary algorithms, understanding of the implications of theoretical and actual complexity, [. . .].
   • After: (1) Knowledge of the characteristics of elementary algorithms. (2) Understanding of the implications of theoretical and actual complexity.

9. Deictic expressions are used in the data to refer to previous sentences and constituents. The linguistic transformation aims at legible, valid, and understandable sentences as coding units. Therefore, deictic elements are replaced with the reference word itself, which is inserted in square brackets.

   • Before: Students gain an in-depth understanding of professional code, automated testing, and test-driven software development. They know the necessary methods and tools for these processes.
   • After: (1) Students gain an in-depth understanding of professional code, automated testing, and test-driven software development. (2) [Students] know the necessary methods and tools for [professional code, automated testing, and test-driven software development].

10. Mere lists of contents remain in the sample, as long as they were listed under the competency goals of a module. They are later coded as non-operationalized.

The result of the linguistic smoothing of the data is a set of learning objectives expressing one expected programming competency per sentence. Even though the effort for this transformation is extensive, it significantly eases the subsequent coding process and is therefore highly recommended.
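The core transformation — separating enumerated main clauses and re-inserting the omitted constituents in square brackets — could be sketched in code as follows. This is a deliberately simplified, hypothetical illustration handling only the simplest pattern (two main clauses joined by “and”); the actual smoothing in this study was performed manually under the full set of guidelines.

```python
import re

def split_objectives(sentence, subject="[Students learn to]"):
    """Split one sentence enumerating objectives joined by 'and' into
    single-objective sentences (simplified sketch of the guidelines).

    The omitted subject is re-inserted in square brackets so that each
    resulting sentence remains legible, as in the examples above. The
    default subject is a hypothetical placeholder.
    """
    clauses = re.split(r"\s+and\s+", sentence.rstrip("."))
    result = [clauses[0] + "."]
    for clause in clauses[1:]:
        # Re-insert the omitted constituent, marked by square brackets.
        result.append(f"{subject} {clause}.")
    return result

split_objectives(
    "Students learn to analyze small problems and "
    "solve them with the help of programs"
)
# returns ['Students learn to analyze small problems.',
#          '[Students learn to] solve them with the help of programs.']
```

A real implementation would require linguistic analysis to locate clause boundaries and the correct constituents to copy, which is precisely why the study relies on manual, rule-guided smoothing rather than automation.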

5.2.2 Basic Coding Guidelines

The basic coding guidelines for the data analysis in response to research questions 1.a and 1.b are summarized next. They define how cognitive and other competencies are coded in the sample. Before the actual coding of competencies, each module is coded with its name as code. This procedure helps gain an overview of all coded modules and their labels. Then cognitive competencies are coded. In this rule-guided process, each coding unit containing a cognitive competency is coded twice: once concerning the deductively built cognitive process, and once more related to its knowledge dimension according to Anderson et al. (2001). In addition, an inductive category representing the competency itself is developed based on the material (see steps 1 and 2 in Fig. 5.1). These inductive categories are later placed into the AKT matrix to represent the analysis results.

The remaining competencies other than cognitive ones are coded through deductive-inductive categories developed from both the literature and the material itself (see step 2 in Fig. 5.1). Subordinate categories are developed depending on the breadth and depth of competencies.

Every coding unit is coded as cognitive competency, non-cognitive, or non-operationalized in case the targeted competency is not observable. Thus, a coding unit cannot be assigned to multiple of these three mutually exclusive superordinate categories, but only to one of them. In case a cognitive or other competency is identified in the data, subordinate deductive or inductive categories are applied to specify it.

In general, the coding is generous to keep all competency-bearing elements of the sample. The following guidelines, however, help rationalize these generous coding principles. Expressions without operationalized verbs conveying observable actions are accepted if expressed as a gerund or the respective noun form. Knowledge, for example, is evaluated as knowing or reciting, which is classified as the first cognitive process dimension, remembering, according to the AKT (Anderson et al. 2001). Similarly, in-depth knowledge is evaluated as remembering.

If the classification of a coding unit within the deductive categories of the AKT is impossible due to, for example, a lack of context, the code “non-operationalized” is assigned. The same is true for learning objectives without an observable and measurable cognitive competency, i.e., if they lack an operationalized verb. Without an action verb, it becomes difficult or even impossible to comprehend the desired competency in action. This lack of observable action verbs and outcomes is common among the cognitive process dimension understand. According to Anderson et al. (2001), understanding itself does not denote an observable learning outcome or a competency. Therefore, the following example verbs receive the code non-operationalized in the data:

• understand
• comprehend
• gather
• master
• ...

In other module descriptions, only the course’s methods or contents are summarized, so that competency-based learning objectives are not present. These examples also receive the code non-operationalized. In case the cognitive process and knowledge dimensions do not apply, there may be a non-cognitive learning goal present in the data. These are coded in a second step of the data analysis, as indicated in Fig. 5.1. Thus, every sentence in the linguistically smoothed, transformed data set is treated as one coding unit, whereby every coding unit is coded at least once (Kiesler 2020a; Kiesler and Pfülb 2023).
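The coding rules above can be summarized as a small data structure. The sketch below is purely illustrative (the study itself used MAXQDA rather than custom code, and all names here are invented): a coding unit carries exactly one of the three mutually exclusive superordinate categories, and only cognitive units receive the mandatory double coding with an AKT process and knowledge dimension.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical vocabulary mirroring the coding guidelines described above.
SUPERORDINATE = {"cognitive", "non-cognitive", "non-operationalized"}
COGNITIVE_PROCESSES = ["remember", "understand", "apply",
                       "analyze", "evaluate", "create"]
KNOWLEDGE_DIMENSIONS = ["factual", "conceptual", "procedural", "metacognitive"]

@dataclass
class CodingUnit:
    """One smoothed sentence, coded with exactly one superordinate category.

    Cognitive units are coded twice (process and knowledge dimension);
    the other two superordinate categories carry no AKT dimensions.
    """
    text: str
    superordinate: str
    process: Optional[str] = None    # AKT cognitive process dimension
    knowledge: Optional[str] = None  # AKT knowledge dimension

    def __post_init__(self):
        assert self.superordinate in SUPERORDINATE
        if self.superordinate == "cognitive":
            # double coding is mandatory for cognitive competencies
            assert self.process in COGNITIVE_PROCESSES
            assert self.knowledge in KNOWLEDGE_DIMENSIONS
        else:
            assert self.process is None and self.knowledge is None

unit = CodingUnit(
    "Students implement small programs with loops.",
    "cognitive", process="apply", knowledge="procedural",
)
```

The assertions make the mutual exclusivity of the three superordinate categories explicit: a unit cannot be both cognitive and non-operationalized, and a cognitive unit without both AKT dimensions is rejected.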


5.2.3 Computer-Assisted Analysis

The qualitative content analysis of the curricula data is supported by Computer-Assisted Qualitative Data Analysis Software (CAQDAS), or Qualitative Data Analysis Software (QDAS) for short. Such programs are explicitly designed for qualitative data analysis and thus allow the handling of non-numerical, unstructured data. The specific software chosen for this research is MAXQDA. The functions supporting the coding process are used in particular, i.e., the creation of codes and their assignment to text passages, the development of inductive categories from the material, and the use of text markers, code memos, comments, and the search for coded data segments in selected documents. Furthermore, the keyword search function across all documents proves helpful, and the coding scheme can easily be managed or changed. MAXQDA is also helpful for presenting results, as it can generate a codebook, count frequencies, calculate the intracoder reliability, and keep track of changes made during the analysis.
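To make the bookkeeping concrete that such software automates, the following toy sketch counts how often each code was applied across coded segments, the basis of a codebook with frequencies. The data and code names are invented; this is not MAXQDA functionality reproduced, only an illustration of the kind of tally it provides.

```python
from collections import Counter

# Invented coded segments: (document, assigned code) pairs.
coded_segments = [
    ("Module A", "apply"), ("Module A", "procedural"),
    ("Module B", "remember"), ("Module B", "factual"),
    ("Module B", "apply"),
]

# Frequency of each code across all documents.
code_frequencies = Counter(code for _, code in coded_segments)

# A codebook listing, most frequent code first.
codebook = sorted(code_frequencies.items(), key=lambda kv: -kv[1])
```

In practice, the CAQDAS maintains these counts automatically as codes are assigned and reassigned during the iterative analysis.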

5.3 Data Analysis

In this section, the focus is on the use of categories to identify competencies within the cognitive domain. This analysis step comprises the coding of deductive categories from the literature and the construction of inductive subcategories based on the data. Subsequently, the deductive-inductive development of categories denoting competency components other than cognitive ones is presented.

5.3.1 Deductive Category Development

The analysis method is the qualitative content analysis according to Mayring (2000, 2014). Summary, explication, and structuring are considered basic techniques of qualitative text comprehension and text interpretation (Mayring 2010, 2015). The structuring content analysis aims to identify certain aspects or criteria in the material in advance so that a cross-section (i.e., a smaller sample) of the data is created and an assessment of these criteria becomes possible. The procedure is deductive, as the main categories are determined in advance. Mayring (2015) distinguishes formal, content-related, typifying, and scaling types of structuring. For the present data analysis, the content-related structuring method is applied. This encompasses using the taxonomy of cognitive teaching and learning objectives (Anderson et al. 2001). The objective is to identify educational goals and the corresponding cognitive process dimensions and knowledge dimensions based on the AKT. The summary will result in an answer to RQ 1.a and RQ 1.c. In addition to the summary of categories, a frequency analysis of the categories and subcategories is presented to the reader. The analysis follows these steps based on Mayring (2015):

1. Linguistic smoothing of the material (e.g., deletion of non-content-bearing text passages and reconstruction of grammatically incomplete sentences);
2. Setup of a category system and selection of elements whose frequencies are determined;
3. Definition of the coding scheme and guidelines with descriptions and anchor examples from piloting the material (i.e., Anderson et al. 2001);
4. Definition of analysis units:
   (a) A coding unit corresponds to one competency objective.
   (b) A context unit corresponds to the entire material of the respective institution.
   (c) An evaluation unit corresponds to a sentence at its exact place of occurrence in the material.
5. Application of the categories via coding along the category system (i.e., Anderson et al. 2001);
6. Summary and comparison of results (as a starting point for the next step of the analysis in step 2, the inductive category building and coding);
7. Application of content-analytical quality criteria (Mayring 2015).

The first step of the qualitative coding is rooted in the taxonomy of teaching and learning objectives by Anderson et al. (2001). The six cognitive process dimensions and four knowledge dimensions are used as a starting point to define the coding scheme and guidelines. Within this first step, the cognitive process dimension of each learning objective in the material is coded first. It is followed by the coding of the knowledge dimension in another run through the material. This step-by-step procedure simplifies the coding process, as it is less error-prone than coding two or more dimensions within one run through the material. One competency goal is the minimum coding unit. The entire module description of the same educational institution serves as a context unit. The evaluation unit is characterized as a sentence and by its position in the material. Based on this rule-guided procedure, the systematic nature of the method becomes obvious (Mayring and Fenzl 2014; Mayring 2015). The analysis follows the rule-guided assignment of deductive categories to the sample. While applying the cognitive process and knowledge dimensions of the AKT to the material, the degree to which the taxonomy is suitable for the defined content area and context becomes apparent.
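The two-run procedure can be pictured as two independent passes over the same coding units, each assigning exactly one dimension. The sketch below is a toy illustration (objectives and code assignments are invented); in the study, each "lookup" is of course a human coding decision, not a dictionary.

```python
# Invented coding units (smoothed learning objectives).
objectives = [
    "Students recall the definition of a binary tree.",
    "Students implement a sorting algorithm.",
]

# First run through the material: cognitive process dimension only.
process_by_unit = {
    objectives[0]: "remember",
    objectives[1]: "apply",
}

# Second, separate run: knowledge dimension only.
knowledge_by_unit = {
    objectives[0]: "factual",
    objectives[1]: "procedural",
}

# Only after both runs is each unit fully double-coded.
coded = [(u, process_by_unit[u], knowledge_by_unit[u]) for u in objectives]
```

Keeping the passes separate mirrors the rationale given above: the coder attends to one dimension at a time, which reduces errors compared to deciding both dimensions simultaneously.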


5.3.2 Inductive Category Development

For a more thorough answer to RQ 1.a, 1.b, and 1.c, the summarization technique helps to inductively form subcategories of the cognitive process dimensions found in the material. These, in turn, can help verify or evaluate which of the four knowledge dimensions applies. The advantage of this inductive step is that it preserves the context and objectives of the competencies in the material. It further helps ease the classification into the knowledge dimensions, as competencies are already clustered and more abstract or generalized. Furthermore, the inductive category construction is applied to competencies other than cognitive ones.

During the inductive development of the subcategories, the material is summarized to represent essential contents. This process follows several steps (paraphrasing, generalization to an abstraction level, reduction/summary). How exactly the analysis steps of the summary are conducted is determined in advance. The psychology of text processing offers strategies for summarization (Ballstaedt et al. 1981; Mandl 1981). In a text-guided processing direction, as depicted in Fig. 5.2, meaningful units are formed by semantic and semantic-syntactic processes applied to the original text. These units of meaning are then further extended by a target person and their prior knowledge, inferences, and elaborations. This results in so-called micro-propositions. The latter are the basis of further reductive processes, which are executed based on macro-operations (Mayring 2015). Macro-operators include the omission of text passages, generalization, construction of propositions, integration into existing propositions, selection, and clustering (Ballstaedt et al. 1981). The resulting macro-propositions, again, contain the meanings added by an individual (i.e., their inferences and elaborations). Therefore, the final assignment of categories to text passages requires the rule-guided definition of the categories. By applying this methodology, cognitive schemata as an organizational unit of knowledge can be identified (Mayring 2015).

Fig. 5.2 The processing steps of text comprehension (Ballstaedt et al. 1981)

The following steps illustrate the development of inductive categories. The linguistic smoothing has already been accomplished at this point in the process, as it is a prerequisite for all analyses. Another advantage of conducting this step early in the process is that codes are applied to the same material. The material also becomes much more legible, contains syntactically correct sentences only, and text components without meaning are removed. The coding, context, and evaluation units are identical during the inductive category-building and coding process. For this reason, the respective steps are omitted in the following list of actions describing the second, inductive phase of category building:

1. Preliminary determination of the subcategory scheme, coding guidelines, definitions, and anchor examples within the cognitive domain and the remaining non-cognitive competencies from step 1 of the curricula analysis, while taking into account the literature;
2. Preliminary coding of subcategories according to the preliminary coding scheme and guidelines for each one of the six cognitive process dimensions;
3. Determination of the level of abstraction and reduction (using macro-operators);
4. Revision of the coding scheme and guidelines;
5. Reexamination of the category system based on the original material;
6. Recoding of another five to ten occurrences in the material;
7. Revision of the coding scheme and guidelines;
8. Upon changes, return to step 5 and recode the material within the respective category/cognitive process dimension;
9. Finalization of the sub-coding scheme, and coding of the next cognitive process dimension’s subcodes and non-cognitive competencies via repeating steps 2–8;
10. Confirmatory classification of the subcategories into the knowledge dimensions of the AKT;
11. Presentation of results and interpretation to answer RQ 1.a, 1.b, and 1.c;
12. Application of content-analytic quality criteria.
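The macro-operators driving the reduction (omission, generalization, integration into existing propositions) can be illustrated on toy data. All paraphrases and the generalization mapping below are invented; in the actual analysis, these steps are interpretive decisions by the coder, not lookups.

```python
# Invented paraphrases of smoothed learning objectives.
paraphrases = [
    "students write small Java programs",
    "students write small Python programs",
    "module covers organizational matters",   # non-content-bearing
]

# Invented generalizations lifting paraphrases to a common abstraction level.
GENERALIZATIONS = {
    "students write small Java programs": "implement small programs",
    "students write small Python programs": "implement small programs",
}

# Omission: drop passages without competency content.
kept = [p for p in paraphrases if p in GENERALIZATIONS]

# Generalization: replace each paraphrase by its more abstract proposition.
generalized = [GENERALIZATIONS[p] for p in kept]

# Integration/reduction: identical propositions collapse into one category.
categories = sorted(set(generalized))
```

The end result, one inductive subcategory ("implement small programs") covering several concrete objectives, is what is later placed into the AKT matrix.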

5.3.3 Deductive-Inductive Category Development

The process of inductive category building described in Sect. 5.3.2 is also relevant to the non-cognitive elements of programming competency. After all coding units representing cognitive competencies were assigned a cognitive process and knowledge dimension, and inductive subcategories were built for them, several coding units remained. Those coding units represent the other programming competency components. They were also analyzed via a summarization technique while acknowledging existing categories from the literature. This is why the categories denoting non-cognitive competencies are deductive-inductive. The summarization steps coincide with those used for the subcategories of the cognitive process dimensions. For this reason, the 12-step process is not repeated here (see Sect. 5.3.2). Generally speaking, categories can be developed based on the literature (deductively) or the material (inductively). To complement the model of programming competency, additional non-cognitive competency components from the literature were considered (GI, German Informatics Society 2016; Linck et al. 2013; Mandl and Krause 2001; Ferrari 2012; Ilomäki et al. 2011). The competencies from the literature were used to identify and categorize non-cognitive competencies (e.g., social competencies). The few other available competency components were tested on the material in case the deductive categories from the AKT were not applicable.


The following list shows the competencies resulting from the literature review, serving as a basis for the deductive-inductive category formation regarding non-cognitive competencies:

• Professional competencies
  – Formal, algorithmic, and mathematical competency
  – Analysis, design, implementation, and project management competencies
  – Technological competencies
• Interdisciplinary competencies
  – Social and ethical aspects of computer science systems in an application’s context
  – Economic and ecological aspects of computer science systems in an application’s context
  – Legal aspects of information systems in an application’s context
• Methodological competencies and transfer of knowledge
  – Strategies of knowledge acquisition and scientific training
  – Analysis of computer science systems in their context
  – Implementation and evaluation strategies
• Social competencies
  – Cooperation
  – Communication
  – Empathy
• Self competencies
  – Self-control competency
  – Learning competency
  – Information competency (media competency/literacy)
  – Writing and reading competency (literacy)
• Motivation and volition
  – Openness to new ideas and requirements
  – Motivation to learn new things
  – Dedication and commitment (volition)
• Attitude and mindset
  – Perception
  – Expectations of own learning actions and the effects of information systems
  – Identification with the subject
  – Desire to acquire professional experience

These summarized categories from the literature represent the starting point of the last analysis and the deductive-inductive category formation. It is, however, not expected that all categories will be evident in the remaining sample, since the sample only includes programming courses in the first three to four semesters. Furthermore, it is assumed that the Anderson-Krathwohl Taxonomy will cover some of the metacognitive competencies in that list, so that some of them will be redundant. During a first run through the material, the list of non-cognitive categories is used for initial structuring. Then the category system is expanded and adapted inductively step-by-step based on five to ten occurrences in a module. After that, the coding of the material is repeated step-by-step with ten further module descriptions each time until the category system is completely adapted to the entire material. The goal of this procedure is to provide a more detailed distinction between non-cognitive teaching and learning objectives and competencies in introductory programming courses while integrating existing models and research.

5.4 Application of Quality Criteria

Various content-analytical quality criteria apply to the type of content analysis used in this book. One of them is the thorough description of the research process (e.g., sampling) and category formation. It is furthermore crucial to continuously compare categories with those in the literature. In addition, both deductively and inductively formed categories are subject to an intracoder reliability test two weeks after the initial coding. Based on this second, renewed coding of a certain portion of the total material (approx. 10–15%), the internal validity and coherence of the material, the categories, anchor examples, etc. are evaluated. Moreover, the results of the analysis are subject to discussion with other experts. The identified competencies are also compared with the results of the guided expert interviews. By supplementing and extending the results of the curricula analysis with the help of the mixed-method approach at the level of the research design, the results of the different methods are triangulated and thereby validated. Finally, the results are discussed by referring to existing theories and models, such as the AKT. The AKT is expected to be adapted to the context of basic programming education as an answer to RQ 1.c.
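One simple way to quantify such an intracoder check is the percent agreement between the initial coding and the renewed coding of the same segments. The book does not prescribe a specific coefficient, so this sketch (with invented codes) is only one possible operationalization; chance-corrected measures such as Cohen's kappa are common alternatives.

```python
# Codes assigned to the same five segments in the initial coding and in
# the renewed coding two weeks later (invented example data).
first_run  = ["apply", "remember", "apply", "non-operationalized", "create"]
second_run = ["apply", "remember", "analyze", "non-operationalized", "create"]

# Percent agreement: share of segments that received the same code twice.
matches = sum(a == b for a, b in zip(first_run, second_run))
agreement = matches / len(first_run)  # 4 of 5 codes agree
```

A low agreement on the recoded 10–15% of the material would prompt a revision of the category definitions and anchor examples before the analysis continues.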

References

L.W. Anderson, D.R. Krathwohl, P.W. Airasian, K.A. Cruikshank, R.E. Mayer, P.R. Pintrich, J. Raths, M.C. Wittrock, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives (Addison Wesley Longman, New York, 2001)
S.P. Ballstaedt, H. Mandl, W. Schnotz, S.O. Tergan, Texte verstehen, Texte gestalten (Urban u. Schwarzenberg, München, 1981)
A. Ferrari, Digital competence in practice: an analysis of frameworks. Technical report, European Commission, Sevilla (2012)


O. Zukunft, Empfehlungen für Bachelor- und Masterprogramme im Studienfach Informatik an Hochschulen. Gesellschaft für Informatik e.V., Bonn, Juli 2016. https://dl.gi.de/items/0986c100-a3b9-47c8-8173-54c16d16c24e
L. Ilomäki, A. Kantosalo, M. Lakkala, What is digital competence? Technical report, European Schoolnet, Brussels (2011)
N. Kiesler, Kompetenzmodellierung für die grundlegende Programmierausbildung – Eine kritische Diskussion zu Vorzügen und Anwendbarkeit der Anderson Krathwohl Taxonomie im Vergleich zum Kompetenzmodell der GI, ed. by R. Zender, D. Ifenthaler, T. Leonhardt, C. Schumacher, DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., online, 14.–18. September 2020. Lecture Notes in Informatics, vol. P-308 (Gesellschaft für Informatik e.V., 2020a), pp. 187–192
N. Kiesler, Kompetenzförderung in der Programmierausbildung durch Modellierung von Kompetenzen und informativem Feedback. Dissertation, Johann Wolfgang Goethe-Universität, Frankfurt am Main, Fachbereich Informatik und Mathematik (2022)
N. Kiesler, B. Pfülb, Higher education programming competencies: a novel dataset, in Artificial Neural Networks and Machine Learning – ICANN 2023. Lecture Notes in Computer Science (Springer, Cham, 2023)
B. Linck, L. Ohrndorf, S. Schubert, P. Stechert, J. Magenheim, W. Nelles, J. Neugebauer, N. Schaper, Competence model for informatics modelling and system comprehension, in 2013 IEEE Global Engineering Education Conference (EDUCON) (IEEE, Piscataway, 2013), pp. 85–93
H. Mandl, Zur Psychologie der Textverarbeitung: Ansätze, Befunde, Probleme (Urban & Schwarzenberg, München, 1981)
H. Mandl, U.M. Krause, Lernkompetenz für die Wissensgesellschaft. Technical Report, Ludwig-Maximilians-Universität, Lehrstuhl für Empirische Pädagogik und Pädagogische Psychologie, München. Forschungsbericht Nr. 145 (2001)
P. Mayring, Qualitative content analysis. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research 1(2), 1–10 (2000)
P. Mayring, Qualitative Inhaltsanalyse: Grundlagen und Techniken, 11. Auflage (Beltz, Weinheim, 2010)
P. Mayring, Qualitative content analysis: theoretical foundation, basic procedures and software solution. Technical report, Leibniz Institute for Psychology, Klagenfurt (2014)
P. Mayring, Qualitative Inhaltsanalyse: Grundlagen und Techniken, 12. Auflage (Beltz, Weinheim, 2015)
P. Mayring, T. Fenzl, Qualitative Inhaltsanalyse, in Handbuch Methoden der empirischen Sozialforschung, ed. by N. Baur, J. Blasius (Springer, Wiesbaden, 2014), pp. 543–556

Part III

Data Gathering and Analysis of Expert Interviews

In the third part of the book, the second of the two main methods of gathering and analyzing data to model programming competency is introduced: the preparation and conduct of interviews with experts in introductory programming education. Interviewing experts with the help of guiding questions is supposed to elicit actual practices within higher education institutions. The experts’ insights are assumed to help identify the programming competencies expected from novice learners. Moreover, the interview transcripts are expected to reveal common factors hindering and contributing to student learning.

In Chap. 6, the preparation and gathering of data via expert interviews is in focus. Accordingly, the goals, relevance, expectations, and limitations of conducting and analyzing guided expert interviews are introduced to the reader. Moreover, the development of the interview guide and the respective questions is presented. Next, the sampling process is characterized in detail by explaining the selection of experts, how they were contacted and interviewed, and how some of these interviews were recorded. The thorough description serves not only as a quality criterion of qualitative research but also as a means of contributing to reproducibility.

Chapter 7 contains a thorough overview of the analysis process applied to the guided interviews with educators as experts. Before the analysis, the verbalized interview data had to be pre-processed by transcribing the audio recordings. The goal was to produce written protocols. The pre-processing of data thus addresses transcription guidelines, the developed transcription system, and the transcription process of the data gathered for this work. Moreover, the analysis of the written interview protocols via a summarizing, structuring content analysis is presented. As part of this type of qualitative content analysis, the summary of data follows four steps (omission, generalization, first and second reduction).
The category system resulting from the qualitative content analysis of university curricula was then reused for the analysis of interview data. Yet, the category system was extended by inductively built categories based on the interview material. Finally, the quality criteria for the expert interview analysis are summarized.

Chapter 6

Data Gathering of Guided Expert Interviews

6.1 Goals of Conducting and Analyzing Guided Expert Interviews

The overall goal of this research study is to create a competency model for introductory programming education (i.e., referring to the first three to four semesters within a baccalaureate program). By using guided interviews with educators as experts, it is possible to explore further competencies expected from novice learners, even if those are not explicitly listed in curricula and module descriptions (Kiesler 2020a,b, 2022). Thus, recent and authentic insights from different institutional contexts are of interest. Moreover, the expert view is supposed to add value to the perspective presented in the curricula. The experts can also share their experience of what fosters or prevents the development of programming competency, thereby addressing research questions 1.d and 1.e. Answers to respective interview questions may even add competencies to those expected from novice learners of programming. In addition, the expert perspective can add the experiences of different institutions and their diverse student bodies, as it is assumed each university works with heterogeneous groups of students. To conclude, the experts will add to the identification and classification of programming competencies expected in introductory programming education.

6.2 Relevance of Conducting and Analyzing Guided Expert Interviews

Interviews as a qualitative research method are used to investigate and explore teaching practices, intended objectives, their importance, and well-known challenges within introductory programming education. Dropout rates are high in computer science, but the challenges within programming courses can be hard to grasp in the absence of a qualitative understanding. Programming competency, for example, is not defined by the body of literature. Therefore, it is necessary to obtain the perspective of educators with years of teaching experience. This is expected to help gather challenges for novices, insights into the student body, general conditions, and actual curricula practices.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024. N. Kiesler, Modeling Programming Competency, https://doi.org/10.1007/978-3-031-47148-3_6

In today’s (post-)knowledge society, students are more heterogeneous than ever and have a greater variety of backgrounds and prior knowledge. In Germany, this is partially due to the increasing number of paths towards higher education study programs, and the 16 federal states being responsible for education within their state. Therefore, it is challenging to make assumptions about novice CS students as a target group. It is impossible to assume certain competencies as a prerequisite for an Introduction to Programming class. Similarly, we cannot project which competencies should be integrated into the curriculum and promoted in class. Which competencies are relevant for the development of programming competencies, and what could a methodical design of programming courses look like? It is crucial to investigate the field to answer these questions and gain insights into the teaching practices at German higher education institutions.

Therefore, a qualitative approach is required to explore and discover current structures, well-known challenges, maxims of action, and approaches to solutions. Interviewing university professors can help find answers to these questions by gathering their long-standing expertise. Professors work with students daily, so they know students’ problems and obstacles, and which methods or strategies can help mitigate such barriers. Furthermore, it remains unclear why some students find it easier than others to accomplish programming tasks. Why does one student fail while another does not?
Which variables (i.e., programming competencies) are crucial during the first three to four semesters so that a student becomes a successful programmer? Assuming that students should start developing programming competencies early on in the curriculum, the question arises of which measures or structures may support students. Such measures may also address abilities and competencies related to the whole person, intra- and interpersonal. It is, therefore, crucial to gather the expertise of educators in the field to explore educational practices and experiences at different institutions. The gathered insights are evaluated by using the previously defined competency categories as part of the curricula analysis. Moreover, the results are related to existing theories and research.

6.3 Expectations and Limitations

Even though interviews as a research method can help explore the expert perspective, they constitute a limitation concerning the generalization of results. Seven experts from different institutions were interviewed, so the scope of the results is somewhat limited. Even though all experts are active professionals in the same field, factors influencing teaching and learning are likely to vary. Examples are the institution itself, conditions, support structures, the availability of student tutors, the expertise of the teaching staff, teaching formats, assessment methods, curricula, and the degree to which courses are aligned. In addition, interviews can only capture an excerpt of an expert’s perspective, but not all of it. Moreover, interviews can only gather parts of the research subjects’ views and experiences, so that not all aspects may be reflected in the sample. It is also possible that some respondents are concerned about a (positive) representation of their institutions, resulting in socially desirable responses and bias. It is thus impossible to make generalizations about learning processes. Those would only be possible by comparing the qualitative data with responses from a larger sample. The present work is rather an exploration of the field and actual practices (Bogner et al. 2009, 2014). Nevertheless, the conducted interviews represent one of the first empirical approaches to investigating the competencies expected from novice programmers. The internal validity of the results will further be triangulated with the qualitative content analysis of module descriptions, which will help increase inter-subjective comprehension.

6.4 Developing an Interview Guide and Questions

The expert interviews primarily aim at the exploration of the field. Therefore, open-ended questions are used, which are supposed to help gain new insights into the context. By using this method, new information can be obtained. An interview guideline can help focus and structure questions during the interview (Fontana and Frey 1998). The guide further ensures that all topics are addressed in all interviews and that results are easily compared (Kuckartz et al. 2008). For this research, the interview guide was developed in alignment with the research questions and hypotheses. Accordingly, the interview questions address cognitive and other competency components expected from novice learners of programming, as well as factors promoting or hindering the development of such competencies.

Three sets of questions were designed to serve as a guide through all interviews. Each of the three question sets consists of a main question and several follow-up questions to help lead the conversation in the desired direction. Not all questions must be uttered by the interviewer, though. For example, if the respondents have already answered them in response to the narrative cue, the interviewer is free to skip them (Hellferich 2005; Fontana and Frey 1998). The questions and their narrative function were tested through a pilot interview. This way, the author evaluated and slightly adapted the wording of the questions and their sequence, and estimated the duration of an interview. The length of each interview is 45–60 minutes.

The first question set of the interview guide asks for the expert’s understanding of programming competency, and what it means to them if someone can program. This question is supposed to serve as a so-called narrative cue, and thus invite people to express their thoughts and experiences without judgment. Follow-up questions ask for the operationalization of programming competency, how an educator can see that a student can program, or how one can notice improving programming competencies. Other possible follow-up questions are concerned with the educator’s learning objectives in their programming courses and the assessment of programming competency. The expectation is that components of programming competencies will be outlined by the experts, thereby conveying what students should be able to do as a result of a programming course.

The second question set focuses on special aspects educators notice in their introductory programming classes. Follow-up questions explicitly address challenging topics and the educator’s experience with them, but also contents and tasks where students succeed, and the conditions for such successes. These first two question sets are very open on purpose, as they serve as narrative prompts or cues (Niebert and Gropengießer 2014), leading to more openness in the responses (Friebertshäuser 1997). If necessary, follow-up questions are used. A rigorous run through every question or a dialogue with the interviewer is perceived as counterproductive and is therefore avoided.

In the third question set, experts are requested to identify factors influencing the development of programming competencies. The follow-up questions direct the expert’s attention to some examples, such as prior knowledge and experience, or personal characteristics. Other possible follow-up questions specifically address barriers or success factors for learning how to program. The interview questions (1, 2, 3) and follow-up questions (a, b, c, d) are as follows:

Interview Questions

1. What does it mean to you to be able to program? (narrative cue)
   a. How do you recognize programming competency in your students?
   b. What are the educational goals of your programming courses?
   c. How do you recognize further developments of programming competency in your students?
   d. How do you determine a good programmer among your students?

2. What do you keep noticing in your courses regarding the development of programming competency? (narrative cue)
   a. Where do you repeatedly encounter difficulties in the learning process (e.g., among certain topics, or tasks)?
   b. How can you explain these difficulties?
   c. Where do you regularly observe learning successes?
   d. How can you explain such learning successes?

3. In your opinion, which factors determine or influence the development of programming competency?
   a. What role is played by, for example, prior knowledge and education, or personal skills and character traits?
   b. What factors promote the development of programming skills?
   c. What factors prevent the development of programming skills?

The guide and its prepared follow-up questions are intended to standardize the interviews as much as possible. The interview guide with the question sets was deliberately not sent to the experts before the interview, so the experts could not prepare their replies. Prepared responses might have been counterproductive, leading to more socially desirable answers.

6.5 Data Gathering and Sampling

The following sections summarize how the guided interviews with experts were conducted. The process started with selecting and contacting interviewees. This step is followed by an outline of the interview process on-site, and of how the interviews were recorded (which was not the case for all expert interviews).

6.5.1 Selecting and Contacting Experts

Before the selection of experts, the expert status required a definition. In this work, experts are defined by their experience in higher education. Thus, university professors were determined as persons of interest. The expert status is characterized by multiple years of practical teaching experience in modules with extensive programming components (e.g., Programming 101, Introduction to Programming). The special knowledge of the experts results from the functional context of higher education (Meuser and Nagel 1997), where teaching is one of the experts’ main tasks. Thus, the experts are actively and regularly involved in students’ learning processes. Moreover, the experts are asked about their experiences with these learning processes, issues, successes, other peculiarities, and possible explanations. Therefore, experts with an increased interest in pedagogy are considered valuable sources to help answer this study’s research questions. Such interest may be observable in the form of publications, or participation in educational (research) projects. In addition, the interviewees should have teaching experience in the first three to four semesters of undergraduate computing programs, so that they can provide insights into the specifics of introductory programming education. Thus, the purposeful sampling method was used (Patton 2002). A minimum of 20% female experts was targeted, as this percentage represents the average proportion of female students in the 2016/2017 winter semester. This proportion slightly increased to 21.8% in the academic year 2021/22 (Federal Statistical Office 2023).

Researching respective educators led to the identification of professors at several computer science departments at Hessian higher education institutions. The goal was to conduct five to ten interviews. The snowball principle was used to expand the number of addressed experts and thus the sample. However, not all of the contacted persons replied to the request for an interview, so finally, seven expert interviews were conducted. In addition, an unrecorded piloting interview (Aßmann et al. 2012) took place in advance.

After identifying potential interview partners in the community, the experts were each contacted via e-mail. This first e-mail contained an outline of the research project and its objectives. Upon a positive response, follow-up questions were clarified by telephone and e-mail. As a rule, all information related to the interviews (e.g., organization, data protection, anonymity, questionnaire) was communicated by e-mail. The experts were further informed about the voluntary nature of their participation and the audio recording option (for transcription). A follow-up e-mail then aimed to schedule the interview within the next four weeks. The interview questions, however, were deliberately not sent to the experts before the interview.

The experts further received a short questionnaire and a consent form related to the audio recording via e-mail, so that they could prepare the documents before the in-person meeting. The short questionnaire was used to gather general information about the respondents. Each expert was asked to complete it prior to the interview and return it to the author of this work, which worked well in all cases. The goal of this procedure was to avoid splitting the interview into two parts and delaying it during the face-to-face appointment.
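As an aside, the sampling quota above can be sanity-checked with a one-line calculation. The helper below is a hypothetical illustration (not part of the study’s tooling), assuming the stated 20% target and the final sample of seven experts:

```python
import math

def min_quota_count(target_share: float, sample_size: int) -> int:
    """Smallest number of sampled persons needed to meet a target share."""
    return math.ceil(target_share * sample_size)

# With the 20% target and seven conducted interviews, at least
# two of the interviewed experts should therefore be female.
print(min_quota_count(0.20, 7))  # → 2
```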

6.5.2 Conducting the Interviews

Before each interview, the researcher documented general expectations, expected challenges, and other aspects concerning the recording situation in a written note. In addition, it was important to create a comfortable interview atmosphere at the beginning of each meeting with an expert (Niebert and Gropengießer 2014). If possible, coffee and snacks were prepared and brought to the meeting. Moreover, attention was paid to a friendly and open start of each interview. At the beginning of each interview session, the audio recording was prepared: windows were closed, and telephones in the room or office had to be muted or switched off. These actions were supposed to ensure a concentrated working environment and high-quality sound recordings.


After explaining the intended interview procedure to the experts (e.g., duration, interview guide), the consent form and short questionnaire were exchanged in case the experts had not already sent them back via e-mail. If the expert had consented to the audio recording, the recording device was switched on. Immediately after each interview, the audio file was saved, and interview notes were taken to keep track of impressions, thoughts, incidents, and, under certain circumstances, disturbances (Friebertshäuser 1997). Such notes on the progression of the interview support the evaluation and reconstruction once memories of the interview have faded. The author of this study additionally took notes in case the experts uttered anything before the beginning or after the end of the recording.

6.5.3 Recording the Interviews

An audio recording of an interview usually serves as the basis for the transcription of the responses, and thus for the later evaluation and analysis. In this research study, interviews were recorded via a mobile recording device in the WAV format (Waveform Audio File). The quality of this format is sufficient for the subsequent transcription process. The recording device is characterized by its small size, enhancing mobility. It neither required a lengthy setup nor caused a distraction in the interview situation (e.g., compared to a handheld microphone). The cardioid (kidney-shaped) pickup pattern of the two microphones supports the thorough recording of two speakers sitting opposite each other. The audio file could easily be transferred from the mobile audio recorder to a computer via a micro-USB port. A digital audio recording allows for more accurate transcriptions, as it is possible to pause and repeat the expert’s responses (Kuckartz 2010).

Not every expert gave consent to record the interview. In these cases, the responses were transcribed immediately on a laptop during the interview, so that they could also be evaluated and analyzed in the next step of the process (Gläser and Laudel 2004).

References

A. Bogner, B. Littig, W. Menz, Introduction: expert interviews—an introduction to a new methodological debate, in Interviewing Experts (Springer, Berlin, 2009), pp. 1–13
A. Bogner, B. Littig, W. Menz, Interviews mit Experten: eine praxisorientierte Einführung (Springer, Wiesbaden, 2014)
Federal Statistical Office, Hochschulen: Studierende nach ausgewählten Fächergruppen. Online Publication (2023). https://www.destatis.de/DE/Themen/Gesellschaft-Umwelt/Bildung-Forschung-Kultur/Hochschulen/Tabellen/studierende-mint-faechern.html. Accessed 23 Nov 2023
A. Fontana, J.H. Frey, Interviewing: the art of science, in Collecting and Interpreting Qualitative Materials, ed. by N.K. Denzin, Y.S. Lincoln (Sage Publications, Thousand Oaks, 1998), pp. 47–78
B. Friebertshäuser, Interviewtechniken: ein Überblick, in Handbuch Qualitative Forschungsmethoden in der Erziehungswissenschaft, ed. by B. Friebertshäuser, A. Prengel (Juventa-Verlag, Weinheim, 1997), pp. 371–395
J. Gläser, G. Laudel, Experteninterviews und qualitative Inhaltsanalyse (VS Verlag für Sozialwissenschaften, Wiesbaden, 2004)
C. Hellferich, Die Qualität qualitativer Daten: Manual für die Durchführung qualitativer Interviews, 2nd edn. (VS-Verlag, Wiesbaden, 2005)
N. Kiesler, On programming competence and its classification, in Koli Calling ’20: Proceedings of the 20th Koli Calling International Conference on Computing Education Research (Association for Computing Machinery, New York, 2020a)
N. Kiesler, Towards a competence model for the novice programmer using Bloom’s revised taxonomy – an empirical approach, in Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE ’20 (Association for Computing Machinery, New York, 2020b), pp. 459–465
N. Kiesler, Kompetenzförderung in der Programmierausbildung durch Modellierung von Kompetenzen und Informativem Feedback. Dissertation, Johann Wolfgang Goethe-Universität, Frankfurt am Main, Fachbereich Informatik und Mathematik (2022b)
U. Kuckartz, Einführung in die Computergestützte Analyse Qualitativer Daten, 3rd edn. (Springer, Wiesbaden, 2010)
U. Kuckartz, T. Dresing, S. Rädiker, C. Stefer, Qualitative Evaluation: Der Einstieg in die Praxis (VS Verlag für Sozialwissenschaften, Wiesbaden, 2008)
M. Meuser, U. Nagel, Das ExpertInneninterview – wissenssoziologische Voraussetzungen und methodische Durchführung, in Handbuch Qualitative Forschungsmethoden in der Erziehungswissenschaft, ed. by B. Friebertshäuser, A. Prengel (Juventa-Verlag, Weinheim, 1997), pp. 441–471
K. Niebert, H. Gropengießer, Leitfadengestützte Interviews, in Methoden in der naturwissenschaftsdidaktischen Forschung, ed. by D. Krüger, I. Parchmann, H. Schecker (Springer, Berlin, 2014), pp. 121–132
M.Q. Patton, Qualitative Research & Evaluation Methods (Sage, Thousand Oaks, 2002)
S. Rässler, C. Aßmann, H.W. Steinhauer, Aspekte der Stichprobenziehung in der erziehungswissenschaftlichen Forschung. Enzyklopädie der Erziehungswissenschaften Online (EEO), Fachgebiet Methoden der empirischen erziehungswissenschaftlichen Forschung, 15 (2012). ISSN 2191-8325

Chapter 7

Data Analysis of Guided Expert Interviews

7.1 Pre-processing of Data

A crucial step for the analysis of interview data is their transcription into written form. Hence, this section addresses the transcription guidelines and rules developed in alignment with the research questions. In addition, the conducted transcription process is summarized.

7.1.1 Transcription Guidelines

Transcription renders the speech of the persons involved in a conversation in written form and thus documents it as a prerequisite for scientific research. It is common practice to record interviews for the transcription of verbal and non-verbal interpersonal communication. This is, however, accompanied by some loss of data: information such as facial expressions, gestures, eye contact, laughing, and other aspects are challenging to record. Such an information loss is preventable to some extent, for example, by using special characters in the transcript to mark pauses with a notation, or a long exhalation with the letters “hhh” or as “(exhale)”. It is important to include such non-verbal elements of communication to comprehend the conversation in the future (e.g., to understand sarcasm, or jokes) (Kowal and O’Connell 2009).

Compared to linguistic transcriptions, the transcription in the present research study is simple. Individual sounds, pitch, elongation, accents, dialects, loudness, speech tempo, and similar prosodic features are given little or no consideration. Only longer pauses may be of interest and are, therefore, transcribed. Other acoustic signals, such as (outside) noise, cell phone ringing, long exhalations, or laughing are also taken into account, since they could influence the meaning of speech acts. In the transcripts, all verbal speech elements are transcribed in the sequence they occur.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
N. Kiesler, Modeling Programming Competency, https://doi.org/10.1007/978-3-031-47148-3_7

Interview situations and conditions are documented as part of the preparation and follow-up notes. These include the date, time, and location of the interview. Changing speakers are always indicated in the transcript (Dresing and Pehl 2010). Gestures and facial expressions are of little interest unless they imply, for example, irony or humor; in this case, respective notes are added. Similarly, other nonverbal aspects of the conversation are not considered relevant for this research. It is also not the goal to perform a conversational analysis, so word components such as “Mhm”, the exact duration of pauses, or intonation are less important.

The goal of the transcription is to generate written data for the qualitative content analysis according to Mayring (2009, 2015). Further analysis steps are conducted via the software MAXQDA. Mayring’s (2009) approach generally requires textual material such as protocols. As a rule of thumb, a complete and verbatim transcription is the basis for the analysis of guided interviews (Schmidt 2009). In this study, a simple transcription system was selected, as neither grounded theory nor memos are applied (Böhm 2009). It is most important to generate comprehensible data material, which requires a fixed set of transcription rules. Moreover, it is crucial not to significantly change the content and statements of the data material through the choice of the transcription system (Mayring 2015). According to Kuckartz (2010), the transcription of an interview always results in a loss of information. Therefore, the selection of a transcription system that appears suitable based on the research questions is the most reasonable choice. Other quality criteria of transcripts demand good readability, so that little instruction and time is necessary for readers or other coders. This aspect can positively affect intra- and intersubjective reliability (Dresing and Pehl 2010).

7.1.2 Transcription System

A simple transcription system was derived to address these requirements. The system is aligned with the guidelines summarized by Kuckartz et al. (2008) and Kuckartz (2010), and further adapted to reflect the research questions:

1. Transcription is verbatim, not phonetic or summarized. Dialects are not transcribed. The basis of the transcription is High German.
2. A slight linguistic smoothing is performed, meaning swallowed endings and syllables, or abbreviations, are noted according to the written High German language.
3. Information indicating or hinting at a specific person is anonymized.
4. Speech pauses are indicated by dots in parentheses. The number of dots in the brackets implies the duration of the pause.
5. Stressed words and passages are marked in capital letters.
6. Monosyllabic phonetic utterances (e.g., “mhm”, “ah”) are usually not transcribed if they do not influence the utterances of the interviewee. These elements are only included if they carry meaning, such as agreement, disagreement, or thinking about something.


7. Sounds, reactions, or other nonverbal communication of the interviewee are noted in brackets, e.g., (laughing) or (loud exhalation).
8. A change of speakers is indicated by using a “B” for the interviewee’s utterances, and an “I” for the interviewing person. Moreover, a paragraph is inserted so that the change of speakers is visualized (Kuckartz et al. 2008; Kuckartz 2010).

Based on these considerations for a simple transcription system, the following notations and rules for language smoothing and a uniform writing style were developed. Table 7.1 summarizes the most important notations and their meaning for the transcripts of the guided expert interviews.

The verbal communication was pre-processed with regard to additional aspects, most of which concern informal language, idioms, repetitions, or dialect patterns. The following system was established:

• Repetitions of words are not noted if they result from stuttering or thinking. Words are repeatedly transcribed if used as a stylistic element to highlight certain facts, e.g., “practicing is very, very, very important”.
• Incomplete words are completed, resulting in full words, i.e., abbreviations are spelled out.

Table 7.1 Applied notations in the simple transcription system (Kiesler 2022b)

I:                     Interviewing person
B:                     Interviewed person
(.)                    Short pause, duration one second
(..)                   Short pause, duration two seconds
(...)                  Longer pause, duration three seconds; the length of a pause is implied by the number of dots in brackets, whereby one dot implies one second
[...]                  Omission of names or details for anonymization
‘study program’        Single quotation marks, used for names of, e.g., study programs
// //                  Overlapping utterances; overlapping text is enclosed within these signs
/                      Breaks in words or sentences are indicated
“”                     Quotation marks indicate verbatim speech; they are also used when something was expressed as if within quotation marks
UPPERCASE              Noticeable emphasis of words is indicated in uppercase
mhm (agreeing)         Monosyllabic sound expressing agreement
mhm (disagreeing)      Monosyllabic sound expressing disagreement
mhm (thoughtful)       Monosyllabic sound expressing thoughtfulness
(unintelligible)       Unintelligible words are indicated; if applicable, the assumed word is added in brackets with a question mark, e.g., (coherent?)
(laughing)             Statement is uttered with an audible laugh
(laughter)             Laughter occurs distinct from an utterance and is noted in parentheses
(loud exhale)          A loud exhalation is audible and noted in parentheses


• Grammatical and syntactic errors remain as they are and do not get corrected, e.g., incorrect verb forms, verb-noun agreement, missing definite or indefinite articles, or other syntactic elements.
• Idioms remain as they are, even if they contain informal language or abbreviations.
• If possible, lexemes typically used in some German dialects are translated into High German. An English equivalent would be “lil”, which is transcribed as “little”.

In addition to these notations, further spelling rules were required to ensure consistency among all interview transcripts (Kiesler 2022b):

• Punctuation marks are written according to meaning and emphasis. If the intonation indicates a question, a question mark serves as the punctuation mark. Pauses are indicated after the punctuation mark. If in doubt, a period ends a sentence.
• Dialogues with, for example, students or colleagues are indicated in quotation marks and separated by a hyphen, so that alternating speakers are recognizable.
• Special characters and abbreviations indicating units of measurement or alike are written as full words, e.g., percent, minutes, hours, etc.
• Numbers from zero to twelve are spelled out; the same applies to numerical values such as twenty, thirty, or hundred.
• Numbers representing grades from 1 to 6 (which are common in the German system) are transcribed as numerals.
• Dates are noted in the format day/month.
• Numbers indicating a specific year are written as numerals, e.g., 86, not as 1986.
• Numbers indicating one’s age are presented as numerals, ranging from 0 to 18.
• Variables such as X or Y are transcribed as capital letters.
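Rule 4 of the transcription system (one dot per second of pause, enclosed in parentheses) is precise enough to express in code. The following sketch is purely illustrative; the function names are hypothetical, and no such tooling is described in the study:

```python
import re

def pause_notation(seconds: int) -> str:
    """Encode a pause as dots in parentheses, one dot per second."""
    return "(" + "." * seconds + ")"

def pause_duration(notation: str) -> int:
    """Decode a pause notation back into its duration in seconds."""
    match = re.fullmatch(r"\((\.+)\)", notation)
    if match is None:
        raise ValueError(f"not a pause notation: {notation!r}")
    return len(match.group(1))

print(pause_notation(2))        # → (..)
print(pause_duration("(...)"))  # → 3
```

Helpers of this kind could, for instance, be used to check a transcript's notation consistency during the correction loops described below.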

7.1.3 Transcription Process

The time-consuming transcription process of the expert interviews was supported by hardware and software. In particular, the program f4transcript was used along with a foot pedal, which allowed control over the playback of the audio recording. While transcribing the recorded interviews, the recording constantly had to be paused, rewound, and replayed every few seconds. Controlling these actions via a foot pedal leaves the hands free for typing, saving time and effort. In addition, the software supports the insertion of certain text elements via individually set key combinations and shortcuts, which were defined based on the notations in Sect. 7.1.2. The data was transcribed in a dedicated window of f4transcript’s interface. Another helpful feature is the automatic insertion of changing speakers (i.e., I and B), time stamps of the audio recording, line breaks, and new paragraphs by f4transcript.


Upon completion of each transcript, the data was saved as an RTF file (Rich Text Format) with time stamps for further use in MAXQDA. The latter eases the analysis and the triangulation with data from the curricular analysis. Transcribing the verbal communication required approximately four to eight times the length of the interview recording (Kuckartz et al. 2008; Kuckartz 2010).

In two cases, the verbatim transcription was executed on a laptop during the interview, making a subsequent transcription redundant. Before these two interviews, the experts were asked to speak slowly and, if necessary, to repeat their replies. As a result, the extensive notes were suitable for the subsequent content analysis. After each of the immediately transcribed interviews, a linguistic smoothing and review took place, to ensure the same notations as in the other interview transcripts.

The first full transcription of every recorded interview was followed by a correction loop and several rounds of listening to the audio recording. The first correction loop took place immediately after finishing the transcription, another one the following day, and the final one approximately two weeks after finishing the transcript. These correction loops are crucial, as some tasks, such as full anonymization, cannot be accomplished simultaneously with the transcription (Kuckartz 2010). Moreover, the f4 software does not perform an automatic spell check, and correcting the transcripts (e.g., checking spelling, eliminating errors) is a prerequisite for the subsequent analysis of the interviews (Schmidt 2009).

The interviews transcribed during the interview situation adhere to the same transcription rules established here; only time stamps (as inserted automatically by the f4 software) are missing. These transcripts were also corrected in several loops, for example, to ensure anonymity and adherence to the notations defined in this work. This process yielded usable transcripts for the analysis.

Table 7.2 summarizes the characteristics of the transcription process, including the duration of each recording (respectively the interview), the time required for the first full transcription (without correction loops), and the length of the written protocols.

Table 7.2 Overview of the gathered and transcribed material in the context of the guided expert interviews (Kiesler 2022b)

Interview   Duration of record   Time for transcription   Protocol length
EI00        34 minutes           240 minutes              13 pages
EI01        36 minutes           180 minutes              11 pages
EI02        50 minutes           230 minutes              14 pages
EI03        75 minutes           –                        13 pages
EI04        67 minutes           270 minutes              15 pages
EI05        50 minutes           –                        8 pages
EI06        33 minutes           240 minutes              10 pages
Overall     345 minutes          1160 minutes             84 pages


The table helps illustrate the scope of the gathered material. In total, 345 minutes (almost 5.75 hours) of interview material were transcribed on 84 pages. It took approximately 1160 minutes (19.3 hours) to transcribe the five recorded interviews. It should be noted that only the time to complete the initial transcription is displayed here, as correction loops have not been tracked. Similarly, the effort required to proofread and correct the two immediately transcribed interviews is not represented.
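The totals in Table 7.2 can be cross-checked against the cited rule of thumb of four to eight times the recording length (Kuckartz et al. 2008; Kuckartz 2010). The snippet below is merely a sanity check on the published figures; the dictionary mirrors the table, with None marking the two live-transcribed interviews:

```python
# (duration of record, time for first transcription) in minutes;
# None marks interviews transcribed live, i.e., without a recording.
interviews = {
    "EI00": (34, 240), "EI01": (36, 180), "EI02": (50, 230),
    "EI03": (75, None), "EI04": (67, 270), "EI05": (50, None),
    "EI06": (33, 240),
}

total_record = sum(rec for rec, _ in interviews.values())
total_transcription = sum(t for _, t in interviews.values() if t is not None)
print(total_record, total_transcription)  # → 345 1160

# Each recorded interview indeed falls within the 4-8x rule of thumb.
for name, (rec, t) in interviews.items():
    if t is not None:
        assert 4 <= t / rec <= 8, (name, t / rec)
```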

7.2 Data Analysis

The transcribed interview data are analyzed using qualitative content analysis according to Mayring (2000, 2015). Mayring offers orientation for the concrete procedure by means of a process model for the application and construction of inductive categories. For the interview data in this study, a summarizing, structuring content analysis is selected based on a content-analytical communication model. If necessary, it can be supplemented by frequency analyses of categories or their weighting (Mayring 2015). The general process model according to Mayring (2015) defines the following steps:

1. Specification of the material
2. Analysis of the origin or situation in which the data was created
3. Formal characteristics of the material
4. Determination of the direction of analysis
5. Theoretical distinction of the research question(s)
6. Determination of an appropriate analysis technique (e.g., summary, explication, structuring) or a combination of them; determination of a concrete process model; determination and definition of categories and the category system
7. Definition of the units of analysis (i.e., coding, context, and evaluation unit)
8. Analysis according to the process model based on the category system; checking the category system regarding its alignment with theory and the material (see steps 5 and 6); if changes are made, a new run through the material is required
9. Compilation of results and interpretation based on the research question(s)
10. Application of content-analytical quality criteria

Similar to the qualitative content analysis of the curricula, the interview transcripts are analyzed by means of deductive-inductive categories. The categories extracted from the literature review and the qualitative content analysis of module descriptions are used as a starting point. These deductively built categories are tested on the summarized material. As a next step, additional categories are developed inductively from the material. The inductive categories are also tested on parts of the material before the full analysis. In contrast to the previously conducted qualitative content analysis, the transcribed interview material is analyzed according to the technique of summarization.


Hence, the transcript of each interview is prepared for analysis and category formation through reductive processes. This reduction is guided by macro-operators (Mayring 2015), such as the omission of text passages, generalization, construction of propositions, integration into existing propositions, as well as selection and clustering steps (Ballstaedt et al. 1981). The step-by-step reduction process is then followed by the construction of categories.

The summarizing content analysis of the transcribed interview data adheres to the following rules and definitions. The evaluation unit and context unit coincide (Mayring 2015) in this work. The context unit depends on the material run in which it is considered: in the first coding cycle, related statements are evaluated; in the second coding cycle, the entire interview material is considered as context. The coding unit is defined as a unit in which coherent statements can be identified that are suitable for paraphrasing and reduction in the next step. This refers to, for example, complete utterances about programming competencies, or challenges in their development. In the first run through the material, each coding unit is summarized within a paragraph, which eases the next step of the analysis.

The steps of the summarizing content analysis follow Mayring’s technique (2015) and utilize macro-operators coined by Ballstaedt et al. (1981) and Van Dijk (1980). First, omission and paraphrasing (step Z1) are conducted simultaneously with the generalization (step Z2), as a large amount of material is involved. The following list explains all four applied summarizing steps, and how they were applied in this analysis:

Z1 Omission and paraphrasing
Z1.1 Deletion of all text components that are not (or not very) content-bearing, such as decorative, repetitive, or clarifying phrases. Omissions also take place (Ballstaedt et al. 1981), and empty phrases are deleted from the material.
Z1.2 Content-bearing text passages are translated into a uniform language. Sentences are completed and, if necessary, syntactic elements are added. Sentence beginnings are deleted if they contain duplicates.
Z1.3 Transformation to a grammatical short form takes place in some instances. Long sentences are shortened.

Z2 Generalization
Z2.1 Generalization of the paraphrased elements to the defined level of abstraction, so that the old elements are implied in the newly formulated ones. The level of abstraction is still low. Learning objectives (i.e., programming competencies) are generalized from the descriptions of tasks, and the educator’s observations regarding challenges and successes are summarized. As a rule, the first-person perspective is removed and reworded in the passive voice. The only exception is when respondents provide a personal verdict. Elements of verbatim speech indicating dialogues with other persons are removed. Informal expressions are removed, or formalized if they contain meaningful content. Students are always referred to as such; deictic elements referring to them are removed.



Z2.2 Predicates are generalized in the same manner.
Z2.3 Paraphrases above the intended level of abstraction remain as they are.
Z2.4 Theoretical considerations are used in cases of doubt. In particular, the detailed descriptions of the AKT dimensions are consulted (Anderson et al. 2001). Furthermore, the findings from the preceding content analysis are taken into account as prior assumptions.
Z3 First reduction via omission, deletion, and selection
Z3.1 Paraphrases with the same meaning within evaluation units are deleted.
Z3.2 Paraphrases that do not bear essential information at the new abstraction level are deleted.
Z3.3 Central content-bearing paraphrases are adopted, but selected.
Z3.4 If in doubt, theoretical assumptions are consulted. Categories are always built from the perspective of identifying the competencies expected from novice learners of programming. For this reason, negative aspects, i.e., the lack of certain competencies, remain in the material.
Z4 Second reduction via grouping, clustering, abstraction, and integration
Z4.1 Paraphrases with the same (or a similar) subject and similar statements are combined into one paraphrase and thus grouped.
Z4.2 Paraphrases with several statements are combined into one item (construction/integration).
Z4.3 Paraphrases with the same or a similar subject but different statements are combined into one paraphrase and thus integrated into a new construct.
Z4.4 If in doubt, theoretical assumptions are consulted.

After the first and second summarizing steps were conducted (Z1 and Z2), the material was loaded into MAXQDA, where the category system resulting from the curricula analysis was already available. Steps Z3 and Z4 were then realized with the support of MAXQDA, meaning that inductive categories and subcategories were developed based on the material (Mayring 2010).
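Although these reduction steps are applied manually and rely on qualitative judgment, the mechanics of Z3.1 (deleting same-meaning paraphrases) and Z4.1/Z4.3 (grouping paraphrases by subject into constructs) can be illustrated with a toy sketch. All data, labels, and names below are invented for illustration and are not taken from the study:

```python
from collections import defaultdict

# Invented paraphrases as (subject, statement) pairs, standing in for
# the reduced interview material after steps Z1 and Z2.
paraphrases = [
    ("loops", "students can trace iterations step by step"),
    ("loops", "students can trace iterations step by step"),  # same meaning
    ("loops", "students struggle with off-by-one boundaries"),
    ("recursion", "students can identify the base case"),
]

# Z3.1: paraphrases with the same meaning are deleted (here modeled as
# exact duplicates; first occurrences are kept in order).
unique = list(dict.fromkeys(paraphrases))

# Z4.1/Z4.3: paraphrases with the same subject are combined into one construct.
constructs = defaultdict(list)
for subject, statement in unique:
    constructs[subject].append(statement)

for subject, statements in constructs.items():
    print(f"{subject}: " + "; ".join(statements))
```

In the actual analysis, deciding whether two paraphrases carry "the same meaning" is of course an interpretive act, not a string comparison; the sketch only shows the bookkeeping that results from those decisions.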

7.3 Application of Quality Criteria

Quality criteria of qualitative research are applied to both the interview process and the analysis of the guideline-based expert interviews. The prepared interview guide, for example, helps standardize the interview procedure and allows for replication studies. Moreover, the interview process, including the sampling, the communication with the participants, and the preparation, is presented transparently and comprehensively. The same applies to the transcription and to the summary of the material along reductive steps. Regarding the coding, the so-called intracoder-reliability test is used, in which a proportion of the material is coded again two weeks after the initial coding. The goal



is to assure the validity and internal consistency of the categories, the anchor examples, and the coding process. The consistency of the data represents yet another quality criterion of the analysis. Therefore, a triangulation with the results of the curricula analysis is performed; it is expected to confirm the relevance of the categories, as both the curricula and the experts are likely to refer to similar constructs of programming competency. Another quality measure is communicative validation. In this process, the analysis results are reviewed with individual experts in a debriefing session. If necessary, categories representing certain constructs are clarified.
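The intracoder check can be quantified with a chance-corrected agreement coefficient; this work uses the coefficient of Brennan and Prediger (1981), reported later for the curricula analysis (Sect. 8.3), where the chance agreement for q categories is 1/q. A minimal sketch follows; the function name and the two coding passes are hypothetical:

```python
def brennan_prediger_kappa(first_pass, second_pass, n_categories):
    """Chance-corrected agreement for two codings of the same fixed
    segments (Brennan and Prediger 1981): p_chance = 1 / n_categories."""
    assert len(first_pass) == len(second_pass) > 0
    matches = sum(a == b for a, b in zip(first_pass, second_pass))
    p_observed = matches / len(first_pass)
    p_chance = 1.0 / n_categories
    return (p_observed - p_chance) / (1.0 - p_chance)

# Hypothetical coding passes over ten predefined segments with four categories:
first  = ["A", "B", "A", "C", "D", "A", "B", "C", "A", "D"]
second = ["A", "B", "A", "C", "D", "A", "B", "C", "B", "D"]
print(round(brennan_prediger_kappa(first, second, n_categories=4), 2))  # 0.87
```

Because the segments are fixed in advance, only category assignment can disagree between the two passes, which is exactly the precondition this coefficient requires.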

References

L.W. Anderson, D.R. Krathwohl, P.W. Airasian, K.A. Cruikshank, R.E. Mayer, P.R. Pintrich, J. Raths, M.C. Wittrock, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives (Addison Wesley Longman, New York, 2001)
S.P. Ballstaedt, H. Mandl, W. Schnotz, S.-O. Tergan, Texte verstehen, Texte gestalten (Urban u. Schwarzenberg, München, 1981)
A. Böhm, Theoretisches Codieren: Textanalyse in der Grounded Theory, in Qualitative Forschung. Ein Handbuch, ed. by U. Flick, E. von Kardorff, I. Steinke (Rowohlt, Reinbek, 2009), pp. 475–485
T. Dresing, T. Pehl, Transkription, in Handbuch Qualitative Forschung in der Psychologie, ed. by G. Mey, K. Mruck (VS-Verlag, Wiesbaden, 2010), pp. 723–733
N. Kiesler, Kompetenzförderung in der Programmierausbildung durch Modellierung von Kompetenzen und Informativem Feedback. Dissertation, Johann Wolfgang Goethe-Universität, Frankfurt am Main, Fachbereich Informatik und Mathematik (2022b)
S. Kowal, D.C. O’Connell, Zur Transkription von Gesprächen, in Qualitative Forschung. Ein Handbuch, ed. by E. von Kardorff, U. Flick, I. Steinke, 7th edn. (Rowohlt, Reinbek, 2009), pp. 437–446
U. Kuckartz, Einführung in die Computergestützte Analyse Qualitativer Daten, 3rd edn. (Springer, Wiesbaden, 2010)
U. Kuckartz, T. Dresing, S. Rädiker, C. Stefer, Qualitative Evaluation: Der Einstieg in die Praxis (VS Verlag für Sozialwissenschaften, Wiesbaden, 2008)
P. Mayring, Qualitative content analysis. Forum Qual. Soc. Res. 1(2), 1–10 (2000)
P. Mayring, Qualitative Inhaltsanalyse, in Qualitative Forschung. Ein Handbuch, ed. by U. Flick, E. von Kardorff, I. Steinke, 7th edn. (Rowohlt, Reinbek, 2009), pp. 468–475
P. Mayring, Qualitative Inhaltsanalyse: Grundlagen und Techniken, 11th edn. (Beltz, Weinheim, 2010)
P. Mayring, Qualitative Inhaltsanalyse: Grundlagen und Techniken, 12th edn. (Beltz, Weinheim, 2015)
C. Schmidt, Analyse von Leitfadeninterviews, in Qualitative Forschung. Ein Handbuch, ed. by U. Flick, E. von Kardorff, I. Steinke, 7th edn. (Rowohlt, Reinbek, 2009), pp. 447–455
T.A. Van Dijk, Macrostructures: An Interdisciplinary Study of Global Structures in Discourse, Interaction, and Cognition (Lawrence Erlbaum, New York, 1980)

Part IV

Results

The fourth part of this book comprises the results of the conducted analyses of university curricula and of the guided interviews with experienced programming educators as experts. Accordingly, this part contains the answers to the five research questions and discussions of the results gathered in the two qualitative content analyses. Moreover, a synopsis of the competencies in introductory programming is presented, and the Anderson Krathwohl Taxonomy is reviewed and adapted. Chapter 8 focuses on the results of the qualitative content analysis of university curricula. It presents the cognitive competencies expected within the first three to four semesters of a computer science degree program at German universities, along with other competencies and competency components. Cognitive competencies are classified into the AKT’s cognitive process and knowledge dimensions. The presentation of results is accompanied by the evaluation of the study’s reliability and a discussion of the results. Chapter 9 follows the same structure by presenting both cognitive and other competencies expected from novice learners from the experts’ perspective. Cognitive programming competencies were classified within the AKT’s cognitive process and knowledge dimensions. The summarizing content analysis of interview transcripts revealed additional competency components, such as dispositions, and emphasized the role of gaining programming experience through time-intensive practice. Moreover, the experts’ responses about factors preventing and contributing to programming competency are summarized. Again, the analysis’ reliability is evaluated, and crucial results are discussed. The last chapter of this part summarizes the results of both data gathering and analysis methods by providing a synopsis of programming competencies and, where possible, their classification within the AKT dimensions. This summary thus addresses cognitive and other competencies, among them dispositions.
Chapter 10 further reviews and discusses the adequacy of the Anderson Krathwohl Taxonomy for the context of introductory programming education. As a result of this discussion, an adapted version of the AKT is suggested. It can serve as a context-specific addition to the generic and discipline-agnostic taxonomy.

Chapter 8

Results of University Curricula Analysis

8.1 Cognitive Competencies

In this section, the competency goals found in the sample of 129 module descriptions from 35 German universities (Kiesler 2020a; Kiesler and Pfülb 2023) are classified in terms of the AKT’s dimensions representing knowledge and cognitive processes (Anderson et al. 2001). Accordingly, inductive subcategories with programming competencies from the material are integrated into the deductive categories representing cognitive process and knowledge dimensions. AKT dimensions are considered inclusive, meaning higher levels include the less complex levels of cognitive processes or knowledge dimensions below. Thus, a competency classified within the application dimension includes the mastery of the levels of remembering and understanding. Learning objectives within the procedural knowledge dimension also include conceptual and factual knowledge. Table 8.1 summarizes the inductive subcategories representing programming competencies and how they are classified within the AKT’s dimensions. The number of occurrences in the material is provided in brackets after each competency. The axes of the original AKT matrix were transposed for reasons of readability. During the coding process, some challenges were observed related to the assignment of inductive subcategories to the AKT dimensions. The initial effort of linguistically smoothing the material proved to be helpful, as it led to more comprehensible expressions in the first place. However, some objectives in the module descriptions remained challenging to understand, despite considering the context unit. In particular, the lack of operationalized verbs or nouns caused this problem. In some cases, competency goals were expressed in a very abstract, general, brief manner, or represented contents only. It was thus impossible to classify them according to the AKT dimensions. For this reason, the category “not operationalized” was added to the coding scheme. It was applied to 245 coding units.
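The inclusive reading of the AKT process dimensions described above can be made explicit in a few lines of code. This is an illustrative sketch, not part of the study’s tooling:

```python
# AKT cognitive process dimensions, ordered from least to most complex.
AKT_PROCESSES = ["remember", "understand", "apply",
                 "analyze", "evaluate", "create"]

def implied_processes(level):
    """Return the given level plus all less complex levels it includes,
    following the inclusive interpretation of the taxonomy."""
    return AKT_PROCESSES[: AKT_PROCESSES.index(level) + 1]

# A competency classified as "apply" entails remembering and understanding:
print(implied_processes("apply"))  # ['remember', 'understand', 'apply']
```

The same inclusive logic applies to the knowledge dimensions, where procedural knowledge subsumes conceptual and factual knowledge.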

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 N. Kiesler, Modeling Programming Competency, https://doi.org/10.1007/978-3-031-47148-3_8


Table 8.1 Overview of inductive categories and their classification within the AKT dimensions (Kiesler 2022, 2020b,c). The number of coding units is given in brackets after each competency.

Level 1 remember
  Factual knowledge:
  - Assign literals to data types (1)
  Conceptual knowledge:
  - Know concepts for modeling algorithms and processes (9)
  - Know elements of GUIs (1)
  - Know methods for the formal definition of programming languages (1)
  - Know distributed systems, parallel programming (1)
  - Know how compilers operate (1)
  - Know structure and principles of networks (1)
  Procedural knowledge:
  - Know terms and categories concerning programming languages and paradigms (50)
  - Know basic algorithms and data structures (39)
  - Know methods of software development (12)
  - Know design patterns for algorithms and data structures (8)
  - Know principles and methods of complexity and efficiency of algorithms (7)
  - Know structure and principles of computers (5)
  - Know implementation methods for data types and structures (4)
  - Process knowledge of run-time analysis (4)
  - Know tools for software development (IDEs, debugger, profiler), their functions and purpose (3)
  - Know quality criteria for source code (3)
  - Know libraries (3)
  - Know basic characteristics of algorithms (3)
  - Know control structures (2)
  - Know concepts for data management (2)
  - Know mathematical basics of algorithms (2)

Level 2 understand
  Conceptual knowledge:
  - Describe concepts of programming paradigms and languages (11)
  - Explain problems that can be solved by algorithms (3)
  - Explain terms of formal verification techniques (2)
  Procedural knowledge:
  - Characterize algorithms, data structures and data types (9)
  - Justify the use of software development tools (2)
  - Explain a software’s architecture and its development (2)

Level 3 apply
  Procedural knowledge:
  - Use tools for software development (30)
  - Perform mathematical calculations and encodings (5)
  - Use existing libraries (5)
  - Apply quality criteria to source code (4)
  - Document programs professionally (4)
  - Use UNIX systems via the command line (1)
  - Use LaTeX (1)
  - Use computers (1)

Level 4 analyze
  Conceptual knowledge:
  - Characterize programming languages and paradigms by outlining their inner structure (6)
  Procedural knowledge:
  - Analyze complexity of algorithms (24)
  - Break down given problems into smaller components (20)
  - Analyze adequacy and characteristics of algorithms (9)
  - Being able to read, explain and identify the output of (foreign) code (8)
  - Analyze adequacy and characteristics of data structures (6)
  - Characterize formal languages via the Chomsky hierarchy (1)
  - Outline limits of the computational power of computer models (1)

Level 5 evaluate
  Procedural knowledge:
  - Assess adequacy of algorithms, data structures and data types (34)
  - Testing of algorithms and programs for errors (19)
  - Assess the complexity of algorithms (14)
  - Evaluate properties of algorithms and programs (10)
  - Evaluate properties of algorithms and programs (5)
  - Assess adequacy of programming tools and templates (2)
  Meta-cognitive knowledge:
  - Evaluate adequacy of one’s own self-concept (1)

Level 6 create
  Procedural knowledge:
  - Write executable programs to solve problems (80)
  - Design problem-adequate algorithms with design patterns (41)
  - Implement algorithms (33)
  - Modeling of problems and programs (26)
  - Design smaller programs in a structured way (17)
  - Use standard algorithms (16)
  - Use standard data structures (14)
  - Design data structures (12)
  - Adapt standard data structures (8)
  - Adapt standard algorithms (7)
  - Extend or adapt given program code (5)
  - Specify and implement (abstract) data types (5)
  - Design formal syntax descriptions (4)
  - Design program specifications (3)
  - Develop larger programs/applications (2)
  - Develop formal tools and test cases (2)
  - Programming of GUIs (2)
  - Design concurrent or parallel processes (2)
  - Develop libraries (1)
  - Call methods (1)
  - Generate objects (1)
  - Design interfaces (1)
  Meta-cognitive knowledge:
  - Transfer of knowledge and skills to new problems and programming languages (13)
  - Self-regulated organization of the learning process (7)
  - Abstraction of problems (2)


In other cases, the lack of crucial syntactic constituents caused the assignment of the “not operationalized” category. In extreme cases, the universities had only listed nouns, i.e., lists of contents, instead of competency goals, which is why those were excluded from further analyses. The following list provides some examples of challenging, non-operationalized expressions (after linguistic smoothing) from the data:

1. “First encounter with concurrent algorithms.” (UC02)
2. “On the basis of mathematically rigorous proofs, [students] grasp important areas of theoretical computer science.” (UC01)
3. “The courses of this module teach the fundamentals of software development and computer science.” (UC01)
4. “Students recognize that the acquired knowledge is required to write simple mobile apps.” (UC01)
5. “Being able to reproduce basic computer science concepts.” (UC10)
6. “Being able to explain [basic computer science concepts].” (UC10)

While the first example from the second university curriculum (UC02) does not contain an operationalized learning objective, the second example from UC01 represents a very abstract expression. Neither allows a classification within the cognitive process or knowledge dimensions. In the third and fourth examples, the abstract terms “fundamentals” and “knowledge” led to the categorization as non-operationalized. The lack of context caused the challenges with examples five and six. Despite using operationalized verbs, it was impossible to assess the precise context and, therefore, the complexity of the expected competencies. The listed competencies remained too general and abstract, as the “basic concepts” were not specified. Another aspect worth mentioning is that there were a few occasions where competencies from software development and engineering courses were recognized (e.g., explaining a software’s architecture and its development).
These competencies are later eliminated from the list of basic programming competencies, as they appear less frequently than other commonly expected competencies in introductory programming courses and do not align with the previously defined content area. In the following sections, the results presented in Table 8.1 are summarized and elaborated for each of the six cognitive process dimensions. Moreover, the results are analyzed from the perspective of the knowledge dimensions.

8.1.1 Cognitive Process Dimension Remembering

The curricula data revealed one subcategory of factual knowledge expected from novice learners of programming: the competency to assign literals to data types, classified within the cognitive process dimension remember according to the AKT. Even though no other examples were identified in the sample, this does not mean that factual knowledge is unimportant in introductory programming. On the contrary,


several competencies incorporate factual knowledge components, such as the keywords of programming languages, libraries, or other concepts related to hardware, software, and programming languages. The list of factual knowledge components could be extended almost arbitrarily, yet none of them is explicitly addressed as part of the learning objectives. Thus, the elaboration of competency components within the factual knowledge dimension is often omitted. The omission may be due to the inclusive nature of the higher cognitive process dimensions and the respective competencies: understanding, applying, analyzing, evaluating, and creating always require remembering. Remembering conceptual and procedural knowledge, however, is more explicit in the data. These two knowledge dimensions contain 14 and 147 coding units, respectively, on the level of remembering, within 21 different subcategories. An alternative explanation may be that remembering factual knowledge components is considered trivial, or not worth mentioning.

8.1.2 Cognitive Process Dimension Understanding

Several competencies related to understanding certain concepts were classified as non-operationalized objectives. This is due to the use of “understanding” and “comprehension” as competency goals in several coding units, while observable or measurable actions or outcomes are neglected. For this reason, building inductive subcategories representing competencies was impossible, and the coding units were assigned the “non-operationalized” category. The following three examples from university curricula 02, 03, and 05 illustrate this challenge:

• “[Course participants] understand the main concepts of a high-level programming language and its use.” (UC02)
• “Students acquire a general understanding of the most important aspects of modern software development.” (UC03)
• “[At the end of the modules], [students] can understand [descriptions and source codes of standard algorithms].” (UC05)

The decision to code such cases as “non-operationalized” was based on the incomplete representation of competency within the learning objectives. It remains entirely unclear how “understanding” could be observed or assessed by educators. What should students do to prove their understanding? The examples above do not represent competencies; they rather contradict the concept of competency. The third example in the list further illustrates the discrepancy between understanding the meaning of a concept and the analyses that may be required for it: analyzing source code is stated as the objective, but such analysis is a prerequisite for understanding. In total, 29 coding units representing “understanding” were identified in the university curricula, for which six inductive subcategories were developed.


8.1.3 Cognitive Process Dimension Applying

The cognitive process dimension “applying” is represented in 51 coding units of the university curricula, for which eight inductive subcategories were constructed. Despite the explicit use of the verb “apply” in a learning goal, it was not necessarily coded as the homonymous cognitive process dimension. This is due to the deviating meaning of the lexeme depending on the context. In some cases, applying rather denotes integrating known structures into new ones or into a new context, or assembling a new construct. In such cases, the coding unit was classified as representing a competency within a more complex cognitive process dimension (i.e., create). The following three examples illustrate this aspect:

• “Being able to apply [essential computer science algorithms].” (UC10)
• “Being able to apply [concepts of an imperative programming language].” (UC10)
• “Students will be able to apply their knowledge and understanding to their job or career.” (UC08)

The second example from the list is more problematic, as no context or operationalization is provided. In the third example, the verb “apply” even seems inappropriate when referring to the transfer of meta-cognitive knowledge into professional practice. “Apply” has multiple meanings depending on the context, which is why university curricula must elaborate on them. Otherwise, it may be impossible to communicate the expected competencies to both educators and learners. “Applying” as the cognitive process dimension defined by Anderson et al. (2001) is more directly related to using a program or tool (e.g., Eclipse, Visual Studio, GitHub, etc.). The sample contains several coding units denoting the application of such tools and their functions for editing or debugging program code, e.g., “[Students] will be able to use basic technical tools for software development in teams (e.g., versioning).” (UC11).

8.1.4 Cognitive Process Dimension Analyzing

In the context of programming, “analyzing” as a cognitive process dimension often refers to reading and understanding one’s own programs and unknown programs developed by others. Yet again, a lexeme within the coding units of the curricula data may be misleading. This is due to the necessity of dissecting problems or program code into (solvable) components and clarifying their meaning and contribution to the bigger picture (i.e., the problem or the program code). The following examples illustrate competency goals within the cognitive process dimension “analyzing”:

• “[Students] will be able to read descriptions and source codes of basic algorithms.” (UC05)


• “[Students will be able to] compare [concrete algorithms with respect to their different properties and suitability].” (UC28)
• “To implement these solution strategies, students learn how to approach simple mathematical and technical problems (analysis).” (UC04)

In addition to these representative examples, conducting runtime analyses is classified within the cognitive process dimension of “analyzing”. In sum, 75 coding units within eight subcategories were assigned to this dimension.

8.1.5 Cognitive Process Dimension Evaluating

The fifth cognitive process dimension, “evaluating”, was identified in 85 coding units. Seven inductive subcategories were constructed based on the material. They mostly represent the evaluation of program code as a solution to a problem, the use of algorithms and tools, and their adequacy for a context. In addition, comparisons of algorithms, their runtime, and memory requirements are coded as “evaluating”, as students have to draw conclusions and make judgments. Other examples within this category refer to the execution of (unit) tests, which is why testing was defined as a subcategory. Generally, testing a program or algorithm and its functionality is related to the “evaluating” dimension. The following examples illustrate the results:

• “[Students] can select appropriate algorithms and data structures for basic tasks.” (UC06)
• “[Students can] [...] assure quality using simple (unit) tests.” (UC17)
• “[The students should be able to apply the terms and concepts on algorithms and data structures taught in the course in such a way that they] can evaluate them for complexity.” (UC09)
• “[Professional competency:] [Students] [can describe basic algorithms] [...] by specifying and estimating recurrence equations and their runtime complexity.” (UC35)

During the qualitative content analysis, identifying competency goals within this category seemed more straightforward, as there was less ambiguity in the lexemes “evaluate”, “estimate”, “judge”, or “assess”.

8.1.6 Cognitive Process Dimension Creating

The cognitive process dimension “creating” is the most frequent in the curricula data. It contains 305 coding units within 25 subcategories. Most of the competencies expected from novice programmers were classified within this cognitively complex dimension. This focus illustrates the sophisticated expectations towards students in introductory programming education.


High frequencies were noted for the subcategory “Write executable programs to solve problems” with 80 respective coding units, “Design problem-adequate algorithms with design patterns” with 41 coding units, “Modeling of problems and programs” with 26 coding units, and “Design smaller programs in a structured way” with 17 occurrences in the material. It is worth mentioning the inclusive character of the competency to “Write executable programs to solve problems”, as it includes numerous other competency components. This aspect is illustrated well in the curricula of Lübeck University of Applied Sciences (UC32), which present the competency objectives step-by-step, explicitly mentioning data types, data structures, operators, expressions, and control structures, among other elements. Writing executable programs in a programming language is indeed complex. It requires factual, conceptual, and procedural knowledge to select appropriate algorithms and data structures for a problem. This example further illustrates how lower-level cognitive competencies are required as a basis for more complex ones. The subcategory “Write executable programs to solve problems” is applied to the material whenever a programming language is explicitly mentioned, or if program code is supposed to be executable. Another example representing the “creating” category is “Design problem-adequate algorithms with design patterns”. Here, students are expected to design an algorithmic solution to a new problem. This process requires a constructive effort and a degree of creativity, as the problem has not yet occurred in the learner’s experience. Moreover, a novice may not have the meta-cognitive knowledge to recognize certain patterns among similar problems. Hence, the code is assigned to the material if the student should develop a new algorithm, independently of an executable solution in the form of program code.
Furthermore, modeling problems and programs is classified as a constructive cognitive effort. The development of new models, structure diagrams, or class diagrams requires a prior analysis of the new problem. The development of a respective organizational structure follows, while the learner has to consider basic programming principles. Examples of coding units within this cognitive process dimension include the following:

• “[Students should master the following aspects of an object-oriented programming language using the programming language JAVA as an example:] Representation of object-oriented problems by means of UML class diagrams.” (UC32)
• “Methodological competencies: Students independently develop programs for given problems through the consistent application of the concepts of object-oriented modeling and programming.” (UC21)


8.1.7 Knowledge Dimensions

As part of the qualitative content analysis, the classification of coding units into the four knowledge dimensions according to Anderson et al. (2001) took place as a second step, after the assignment to cognitive process dimensions. As mentioned, only one instance of factual knowledge was identified, as part of the cognitive process of “remembering” (i.e., “Assign literals to data types” in UC34). Conceptual knowledge was more frequent than factual knowledge, with 36 instances in the dimensions “remembering”, “understanding”, and “analyzing”. Procedural knowledge was the most common knowledge dimension in the data, with 647 occurrences along all six cognitive process dimensions. The meta-cognitive knowledge dimension was assigned to 23 coding units. Table 8.2 represents the distribution of knowledge and cognitive process dimensions. The table summarizes the frequencies of coding units within all of these dimensions, excluding non-operationalized coding units. The most frequent procedural knowledge examples are situated within the cognitive processes of “remembering” (147 coding units) and “creating” (283 coding units). This distribution makes the highly sophisticated expectations towards novice programmers transparent: creating procedural knowledge is expected most frequently, and in the form of different competency components, as revealed by the great variety of inductively built subcategories. Remembering procedural knowledge, i.e., techniques, strategies, and methods, is also a common expectation for learners in introductory classes. Overall, most of the competency goals in introductory programming education address procedural knowledge (i.e., how to do something). Meta-cognitive competencies are represented in the sample as well, but less frequently. This may be due to the sample’s focus on introductory programming education.
Other meta-cognitive competencies may be expected in the curricula of the selected computer science degree programs, but perhaps in deviating contexts or with another emphasis. When it comes to the meta-cognitive knowledge dimension, content is merely the starting point. Instead, learners reflect on their strategies

Table 8.2 Matrix with frequencies of knowledge dimensions and cognitive process dimensions (based on Kiesler 2022)

Cognitive     Knowledge dimensions
processes     Factual   Conceptual   Procedural   Meta-cognitive
Remember      1         14           147          –
Understand    –         16           13           –
Apply         –         –            51           –
Analyze       –         6            69           –
Evaluate      –         –            84           1
Create        –         –            283          22
SUM           1         36           647          23
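As a plausibility check, the marginal sums of Table 8.2 can be recomputed from its cell frequencies. The following sketch simply re-tallies the table; the string labels are shorthand for the dimensions, not part of the coding scheme:

```python
from collections import defaultdict

# Cell frequencies from Table 8.2:
# (cognitive process, knowledge dimension) -> number of coding units
cells = {
    ("remember", "factual"): 1, ("remember", "conceptual"): 14,
    ("remember", "procedural"): 147,
    ("understand", "conceptual"): 16, ("understand", "procedural"): 13,
    ("apply", "procedural"): 51,
    ("analyze", "conceptual"): 6, ("analyze", "procedural"): 69,
    ("evaluate", "procedural"): 84, ("evaluate", "meta-cognitive"): 1,
    ("create", "procedural"): 283, ("create", "meta-cognitive"): 22,
}

totals = defaultdict(int)
for (_, knowledge), count in cells.items():
    totals[knowledge] += count

print(dict(totals))
# matches the SUM row: factual 1, conceptual 36, procedural 647, meta-cognitive 23
```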


and actions, think about what they know, or prepare certain (learning) activities (Anderson et al. 2001). According to Hasselhorn and Gold (2009), knowledge about one’s cognitive system, i.e., one’s self-competency, learning competency, information competency, and self-control, plays a crucial role. The same applies to the awareness of one’s cognitive abilities, emotions, and states. Students thus apply, analyze, evaluate, and generate systemic and epistemic knowledge, for example, by quickly learning new programming languages (Hasselhorn and Gold 2009). The following examples illustrate coding units addressing meta-cognitive competency components:

• “Upon successful participation in the software development internship, participants should have the confidence to start student jobs in the IT industry.” (UC03)
• “[Students] should develop the ability to independently research programming documentations to find out details of the programming language and to be able to use them (instrumental competence).” (UC16)
• “Methodological competency: ability to quickly learn any programming language.” (UC19)

8.2 Other Competencies

After assigning the deductive AKT categories to the material, inductive subcategories were built to reflect not only cognitive competencies, but also the remaining, other competency components. The inductive categories, therefore, summarize competencies addressed in the 48 remaining coding units not representing cognitive competencies. The resulting categories and frequencies are as follows (Kiesler 2020c, 2022):

• Social-communicative competencies (37 in sum)
  – Cooperate and collaborate with others (22): Students can work together in a team on (software) projects and problems to achieve shared solutions. The focus is on successfully working together.
  – Communicate (13): Students can participate in and lead group discussions, exchange ideas in groups, create presentations in groups, and finally present them.
  – Organization of teamwork and projects (2): The students can coordinate teamwork, and plan, execute, monitor and, if necessary, control collaborative task steps so that projects/tasks can be completed successfully.
• Gaining programming experience (10)
  – Students practice time-intensively and work on exercises and tasks to increase their body of knowledge (including problems, tasks, design patterns, strategies, and solutions).


8 Results of University Curricula Analysis

• Increasing consciousness of IT security (affective) (1)
  – Students develop a sensitivity (value) towards security issues in computer science.

Gaining programming experience is hard to classify within the dimensions of the Anderson Krathwohl Taxonomy. However, the explicit mention of programming practice and experience in the university curricula reveals its contribution to developing programming competency. Students have to become familiar with many different problems and with strategies for solving them. It is thus common that a new task can be solved by adapting the solution to a similar, well-known problem. Experience can thereby help generate new knowledge and consciousness about that knowledge (Hasselhorn and Gold 2009), contributing to meta-cognitive knowledge and respective competencies. It was surprising, though, to find experience explicitly mentioned as a competency goal in the university curricula, with ten occurrences.

Other dispositional competency components (Clear et al. 2020) address social-communicative aspects related to programming competency. They are the most common among the other competencies. Respective coding units focus on teamwork, thus cooperating and collaborating with others in groups, which was expressed in 22 coding units. Such group work also entails communicative opportunities, so communication was also present in the data (in 13 coding units). Related to that is the organization of teamwork and projects, which was identified in two coding units. The social-communicative competencies also exhibit proximity to self-competencies and other meta-cognitive competencies, such as organizing a project. Defining a clear-cut boundary between these competencies is challenging, if not counterproductive, as intra- and interpersonal competencies are interrelated, depend on the context, and relate to the whole person (Fink 2013).

8.3 Reliability

The intracoder-reliability of the developed categories was evaluated two weeks after the completed analysis. It was applied to the curricula data of five universities (UC01 to UC05), which corresponds to 14% of the material. Since segments (i.e., coding units) had been defined prior to the analysis, there were no deviations between the two coding processes. MAXQDA supports the computation of the coefficient kappa (κ) according to Brennan and Prediger (1981). A requirement for this coefficient is the use of fixed, predefined segments, which is the case in the present analysis. The determination of κ according to Brennan and Prediger (1981) results from p_observed, the simple agreement in percent, and p_chance. By using MAXQDA, κ was determined as 0.97. However, not all categories were used concisely. For example, implementing algorithms and implementing programs are hard to separate. The competency to

implement programs is included within the competency to implement algorithms. Furthermore, implementing programs is partially covered by the category "writing executable program code to solve problems". This explains some of the diverging results between the two coding runs. Furthermore, there were occasional discrepancies regarding non-operationalized competency objectives, as the initial coding run was quite generous. In addition, the categories "know principles and methods of programming languages and paradigms" (procedural knowledge) and "describe concepts of programming paradigms and languages" (conceptual knowledge) caused disagreement in the second coding run. In this case, conceptual knowledge is more abstract than procedural knowledge of basic methods and other principles and functionalities of programming languages and paradigms. Moreover, the term "concepts" is somewhat misleading. While the category refers to concepts, the competency goals include methods, strategies, and thus procedural knowledge, meaning how to do something. In this case, the knowledge dimensions, as described by Anderson et al. (2001), appear to intertwine, or illustrate that the knowledge dimensions of the AKT are arranged within a continuum.

To obtain the presented intracoder-reliability, any changes, irregularities, and inconsistencies were noted and, if necessary, added to the coding scheme as part of each run through the material (e.g., if a coding unit reflected two different codes). Overall, the coding scheme and categories were confirmed in the second run two weeks later, which supports the assumed intracoder-reliability. Due to the sheer amount of material and the lack of external resources or funding, reliability testing by a second coder was not feasible. To further ensure the study's validity, all analysis steps were thoroughly conducted in alignment with the established guidelines and rules.
The goal of this rule-guided process was to support an inter-subjective understanding and reproducibility. For this reason, the sampling process, for example, is documented step by step. In addition, the coding scheme and its categories were further confirmed by the expert interviews and the communicative validation. As a part of this process, categories were checked for internal consistency and homogeneity (Mayring 2015).
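The Brennan and Prediger (1981) coefficient reported above assumes a fixed set of q equally likely categories, so chance agreement is simply 1/q. A minimal sketch of the computation in Python (the values in the usage example are illustrative and do not reproduce the study's data):

```python
def brennan_prediger_kappa(p_observed: float, num_categories: int) -> float:
    """Brennan-Prediger kappa for fixed, predefined segments.

    Chance agreement is fixed at 1/q for q categories, rather than
    estimated from the coders' marginal distributions as in Cohen's kappa.
    """
    p_chance = 1.0 / num_categories
    return (p_observed - p_chance) / (1.0 - p_chance)

# Illustrative values only: perfect agreement yields kappa = 1.0,
# and agreement at chance level (0.5 with two categories) yields 0.0.
print(brennan_prediger_kappa(1.0, 4))  # 1.0
print(brennan_prediger_kappa(0.5, 2))  # 0.0
```

Because p_chance does not depend on how often each category was actually used, the coefficient is well suited to analyses like the present one, where segments are fixed in advance.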

8.4 Discussion of Results

First of all, the results displayed in Table 8.1 reveal that the cognitive competencies addressed in higher education programming modules can be classified in terms of the Anderson Krathwohl Taxonomy (Anderson et al. 2001). However, some gaps in the table can be noted, e.g., in the factual and meta-cognitive knowledge dimensions. Although meta-cognitive competencies were less frequent, self-reflection, self-knowledge, strategic knowledge, and knowledge about cognitive tasks were identified in the material. In contrast, competency components that address students' motivation or volition were not identified in the sample. The same applies to observable outcomes related to some dispositions (Clear et al. 2020; Kiesler et al.

2023). Instead, one affective competency goal was part of the curricula data (being cautious about IT security issues). Moreover, gaining programming experience (10 codes) and developing social-communicative competencies (37 codes) were explicitly addressed as competency goals. Furthermore, the data confirm the arrangement of the revised taxonomy by Anderson et al. (2001) as opposed to Bloom's original taxonomy (1956). In the data, competencies reflecting the cognitive process of "creating" are described in a way that supports the hypothesis that "evaluating" precedes the creative processes. The assessment of an algorithm's adequacy for a problem, for example, is required before its implementation. One exemplary module from the Technische Universität Ilmenau (UC35) illustrates this gradually increasing complexity by distinguishing the identification, evaluation, and implementation of problem-adequate and applicable algorithms (in that order).

Another aspect worth discussing relates to the cognitive process dimension "understanding", which is evident in only 29 coding units of the sample. One main challenge was that the actual "understanding" of concepts was rarely expressed in an operationalized form, meaning that institutions did not formulate the expected competencies in an observable or assessable way. As a result, the category "non-operationalized" was assigned to more coding units (245) than expected. One cause was thus the lack of clarity and consideration of the competency concept, which negatively impacted comprehension and the qualitative analysis. Another cause was likely the primary objective of the analyzed curricula and documents. In higher education, module descriptions are used, for example, to receive accreditation or re-accreditation of study programs, meaning these documents may be composed under pressure. The authors of module handbooks are usually faculty members or teaching staff, none of whom receives pedagogical training by default.
As a result, the concept of competency or cognitive processes may not have been considered at all. The curricula data contained only a few teaching and learning objectives at the levels "understanding" and "applying". In contrast, many coding units address more complex cognitive processes and "creating" as the most sophisticated process. This raises the question of how learners are instructed and what educators do to facilitate "remembering", "understanding", and "applying" in their courses to help prepare students for more complex tasks. Considering the concept of Constructive Alignment (Biggs 1996; Biggs and Tang 2011) yet again raises questions concerning the assessment of these lower-level competencies. The impression is that introductory programming courses rarely address "remembering", "understanding", and "applying" in their curricula and in-class activities.

Similarly, the limited occurrence of "applying" (in 51 coding units) should be noted. As mentioned, there is a discrepancy between the application of tools and, for example, the application of design patterns. The lexeme "applying" can have multiple meanings, depending on the context. In several cases, "applying" referred to creating new algorithms, programs, and thus new, previously unknown solutions, which denotes the most complex cognitive process, "creating". Therefore, the category "applying" was used for fewer coding units than expected at first glance at the material.


Accordingly, it is important to redefine or specify this cognitive process dimension and align it to the reality and context of programming education. Contrary to the few occurrences of competencies within lower-level cognitive process dimensions, many coding units were assigned to the more complex processes, especially to "creating" as the most complex one (Anderson et al. 2001). This process was identified in 305 coding units, which referred to procedural knowledge and, in some cases, meta-cognitive knowledge. Due to this sheer volume, constructing inductive subcategories proved valuable for a more detailed understanding of the curricula data. The subcategories in particular reflect the multiple competencies expected from novices and their cognitive complexity. Students may have to develop several competencies before being able to successfully write an executable program in a programming language to solve a problem, for example:

• Design problem-adequate algorithms with design patterns
• Implement algorithms
• Modeling of problems and programs
• Design smaller programs in a structured way
• Use standard algorithms
• Use standard data structures
• Design data structures
• Adapt standard data structures
• Adapt standard algorithms
• Specify and implement (abstract) data types
• Extend or adapt given program code
• Design program specifications
• Develop larger programs/applications
• Develop formal tools and test cases
• Programming of GUIs
• Design concurrent or parallel processes
• Develop libraries
• Call methods
• Generate objects
• Design interfaces
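The kind of task behind subcategories such as "Use standard algorithms" and "Adapt standard algorithms" can be illustrated by a small, hypothetical Python exercise (not taken from the analyzed curricula): students first apply a textbook binary search, then adapt it so that a missing element yields its insertion position instead of -1.

```python
def binary_search(items, target):
    """Standard algorithm: return the index of target in a sorted list, or -1."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def insertion_index(items, target):
    """Adaptation: return the position where target would keep the list sorted."""
    lo, hi = 0, len(items)
    while lo < hi:
        mid = (lo + hi) // 2
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo

print(binary_search([1, 3, 5, 7], 5))    # 2
print(insertion_index([1, 3, 5, 7], 4))  # 2
```

Even this small adaptation requires understanding the invariant of the original algorithm before changing it, which illustrates why "adapting" sits among the most complex, "creating"-level subcategories rather than mere "applying".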

This list of subcategory examples can easily be extended with other competencies expected from novice learners of programming. A resulting, open question is whether the creation of program code may even require an entirely new cognitive process dimension, or at least an adaptation of the AKT dimensions and definitions. Another aspect is that learners need to master all five lower levels before succeeding with the most complex one. The act of writing program code and developing algorithms is considered procedural knowledge, as the focus is on how to do something (Anderson et al. 2001). This aspect alone makes the extremely high demands placed on novice programmers more transparent and recognizable while providing insights into the challenges within introductory programming education. In this context, it is worth mentioning Scott (2003) and his observations related to educators' challenges in classifying tasks within the AKT dimensions.


Educators tended to think they only expected "applying" from students, whereas Scott classified the same task as more complex, namely within the "creating" dimension. One consequence of such an underestimation is that the cognitive processes in between (i.e., "applying", "analyzing" and "evaluating") are likely not addressed via instruction and learning activities, which, in turn, can cause challenges for learners. Developing competencies at these intermediate levels is assumed to be a crucial prerequisite for developing algorithms and programs. Hence, it is important to explicitly express learning objectives and competency goals at all cognitive process dimensions in module descriptions, but also in the classroom, learning activities, and assessments.

Finally, the meta-cognitive knowledge dimension is discussed. The inductive subcategories reflecting competencies expected from novices comprise several sophisticated aspects, such as self-regulation, self-knowledge and reflection, and learning strategies. These are extremely important for students, as they affect learning processes in all courses. However, they are rarely outlined in the curricula. Only a few programming modules incorporate meta-cognitive competencies such as self-reflection and, for example, social-communicative competency components. While some universities do offer dedicated courses on presentation skills, scientific work, and intrapersonal competencies, the question arises whether and how such competencies could be promoted within programming courses (Kiesler 2020a,b,c,d; Kiesler and Thorbrügge 2023). After all, they do affect students' performance and (learning) strategies. The occurrences of these meta-cognitive competencies, including self-knowledge, yet again support the hypothesis that learning is related to the whole person (Fink 2013; Raj et al. 2021a).
Defining clear-cut boundaries between meta-cognitive competencies and dispositions such as self-directedness and related behaviors also seems to be an unresolved challenge (Clear et al. 2020; Impagliazzo et al. 2022; MacKellar et al. 2023; Kiesler et al. 2023; Sabin et al. 2023).

References

L.W. Anderson, D.R. Krathwohl, P.W. Airasian, K.A. Cruikshank, R.E. Mayer, P.R. Pintrich, J. Raths, M.C. Wittrock, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives (Addison Wesley Longman, New York, 2001)
J. Biggs, Enhancing teaching through constructive alignment. Higher Edu. 32(3), 347–364 (1996)
J. Biggs, C. Tang, Teaching for Quality Learning at University (McGraw-Hill Education, New York, 2011)
B.S. Bloom, Taxonomy of Educational Objectives: The Classification of Educational Goals. Cognitive Domain (Longman, New York, 1956)
R.L. Brennan, D.J. Prediger, Coefficient kappa: some uses, misuses, and alternatives. Edu. Psychol. Measur. 41(3), 687–699 (1981)
A. Clear, A. Parrish, P. Ciancarini, S. Frezza, J. Gal-Ezer, J. Impagliazzo, A. Pears, S. Takada, H. Topi, G. van der Veer, A. Vichare, L. Waguespack, P. Wang, M. Zhang, Computing curricula 2020 (CC2020): paradigms for future computing curricula. Technical Report, Association for Computing Machinery/IEEE Computer Society, New York (2020). http://www.cc2020.net/


L.D. Fink, Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses (Wiley, Hoboken, 2013)
M. Hasselhorn, A. Gold, Pädagogische Psychologie: Erfolgreiches Lernen und Lehren, 2nd edn. (W. Kohlhammer Verlag, Stuttgart, 2009)
J. Impagliazzo, N. Kiesler, A.N. Kumar, B. MacKellar, R.K. Raj, M. Sabin, Perspectives on dispositions in computing competencies, in Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education Vol. 2, ITiCSE'22 (ACM, New York, 2022), pp. 662–663
N. Kiesler, Kompetenzmodellierung für die grundlegende Programmierausbildung – Eine kritische Diskussion zu Vorzügen und Anwendbarkeit der Anderson Krathwohl Taxonomie im Vergleich zum Kompetenzmodell der GI, in DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., online, 14.–18. September 2020, ed. by R. Zender, D. Ifenthaler, T. Leonhardt, C. Schumacher. Lecture Notes in Informatics, vol. P-308 (Gesellschaft für Informatik e.V., 2020a), pp. 187–192
N. Kiesler, On programming competence and its classification, in Koli Calling'20: Proceedings of the 20th Koli Calling International Conference on Computing Education Research (Association for Computing Machinery, New York, 2020b)
N. Kiesler, Towards a competence model for the novice programmer using Bloom's revised taxonomy – an empirical approach, in Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE'20 (Association for Computing Machinery, New York, 2020c), pp. 459–465
N. Kiesler, Zur Modellierung und Klassifizierung von Kompetenzen in der grundlegenden Programmierausbildung anhand der Anderson Krathwohl Taxonomie. CoRR abs/2006.16922 (2020d). https://arxiv.org/abs/2006.16922
N. Kiesler, Kompetenzförderung in der Programmierausbildung durch Modellierung von Kompetenzen und informativem Feedback. Dissertation, Johann Wolfgang Goethe-Universität, Frankfurt am Main, Fachbereich Informatik und Mathematik (2022)
N. Kiesler, B. Pfülb, Higher education programming competencies: a novel dataset, in Artificial Neural Networks and Machine Learning – ICANN 2023. Lecture Notes in Computer Science (Springer, Cham, 2023)
N. Kiesler, C. Thorbrügge, Socially responsible programming in computing education and expectations in the profession, in Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education Vol. 1, ITiCSE 2023 (Association for Computing Machinery, New York, 2023)
N. Kiesler, B.K. MacKellar, A.N. Kumar, R. McCauley, R.K. Raj, M. Sabin, J. Impagliazzo, Computing students' understanding of dispositions: a qualitative study, in Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education Vol. 1, ITiCSE 2023 (Association for Computing Machinery, New York, 2023)
B.K. MacKellar, N. Kiesler, R.K. Raj, M. Sabin, R. McCauley, A.N. Kumar, Promoting the dispositional dimension of competency in undergraduate computing programs, in 2023 ASEE Annual Conference & Exposition (ASEE Conferences, 2023). https://peer.asee.org/43018
P. Mayring, Qualitative Inhaltsanalyse: Grundlagen und Techniken, 12th edn. (Beltz, Weinheim, 2015)
R. Raj, M. Sabin, J. Impagliazzo, D. Bowers, M. Daniels, F. Hermans, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, S.W. Nabi, M. Oudshoorn, Professional competencies in computing education: pedagogies and assessment, in Proceedings of the 2021 Working Group Reports on Innovation and Technology in Computer Science Education, ITiCSE-WGR'21 (Association for Computing Machinery, New York, 2021a), pp. 133–161
M. Sabin, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, R.K. Raj, J. Impagliazzo, Fostering dispositions and engaging computing educators, in Proceedings of the 54th ACM Technical Symposium on Computer Science Education Vol. 2, SIGCSE 2023 (Association for Computing Machinery, New York, 2023)
T. Scott, Bloom's taxonomy applied to testing in computer science classes. J. Comput. Sci. Colleges 19(1), 267–274 (2003)

Chapter 9

Results of Guided Expert Interviews

9.1 Cognitive Competencies

This section introduces the inductively formed categories identified during the expert interview analysis. Table 9.1 shows the inductive categories representing programming competencies and their classification within the AKT dimensions, reflecting their cognitive process and knowledge dimension. The categories' frequencies are indicated in parentheses after every competency category. This classification is analogous to the results of the qualitative content analysis of university curricula. Again, the dimensions are defined by their inclusive character, as higher dimensions include corresponding competencies in lower dimensions. Accordingly, a competency goal at the level "applying", for instance, the application of tools or IDEs, includes the achievement of the cognitive process dimensions "remembering" and "understanding" such IDEs and their functionality. Similarly, learning objectives representing procedural knowledge include mastering corresponding conceptual and factual knowledge.

In the analysis of the interview data, a total of 185 codes were assigned to segments containing cognitive competencies. The programming competencies identified in the data are distributed across all six cognitive process dimensions, whereas the majority can be found within the procedural knowledge (98 codes) and meta-cognitive knowledge (82 codes) dimensions (Anderson et al. 2001). Programming competencies within the factual and conceptual knowledge dimensions are rare, with one category each. It should further be noted that some categories developed during the qualitative content analysis of curricula could be reused in the analysis of the interview transcripts, as the experts expressed similar expectations. Overall, the focus of introductory programming seems to be on procedural knowledge, with most competency goals at the level of "creating".
Due to the given context of the interviews and the possibility of asking specific follow-up questions, none of the segments had to be coded as "non-operationalized", which was the case for several coding units of the curricula data.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
N. Kiesler, Modeling Programming Competency, https://doi.org/10.1007/978-3-031-47148-3_9


Table 9.1 Overview of inductive categories as a result of the expert interviews and their classification within AKT dimensions (Kiesler 2022b, 2020a,b)

Factual knowledge
• Level 1, remember: Know elementary programming language constructs (3)

Conceptual knowledge
• Level 2, understand: Describe concepts of programming paradigms and languages (2)

Procedural knowledge
• Level 1, remember: Know object-oriented programming (1); Know technological basics (1)
• Level 2, understand: Comprehend compiler and interpreter messages (5); Explain one's own code (4); Understand foreign code (1); Determine the output of foreign code (1)
• Level 3, apply: Use tools for software development (5); Apply quality criteria to source code (3)
• Level 4, analyze: Break down given problems into smaller components (11); Debugging of programs (4)
• Level 5, evaluate: Assess adequacy of algorithms, data structures and data types (2); Assess adequacy of solutions written in programming languages (2); Testing of algorithms and programs for errors (1)
• Level 6, create: Write executable programs to solve problems (17); Use logical expressions and operators to solve problems (9); Modeling of software (projects) (7); Write a method (4); Write single classes (3); Modeling of problems and programs (3); Extend given code (3); Write a program with several classes (2); Add code to a given method declaration (2); Write code (2); Use programming language constructs correctly (2); Write loops (1); Design problem-adequate algorithms (1); Write class(es) and corresponding methods (1)

Meta-cognitive knowledge
• Lower levels: Taking responsibility for learning processes (10); Use of external resources for studying (9); Self-reflection (6)
• Level 6, create: Develop a systematic approach to problem-solving (26); Self-regulated organization of learning process (18); Abstraction of problems and rules (e.g., recursion) (10); Transfer (3)


The expert interviews provided new insights into the programming competencies expected of novice learners, as the experts supported their expectations with examples. In the third interview (EI03), an expert elaborated on how they observe and assess the abstraction of problems and rules, giving recursion as an example. Recursion was expressed as the transformation of mathematical notation into program code, which is what the expert addressed as abstraction. Another expert (EI06) similarly addressed this ability and the concept of recursion:

Then there is recursion, where what you write down can no longer be observed directly during program execution. That means there is a huge step, and you need mathematics, similar to an induction proof. For many students, this is not feasible at all. (EI06)

This quote indicates that abstraction, or the transfer of rules into mathematical structures, is an important prerequisite for successful programmers. In the coding scheme, the inductively formed subcategory “Abstraction of problems and rules (e.g., recursion)” is defined as follows:

Abstraction of Problems and Rules (e.g., Recursion) Students can consciously use their knowledge of their cognitive system to develop new knowledge and strategies for problem-solving. Recursion, for instance, requires the translation of rules (given in natural language) into variables and other mathematical notations, which involves the notation of base cases, recursive calls, and a condition to terminate. Respective meta-cognitive strategies must be developed as a prerequisite, whereas programming experience supports gaining this competency.
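The translation step named in this definition can be illustrated with a deliberately simple, hypothetical Python example (not taken from the interview data): the natural-language rule "the factorial of n is n times the factorial of n − 1, and the factorial of 0 is 1" must be rendered as a base case, a recursive call on a smaller input, and a guarantee of termination.

```python
def factorial(n: int) -> int:
    """Recursive factorial: a natural-language rule translated into code."""
    if n == 0:                    # base case: terminates the recursion
        return 1
    return n * factorial(n - 1)   # recursive call on a smaller input

print(factorial(5))  # 120
```

As the expert quote suggests, the structure resembles an induction proof: the base case anchors it, and each recursive call reduces n toward that case, which is why termination is guaranteed for non-negative inputs.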

The guided expert interviews further led to the identification of more meta-cognitive competencies, relating to planning one's learning and taking responsibility for and organizing one's learning processes. Moreover, the development of systematic problem-solving approaches was found in the material (Hasselhorn and Gold 2009), as the following examples illustrate:

In addition to openness, structured thinking plays a role. (EI05)

You have to think algorithmically, and think about what needs to be done and in which order. (EI06)

The experts expressed the transfer of knowledge and skills to new problems as another expectation. Such a transfer is often assessed in final exams, and it is characterized as a competency resulting from practice and experience: Students have to manage the transfer from the tasks in the exercises to the exam tasks. Whereas I practice actual exam tasks beforehand. (EI062)

Experts thus define transfer as the awareness of previously gathered knowledge and skills, which is, in turn, required to apply that knowledge in other contexts, thereby creating new knowledge and skills. This process likely involves a critical

reflection of known structures or patterns to create something new, which results in the classification as a meta-cognitive knowledge dimension. The experts further noted that experience seems to enhance the development of transfer competency.

9.2 Other Competencies

Inducing categories from the interview material enabled the identification of a set of other competencies and components that novice learners of programming are expected to develop in higher education. In this section, these competencies are introduced, defined, and briefly discussed. The following categories were developed by analyzing a total of 187 coding units. They are listed below according to their frequency in the interview data. For each category, a short description of its meaning is provided.

Attitude (89)

• Enthusiasm for problem-solving and computer science (24)
  – Students are interested in computing and related topics (e.g., mathematics) and hobbies. They enjoy programming, trying new solutions, or solving puzzles or riddles. However, enthusiasm and interest alone do not automatically lead to academic success. While a high motivation to learn may compensate for lower cognitive abilities in simple tasks, this interaction does not apply to more difficult tasks. Accordingly, a high level of enthusiasm or motivation for a topic does not automatically lead to the mastery of cognitively demanding tasks (Hasselhorn and Gold 2009).
• Persistence despite frustration (23)
  – Students can continue working on a solution to a problem even if it is difficult and cannot be solved immediately. Despite time-intensive requirements, frustration during problem-solving due to continuous negative feedback from the compiler, and a lack of a sense of achievement, students continuously work on their solutions. As it takes time to experience the first moments of success, students need stamina and ambition to reach this point so they do not drop out of the programming course or their degree program.


• Willingness to learn, openness and flexibility (20)
  – Students are ready to perceive and process new concepts, even if they seem difficult. They are open to learning and willing to try new things. The willingness to try out new strategies is included in this concept, as programming always requires flexibility in approaching problems to make adequate decisions. This code is also applied in the data analysis if the experts describe instances where this willingness to learn is not present, for instance, if students are reluctant to approach new and unfamiliar topics.
• Voluntary participation (10)
  – Students are willing to work on given tasks to achieve the best possible results and learning progress.
• Active and regular participation (9)
  – Students actively and regularly participate in the course. Regular participation implies consistent engagement throughout the semester, which is important as fundamental programming concepts cannot be acquired within two weeks before the exam. This code is also applied in the data analysis if the experts describe instances where this active participation is lacking, for example, when students avoid exercises or do not engage in group work or in-class activities.
• Joy and pride over own results (3)
  – Students experience their competence by developing functioning programs. This sense of accomplishment manifests through joy and pride, and thus positive emotions about their results. It can enhance motivation, perseverance, and attitude or self-efficacy in the following learning activities.

Self-competencies (43)

• Prior competencies (23)
  – Students can learn programming regardless of their prior knowledge. Nonetheless, students' prior experiences and competencies are subsumed under this category. Due to students' varying backgrounds (e.g., school types, further training, or practical experience), prior competencies are diverse and cannot be generalized. Some students

enter their study program without prior knowledge or context-related skills from K-12 education. However, students can compensate for the lack of prior computing competencies through practice.
• Adequate self-confidence in one's abilities (14)
  – Students are aware of their own (academic and programming) competencies and have confidence in their ability to solve new tasks, even if they have not yet developed all expected competencies.
• Creativity (6)
  – Students can use creativity in addition to their intuition, process-oriented thinking, and systematic approaches (i.e., cognitive and meta-cognitive competencies) to develop solutions. Through a flexible, open approach, they can generate new ways of problem-solving (Kiesler 2022c).

Gaining Programming Experience via Practice (38)

• Students gain programming experience through time-intensive practice. Practice makes students familiar with a wide range of problems and their solutions. By thoroughly working through programming tasks multiple times, students can draw on previously gained problem-solving experiences and apply them to approach new problems. This experience generates new cognitive competencies and increases students' sensitivity to new cognitive actions (Hasselhorn and Gold 2009).

Social-Communicative Competencies (17)

• Cooperate and work with others (10)
  – Students can collaboratively work in teams on (software) projects and problems to achieve solutions together. The emphasis is on collaborative work and socially compatible behavior.


9 Results of Guided Expert Interviews

• Communicate (7) – Students can communicate appropriately and respectfully with fellow students, teachers, and tutors while working in teams on (software) projects and problems. This includes respectful listening.

The experts frequently expressed aspects related to students' attitudes and mindsets (within 89 coding units), which contrasts with the expected competencies defined in the module descriptions of the university curricula. Social-communicative competencies such as cooperation, collaboration, and communication were explicit in the module descriptions. Regarding attitude and mindset, it is important to note that the experts mentioned students' willingness and openness to learn and try new things as part of programming competency. This aspect describes the flexibility required in programming, which enables problem-appropriate decisions. In addition, the experts often described the need for self-regulation in difficult situations, which refers to continuing to solve problems even when it takes time to achieve a solution and frustration is involved. If students maintain these efforts over a longer time without giving up, they will eventually experience their first successes. Two of the experts were convinced that perseverance and ambition are necessary, as learners need to motivate themselves not to give up during challenges. The following three quotes illustrate the importance of students' attitudes:

But the important thing is to independently try to reach the solution and not give up beforehand. (EI01)

The second thing is, I would not call it persistence, but rather determination. When you have a program that simply does not run, it's unsatisfying and frustrating. It eats you away. It is also something scary; you cannot deceive yourself about it. The interpreter says the program is wrong. And then you have to improve it until it works, and that can take hours, causing frustration. (EI04)

Programming is similar to doing a puzzle. You need patience and have to rethink, not give up, and have fun when something gets trickier. (EI06)

When examining these exemplary interview excerpts, it becomes apparent that learners may only experience a sense of accomplishment if they are purpose-driven, which serves as positive reinforcement while satisfying their need to experience competency. According to the experts, the willingness to actively and regularly participate in solving exercises (e.g., during tutorial or practical programming sessions) has a similar positive impact on developing programming competency. In basic programming education, most competencies build upon each other, requiring continuous work throughout the entire semester to successfully complete the
module. This situation differs from other disciplines, where the initial semesters may focus more on providing an overview of various topics, sub-disciplines, or periods. According to the experts, voluntary participation also positively affects gaining programming experience. Furthermore, the experts stated that joy and pride over successfully solved tasks are significant factors for constant engagement. Students should enjoy programming, challenging tasks, trial and error, and solving riddles to learn how to program and avoid giving up. Self-competencies thus seem to interact with the competency components the experts described in relation to students' attitudes and mindsets. All of them help students gain more programming experience, which is an essential factor in developing and expanding cognitive and meta-cognitive competencies.

9.3 Factors Preventing Programming Competency

As part of the guided expert interviews, a series of questions concerned challenges students experience while learning to program. The qualitative content analysis of the summarized transcripts resulted in several inductive categories reflecting such challenges. Among them are difficulties for educators and learners, as well as challenges related to the development of cognitive and other competencies. In addition, experts noted further, context-related challenges. All factors preventing or hindering competency development in introductory courses are summarized in the following list. Since numerical values indicating the categories' frequencies are considered irrelevant in this qualitative data, numbers are not added to the list.

• Challenges for educators
  – Designing and evaluating (exam) tasks
  – Simulating human assistance in learning systems
  – Selecting appropriate exam formats
  – Developing appropriate and assessable tasks
  – Creating a learning atmosphere in face-to-face class meetings
  – Limited capacity of teachers for providing feedback
  – Preventing plagiarism
  – Limited competencies of tutors
  – Identifying programming language-independent constructs
  – Demonstrating the relevance of theory for programming
• Challenges for learners
  – Challenges related to cognitive competencies
    · Developing systematic problem-solving skills
    · Abstraction of problems and solutions
    · Transferring knowledge and experience to new tasks and programming languages
    · Using logical expressions and operators for problem-solving
    · Using software development tools
    · Writing programs to solve problems
    · Using external resources for learning
    · Organizing one's learning process
    · Explaining programming concepts
  – Other competencies
    · Self-competency
      Lack of prior knowledge
      Lack of creativity
      Lack of self-confidence in one's abilities
    · Attitude
      Lack of persistence despite frustration
      Lack of enthusiasm for problem-solving and computer science
      Lack of willingness, openness, and flexibility to learn
      Lack of active and regular participation
      Lack of voluntary participation
    · Social-communicative competencies
      Inadequate communication
      Insufficient cooperation and collaboration
• Other context-related challenges
  – Insufficient programming experience due to lack of intensive practice
  – Learning problem-solving strategies specific to computer science
  – Highly interdependent contents that build on each other
  – Learning programming within a limited time
  – Deterrent nerd stereotypes
  – Time constraints due to part-time job
  – Fear of mathematics and formal expressions
  – Non-attendance in class
  – Technical English for computer scientists

The list of challenges and limiting factors is quite extensive. In their role as educators, the interviewed experts mentioned, for instance, their struggle to select relevant topics and problems that are independent of a certain programming language. The advantage of such problems is that their solutions are easily transferable to other programming languages. In addition, experts emphasized the importance of conveying the relevance of theoretical constructs in computer science. Furthermore, the experts reported resource-related issues at their universities. In some cases, rooms are not ideal for creating a positive learning atmosphere during exercises and tutorials, which is due to increasing enrollment numbers and the resulting lack of space at universities. Furthermore, experts experience a lack of capacity to provide formative feedback to all learners (every week). Even when tutors
are available, they may not always provide fully competent feedback, as they may not have fully grasped a concept. Neither do they have the competency to explain complex concepts in a pedagogically reduced manner. These aspects are reflected well in the following excerpts from the interview transcripts:

And then we simply have the difficulty that not all of them are good teachers. They are all good computer scientists, but not all of them are good teachers. And, yes, that is a challenge. (EI04)

When it comes to non-core tiers (e.g., the difference between an expression and a statement, or the purpose of data types), it becomes more challenging for the teaching assistants to provide assistance. (EI06)
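The distinction the expert in EI06 alludes to, namely that expressions evaluate to values while statements perform actions, can be made concrete in a few lines of Python (an illustrative sketch added here, not taken from the interviews):

```python
# An expression evaluates to a value and can be nested inside
# other expressions.
total = (2 + 3) * 4          # "(2 + 3) * 4" is an expression

# A statement performs an action; assignments and loops are
# statements in Python and do not themselves yield a value.
squares = []
for n in range(3):           # "for ..." is a statement
    squares.append(n * n)    # the call's return value (None) is discarded

# Data types determine which operations are valid and what they mean.
print(type(total))           # <class 'int'>
print("2" + "3")             # string concatenation yields "23", not 5
```

Precisely because such distinctions feel trivial to experienced programmers, they are easy for tutors to gloss over when novices ask about them.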

In addition, educators face the challenge of assessing social-communicative competencies as part of, for example, group assignments while at the same time evaluating individual contributions for plagiarism. Several experts discussed student assessment as demanding, as the design and evaluation of exercises and (formative) assessments can be challenging. Providing adequate and individual support to learners is considered resource-intensive. Despite the availability of student assistants and tutors as support, educators cannot guarantee that every learner receives individual and competent feedback on their assignments and progress every week (i.e., during face-to-face meetings). The assessment of programming knowledge, skills, and dispositions is challenging for various reasons, particularly in large-scale courses such as Programming 1 and 2. According to the pedagogical design principle of Constructive Alignment (Biggs 1996; Biggs and Tang 2011), programming should be practiced in a way that allows students to reach the previously defined competency objectives. These very same goals should then be assessed in a format that has previously been practiced in assignments and coursework. This design principle requires the alignment of competency goals, assessment, and teaching and learning activities (e.g., exercises, problems, group work, and projects). Changing assessment formats may even require the adjustment of the respective legal framework, curricula, and module handbooks. Regarding dispositions alone, there is not yet a consensus on how to teach and assess them in the context of computing (Impagliazzo et al. 2022; Kiesler et al. 2023a; Kumar et al. 2023; MacKellar et al. 2023; McCauley et al. 2023; Sabin et al. 2023). When the interviews were conducted, educators still experienced numerous limitations when developing tasks for (online) tools or plug-ins for code verification, input options, or test cases.
These challenges in developing appropriate, automatically evaluated tasks include limitations to awarding partial points and assigning credit to the solution process or other individual aspects. Fully replicating human assistance has not yet been possible in the learning management systems commonly applied at universities. For this reason, experts report students being overwhelmed by the number of possible problem-solving approaches, especially at the beginning of their studies. In programming, there is usually not a single correct solution, but multiple paths to a solution. According to the experts, students can feel overloaded and struggle with this. The availability of several solutions further complicates supporting students through
formative feedback, as every individual student's solution needs to be analyzed and assessed. The advent of Large Language Models (LLMs) may soon significantly ease that limiting factor, due to their performance in solving introductory programming tasks and generating individual feedback on students' program code (Kazemitabaar et al. 2023; Kiesler et al. 2023b; Prather et al. 2023b; Puryear and Sprint 2022; Vaithilingam et al. 2022; Wermelinger 2023; Zhang et al. 2022). The list of factors preventing students from developing programming competency also highlights various competency expectations toward learners. While the experts only mentioned a few cognitive competencies, all other non-cognitive competency components emerge as challenges. Related to that, some other context-specific factors were discussed by the experts. For example, students miss face-to-face class meetings and exercises, and experience time constraints due to their part-time jobs and financial situation. Both these interrelated factors can hinder students' progress in their studies (not just in programming, of course). Furthermore, and this is very specific to introductory programming, basic concepts build upon each other, and the time to learn how to program is limited, which makes time and opportunities to practice particularly valuable for novice learners.
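The experts' observation that there is rarely a single correct solution can be illustrated with a toy task (a hypothetical beginner exercise, "sum the even numbers in a list", with three equally valid approaches):

```python
def sum_even_loop(numbers):
    """Iterative solution: accumulate in a loop."""
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n
    return total

def sum_even_comprehension(numbers):
    """Pythonic solution: generator expression with a filter."""
    return sum(n for n in numbers if n % 2 == 0)

def sum_even_recursive(numbers):
    """Recursive solution: process the head, recurse on the tail."""
    if not numbers:
        return 0
    head, tail = numbers[0], numbers[1:]
    return (head if head % 2 == 0 else 0) + sum_even_recursive(tail)

data = [1, 2, 3, 4, 5, 6]
assert sum_even_loop(data) == sum_even_comprehension(data) == sum_even_recursive(data) == 12
```

All three functions return the same result, yet each would warrant individual feedback on style and approach, which is exactly the assessment burden the experts describe.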

9.4 Factors Contributing to Programming Competency

The guided interviews revealed several factors the experts describe as a contribution to learning programming. Three main subcategories were identified in the interview data. The first subcategory comprises structural measures, such as offering standard exercises, tutorials, or extracurricular programming support courses. The second subcategory includes pedagogical planning measures designed by the educator before the course and in-class meetings. Many of these measures aim to promote students' motivation. The third subcategory summarizes measures related to pedagogical actions during face-to-face, synchronous teaching sessions, which are hard to plan. These include direct interactions and communication with students as well as creating an appropriate learning atmosphere. The full list of subcategories is as follows:

• Structural Measures
  – Offering programming support courses
  – Providing support through tutorials and exercises
• Pedagogical Design Measures
  – Promoting motivation through mandatory tasks
  – Enabling experiences of success and competency
  – Assessing via (online) tools
  – Providing motivating exercises
  – Promoting motivation through practical relevance
  – Offering students freedom and choices
  – Promoting motivation through group work
  – Promoting motivation through bonus points
  – Applying Constructive Alignment
  – Demonstrating the need for action through challenging tasks
  – Designing tasks for advanced students
  – Utilizing gamification
  – Using the Inverted Classroom method
  – Selecting an appropriate programming language
  – Identifying and teaching standard algorithms
  – Motivating through a modern development environment
  – Avoiding publication of model solutions
• Pedagogical interactions during face-to-face classes
  – Interacting and communicating with students
  – Explaining abstract concepts through illustrations
  – Conveying motivation through personal attitude
  – Live programming of examples
  – Creating a positive learning atmosphere

The list contains many pedagogical design measures the experts use in their introductory programming courses. Many of them aim at encouraging students to participate and practice, which ultimately should help them gain programming experience and become more advanced. Interestingly, some of these measures represent fundamental contradictions, such as using mandatory tasks and avoiding the publication of model solutions on the one hand, and offering freedom, choices, and bonus points on the other. The experts thus try to address both students' intrinsic and extrinsic motivation. According to the experts, these approaches do not have the same effect on all students:

With good students, coercion is not necessary. However, for those who are on the edge, coercion can be effective. This is a universal experience—when you compel students to take action, they also come to reach understanding. (EI00)

Some educators describe the approaches they apply to offer somewhat individual forms of assessment:

If someone is very advanced, and I have seen that in the exercises, they do not need to take the exam but have the opportunity to do a project with me. (EI02)

The great variety of pedagogical measures can partially be explained by the highly heterogeneous groups of students. Students' prior knowledge varies greatly, which may have a more profound impact on basic programming education than on other disciplines. Contents and competencies strongly build upon each other, even within the first weeks of the first semester. Likewise, expectations towards novices concerning active and regular participation are high right from the beginning of a course. This expectation may require significant effort from students without prior knowledge from school or vocational training, which they may not anticipate or be able to fulfill due to part-time jobs or the need to familiarize
themselves with other concepts. Within just a few weeks of the study program, it may become very challenging to catch up on deficits. Moreover, students must also cope with various intra- and interpersonal, and meta-cognitive demands, particularly in the early stages of their studies. Finally, the overall situation as a freshman can add complexity to students' lives. Nonetheless, educators can and do implement a wide range of measures to support their students while addressing their heterogeneity. Even though not all approaches are suitable for the entire target group, the efforts at the structural and pedagogical levels should be noted, as they demonstrate a broad spectrum of educators' actions and commitment.

9.5 Reliability

To test the intracoder reliability of the category system of the expert interviews, one of the seven interview transcripts was selected. It was coded again by the author of this work two weeks after the initial coding. For this second run, interview transcript EI00 was chosen. It contains 118 segments, and thus the highest number of segments within an interview. The other interviews were segmented into 65 to 115 coding units. Considering 634 segments across all interview transcripts, 18.6% of the material was reviewed. The overlaps of the two coding runs were evaluated automatically by using MAXQDA. Only segments with an overlap of at least 90% were considered. Since the segments and coding units were predetermined in the qualitative content analysis of the interview transcripts, there is 100% agreement on the segmentation. Hence, this form of analysis is appropriate. The reviewed codes include all inductively built subcategories of cognitive process dimensions, knowledge dimensions, and other competencies, as well as factors preventing and contributing to programming competency. Overall, the interviews as a very open format resulted in more deviations when compared to the analyzed module descriptions. This is partly due to the larger context units. The agreement between the two coding passes is nonetheless high. For the summarizing qualitative content analysis of the expert interviews, a κ value of 0.94, according to Brennan and Prediger (1981), was determined via MAXQDA. Due to the sheer amount of material and resource limitations, a reliability test by a second coder was unfeasible. MAXQDA helped identify discrepancies between the two coding runs through the material. Those were related to, for example, the categories "extend given code" and "add code to a given method declaration". These categories were too similar, as one partially includes the other. Moreover, the category "write code" still lacks a specific definition.
Another example is the category "selecting appropriate exam formats", which also includes the category "developing appropriate and assessable tasks". This led to further inconsistent codings. In this analysis, it once again became obvious that procedural knowledge cannot be clearly distinguished from metacognitive knowledge in some cases, confirming the
continuum of the AKT. Similarly, there is some overlap between the "willingness to learn, openness, and flexibility" and the meta-cognitive competency of "taking responsibility for learning processes". Both seem to be closely related to students' attitudes and mindsets. Moreover, gaining programming experience through time-intensive practice seems closely related to other categories in the meta-cognitive knowledge dimension. In some cases, it was difficult to distinguish whether the experts addressed a competency expected from novice programmers or a competency requirement that is difficult to achieve. These perspectives are closely interconnected. Usually, the transcribed verbal data was understood and coded as competency requirements to encompass all of the expected competencies in the sample. The validity of this second, summarizing qualitative content analysis is nonetheless strengthened by the use of an interview guideline and by ensuring similar interview settings at the respective universities (Friebertshäuser 1997). Furthermore, the consistency of the data was ensured by checking any unclear cases as part of multiple passes through the material, while guidelines were determined and consistently adapted with every analysis run (Gläser and Laudel 2004). The analysis has further been made transparent by its comprehensive documentation and insights into the coding scheme, including the definition of categories.1 The verification of the internal consistency of coding units, linguistic validation, and discussion with experts is complemented by communicative validation. As part of this process, the category system was discussed, aligned, and confirmed with experts in a personal exchange (Krüger and Riemeier 2014; Mayring 2002, 2015).
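The agreement coefficient by Brennan and Prediger (1981) corrects the observed agreement by the chance agreement of 1/q for q available categories. The computation can be sketched as follows (with invented toy data, not the study's actual codings, and not the MAXQDA implementation):

```python
def brennan_prediger_kappa(codes_run1, codes_run2, n_categories):
    """Brennan & Prediger (1981): kappa = (p_o - 1/q) / (1 - 1/q),
    where p_o is the observed proportion of agreement and q the
    number of available categories."""
    if len(codes_run1) != len(codes_run2):
        raise ValueError("both coding runs must cover the same segments")
    p_o = sum(a == b for a, b in zip(codes_run1, codes_run2)) / len(codes_run1)
    q = n_categories
    return (p_o - 1 / q) / (1 - 1 / q)

# Hypothetical toy data: two coding passes over ten segments,
# drawn from four categories; the runs disagree on one segment.
run1 = ["create", "apply", "create", "understand", "apply",
        "create", "analyze", "apply", "create", "understand"]
run2 = ["create", "apply", "create", "understand", "apply",
        "create", "apply", "apply", "create", "understand"]
kappa = brennan_prediger_kappa(run1, run2, n_categories=4)
print(round(kappa, 2))  # 9/10 agreement over 4 categories -> 0.87
```

Unlike Cohen's kappa, this coefficient does not depend on the marginal distributions of the two coders, which makes it robust when some categories are used rarely.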

9.6 Discussion of Results

When compared to the written documents comprising module descriptions and competency objectives, the interviews with experts focused more on interpersonal, intrapersonal, and individual aspects of learners and their role in introductory programming education. The summarized interview transcripts contain 185 coding units representing cognitive competencies, and 187 coding units addressing other competency components. Most cognitive competencies are classified within the "creating" dimension, which is comparable to the results of the curricula data analysis. The distribution of the remaining cognitive competency objectives within the AKT largely corresponds to the distribution identified in the module descriptions, with factual knowledge and conceptual knowledge not being addressed extensively by the experts. This may be due to their focus on practical implementation in programming courses, as the interview questions were open-ended but could have

1 Interview transcripts and summaries are not fully provided because the interviews were conducted in the German language.
been interpreted accordingly. In addition, the experts may have wanted to highlight the most complex and challenging competencies. The more frequent occurrence of other competencies and components in the expert interviews is another aspect worth discussing. The experts' emphasis on these dispositions and intra- and interpersonal competencies shows their relevance for students to succeed. According to the experts, competencies related to students' attitudes and mindsets seem to influence their learning process. For this reason, experts take so many measures to promote students' motivation, perseverance, and joy as part of programming education. In this context, the continuous frustration of students caused by negative feedback from compilers and interpreters becomes apparent. The experts conclude that students require a certain attitude and mindset (e.g., enthusiasm, determination, willingness to learn, openness and flexibility, voluntary, active, and regular participation). In addition, meta-cognitive competencies are mentioned in the interview transcripts, for example, related to students' organization of their learning processes or using external resources for learning. Other meta-cognitive competencies in the sample involve planning one's actions and taking responsibility for the learning process. It should be noted that attitude and mindset, as well as self-competencies (and prior knowledge), can influence the development of these competencies, which, in turn, can affect attitude and mindset. The experts further highlighted the connection between meta-cognitive competencies, programming experience, and intuition, as the following exemplary quote reveals:

The core problem that our students have is finding a solution based on the given task. This has something to do with intuition. It is something that is difficult to learn or that has to be compensated for later in life with a lot of effort. (EI02)

The so-called "intuition" leading to a systematic and correct problem-solving approach is classified as a meta-cognitive competency (knowledge about knowledge). It is learnable but requires a lot of practice or years of experience. It is assumed that the competency to successfully transfer solutions and systematically solve problems can only be achieved through experience and practice. It seems reasonable, though, to distinguish programming experience from other competencies. Experience represents the gradual accumulation of cognitive knowledge and competencies, but it does not necessarily comprise awareness of that knowledge. Nonetheless, programming experience certainly enables the development of meta-cognitive competencies, such as transfer and systematic problem-solving. Through intensive practice and exposure to different problems and their solutions, learners can gain experience that can be applied to future tasks by reusing or adapting previous solutions (and strategies). Programming practice and experience thus support the development of further cognitive and other competencies. Upon successful task completion, practicing may also foster joy and satisfaction with one's effort and progress, which, in turn, can enhance motivation and participation. Without experiencing success and the resulting pride or joy in one's achievements, students cannot ever experience competency.

[Figure 9.1 depicts a cycle connecting: voluntary participation, enthusiasm and joy for problem-solving; active, regular participation and perseverance; programming experience; development of cognitive and meta-cognitive competencies; and experiencing competency, joy and pride over own results.]

Fig. 9.1 Relationship between active participation, attitude and mindset and the acquisition of cognitive and meta-cognitive competencies in basic programming education (Kiesler 2022b)

I believe that everyone should have experienced what it feels like when a program finally works (laughing). The compiler always said it did not work, and then suddenly, exactly what I expected appeared, and what is correct. It’s just such a childlike joy. I think that, from a pedagogical perspective, everyone should be given the chance to experience that. Because it compensates for the dark tunnel. So, the joy of seeing your own function work. (EI04)

These potential connections between programming practice and experience, attitude, and mindset, as well as the acquisition of cognitive and meta-cognitive competencies in basic programming education, are illustrated in Fig. 9.1. These connections further reflect the concept of competency and how intertwined knowledge, skills, and dispositions are in the computing context (Raj et al. 2021b,a). Based on these assumed connections, most experts justify the need to regularly and actively participate in exercises and tutorials, wherein all students are expected to independently solve problems. Participation is thus the starting point of the cycle. The development of other competencies, such as self-reflection and taking responsibility for one's learning success, is closely related to active participation, attitude, and mindset, as students must be conscious of the need to practice regularly. However, not all experts take measures to enforce or extrinsically motivate students' active and regular participation. The perspectives of the experts diverge significantly, as the following examples illustrate:

Enforcing regular practice appears to be a meaningful measure in the first semesters, as students are not yet familiar with academic work. (EI00)

In the exercises, it is different. I usually assign a task every week, but some tasks last two to three weeks. Everything is voluntary in my class. I used to do it differently. I tried all sorts of variations. Now I do it voluntarily. It is risky. (EI02)
We have a system with exercises in the computer lab. Students can solve these tasks at home and do not have to attend the exercise sessions. Those who are present can ask for help when they are stuck. But most students do not come to the exercises. (EI06)

While some institutions aim to enforce attendance, other experts advocate for providing freedom to students, as they expect them to take responsibility for their learning and success from the very beginning. The freedom to participate is usually intended to support independence and competency development. However, these contrasting positions do not seem to be due to the different institutional contexts of universities and universities of applied sciences. The experts further introduced the implementation of structural measures to support novice programmers, such as establishing support courses to promote low-performing students. Overall, two structural measures promoting programming competency were identified in the expert interview data. In addition to support courses, regular exercises and tutorials are included in this category of factors contributing to programming competency. All remaining measures describe pedagogical designs and interventions during face-to-face sessions. The lack of nationwide additional support courses is likely due to the lack of (human) resources and funding, as educators already reported on limited resources to provide students with meaningful, individual feedback every week. It is thus impossible to attribute structural deficits to a single cause. One last aspect subject to discussion is the presence or absence of prior knowledge and experience. Due to a diverse student body, instructors and learners both may face further challenges:

On the other hand, there are people who can already program quite well, who only need to switch to a different language but have understood the principles, and they just need to supplement their knowledge. And that is a huge spectrum. That is the difficulty for the instructors. And that is why we can only react by adapting to the knowledge levels of the cohorts. (EI04)

So, on the one hand, students with prior knowledge or practical experience may have advantages in their studies. On the other hand, prior knowledge can also be a factor causing challenges for novices, as mentioned by an expert:

"[. . .] because prior knowledge, poor prior knowledge, or dogmatic prior knowledge can be counterproductive, and dogmatic prior knowledge can come from schools or training. A lot of explanation and additional context needs to be provided [. . .]" (EI03)

In any case, instructors need to adapt their courses to the varying degrees of prior knowledge and experience within a group of students to respond to their needs. Ideally, educators would offer tasks with gradually increasing complexity, i.e., scaffolding (Piaget 2013, 1967; Piaget and Inhelder 1972; Vygotsky 1962, 2012; Kiesler 2022c). Educators may also offer more challenging (but solvable) tasks to more experienced and achievement-oriented students. Unless additional capacities for human support are available, educators may also recommend freely available learning environments with programming exercises and auto-grading or assessment so that students can receive feedback on their solutions (Jeuring et al. 2022a,b; Kiesler 2022a, 2023). Another option, of course, is using LLMs and generative AI.
Several studies have started to evaluate the potential of LLMs to help students learn programming or to generate feedback on students' input, and there is hope that students may be able to use LLMs as a "study buddy" in the near future (Kiesler et al. 2023b; Prather et al. 2023b; Kiesler and Schiffner 2023).
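The kind of automated checking such learning environments offer can be sketched as a simple test-case runner that turns mismatches and runtime errors into formative feedback (a hypothetical task and checker, not any specific platform's API):

```python
def check_submission(student_fn, test_cases):
    """Run a student's function against (input, expected) pairs and
    return simple formative feedback instead of a bare pass/fail."""
    feedback = []
    for args, expected in test_cases:
        try:
            result = student_fn(*args)
        except Exception as exc:  # runtime errors become feedback, not crashes
            feedback.append(f"{args}: raised {type(exc).__name__}: {exc}")
            continue
        if result == expected:
            feedback.append(f"{args}: ok")
        else:
            feedback.append(f"{args}: expected {expected}, got {result}")
    return feedback

# Hypothetical beginner task: absolute value without using abs().
def student_solution(x):
    return x if x > 0 else -x

tests = [((5,), 5), ((-3,), 3), ((0,), 0)]
for line in check_submission(student_solution, tests):
    print(line)
```

Awarding partial credit for the solution process, as the experts note, goes beyond such simple output checks; this is where LLM-generated feedback may complement test-case-based grading.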

References

L.W. Anderson, D.R. Krathwohl, P.W. Airasian, K.A. Cruikshank, R.E. Mayer, P.R. Pintrich, J. Raths, M.C. Wittrock, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives (Addison Wesley Longman, New York, 2001)
J. Biggs, Enhancing teaching through constructive alignment. Higher Educ. 32(3), 347–364 (1996)
J. Biggs, C. Tang, Teaching for Quality Learning at University (McGraw-Hill Education, New York, 2011)
B. Friebertshäuser, Interviewtechniken: Ein Überblick, in Handbuch Qualitative Forschungsmethoden in der Erziehungswissenschaft, ed. by B. Friebertshäuser, A. Prengel (Juventa-Verlag, Weinheim, 1997)
J. Gläser, G. Laudel, Experteninterviews und Qualitative Inhaltsanalyse (VS Verlag für Sozialwissenschaften, Wiesbaden, 2004)
M. Hasselhorn, A. Gold, Pädagogische Psychologie: Erfolgreiches Lernen und Lehren, 2. Auflage (W. Kohlhammer Verlag, Stuttgart, 2009)
J. Impagliazzo, N. Kiesler, A.N. Kumar, B. MacKellar, R.K. Raj, M. Sabin, Perspectives on dispositions in computing competencies, in Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education Vol. 2, ITiCSE'22 (ACM, New York, 2022), pp. 662–663
J. Jeuring, H. Keuning, S. Marwan, D. Bouvier, C. Izu, N. Kiesler, T. Lehtinen, D. Lohr, A. Petersen, S. Sarsa, Steps learners take when solving programming tasks, and how learning environments (should) respond to them, in Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education Vol. 2, ITiCSE'22 (Association for Computing Machinery, New York, 2022a), pp. 570–571
J. Jeuring, H. Keuning, S. Marwan, D. Bouvier, C. Izu, N. Kiesler, T. Lehtinen, D. Lohr, A. Petersen, S. Sarsa, Towards giving timely formative feedback and hints to novice programmers, in Proceedings of the 2022 Working Group Reports on Innovation and Technology in Computer Science Education, ITiCSE-WGR'22 (Association for Computing Machinery, New York, 2022b), pp. 95–115
M. Kazemitabaar, J. Chow, C.K.T. Ma, B.J. Ericson, D. Weintrop, T. Grossman, Studying the effect of AI code generators on supporting novice learners in introductory programming, in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (2023), pp. 1–23
N. Kiesler, On programming competence and its classification, in Koli Calling'20: Proceedings of the 20th Koli Calling International Conference on Computing Education Research, Koli Calling'20 (Association for Computing Machinery, New York, 2020a)
N. Kiesler, Towards a competence model for the novice programmer using Bloom's revised taxonomy – an empirical approach, in Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE'20 (Association for Computing Machinery, New York, 2020b), pp. 459–465
N. Kiesler, An exploratory analysis of feedback types used in online coding exercises. CoRR abs/2206.03077v2. arXiv:2206.03077v2 (2022a). https://doi.org/10.48550/arXiv.2206.03077
N. Kiesler, Kompetenzförderung in der Programmierausbildung durch Modellierung von Kompetenzen und Informativem Feedback. Dissertation, Johann Wolfgang Goethe-Universität, Frankfurt am Main, Fachbereich Informatik und Mathematik (2022b)


N. Kiesler, Reviewing constructivist theories to help foster creativity in programming education, in 2022 IEEE Frontiers in Education Conference (FIE) (2022c), pp. 1–5
N. Kiesler, Investigating the use and effects of feedback in CodingBat exercises: an exploratory thinking aloud study, in 2023 Future of Educational Innovation-Workshop Series Data in Action (2023a), pp. 1–12
N. Kiesler, D. Schiffner, Large language models in introductory programming education: ChatGPT's performance and implications for assessments. CoRR abs/2308.08572 (2023). https://doi.org/10.48550/arXiv.2308.08572
N. Kiesler, B.K. MacKellar, A.N. Kumar, R. McCauley, R.K. Raj, M. Sabin, J. Impagliazzo, Computing students' understanding of dispositions: a qualitative study, in Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education Vol. 1, ITiCSE 2023 (Association for Computing Machinery, New York, 2023a)
N. Kiesler, D. Lohr, H. Keuning, Exploring the potential of large language models to generate formative programming feedback, in 2023 IEEE Frontiers in Education Conference (FIE) (2023b), pp. 1–5
D. Krüger, T. Riemeier, Die Qualitative Inhaltsanalyse – eine Methode zur Auswertung von Interviews, in Methoden in der naturwissenschaftsdidaktischen Forschung, ed. by D. Krüger, I. Parchmann, H. Schecker (Springer, Berlin, 2014), pp. 133–145
A.N. Kumar, R. McCauley, B. MacKellar, M. Sabin, N. Kiesler, R.K. Raj, J. Impagliazzo, Quantitative results from a study of professional dispositions, in Proceedings of the 54th ACM Technical Symposium on Computer Science Education, SIGCSE 2023 (Association for Computing Machinery, New York, 2023)
B.K. MacKellar, N. Kiesler, R.K. Raj, M. Sabin, R. McCauley, A.N. Kumar, Promoting the dispositional dimension of competency in undergraduate computing programs, in 2023 ASEE Annual Conference & Exposition. ASEE Conferences (2023). https://peer.asee.org/43018
P. Mayring, Einführung in die Qualitative Sozialforschung (Beltz, Weinheim, 2002)
P. Mayring, Qualitative Inhaltsanalyse: Grundlagen und Techniken, 12th edn. (Beltz, Weinheim, 2015)
R. McCauley, M. Sabin, A.N. Kumar, N. Kiesler, B. MacKellar, R.K. Raj, J. Impagliazzo, Using vignettes to elicit students' understanding of dispositions in computing education, in 2023 IEEE Frontiers in Education Conference (FIE) (2023), pp. 1–5
J. Piaget, On the Development of Memory and Identity (Clark University Press, Worcester, 1967)
J. Piaget, Play, Dreams and Imitation in Childhood (Norton, New York, 2013)
J. Piaget, B. Inhelder, Die Psychologie des Kindes (Walter-Verlag, Olten, 1972)
J. Prather, P. Denny, J. Leinonen, B.A. Becker, I. Albluwi, M.E. Caspersen, M. Craig, H. Keuning, N. Kiesler, T. Kohn, A. Luxton-Reilly, S. MacNeil, A. Petersen, R. Pettit, B.N. Reeves, J. Savelka, Transformed by transformers: navigating the AI coding revolution for computing education: an ITiCSE working group conducted by humans, in Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education Vol. 2, ITiCSE 2023 (Association for Computing Machinery, New York, 2023b), pp. 561–562
B. Puryear, G. Sprint, GitHub Copilot in the classroom: learning to code with AI assistance. J. Comput. Sci. Coll. 38(1), 37–47 (2022)
R.K. Raj, M. Sabin, J. Impagliazzo, D. Bowers, M. Daniels, F. Hermans, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, S.W. Nabi, M. Oudshoorn, Toward practical computing competencies, in Proceedings of the 26th ACM Conference on Innovation and Technology in Computer Science Education Vol. 2, ITiCSE'21 (Association for Computing Machinery, New York, 2021a), pp. 603–604
R.K. Raj, M. Sabin, J. Impagliazzo, D. Bowers, M. Daniels, F. Hermans, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, S.W. Nabi, M. Oudshoorn, Professional competencies in computing education: pedagogies and assessment, in Proceedings of the 2021 Working Group Reports on Innovation and Technology in Computer Science Education, ITiCSE-WGR'21 (Association for Computing Machinery, New York, 2021b), pp. 133–161


M. Sabin, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, R.K. Raj, J. Impagliazzo, Fostering dispositions and engaging computing educators, in Proceedings of the 54th ACM Technical Symposium on Computer Science Education Vol. 2, SIGCSE 2023 (Association for Computing Machinery, New York, 2023)
P. Vaithilingam, T. Zhang, E.L. Glassman, Expectation vs. experience: evaluating the usability of code generation tools powered by large language models, in CHI Conference on Human Factors in Computing Systems Extended Abstracts (Association for Computing Machinery, New York, 2022), pp. 1–7. https://doi.org/10.1145/3491101.3519665
L.S. Vygotsky, Thought and Language (MIT Press, Cambridge, 1962)
L.S. Vygotsky, Thought and Language: Revised and Expanded Edition (MIT Press, Cambridge, 2012)
M. Wermelinger, Using GitHub Copilot to solve simple programming problems, in Proceedings of the 54th ACM Technical Symposium on Computer Science Education Vol. 1, SIGCSE 2023 (ACM, New York, 2023), pp. 172–178
J. Zhang, J. Cambronero, S. Gulwani, V. Le, R. Piskac, G. Soares, G. Verbruggen, Repairing bugs in Python assignments using large language models (2022), eprint 2209.14876. https://doi.org/10.48550/arXiv.2209.14876

Chapter 10

Summarizing and Reviewing the Components of Programming Competency

10.1 Summary of Cognitive Programming Competencies

Developing a synthesis of cognitive programming competencies started with comparing and summarizing the categories identified in the content analysis of university curricula and the interview transcripts. One goal was to bring all category labels to the same level of abstraction. To achieve this, the content area defined for introductory programming education based on the ACM curricula recommendations (ACM, Joint Task Force on Computing Curricula 2013) was consulted once again (see Chap. 4.4.2). Accordingly, some competencies identified in the data had to be removed, for instance, "using LaTeX". The same applied to other, more advanced concepts from theoretical computer science, as they are not considered a component of programming competency. As a result, a summary of competencies originating from both data sources was compiled. Table 10.1 presents the final synthesis of cognitive programming competencies and their classification along the AKT's knowledge and cognitive process dimensions (Kiesler 2020a,b,c,d, 2022b; Kiesler and Pfülb 2023).

Aligning the categories' abstraction levels resulted in the programming competencies summarized in Table 10.1. Categories were consolidated if they were classified within the same cognitive process and knowledge dimensions. For example, the category "know control structures", which had been used to code the material whenever students were expected to know basic control structures and represent them, for example, via structure diagrams or valid syntax, was removed because this competency is covered by the slightly more abstract category "know syntax and semantics of programming languages". This example illustrates the challenges involved in consolidating categories: procedural knowledge is more abstract than conceptual knowledge, which can make it more difficult to categorize the respective competencies.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 N. Kiesler, Modeling Programming Competency, https://doi.org/10.1007/978-3-031-47148-3_10


Table 10.1 Summary of all identified cognitive programming competencies and their classification within the respective AKT dimensions (Kiesler 2022b, 2020b,c)

Level 1 (remember)
  Factual knowledge: Know elementary programming language constructs; Assign literals to data types
  Conceptual knowledge: Know libraries; Know basic characteristics of algorithms; Know terms and categories concerning complexity and efficiency of algorithms; Know elements of GUIs
  Procedural knowledge: Know tools (IDEs, debugger, profiler); Know the structure and principles of networks, the computer, and other technological basics; Know how compiler and interpreter operate; Know distributed systems and parallel programming; Know methods for the formal definition of programming languages; Know syntax and semantics of programming languages; Know concepts for data management; Know mathematical basics of algorithms; Know basic algorithms and data structures; Know design patterns for algorithms and data structures; Know implementation methods for data structures; Process knowledge of run-time analysis; Know methods and tools for modeling of algorithms; Know methods of software development; Know quality criteria and conventions for source code

Level 2 (understand)
  Conceptual knowledge: Describe concepts of programming paradigms and languages; Explain problems that can be solved by algorithms; Explain terms of formal verification techniques
  Procedural knowledge: Characterize algorithms, data structures, and data types; Justify the use of software development tools and paradigms

Level 3 (apply)
  Procedural knowledge: Use computers professionally; Document programs; Apply quality criteria to source code; Perform mathematical calculations and encodings; Use existing libraries; Use tools for software development

Level 4 (analyze)
  Conceptual knowledge: Characterize programming languages and paradigms by outlining their inner structure
  Procedural knowledge: Break down given problems into smaller components; Analyze adequacy and characteristics of data structures; Analyze adequacy and characteristics of algorithms; Analyze algorithms and their complexity; Being able to read, explain, and identify the output of (foreign) code; Comprehend compiler and interpreter messages

Level 5 (evaluate)
  Procedural knowledge: Assess adequacy of algorithms and data structures; Assess adequacy of solutions written in programming languages; Debugging of programs; Testing of algorithms and programs for errors; Evaluate properties of algorithms and programs; Assess the complexity of algorithms; Assess adequacy of programming tools and templates
  Meta-cognitive knowledge: Self-reflection; Evaluate adequacy of one's self-concept; Taking responsibility for learning processes

Level 6 (create)
  Procedural knowledge: Design program specifications; Use programming language constructs correctly; Use logical expressions and operators to solve problems; Add code to a given method declaration; Write a single class; Write class(es) and a corresponding method; Write and call methods of an object; Generate objects; Write a program with several classes; Extend or adapt given program code; Write loops; Modeling of problems and programs; Design problem-adequate algorithms; Design smaller programs in a structured way; Design concurrently or parallel running processes; Specify and implement (abstract) data types; Use and adapt standard data structures; Design data structures; Use and adapt standard algorithms; Implement algorithms; Write an executable program to solve a problem; Design interfaces; Programming of GUIs; Develop libraries
  Meta-cognitive knowledge: Transfer of knowledge and skills to new problems and programming languages; Self-regulated organization of learning process; Use of external resources for studying; Abstraction of rules (e.g., in recursion) and problems; Develop a systematic approach to problem-solving


The synthesis of categories from two data sources and analyses further led to the merging of very similar categories. An example of this process is the final category "being able to read, explain, and identify the output of (foreign) code", which combines the categories "understand foreign code", "explain one's own code", and "determine the output of foreign code". The final synopsis of the two data analyses in the form of Table 10.1 further demonstrates the applicability of the Anderson-Krathwohl Taxonomy (Anderson et al. 2001) to the context of introductory programming education. As expected, the categories built from the material are distributed evenly along the imagined diagonal axis of the AKT matrix. In Table 10.1, the cognitive process and knowledge dimension axes are transposed for better visualization. Moreover, it should be noted that the data material seems to confirm the sequence of cognitive processes proposed in the revised taxonomy (Anderson et al. 2001) as opposed to Bloom's original taxonomy (Bloom 1956). The competency goals expected in the curricula describe the levels of evaluating and creating in a way that reflects the gradual increase in cognitive complexity in that order, as competencies aiming at evaluation precede those denoting creative processes and tasks. Learning goals focusing on the cognitive process "evaluating" should therefore come before those aiming at the creation of new structures, such as writing program code as a problem-solving approach. Finally, the overview in Table 10.1 illustrates the complexity of the programming competencies expected from novice programmers in the first three or four semesters, thereby answering research question 1.a.
In addition, all cognitive programming competencies and their classification in terms of the AKT's dimensions are presented below as a comprehensive list, ordered by knowledge units and reflecting the step-by-step increase in complexity (Kiesler 2020a,b,c,d; Kiesler and Pfülb 2023). The respective AKT dimensions are added in brackets via a short-hand notation. Knowledge dimensions of the AKT are indicated as follows: A (factual knowledge), B (conceptual knowledge), C (procedural knowledge), and D (meta-cognitive knowledge). Cognitive process dimensions are represented by the following abbreviations: 1 (remember), 2 (understand), 3 (apply), 4 (analyze), 5 (evaluate), and 6 (create).

Cognitive Programming Competencies

• Remember, understand, and use basic functions of a computer, its architecture, compilers, and interpreters, as well as basic computing systems (C1–C4)
  – Remember the structure and functionality of networks, the computer, and other technological fundamentals (C1)
  – Remember how the compiler and interpreter analyze and compile source code (C1)
  – Explain differences and functionality of compiler and interpreter (C2)
  – Being able to use a computer (C3)
  – Being able to convert numbers into different forms of representation or numeral systems, and calculate with them (C3)
  – Being able to read and interpret messages from compiler and interpreter (C4)
• Remember tools (development environments and their functions) for programming, being able to use them professionally, and justify and evaluate their usage (C1–C5)
  – Remember tools for software development (integrated development environments, debuggers, profilers), their functions, and intended use cases (C1)
  – Remember quality criteria for source code (C1)
  – Being able to justify the use of software development tools (C2)
  – Successfully use tools for software development (C3)
  – Apply quality criteria for source code and programming conventions, and document programs in a comprehensive manner (C3)
  – Evaluate existing programming tools, templates, and libraries (C5)
• Remember, explain, analyze, and evaluate basic data types (e.g., integer, Boolean, float, string) and data structures (e.g., stack, queue, heap, graph), their representation, properties, and application, and being able to create new, problem-adequate data structures (C1–C6)
  – Remember properties and possible implementations of data types and structures in different programming paradigms (C1)
  – Explain characteristics of basic data types and structures (C2)
  – Analyze properties and applicability of data structures (C4)
  – Assess the adequacy of data types and structures for a problem (C5)
  – Design, develop, and implement data types (C6)
  – Adapt and implement standard data structures adequate to a given problem (C6)
• Remember, understand, analyze, and evaluate simple algorithms and design patterns (e.g., sorting algorithms, greedy, divide-and-conquer, backtracking) and their properties as solutions for well-known problems, and being able to design new algorithms adequate to given problems (B1, C1–C6)
  – Remember basic algorithms, design patterns for algorithms, basic concepts for the modeling of algorithms and processes, and basics of runtime analysis (B1, C1)
  – Characterize algorithms and their features (C2)
  – Analyze properties, suitability, and complexity of algorithms (runtime, storage requirements) (C4)
  – Evaluate the complexity of algorithms (C5)
  – Test algorithms for errors and with regard to intended properties, as well as verifying properties (C5)
  – Evaluate the adequacy of algorithms as solutions to a problem (C5)
  – Design small programs in a structured manner (C6)
  – Being able to select problem-adequate algorithms, and to redesign and implement them using techniques and design patterns adequate for a given problem (C6)
• Remember, understand, characterize, and use basic features and constructs of a programming language and its paradigm (e.g., keywords, syntax, expressions, functions, variables, lists, control structures, recursion) to generate program code (A1, B1–B4, C1–C6, D6)
  – Remember elements of a programming language (e.g., language constructs such as keywords, literals, operators) (A1)
  – Remember concepts of programming languages and paradigms (e.g., typing, functions, parameter passing) (B1)
  – Being able to describe and compare concepts of programming languages and paradigms (B2)
  – Characterize programming languages and paradigms by outlining their inner structure (B4)
  – Remember the syntax and semantics of programming languages, and programming conventions (C1)
  – Use constructs of programming languages correctly (e.g., use logical expressions and operators to solve problems, write classes, write methods, create objects, write loops) (C6)
  – Transfer of knowledge and experience with programming languages to the acquisition of new programming languages (D6)
• Understand and solve problems by developing and implementing adequate programs (B2, C4–C6, D6)
  – Explain well-known algorithmic problems and tasks (B2)
  – Break down given problems into parts and select the relevant information for problem-solving (C4)
  – Being able to read, explain, and determine the output of foreign and own code (C4)
  – Assess the adequacy of solutions written in a programming language (C5)
  – Verify properties, debugging, and testing of programs (C5)
  – Design program specifications (C6)
  – Model problems and programs (C6)
  – Write executable solutions for small/medium problems in a programming language (up to a few hundred lines) (C6)
  – Abstraction of a procedure, rule (e.g., recursion), or problem (D6)
  – Transfer of knowledge and experience with problem-solving to unknown problems and contexts (D6)
  – Develop a systematic approach to problem-solving (D6)
• Meta-cognitive competencies related to the individual self and their implementation of learning actions (D5–D6)
  – Self-reflection and assessment of the adequacy of one's self-concept (D5)
  – Take responsibility for one's learning success (D5)
  – Self-sufficient organization (plan, review, regulate) of one's learning process (D6)
  – Select and use external resources for learning (D6)
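To make the difference between, for instance, the C4-level competency "read, explain, and determine the output of foreign code" and the C6-level competency "write an executable program to solve a problem" more tangible, the following hypothetical exercise pair sketches both. The tasks, function names, and code are constructed by the editor for illustration only; they do not stem from the analyzed curricula.

```python
# Hypothetical C4-level task: read the function below and determine
# the output of count_vowels("programming") WITHOUT running the code.
def count_vowels(text: str) -> int:
    """Count the vowels in a string via a simple loop over characters."""
    vowels = "aeiou"
    total = 0
    for char in text.lower():
        if char in vowels:
            total += 1
    return total


# Hypothetical C6-level task: write a small executable program that
# solves a stated problem, e.g., report the most frequent character.
def most_frequent_char(text: str) -> str:
    """Return the character occurring most often in the given string."""
    counts = {}
    for char in text:
        counts[char] = counts.get(char, 0) + 1
    # max() with a key function selects the character with the highest count
    return max(counts, key=counts.get)


if __name__ == "__main__":
    print(count_vowels("programming"))      # a reader at level C4 predicts: 3
    print(most_frequent_char("programming"))
```

The first task only requires tracing given code (analyze), whereas the second requires constructing a new solution from language constructs (create), which is why the two sit at different cognitive process levels despite similar code size.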

10.2 Summary of Other Programming Competency Components

In addition to the cognitive competencies that were classified according to the cognitive process and knowledge dimensions of the Anderson-Krathwohl Taxonomy (Anderson et al. 2001), other competencies expected from novice learners were identified in the data. The following list presents the synopsis of the other competency components identified in both data analyses, i.e., the curricula data and the interview analysis (Kiesler 2020b, 2022b). It further serves as an answer to research question 1.b., as it summarizes other objectives explicitly expected as part of introductory programming education at German universities.

Other Programming Competencies

• Self-competencies
  – Prior education and competencies
  – Adequate self-confidence in one's abilities/self-efficacy
  – Creativity


• Social-communicative skills
  – Cooperate and work with others
  – Communicate
  – Organization of teamwork and projects
• Attitude and mindset
  – Enthusiasm for solving problems and computer science
  – Purpose-driven behavior despite frustrating tasks
  – Willingness to learn, openness, and flexibility
  – Voluntary participation
  – Active and regular participation
  – Joy and pride over own results

Even though both data sources contained other competency components, most of these list items, among them many dispositions (Clear et al. 2020; Frezza and Adams 2020; Perkins et al. 1993; Schussler 2006), were identified in the data resulting from the expert interviews.1 This indicates that numerous other competencies are expected from novice programmers, but not explicitly as part of the curricula. Considering the concept of Constructive Alignment (Biggs 1996; Biggs and Tang 2011), this can lead to several challenges, if, for example, learners are not instructed but assessed in terms of their social-communicative abilities. The recent promotion of the dispositional dimension of competency in undergraduate computing programs is therefore considered an important development (MacKellar et al. 2023; Impagliazzo et al. 2022; Sabin et al. 2023; Kiesler et al. 2023a; McCauley et al. 2023). In addition, the experts stressed the importance of practicing and gaining experience for novice programmers. Time-intensive practice is described as the way to get used to and recognize certain types of problems and possible solutions that can later be transferred to new problems. Programming experience is therefore considered an important factor in developing cognitive and meta-cognitive competencies. Moreover, experiencing competency during practice seems to promote a positive attitude and mindset toward learning, according to the experts. As the industry expects computing graduates to exhibit the CC 2020 dispositions (Kiesler and Impagliazzo 2023), it is important to further investigate the dispositions relevant for computing students (e.g., creativity, see Kiesler 2022c). This may eventually lead to an expansion of the CC 2020 list of eleven dispositions (Clear et al. 2020).

1 An overview of all identified categories along with their definitions is presented in Sects. 8.2 and 9.2.


10.3 Review of the Anderson-Krathwohl Taxonomy

This section presents conclusions regarding the Anderson-Krathwohl Taxonomy (Anderson et al. 2001) and the degree to which it was applicable to the classification of programming competencies. As a result of the qualitative content analyses, the AKT's cognitive process and knowledge dimensions were adapted to the context of introductory programming education. The adapted AKT dimensions thus address research question 1.c. As part of the review process, the AKT's types and subtypes were tailored to the context of basic programming education at universities. This should make it easier for educators to recognize their competency goals and categorize them into the cognitive process and knowledge dimensions. The AKT is abstract and claims applicability in all disciplines, containing only discipline-agnostic examples. The adapted definitions and examples for each cognitive process and knowledge dimension, along with types and subtypes, are therefore intended to serve as a context-specific extension of the AKT; they do not represent a contradiction or revision. The application of the AKT as a framework to classify programming competencies demonstrates the high cognitive demands placed on novice programmers and CS students in programming education, let alone other contexts. All in all, the AKT proved to be a useful tool for the classification of the cognitive competencies expected in introductory programming. A great number of competencies were identified in the procedural knowledge dimension, with a focus on the cognitive process dimension of creating. In addition, several meta-cognitive competencies are expected from students, as the curricula data and expert interviews revealed. These meta-cognitive competencies are classified within the "evaluating" and "creating" dimensions, as illustrated in Table 10.1. It should be noted that these two are the most cognitively complex dimensions of the AKT.
Considering the context of introductory programming education, the expectations for novice learners seem to be very high. Only in a few other disciplines (e.g., engineering, mathematics, or physics) are beginners expected to demonstrate a comparably demanding level of creative, constructive performance and independent problem-solving. In linguistics, for example, corresponding cognitive processes, such as generating hypotheses, planning, designing, and creating a new solution, are only required in very advanced courses. The development of an artificial, fictional language (e.g., Na'vi or Klingon) would be an equivalent example. Such tasks, however, are uncommon in undergraduate programs and introductory courses. In the programming context, writing code in a programming language to solve a problem represents this constructive performance. And yet, novice learners of programming are expected to create individual, new solutions to problems within their first year or semester. It is somewhat paradoxical that the curricula and interview data both revealed few competencies within the factual and conceptual knowledge dimensions, and at the levels "remembering" and "understanding". Instead, most


competencies were classified within the procedural knowledge dimension, and at the level of "creating". These findings suggest that lower cognitive processes are omitted or neglected as part of introductory programming education and its learning objectives. The few occurrences and low frequencies of the respective competencies may be due to the inclusive character of the AKT's dimensions. Nonetheless, competencies within lower cognitive process dimensions should be addressed explicitly in university curricula and education. It is important to increase the complexity of tasks gradually, so that tasks remain solvable and students' motivation does not decrease (Piaget 2013; Vygotsky 1962; Keller 1983, 2010). The lack of explicit, less cognitively complex programming competencies in curricula further reflects the high expectations of novice programmers, as identified by McCracken et al. (2001) in a study on the challenges of students learning to program. The study's multinational and multi-institutional character emphasizes the magnitude of this phenomenon, which seems to occur in higher education institutions worldwide. Of course, the AKT in its general form cannot address the lack of awareness among educators regarding the classification of cognitive tasks. While achieving universality across disciplines through its high level of abstraction, the AKT can easily cause misinterpretations, as one of the GI's models illustrates (GI 2016). The challenges in assigning specific dimensions to a learning objective are mainly due to the semantic ambiguity of words and expressions in natural language. Some lexemes, such as "concept", can even be misleading, as they do not necessarily refer to conceptual knowledge. Similarly, the verb "apply" cannot automatically be assigned to the corresponding cognitive process dimension, unless it refers to the application of software or tools.
The verbs "apply" and "understand" are mostly misleading, as the addressed competency goals tend to denote more complex cognitive dimensions, such as "create" or "analyze" (Scott 2003; Kiesler 2020a,b). In the context of these challenges, a domain-specific variant of the AKT for computer science, particularly programming, is considered beneficial for both educators and learners. As the analysis of curricula data and expert interviews revealed, the AKT can serve as an adequate framework, but it should be adapted to the context of programming education and offer respective definitions and examples. Hence, the general and very abstract dimensions require a domain-specific interpretation based on empirical data to be applicable in computer science, particularly introductory programming education. The results of the two qualitative content analyses were used to revise the AKT into the context-specific variant presented in Tables 10.2 and 10.3. The inductive categories derived from module descriptions and interview transcripts served as the basis for redefining the types, subtypes, and cognitive processes. Then, context-specific examples from basic programming education were selected from the inductively built categories. The types and subtypes still represent those defined by Anderson et al. (2001).


Table 10.2 AKT knowledge dimensions adapted to introductory programming including subtypes and examples (Kiesler 2020a,b, 2022b)

A. Factual knowledge
AA. Knowledge of terminology: Know language elements (e.g., keywords, escape sequences, literals, operators), simple data types, simple style conventions, exceptions of lexical expressions
AB. Knowledge of specific details and elements: History of CS, the computer, programming, and software development; know reliable textbooks and tools for learning; ethical, cultural, and social aspects of CS and programming; know major products (IDEs)

B. Conceptual knowledge
BA. Knowledge of classifications and categories: Programming paradigms, classes of time complexity, algorithmic efficiency, characteristics of algorithms, generations of programming languages, data types
BB. Knowledge of principles and generalizations: Principles of programming paradigms and programming languages (e.g., dynamic vs. static typing, type systems, principles of variables, functions, passing of parameters)
BC. Knowledge of theories, models, and structures: Theory of computational complexity (e.g., efficiency, runtime complexity), computer architectures, computer models (Turing machine), model of computer memory

C. Procedural knowledge
CA. Knowledge of subject-specific skills and algorithms: Knowledge of the syntax of programming languages, know algorithms for typical problems, know how to write code, conversion of decimal numbers into floating point numbers and vice versa
CB. Knowledge of subject-specific techniques and methods: Programming conventions, analyze problems and algorithms, use tools for software development, model or design algorithms, testing of algorithms and debugging
CC. Knowledge of criteria for determining when to use appropriate procedures: Knowledge of criteria when to apply certain algorithms, data structures, or programming languages; know when to use certain tools for help (e.g., library, compiler, interpreter)

D. Meta-cognitive knowledge
DA. Strategic knowledge: Study techniques and learning strategies, organizational strategies and how to organize learning processes, when to use external resources for support, planning, monitoring, and regulation of cognitive processes, strategies for problem-solving and critical thinking
DB. Knowledge about cognitive tasks: Recognition of patterns or types of problems, abstraction of problems, selection of appropriate algorithmic solutions, knowledge when to transfer knowledge to new tasks
DC. Self-knowledge: Reflection of strengths and weaknesses, awareness of strategies, appropriateness of self-confidence and self-efficacy beliefs, taking responsibility for one's learning process
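To make one of the procedural knowledge examples (subtype CA, number conversions) concrete, a minimal sketch follows. Python, the function name, and the repeated-division procedure shown are illustrative choices, not part of the original table.

```python
def decimal_to_binary(n: int) -> str:
    """Convert a non-negative decimal integer to its binary representation
    by repeated division by two (a typical CA-level procedure)."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # remainder is the next lowest-order bit
        n //= 2
    return "".join(reversed(bits))

# Executing this familiar procedure corresponds to cognitive process
# dimension 3.1 ("executing") in Table 10.3:
assert decimal_to_binary(13) == "1101"
```

Knowing such a procedure places a learner in the procedural knowledge dimension; knowing when a binary representation is the appropriate tool would fall under subtype CC.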

Table 10.3 AKT cognitive process dimensions adapted to introductory programming (Kiesler 2020a,b, 2022b)

1. REMEMBER
1.1 Recognizing (Identifying): Locate long-term memory knowledge that is consistent with presented material (e.g., recognize dates of important events: when the computer was invented and by whom)
1.2 Recalling (Retrieving): Retrieving relevant knowledge from long-term memory (e.g., recall dates of important events in the history of computer science)

2. UNDERSTAND
2.1 Interpreting (Clarifying, paraphrasing, representing, translating): Changing from one form of presentation (numerical, written) to another (verbal) (e.g., paraphrasing literature, documentation, manuals; describe written tasks orally)
2.2 Exemplifying (Illustrating, instantiating): Demonstrate a specific example or illustration of a concept or principle (e.g., explain stack as an example of abstract data types)
2.3 Classifying (Categorizing, subsuming): Determine membership of a category or class (e.g., to a class of data types: classify integer as primitive data type)
2.4 Summarizing (Abstracting, generalizing): Abstracting a general topic or motif (e.g., summarize core concepts of a programming paradigm)
2.5 Inferring (Concluding, extrapolating, interpolating, predicting): Draw logical conclusions from presented information (e.g., logically linking the states of two Boolean variables and predicting the resulting truth values)
2.6 Comparing (Contrasting, mapping, matching): Recognizing similarities between two ideas, objects, or concepts (e.g., comparison of data types, data structures, or algorithms; compare Von-Neumann and Harvard architecture)
2.7 Explaining (Constructing models): Construction of a model of the cause and effect of a system (e.g., different memory consumption and runtime of iteration vs. recursion, construct Von-Neumann architecture)

3. APPLY
3.1 Executing (Carrying out): Applying a procedure to a familiar task (e.g., convert decimal numbers into binary numbers, octal numbers, hexadecimal numbers)
3.2 Implementing (Using): Applying a procedure to an unfamiliar task (e.g., apply quality criteria and programming conventions to own source code)

4. ANALYZE
4.1 Differentiating (Discriminating, distinguishing, focusing, selecting): Distinguish between relevant and irrelevant or important and unimportant parts of presented material (e.g., dissect a problem description and select relevant aspects for solving the problem)
4.2 Organizing (Finding coherence, integrating, outlining, parsing, structuring): Determine how elements fit together or function within a structure (e.g., parsing components of foreign code and thereby outline its function)
4.3 Attributing (Deconstructing): Determine a point of view, bias, values, or intent underlying presented material (e.g., deconstruct foreign code with regard to how traceable, readable, and maintainable it was implemented)

(continued)


10 Summarizing and Reviewing the Components of Programming Competency

Table 10.3 (continued)

5. EVALUATE
5.1 Checking (Coordinating, detecting, monitoring, testing): Detection of inconsistencies in processes or products, recognition of the effectiveness of a process during implementation (e.g., testing of algorithms and programs for correctness and characteristics, troubleshooting in programs, taking responsibility for learning success)
5.2 Critiquing (Judging): Identification of inconsistencies between a product and external criteria or standards, recognizing positive and negative properties of a product, recognizing the appropriateness of a procedure for a problem (e.g., judging which algorithm or program is the most appropriate to solve a particular problem, assessing the complexity of algorithms)

6. CREATE
6.1 Generating (Hypothesizing): Presenting problems in a new way, thereby creating alternative hypotheses and possibilities for solving problems, crossing the boundaries of previous knowledge and theories (e.g., modeling problems, compiling various new algorithms to solve a problem (iteration vs. recursion), transfer of knowledge to new problems and their solutions)
6.2 Planning (Designing): Consciously or subconsciously developing a suitable procedure to fulfill a task, and defining sub-goals and work steps if necessary (e.g., modeling a program, designing algorithms, designing data structures, designing interfaces)
6.3 Producing (Constructing): Trace problems according to a work plan for problem-solving and thus invent or develop a product that meets the requirements (e.g., write executable solutions for problems in a programming language, program GUIs, develop a systematic approach for problem-solving)
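The contrast between iteration and recursion referenced in rows 2.7 and 6.1 can be sketched as follows. The Python code and function names are illustrative assumptions; the point is that a learner at the "explaining" level should be able to model why both variants compute the same result while the recursive one consumes one stack frame per call.

```python
def sum_iterative(n: int) -> int:
    """Sum 1..n with a loop: constant memory, one accumulator variable."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_recursive(n: int) -> int:
    """Sum 1..n recursively: one stack frame per call, i.e., linear memory."""
    if n == 0:
        return 0
    return n + sum_recursive(n - 1)

# Both variants agree on the result; they differ in memory consumption.
assert sum_iterative(100) == sum_recursive(100) == 5050
```

Comparing the two variants (2.6), explaining their differing memory behavior (2.7), and choosing between them for a new problem (6.1) would each exercise a different cognitive process dimension on the same material.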

Buck and Stucki (2000) employed a similar approach when adapting Bloom’s taxonomy as a basis for a discipline-specific pedagogy. In their work, they emphasize the hierarchical nature and sequence of competencies in conjunction with a gradual, step-by-step, example-driven approach towards learning. Likewise, the AKT allows for a granular classification of competencies. In general, it seems natural to address less complex cognitive competencies early in the curricula before targeting the most complex ones (Nelson et al. 2020). In the continuum of cognitive complexity, however, it may not be possible to distinguish all competencies at a fine-grained level, especially if we assume the holistic nature of competency (Raj et al. 2021a,b). Nonetheless, the classification of programming competencies holds considerable potential. For example, it can support educators in making decisions regarding the sequence of exercises, learning processes, and addressed learning outcomes. In this context, Lister et al. (2009) discuss the relationship between the tasks of explaining, tracing, and writing program code. Thus, the sequence of respective tasks and learning activities can be crucial for learners.
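A task sequence in the spirit of Lister et al. (2009) might, for instance, present a snippet like the following (an illustrative example, not taken from their study) and ask learners first to trace its return value, then to explain its purpose in one sentence, and only later to write similar code themselves:

```python
def mystery(values):
    """A typical trace/explain target: what does this function do?"""
    result = values[0]
    for v in values[1:]:
        if v > result:
            result = v
    return result

# Tracing: predict the return value step by step (3 -> 7 -> 9).
# Explaining: "the function returns the largest element of the list."
assert mystery([3, 7, 2, 9, 4]) == 9
```

Ordering the tasks as trace, explain, write moves the learner from less to more cognitively complex activities on the same piece of code.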


The synopsis of cognitive programming competencies in Table 10.1 revealed that several competencies have counterparts in higher or lower cognitive process dimensions or in other knowledge dimensions. These counterparts become even more apparent in the comprehensive list of clustered competency units provided in Sect. 10.1. The results of this study can thus contribute to educators’ awareness of the cognitively demanding nature of introductory programming, and help them develop respective tasks across all cognitive process dimensions, not just the most complex ones. The context-specific overview and definition of cognitive process dimensions in Tables 10.2 and 10.3 can further support the sequencing of tasks and a move towards formative assessments with gradually increasing complexity (Kiesler 2020b, 2022b). Regarding the didactic design of teaching and learning processes and assessments, the AKT and its adaptation for introductory programming offer opportunities for improvement. Biggs’ (1996) theory of constructive alignment describes a pedagogical concept in which educators align assessments, in-class practice, and exercises with educational outcomes, and thus with competency goals such as those within the curricula. This constructivist concept has started to gain traction in computing and introductory programming education (Gaspar and Langevin 2012; Cain and Woodward 2013; Cain and Babar 2016). Cain and Babar (2016), for instance, recommend increasingly focusing on learners’ needs and moving away from teacher-centered approaches in basic programming education. Instead, they emphasize guiding and supporting learners and using formative feedback, which could eventually replace grades. Portfolio-based assessments may constitute a starting point for novice learners of programming to document and reflect upon their learning processes and progress. With the help of feedback, learners may then be able to gradually develop their programming competencies, starting with less complex competencies in the factual, conceptual, and procedural knowledge dimensions. Online learning environments (Jeuring et al. 2022a,b; Kiesler 2022a, 2023) or large language models (Prather et al. 2023; Kiesler et al. 2023b; Kiesler and Schiffner 2023a) may offer this feedback in the near future, even in large-scale courses.

References

ACM, Joint Task Force on Computing Curricula, Computer Science Curricula 2013: Curriculum Guidelines for Undergraduate Degree Programs in Computer Science (Association for Computing Machinery, New York, 2013)
L.W. Anderson, D.R. Krathwohl, P.W. Airasian, K.A. Cruikshank, R.E. Mayer, P.R. Pintrich, J. Raths, M.C. Wittrock, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives (Addison Wesley Longman, New York, 2001)
J. Biggs, Enhancing teaching through constructive alignment. High. Educ. 32(3), 347–364 (1996)
J. Biggs, C. Tang, Teaching for Quality Learning at University (McGraw-Hill Education, New York, 2011)
B.S. Bloom, Taxonomy of Educational Objectives: The Classification of Educational Goals. Cognitive Domain (Longman, New York, 1956)
D. Buck, D.J. Stucki, Design early considered harmful: graduated exposure to complexity and structure based on levels of cognitive development, in Proceedings of the Thirty-First SIGCSE Technical Symposium on Computer Science Education, SIGCSE’00 (Association for Computing Machinery, New York, 2000), pp. 75–79
A. Cain, C.J. Woodward, Examining student reflections from a constructively aligned introductory programming unit, in Proceedings of the Fifteenth Australasian Computing Education Conference - Volume 136, ACE’13 (Australian Computer Society, Sydney, 2013)
A. Cain, M.A. Babar, Reflections on applying constructive alignment with formative feedback for teaching introductory programming and software architecture, in Proceedings of the 38th International Conference on Software Engineering Companion, ICSE’16 (ACM, New York, 2016), pp. 336–345
A. Clear, A. Parrish, P. Ciancarini, S. Frezza, J. Gal-Ezer, J. Impagliazzo, A. Pears, S. Takada, H. Topi, G. van der Veer, A. Vichare, L. Waguespack, P. Wang, M. Zhang, Computing Curricula 2020 (CC2020): paradigms for future computing curricula. Technical Report, Association for Computing Machinery/IEEE Computer Society, New York (2020). http://www.cc2020.net/
S. Frezza, S. Adams, Bridging professionalism: dispositions as means for relating competency across disciplines, in Proceedings - Frontiers in Education Conference, FIE, vol. 2020 (2020)
A. Gaspar, S. Langevin, An experience report on improving constructive alignment in an introduction to programming. J. Comput. Sci. Coll. 28(2), 132–140 (2012)
GI, Empfehlungen für Bachelor- und Masterprogramme im Studienfach Informatik an Hochschulen. Online publication (2016). https://dl.gi.de/handle/20.500.12116/2351. Accessed 23 Nov 2023
J. Impagliazzo, N. Kiesler, A.N. Kumar, B. MacKellar, R.K. Raj, M. Sabin, Perspectives on dispositions in computing competencies, in Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education Vol. 2, ITiCSE’22 (ACM, New York, 2022), pp. 662–663
J. Jeuring, H. Keuning, S. Marwan, D. Bouvier, C. Izu, N. Kiesler, T. Lehtinen, D. Lohr, A. Petersen, S. Sarsa, Steps learners take when solving programming tasks, and how learning environments (should) respond to them, in Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education Vol. 2, ITiCSE’22 (Association for Computing Machinery, New York, 2022a), pp. 570–571
J. Jeuring, H. Keuning, S. Marwan, D. Bouvier, C. Izu, N. Kiesler, T. Lehtinen, D. Lohr, A. Petersen, S. Sarsa, Towards giving timely formative feedback and hints to novice programmers, in Proceedings of the 2022 Working Group Reports on Innovation and Technology in Computer Science Education, ITiCSE-WGR’22 (Association for Computing Machinery, New York, 2022b), pp. 95–115
J.M. Keller, Motivational design of instruction. Instr. Des. Theor. Models Overview Curr. Status 1, 383–434 (1983)
J.M. Keller, Motivational Design Research and Development (Springer, Berlin, 2010)
N. Kiesler, Kompetenzmodellierung für die grundlegende Programmierausbildung – Eine kritische Diskussion zu Vorzügen und Anwendbarkeit der Anderson Krathwohl Taxonomie im Vergleich zum Kompetenzmodell der GI, ed. by R. Zender, D. Ifenthaler, T. Leonhardt, C. Schumacher, DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., online, 14.–18. September 2020. Lecture Notes in Informatics, vol. P-308 (Gesellschaft für Informatik e.V., 2020a), pp. 187–192
N. Kiesler, On programming competence and its classification, in Proceedings of the 20th Koli Calling International Conference on Computing Education Research, Koli Calling’20 (Association for Computing Machinery, New York, 2020b)
N. Kiesler, Towards a competence model for the novice programmer using Bloom’s revised taxonomy – an empirical approach, in Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE’20 (Association for Computing Machinery, New York, 2020c), pp. 459–465
N. Kiesler, Zur Modellierung und Klassifizierung von Kompetenzen in der grundlegenden Programmierausbildung anhand der Anderson Krathwohl Taxonomie. CoRR abs/2006.16922 (2020d). https://arxiv.org/abs/2006.16922
N. Kiesler, An exploratory analysis of feedback types used in online coding exercises. CoRR abs/2206.03077v2 (2022a). https://doi.org/10.48550/arXiv.2206.03077
N. Kiesler, Kompetenzförderung in der Programmierausbildung durch Modellierung von Kompetenzen und informativem Feedback. Dissertation, Johann Wolfgang Goethe-Universität, Frankfurt am Main, Fachbereich Informatik und Mathematik (2022b)
N. Kiesler, Reviewing constructivist theories to help foster creativity in programming education, in 2022 IEEE Frontiers in Education Conference (FIE) (2022c), pp. 1–5
N. Kiesler, Investigating the use and effects of feedback in CodingBat exercises: an exploratory thinking aloud study, in 2023 Future of Educational Innovation Workshop Series: Data in Action (2023), pp. 1–12
N. Kiesler, J. Impagliazzo, Industry’s expectations of graduate dispositions, in 2023 IEEE Frontiers in Education Conference (FIE) (2023), pp. 1–5
N. Kiesler, B. Pfülb, Higher education programming competencies: a novel dataset, in Artificial Neural Networks and Machine Learning – ICANN 2023. Lecture Notes in Computer Science (Springer, Cham, 2023)
N. Kiesler, D. Schiffner, Large language models in introductory programming education: ChatGPT’s performance and implications for assessments. CoRR abs/2308.08572 (2023a). https://doi.org/10.48550/arXiv.2308.08572
N. Kiesler, B.K. MacKellar, A.N. Kumar, R. McCauley, R.K. Raj, M. Sabin, J. Impagliazzo, Computing students’ understanding of dispositions: a qualitative study, in Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education Vol. 1, ITiCSE 2023 (Association for Computing Machinery, New York, 2023b)
N. Kiesler, D. Lohr, H. Keuning, Exploring the potential of large language models to generate formative programming feedback, in 2023 IEEE Frontiers in Education Conference (FIE) (2023a), pp. 1–5
R. Lister, C. Fidge, D. Teague, Further evidence of a relationship between explaining, tracing and writing skills in introductory programming, in Proceedings of the 14th Annual ACM SIGCSE Conference on Innovation and Technology in Computer Science Education, ITiCSE’09 (Association for Computing Machinery, New York, 2009), pp. 161–165
B.K. MacKellar, N. Kiesler, R.K. Raj, M. Sabin, R. McCauley, A.N. Kumar, Promoting the dispositional dimension of competency in undergraduate computing programs, in 2023 ASEE Annual Conference & Exposition (ASEE Conferences, 2023). https://peer.asee.org/43018
R. McCauley, M. Sabin, A.N. Kumar, N. Kiesler, B. MacKellar, R.K. Raj, J. Impagliazzo, Using vignettes to elicit students’ understanding of dispositions in computing education, in 2023 IEEE Frontiers in Education Conference (FIE) (2023), pp. 1–5
M. McCracken, V. Almstrum, D. Diaz, M. Guzdial, D. Hagan, Y.B.D. Kolikant, C. Laxer, L. Thomas, I. Utting, T. Wilusz, A multi-national, multi-institutional study of assessment of programming skills of first-year CS students, in Working Group Reports from ITiCSE on Innovation and Technology in Computer Science Education, ITiCSE-WGR’01 (ACM, New York, 2001), pp. 125–180
G.L. Nelson, F. Strömbäck, A. Korhonen, I. Albluwi, M. Begum, B. Blamey, K.H. Jin, V. Lonati, B. MacKellar, M. Monga, Assessing how pre-requisite skills affect learning of advanced concepts, in Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE’20 (ACM, New York, 2020), pp. 506–507
D.N. Perkins, E. Jay, S. Tishman, Beyond abilities: a dispositional theory of thinking. Merrill-Palmer Quart. 39(1), 1–21 (1993)
J. Piaget, Play, Dreams and Imitation in Childhood (Norton, New York, 2013)
J. Prather, P. Denny, J. Leinonen, B.A. Becker, I. Albluwi, M.E. Caspersen, M. Craig, H. Keuning, N. Kiesler, T. Kohn, A. Luxton-Reilly, S. MacNeil, A. Petersen, R. Pettit, B.N. Reeves, J. Savelka, Transformed by transformers: navigating the AI coding revolution for computing education: an ITiCSE working group conducted by humans, in Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 2, ITiCSE 2023 (Association for Computing Machinery, New York, 2023), pp. 561–562
R.K. Raj, M. Sabin, J. Impagliazzo, D. Bowers, M. Daniels, F. Hermans, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, S.W. Nabi, M. Oudshoorn, Toward practical computing competencies, in Proceedings of the 26th ACM Conference on Innovation and Technology in Computer Science Education V. 2, ITiCSE’21 (Association for Computing Machinery, New York, 2021a), pp. 603–604
R.K. Raj, M. Sabin, J. Impagliazzo, D. Bowers, M. Daniels, F. Hermans, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, S.W. Nabi, M. Oudshoorn, Professional competencies in computing education: pedagogies and assessment, in Proceedings of the 2021 Working Group Reports on Innovation and Technology in Computer Science Education, ITiCSE-WGR’21 (Association for Computing Machinery, New York, 2021b), pp. 133–161
M. Sabin, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, R.K. Raj, J. Impagliazzo, Fostering dispositions and engaging computing educators, in Proceedings of the 54th ACM Technical Symposium on Computer Science Education Vol. 2, SIGCSE 2023 (Association for Computing Machinery, New York, 2023)
D.L. Schussler, Defining dispositions: wading through murky waters. Teacher Edu. 41(4), 251–268 (2006)
T. Scott, Bloom’s taxonomy applied to testing in computer science classes. J. Comput. Sci. Coll. 19(1), 267–274 (2003)
L.S. Vygotsky, Thought and Language (MIT Press, Cambridge, 1962)

Part V

Wrap Up

The last part of this book contains a summary of the conducted work. Chapter 11 therefore wraps up the study, which comprised two data gathering and analysis methods: (1) a qualitative content analysis of 129 programming modules from 35 German university curricula and (2) a summarizing content analysis of seven guided interviews with experts. The last chapter summarizes the most important findings regarding cognitive and other programming competencies, including dispositions. It further presents the derived implications for introductory programming education at universities. Finally, a pathway to future work is outlined.

Chapter 11

Conclusion

11.1 Brief Summary of Results

A summary of key findings answering the overall and subordinate research questions is presented next. This summary highlights the components of programming competency currently expected as learning objectives in introductory programming education as part of computer science degree programs in Germany. The methodology employed in this research involved a qualitative content analysis of representative programming modules from the first three to four semesters of undergraduate computer science. Moreover, guided interviews with experts in programming education were conducted to enrich the study’s findings and help identify additional, less explicit programming competencies expected from novice learners. The classification of cognitive programming competencies into the Anderson Krathwohl Taxonomy and its dimensions constituted the basis for assessing the AKT’s adequacy for programming education. Its general suitability and adaptation to the specific context of programming answer the respective research question. As part of the last subordinate research questions, factors promoting and preventing the development of programming competencies were subject to research. These influential factors were investigated through the expert lens. The relevant categories are summarized in the following.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
N. Kiesler, Modeling Programming Competency, https://doi.org/10.1007/978-3-031-47148-3_11

11.1.1 Competencies Expected from Novice Programmers

Based on the respective data collection and analyses, cognitive competencies expected in basic programming education were identified and classified according to the Anderson Krathwohl Taxonomy (Anderson et al. 2001). All cognitive components of programming competencies were classified within the cognitive process dimensions and knowledge dimensions and thus incorporated into the AKT,


thereby answering research question 1.a. The classification of numerous programming competencies within dimensions indicating high cognitive complexity (e.g., “analyzing”, “evaluating”, and “creating”) in particular highlighted the considerably high cognitive demands placed on novice learners (see Table 10.1). It should be noted that a substantial number of competency goals in the curricula (245 coding units) were coded as “non-operationalized”, thus not allowing a categorization within the AKT. Moreover, this means that not all higher education institutions within the sample focus on and implement the concept of competency (Clear et al. 2020; Frezza et al. 2018; Raj et al. 2021a) as part of their curricular objectives (yet). Nonetheless, the majority of cognitive programming competencies were classified into AKT dimensions. Most of them were assigned to the procedural knowledge dimension. In addition, many meta-cognitive competencies were identified in the data. These meta-cognitive competencies were classified within the two most challenging cognitive process dimensions, “evaluating” and “creating”. Competencies within less complex cognitive process dimensions and the factual and conceptual knowledge dimensions appeared less frequently and less explicitly in the curricula data. This does not mean, however, that they are not required or less important for introductory programming education. The AKT dimensions are characterized by their inclusive nature, which may be a reason for the curricula’s focus on the most complex competency objectives: institutions seem to aim for more complex competencies without explicitly listing less complex ones. Interestingly, the experts revealed a few additional competencies in less complex cognitive process and knowledge dimensions. Another explanation is thus that expectations towards novice learners of programming are unrealistic, as concluded in other studies (Winslow 1996; McCracken et al. 2001; Luxton-Reilly 2016; Luxton-Reilly et al. 2018).

Competency goals within the first three to four semesters of computing are likely set too high. Less cognitively complex competencies seem to be omitted from the curricula documents. Even though the experts added some competency objectives within less complex AKT dimensions during the interviews, the majority of expected competencies imply that students should be able to construct, design, and create entirely new solutions to problems. In addition, novices are expected to transfer their knowledge, strategies, competencies, and experiences about themselves and programming to previously unknown problems, tasks, and even programming languages within a few weeks or months.

In addition to cognitive programming competencies, other competency components were explored and identified in the data, answering research question 1.b. Module descriptions revealed social-communicative competencies such as cooperating and collaborating with others, communicating, and organizing team and project work. Moreover, the affective goal regarding sensitivity towards IT security


was identified in the data. Another explicit goal was to gain programming practice and experience. The guided interviews with experts revealed further competency components, among them dispositions and references to prior knowledge and experience. All in all, these other competencies and competency components refer to students’ attitudes, self-competencies, gaining programming practice and experience, as well as social-communicative competencies. The attitude category in particular reflects the human component of competency (Fink 2013; Raj et al. 2021a,b; Kiesler et al. 2023), as it includes the following subcategories: enthusiasm for problem-solving and computer science, purpose-driven behavior despite frustrating tasks, willingness to learn, openness and flexibility, voluntary, active, and regular participation in classes, as well as joy and pride over achieved results. Other components related to the whole person include students’ confidence in their abilities, and thus academic self-efficacy, and creativity, both of which the experts consider important for successful problem-solving (Kiesler 2020b,c, 2022a).

The data revealed a great number of programming competencies other than cognitive ones, indicating their relevance to student learning and success in introductory programming courses. The experts especially stressed students’ attitudes and dispositions as important components of developing programming competency. This highlights that learning (to program) involves the whole person, and that it takes more than knowledge and skills to become a successful programmer.

Gaining programming experience is further categorized as a relevant method supporting the development of programming competency. It seems that only time-intensive practice with new tasks and problems helps students develop strategies for problem-solving. The acquired strategies can eventually be transferred to other tasks and problems.

Programming practice and experience are considered crucial factors supporting the development of cognitive and meta-cognitive programming competencies.

11.1.2 Adequacy of the Anderson Krathwohl Taxonomy for Programming Education

The classification of cognitive competencies along the knowledge and cognitive process dimensions of the Anderson Krathwohl Taxonomy (Anderson et al. 2001)


demonstrates its general adequacy in the context of introductory programming education. The categorization of all cognitive programming competencies in Table 10.1 answers the research question (RQ 1.c) on the extent to which the AKT can be applied. As expected, competencies other than cognitive cannot be classified within the AKT. Classifying programming competencies with the help of the AKT as an established pedagogical framework led to further conclusions about its adequacy for the specifics of introductory programming education. For example, the original definitions and examples of the types and subtypes related to the knowledge dimensions proved difficult to transfer and apply to the context of programming. Similar challenges were noted for the definitions and examples of the cognitive process dimensions (Anderson et al. 2001). Their highly abstract, context-independent descriptions challenge the application of the AKT to the programming context, and perhaps to computing in general, as illustrated by the misinterpretations of the GI’s competency model (GI 2016). For these reasons, a context-specific, adapted guideline was developed for introductory programming education. It is intended to help educators classify the programming competency objectives expected in their courses. Table 10.2 presents the knowledge dimensions of the AKT with definitions and examples adapted to reflect programming competencies. The examples originate from the inductively built categories of the data analyses, and thus from the curricula data and interview transcripts. Similarly, Table 10.3 shows the cognitive process dimensions adapted for the context of programming education, supplemented with context-specific definitions and examples.

The adaptation of the AKT to the programming context has the potential to raise educators’ awareness of the demanding nature of the expected competencies. It may also help to increase their understanding of the highly complex cognitive process and knowledge dimensions of most of their learning objectives in introductory programming. A greater awareness of the complex expectations towards novice learners of programming can help educators design teaching and learning activities, and assessments. They can, for example, reconsider the sequences of tasks, and introduce a more gradual increase of cognitive complexity while allowing more time for practice and gaining experience.

11.1.3 Factors Influencing Students’ Competency Development

The last two research questions (RQ 1.d and 1.e) aimed to explore factors preventing and promoting the development of students’ programming competencies. To answer the respective


research questions, interviews with experts were used as a data basis. By examining the interview transcripts and the experts’ replies to the dedicated questions, external factors and challenging programming competencies were identified. The interviews further revealed how educators and institutions try to support CS students in learning to program. For example, the experts classify meta-cognitive competencies within the two most complex cognitive process dimensions (i.e., “evaluating” and “creating”) as challenging for learners. The transfer of knowledge and experiences to new tasks and programming languages, the abstraction of rules, the independent organization of learning processes, and systematic problem-solving were provided as examples of students’ repeated struggles. Experts further note difficulties related to students’ competency development in the procedural knowledge dimension, which is generally considered challenging.

Experts reflected on challenging cognitive programming competencies within the procedural and meta-cognitive dimensions, but also on the development of competencies other than cognitive, including dispositions. Self-competencies, attitude, and mindset, as well as social-communicative competencies, are considered demanding by the experts. These observations lead to the recommendation to explicitly address these competencies in curricula, teaching and learning activities, and assessments. Moreover, developing new modules aimed at facilitating these other competency components may be beneficial for novice learners of programming.

Another aspect complicating students’ competency development is that most contents, skills, and competencies build upon each other. Novice students are thus expected to devote their full attention to their studies right from the start. Students may not be fully aware of this implicit expectation, or of the lack of support available when they try to catch up later. In their first semester or year, novice students still need to manage their livelihood, new housing, or financial situation. If students need to work, their time resources are further limited. For many students, the decision to come to class depends on these matters, and missing classes further hinders learning and gaining programming practice and experience.

At the same time, educators revealed many measures aiming at the support of students’ competency development. These include structural measures, such as small practical sessions and tutorials, as well as additional support courses for low-performing students or those without prior knowledge, skills, and experience in programming. Furthermore, the experts use many pedagogical design measures while planning their courses. These pedagogical measures predominantly aim to highlight the importance of programming practice and gaining more experience. For example, some experts do not publish model solutions, or they assign mandatory tasks. Other experts use gamification or bonus points as incentives for active
participation. Others focus on providing practical, relevant tasks, or integrate up-to-date tools into their courses and assignments. According to the experts, the main goal of these measures is to promote students’ attitudes and mindset, social-communicative competencies, and self-competencies. During face-to-face class sessions, educators try to interact and communicate with students, explain abstract concepts, motivate them, solve problems together, and create a positive learning atmosphere.

Despite educators taking several measures (e.g., structural measures, pedagogical design, and interactions) to help students succeed in introductory programming education, they cannot be expected to reach and impact all learners. This is partly due to the increasing heterogeneity of students. However, these measures reflect the educators’ strong focus on helping students exercise and gain programming experience, thereby developing programming competencies. Finally, educators seem to be painfully aware of students’ struggles, which is likely why they attempt so many (in part contradictory) measures to support students’ progress.

11.2 Conclusions

Competency, defined as the sum of knowledge, skills, and dispositions applied in unfamiliar situations and tasks, is the central concept of the present research. It integrates motivational and volitional aspects related to the whole person (Weinert 2001a; Clear et al. 2020; Fink 2013; Raj et al. 2021b,a). This understanding of competency is explored and applied to the context of introductory programming education, particularly to German higher education and the first three to four semesters of selected CS study programs.

As the data collection and analysis of curricula and expert interviews show, the competency to program is extraordinarily complex, comprising numerous other competencies and requiring substantial time and effort to practice and gain experience. The involved cognitive competencies align with the most demanding cognitive process dimensions according to Anderson et al. (2001). They are also mostly classified within the procedural and meta-cognitive knowledge dimensions. However, programming requires not only cognitive competencies, but also other competencies and competency components, such as purpose-drivenness, openness, flexibility, and creativity (Kiesler 2022b), to mention a few. Thus, expectations towards programming novices at German universities are high. Underestimating them can cause challenges of their own. Few other scientific disciplines, if any, demand the development of such complex competencies and the mastery of extensive new knowledge, skills, and dispositions
during the first few semesters of a study program. Many programming competencies build upon one another, as illustrated by the AKT and its continuum of cognitive competencies, meaning students can quickly fall behind or become overwhelmed by the rapidly escalating complexity of requirements in the first weeks and semesters.

Therefore, one of the conclusions refers to the design of the current curricula and module descriptions of the German universities and CS undergraduate study programs examined in the sample. Programming modules of the first three or four semesters are full of competency goals on the highest cognitive process dimensions according to Anderson et al. (2001). Even though these dimensions form an inclusive continuum, the question arises whether and to what extent less cognitively complex competencies receive too little attention in the curricula, the classroom, and the respective exercises and assignments. The experts highlighted the need to gain programming practice and experience. Yet seemingly simple tasks, such as writing viable, running code that solves a “simple” problem within a few lines, require knowledge, skills, and dispositions. The same applies to reading or tracing program code and predicting its output.

The conclusion is that the expectations and demands within introductory programming education are too high. This is no surprise. According to Winslow (1996), it takes about ten years of experience before a novice programmer becomes an expert. In addition, the programming educators interviewed as experts are well aware of their students’ challenges in developing certain competencies within the procedural and meta-cognitive knowledge dimensions. This was somewhat expected, since the desired competencies lie within the highest, most complex dimensions of cognitive processes (i.e., “evaluating” and “creating”).
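To make the point about “seemingly simple” tasks concrete, consider a hypothetical tracing exercise of the kind the experts describe (the snippet is illustrative and not drawn from the study’s data). Predicting its output already presupposes factual knowledge of syntax, conceptual knowledge of iteration and state, and the procedural skill of mentally simulating a program:

```python
# Hypothetical tracing task: predict the printed output before running.
# Even this short snippet requires knowing how assignment, loops, and
# list mutation interact -- knowledge, skills, and practice combined.

def running_sums(values):
    sums = []
    total = 0
    for v in values:
        total += v          # state changes on every iteration
        sums.append(total)  # running sum up to and including v
    return sums

print(running_sums([3, 1, 4]))  # a novice must trace this to [3, 4, 8]
```

A task like this sits in the lower cognitive process dimensions, yet it still demands sustained practice before novices can trace such state changes reliably.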
The interviewed experts added to the data many competencies expected from novice programmers, meaning several competency goals are not made explicit to students. Among them were self-competencies, students’ attitudes and dispositions, as well as social-communicative competencies, all of which seem to play a larger role in introductory programming education than previously known and communicated to students via, for example, the curricula. The necessity of these competencies contradicts the image of CS students and graduates as eccentric nerds (Jenkins 2002) who do not like to engage in social interactions, a negative image that may even deter prospective students from enrolling in computing courses. Another conclusion is thus the urgent need to revise curricula and information material for potential students. Such a revision includes, for example, the explicit identification of programming competencies at the lower cognitive process and knowledge dimensions, as well as the explicit integration of other, non-cognitive competencies and competency components, such as dispositions, into the curricula.

The curricula analysis further revealed that only a few CS study programs offer dedicated courses for students to acquire and practice learning strategies, self-management, presentation skills, or project work in groups. One approach is to include such classes early in undergraduate CS degrees so that students grasp the importance of the respective competencies and practice them. Another approach is to address these competencies in an integrated way, as part of programming competencies. The integrated approach requires explicit attention, instruction, and
new forms of assessment from educators. Another suggestion is to develop counseling or self-assessment formats for interested and freshly enrolled CS students to align their expectations with those of the institution and the respective educators.

Universities and educators already pursue multiple paths to address the numerous challenges of novices in programming courses. Good-practice examples include support courses for first-year students without prior knowledge as a structural measure, and the use of tools and systems providing (automated) feedback. In addition, educators try to motivate students, explain complex concepts, and foster a positive learning atmosphere. Above all, they implement measures to help students gain practice and programming experience, which is essential for students developing meta-cognitive competencies. All in all, educators’ efforts in the classroom are immense, and far too often they are accompanied by further challenges due to the lack of human and financial resources.

The results imply that changes to the curricula of CS study programs and introductory programming education are required. Introductory programming courses should explicitly address less cognitively complex competency goals in the curricula (e.g., reading and understanding program code). The same applies to dispositions regarding attitude and mindset, but also to social-communicative and self-competencies. These objectives should be addressed more explicitly in the classroom and in assignments. Introductory programming courses place too high and too complex expectations on novice learners. Therefore, the complexity of concepts and contents should be reduced so that educators can dedicate more time to practice. Students may then develop fewer competencies, but they may actually develop them. We know that becoming a programming expert takes a decade, and it seems impossible to expect students to reach that stage within one or two years. Finally, more resources are required to support initiatives such as pre-study counseling and support courses for low-performing students or those without prior knowledge and programming experience.

11.3 Future Work

This research aimed at the identification and classification of basic programming competencies addressed in introductory programming courses at higher education institutions. For this purpose, computer science curricula and the educators’ perspectives were gathered and analyzed. In addition to the identification and classification of relevant programming competencies, several recommendations regarding the orientation toward competency-based CS education were derived. These include, for example, the explicit inclusion of less cognitively complex programming
competencies and dispositions into curricula, courses, learning activities, and assessments, so that students can gradually approach complex tasks, problems, and the respective competencies via scaffolding (Kiesler 2022a). Scientific evaluations should accompany such efforts and their impact on novice programmers.

Computer Science study programs still face high dropout rates (Heublein et al. 2018, 2020). The same is true for introductory programming. In this context, it seems reasonable to increase educators’ awareness of cognitively demanding teaching and learning objectives and to experiment with new approaches granting students more time to practice programming and gain experience. One option is reconsidering curricula and reducing their complexity and contents. Similarly, the complexity of competencies expected in the first semester can be reduced by limiting the competency objectives for novices to the cognitive process dimensions of “applying” or “analyzing” (e.g., reading and understanding (foreign) source code). Offering more supervised practice hours, or supportive tools providing automated feedback (i.e., via learning environments or systems, or even Large Language Models (Kiesler and Schiffner 2023a)), may be additional measures. Structural support measures in particular seem to provide opportunities for more comprehensive student support and more individual feedback. However, the human and financial resources required for such projects can be another challenge. Regardless, future research should accompany and evaluate such attempts.

Due to the focus on normative, empirical, and above all qualitative data, actual teaching practices at universities have not been considered in this work. Therefore, several research questions remain unanswered. For example, which competency objectives are addressed in the classroom and in assignments, and which programming competencies are assessed? These questions can guide future work.
The competencies identified in this work can certainly be expanded by, for example, gathering respective data from educators. Moreover, this research can be replicated and extended with a larger sample, including universities from other countries or regions. This is especially feasible since the curricula data has been published (Kiesler 2020a,d) and processed into a machine-readable format (Kiesler and Pfülb 2023). Such efforts would help validate the identified programming competencies and their classification. An exemplary scenario is a joint project of several universities that gathers and analyzes a large corpus of curricula, programming exercises, or exam items. Building such a large-scale pool of data may constitute the basis for investigating the following research questions:

• Which other, additional competencies are expected in higher education programming courses for novices?
• To what extent are the programming competencies identified in this work reflected in programming exercises, course assignments, and assessments?
• To what extent can the cognitive complexity of the classified competencies be validated by measuring the task difficulty of individual items?

To answer these questions, data from more curricula and experts may be gathered and analyzed. Another interesting option is the analysis of task collections or teaching materials (e.g., from textbooks) to supplement the programming competencies
identified in this work. This methodology, however, assumes the availability of respective (research) data (Kiesler and Schiffner 2023b, 2022).

The identified and classified programming competencies can thus serve as a basis for developing a complete competency model for programming, including dispositions (Impagliazzo et al. 2022; Kiesler et al. 2023; Kiesler and Impagliazzo 2023; Kumar et al. 2023; MacKellar et al. 2023; McCauley et al. 2023; Sabin et al. 2023). Such a model can be applied, for example, in learning environments or learning management systems modeling programming competency. In particular, the competencies associated with a certain task can be displayed to the user. Such a display can help illustrate students’ progress or provide feedback about successfully mastered exercises and competencies, errors, or learning paths by suggesting adequate follow-up tasks. The alignment of feedback to selected programming competencies is yet another field with multiple pathways for future work (Jeuring et al. 2022a,b; Kiesler 2023). In addition, there is a need to expand the competency model from basic programming education to computer science as a whole. Still, many curricula contain lists of contents, so the development of competency-based curricula recommendations (i.e., CS2023) is overdue. Respective curricula recommendations and research on their implementation could further improve the permeability between higher education and other educational systems, such as vocational training in Germany (Kiesler and Thorbrügge 2022), and how well they align with industry’s expectations (Kiesler and Thorbrügge 2023; Kiesler and Impagliazzo 2023).

Finally, the adapted AKT cognitive process and knowledge dimensions, with their definitions, examples, types, and subtypes, can offer support to educators who classify their competency objectives in introductory programming courses.
Future work may also be concerned with the evaluation of the newly adjusted guidelines, to investigate whether and to what extent they ease the application of the AKT and the classification of cognitive competency goals. A respective study may introduce the new manual to introductory programming educators and interview them after they have used the proposed guidelines to structure tasks or courses.
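The learning-environment application outlined earlier (displaying the competencies associated with a task) could be sketched as a minimal data model. The following Python fragment is a hypothetical illustration; its class and field names are assumptions made for this sketch, not part of the competency model developed in this book:

```python
# Hypothetical sketch: tagging a programming task with competency
# metadata so a learning environment can display it to students.
# All names here are illustrative assumptions, not the book's model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaggedTask:
    title: str
    cognitive_process: str    # AKT process dimension, e.g. "analyzing"
    knowledge_dimension: str  # AKT knowledge dimension, e.g. "procedural"
    competencies: List[str] = field(default_factory=list)

    def summary(self) -> str:
        """One-line display for a learning environment's task view."""
        comps = ", ".join(self.competencies)
        return f"{self.title} [{self.cognitive_process}/{self.knowledge_dimension}]: {comps}"

task = TaggedTask(
    title="Trace a loop and predict its output",
    cognitive_process="analyzing",
    knowledge_dimension="procedural",
    competencies=["reading code", "tracing program state"],
)
print(task.summary())
```

Displaying such tags next to each exercise could support the progress feedback and follow-up-task suggestions discussed above, and the same metadata could feed learning-path recommendations.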

References

L.W. Anderson, D.R. Krathwohl, P.W. Airasian, K.A. Cruikshank, R.E. Mayer, P.R. Pintrich, J. Raths, M.C. Wittrock, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives (Addison Wesley Longman, New York, 2001)
A. Clear, A. Parrish, P. Ciancarini, S. Frezza, J. Gal-Ezer, J. Impagliazzo, A. Pears, S. Takada, H. Topi, G. van der Veer, A. Vichare, L. Waguespack, P. Wang, M. Zhang, Computing Curricula 2020 (CC2020): Paradigms for Future Computing Curricula. Technical report (Association for Computing Machinery/IEEE Computer Society, New York, 2020). http://www.cc2020.net/
L.D. Fink, Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses (Wiley, New York, 2013)
S. Frezza, M. Daniels, A. Pears, Å. Cajander, V. Kann, A. Kapoor, R. McDermott, A.-K. Peters, M. Sabin, C. Wallace, Modelling competencies for computing education beyond 2020: a research based approach to defining competencies in the computing disciplines, in Proceedings Companion of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE 2018 Companion (Association for Computing Machinery, New York, 2018), pp. 148–174
GI, German Informatics Society, Empfehlungen für Bachelor- und Masterprogramme im Studienfach Informatik an Hochschulen (2016). Online publication
U. Heublein, R. Schmelzer, D. Sommer, Die Entwicklung der Studienabbruchquote an den deutschen Hochschulen. Technical report, Deutsches Zentrum für Hochschul- und Wissenschaftsforschung (DZHW), Hannover (2018)
U. Heublein, J. Richter, R. Schmelzer, Die Entwicklung der Studienabbruchquoten in Deutschland. DZHW Brief 3 (2020)
J. Impagliazzo, N. Kiesler, A.N. Kumar, B. MacKellar, R.K. Raj, M. Sabin, Perspectives on dispositions in computing competencies, in Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education Vol. 2, ITiCSE ’22 (Association for Computing Machinery, New York, 2022), pp. 662–663
T. Jenkins, On the difficulty of learning to program, in Proceedings of the 3rd Annual Conference of the LTSN Centre for Information and Computer Sciences, vol. 4. Citeseer (2002), pp. 53–58
J. Jeuring, H. Keuning, S. Marwan, D. Bouvier, C. Izu, N. Kiesler, T. Lehtinen, D. Lohr, A. Petersen, S. Sarsa, Steps learners take when solving programming tasks, and how learning environments (should) respond to them, in Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education Vol. 2, ITiCSE ’22 (Association for Computing Machinery, New York, 2022a), pp. 570–571
J. Jeuring, H. Keuning, S. Marwan, D. Bouvier, C. Izu, N. Kiesler, T. Lehtinen, D. Lohr, A. Petersen, S. Sarsa, Towards giving timely formative feedback and hints to novice programmers, in Proceedings of the 2022 Working Group Reports on Innovation and Technology in Computer Science Education, ITiCSE-WGR ’22 (Association for Computing Machinery, New York, 2022b), pp. 95–115
N. Kiesler, Kompetenzmodellierung für die grundlegende Programmierausbildung – Eine kritische Diskussion zu Vorzügen und Anwendbarkeit der Anderson Krathwohl Taxonomie im Vergleich zum Kompetenzmodell der GI, in DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., online, 14.–18. September 2020, ed. by R. Zender, D. Ifenthaler, T. Leonhardt, C. Schumacher, vol. P-308. LNI (Gesellschaft für Informatik e.V., 2020a), pp. 187–192
N. Kiesler, On programming competence and its classification, in Koli Calling ’20: Proceedings of the 20th Koli Calling International Conference on Computing Education Research (Association for Computing Machinery, New York, 2020b)
N. Kiesler, Towards a competence model for the novice programmer using Bloom’s revised taxonomy – an empirical approach, in Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE ’20 (Association for Computing Machinery, New York, 2020c), pp. 459–465
N. Kiesler, Zur Modellierung und Klassifizierung von Kompetenzen in der grundlegenden Programmierausbildung anhand der Anderson Krathwohl Taxonomie (2020d). CoRR abs/2006.16922. https://arxiv.org/abs/2006.16922
N. Kiesler, Kompetenzförderung in der Programmierausbildung durch Modellierung von Kompetenzen und informativem Feedback. Dissertation, Johann Wolfgang Goethe-Universität, Frankfurt am Main, Fachbereich Informatik und Mathematik (2022a)
N. Kiesler, Reviewing constructivist theories to help foster creativity in programming education, in 2022 IEEE Frontiers in Education Conference (FIE) (2022b), pp. 1–5
N. Kiesler, Investigating the use and effects of feedback in CodingBat exercises: an exploratory thinking aloud study, in 2023 Future of Educational Innovation-Workshop Series Data in Action (2023), pp. 1–12
N. Kiesler, J. Impagliazzo, Industry’s expectations of graduate dispositions, in 2023 IEEE Frontiers in Education Conference (FIE) (2023), pp. 1–5
N. Kiesler, B. Pfülb, Higher education programming competencies: a novel dataset, in Artificial Neural Networks and Machine Learning – ICANN 2023. Lecture Notes in Computer Science (Springer, Cham, 2023)
N. Kiesler, D. Schiffner, On the lack of recognition of software artifacts and IT infrastructure in educational technology research, in 20. Fachtagung Bildungstechnologien (DELFI), ed. by P.A. Henning, M. Striewe, M. Wölfel (Gesellschaft für Informatik e.V., Bonn, 2022), pp. 201–206
N. Kiesler, D. Schiffner, Large language models in introductory programming education: ChatGPT’s performance and implications for assessments (2023a). CoRR abs/2308.08572. https://doi.org/10.48550/arXiv.2308.08572
N. Kiesler, D. Schiffner, Why we need open data in computer science education research, in Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education Vol. 1, ITiCSE 2023 (Association for Computing Machinery, New York, 2023b)
N. Kiesler, C. Thorbrügge, A comparative study of programming competencies in vocational training and higher education, in Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education Vol. 1, ITiCSE ’22 (Association for Computing Machinery, New York, 2022), pp. 214–220
N. Kiesler, C. Thorbrügge, Socially responsible programming in computing education and expectations in the profession, in Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education Vol. 1, ITiCSE 2023 (Association for Computing Machinery, New York, 2023)
N. Kiesler, B.K. MacKellar, A.N. Kumar, R. McCauley, R.K. Raj, M. Sabin, J. Impagliazzo, Computing students’ understanding of dispositions: a qualitative study, in Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education Vol. 1, ITiCSE 2023 (Association for Computing Machinery, New York, 2023)
A.N. Kumar, R. McCauley, B. MacKellar, M. Sabin, N. Kiesler, R.K. Raj, J. Impagliazzo, Quantitative results from a study of professional dispositions, in Proceedings of the 54th ACM Technical Symposium on Computer Science Education, SIGCSE 2023 (Association for Computing Machinery, New York, 2023)
A. Luxton-Reilly, Learning to program is easy, in Proceedings of the 2016 ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE ’16 (Association for Computing Machinery, New York, 2016), pp. 284–289
A. Luxton-Reilly, Simon, I. Albluwi, B.A. Becker, M. Giannakos, A.N. Kumar, L. Ott, J. Paterson, M.J. Scott, J. Sheard, C. Szabo, Introductory programming: a systematic literature review, in Proceedings Companion of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education (ACM, New York, 2018), pp. 55–106
B.K. MacKellar, N. Kiesler, R.K. Raj, M. Sabin, R. McCauley, A.N. Kumar, Promoting the dispositional dimension of competency in undergraduate computing programs, in 2023 ASEE Annual Conference & Exposition. ASEE Conferences (2023). https://peer.asee.org/43018
R. McCauley, M. Sabin, A.N. Kumar, N. Kiesler, B. MacKellar, R.K. Raj, J. Impagliazzo, Using vignettes to elicit students’ understanding of dispositions in computing education, in 2023 IEEE Frontiers in Education Conference (FIE) (2023), pp. 1–5
M. McCracken, V. Almstrum, D. Diaz, M. Guzdial, D. Hagan, Y.B.-D. Kolikant, C. Laxer, L. Thomas, I. Utting, T. Wilusz, A multi-national, multi-institutional study of assessment of programming skills of first-year CS students, in Working Group Reports from ITiCSE on Innovation and Technology in Computer Science Education, ITiCSE-WGR ’01 (ACM, New York, 2001), pp. 125–180
R.K. Raj, M. Sabin, J. Impagliazzo, D. Bowers, M. Daniels, F. Hermans, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, S.W. Nabi, M. Oudshoorn, Toward practical computing competencies, in Proceedings of the 26th ACM Conference on Innovation and Technology in Computer Science Education Vol. 2, ITiCSE ’21 (Association for Computing Machinery, New York, 2021b), pp. 603–604
R.K. Raj, M. Sabin, J. Impagliazzo, D. Bowers, M. Daniels, F. Hermans, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, S.W. Nabi, M. Oudshoorn, Professional competencies in computing education: pedagogies and assessment, in Proceedings of the 2021 Working Group Reports on Innovation and Technology in Computer Science Education, ITiCSE-WGR ’21 (Association for Computing Machinery, New York, 2021a), pp. 133–161
M. Sabin, N. Kiesler, A.N. Kumar, B. MacKellar, R. McCauley, R.K. Raj, J. Impagliazzo, Fostering dispositions and engaging computing educators, in Proceedings of the 54th ACM Technical Symposium on Computer Science Education Vol. 2, SIGCSE 2023 (Association for Computing Machinery, New York, 2023)
F.E. Weinert, Concept of competence: a conceptual clarification, in Defining and Selecting Key Competencies (Hogrefe & Huber, Seattle, 2001a), pp. 45–65
L.E. Winslow, Programming pedagogy – a psychological overview. ACM SIGCSE Bull. 28(3), 17–22 (1996)