Innovative Technologies and Learning: 6th International Conference, ICITL 2023, Porto, Portugal, August 28-30, 2023, Proceedings
Lecture Notes in Computer Science
ISBN 9783031401121, 9783031401138


English, 664 pages, 2023

Table of contents:
Preface
Organization
Contents
Artificial Intelligence in Education
Intelligent (Musical) Tutoring System: The Strategic Sense for Deep Learning?
1 Introduction
2 Intelligent Tutoring Systems vs Competence Development
3 Why EarMaster?
4 Learning Process Monitoring Indicators
5 Application and Analysis: Research Method
6 Discussion and Conclusions
References
Nurturing Artificial Intelligence Literacy in Students with Diverse Cultural Backgrounds
1 Introduction
2 Literature Review
2.1 Artificial Intelligence
2.2 AI Literacy
3 Methodology
3.1 Instruments
3.2 Student Demographics
4 Results and Discussion
4.1 Development of AI Concepts
4.2 Improvement in Self-perceived AI Literacy
4.3 Students’ Reflections and Feedback
5 Conclusion
References
The Course of Precision Measurements from the Incorporation of Precision Machinery and Artificial Intelligence and the Learning Effects of Its Learning Materials
1 Introduction
1.1 Motivation
1.2 Purpose
2 Literature Review
2.1 Artificial Intelligence
2.2 DISCOVER Model
3 Research Methods
3.1 Research Design
3.2 Research Steps
4 Results and Discussions
4.1 Paired t-Tests on Competence Scale Tests
4.2 Paired t-Tests on the DISCOVER Tests
4.3 Paired t-Tests on Teacher Trainees
4.4 Paired t-Tests on Non-teacher Trainees
4.5 Independent t-Test on the Satisfaction Questionnaire
4.6 Qualitative Interview Records
5 Conclusions
References
Concerns About Using ChatGPT in Education
1 Introduction
2 Methods
3 Results and Discussion
3.1 Concerns Mentioned in Target Articles
3.2 Co-occurrence Network
3.3 Thematic Analysis
4 Conclusion
References
Comparing Handwriting Fluency in English Language Teaching Using Computer Vision Techniques
1 Introduction
2 Literature Review
2.1 Test Requirements and Handwriting Fluency
2.2 Machine Vision and OpenCV
3 Methodology
3.1 Dataset and Data Processing
3.2 Training
4 Results and Discussion
4.1 Results
4.2 Discussion and Conclusion
References
Redefining Customer Service Education in Taiwan’s Convenience Store Sector: Implementing an AI-Driven Experiential Training Approach
1 Introduction
2 Literature
2.1 Application of Experiential Learning Theory in Communication Skills Training
2.2 Integration of Chatbots in Communication Skills Training
2.3 Retail Personnel Service Training Standards and Assessment
3 Integrating Chatbot and Experiential Learning Theory in Communication Training
3.1 Training Strategy
3.2 Platform Features
4 Conclusion
References
The Combination of Recognition Technology and Artificial Intelligence for Questioning and Clarification Mechanisms to Facilitate Meaningful EFL Writing in Authentic Contexts
1 Introduction
2 Literature Review
2.1 EFL Writing in Authentic Contexts
2.2 Technology-Supported EFL Writing in Authentic Contexts
2.3 The Smart Questioning and Clarification Mechanism for EFL Writing
3 System Design
4 Methodology
5 Results and Discussions
6 Conclusion
References
Solving the Self-regulated Learning Problem: Exploring the Performance of ChatGPT in Mathematics
1 Introduction
2 Related Work
2.1 Self-regulated Learning
2.2 Large Language Model
2.3 Chatbot in Self-regulation Learning
3 Mathematics Test Questions
4 Result and Discussion
4.1 Accuracy of Math Problems Using ChatGPT
4.2 Can ChatGPT Master All the Questions in the Six Major Areas of Junior High School Mathematics?
4.3 Do the Effective Responses Generated by ChatGPT Have the Potential to Impact Students’ Learning in Mathematics Courses?
5 Conclusion
References
Opportunities and Challenges for AI-Assisted Qualitative Data Analysis: An Example from Collaborative Problem-Solving Discourse Data
1 Introduction
2 Background Information
2.1 Qualitative Data Analysis
2.2 Computer-Supported Collaborative Problem-Solving
2.3 AI Technologies
3 Method
4 Results and Discussion
4.1 AI-Assisted Deductive Qualitative Analysis
4.2 AI-Assisted Inductive Qualitative Analysis
4.3 Opportunities and Challenges for AI-Assisted Qualitative Analysis
References
Computational Thinking in Education
Exploring the Development of a Teaching Model Based on the TPACK Framework
1 Introduction
2 Literature Review
2.1 TPACK
2.2 Self-regulated Learning
2.3 Computational Thinking
3 Method
3.1 Participants
3.2 Design of the Teacher Training Course
3.3 Measuring Tools
3.4 Experimental Design
4 Result
4.1 TPACK Self-efficacy
5 Discussion and Conclusions
References
Cultivating Data Analyst Skills and Mindfulness in Higher Education
1 Introduction
2 Data Analytics and Data Analyst Skills
3 Analytical Mindfulness
4 Course Design, Analytic Ability and Mindfulness
5 Conclusion
References
Students Learning Performance and Engagement in a Visual Programming Environment
1 Introduction
2 Related Literature
2.1 Visual Programming Environment and Scratch
2.2 Student Engagement and Programming Learning
3 Methodology
3.1 Participants
3.2 Procedures
3.3 Measures
4 Analyses and Results
4.1 ANCOVA Analysis on Learning Performance
4.2 MANCOVA Analysis on Student Engagement
4.3 Regression Analysis on a Relationship Between Performance and Student Engagement in a Scratch Intervention
5 Discussion
6 Conclusion and Contributions
References
Applying Computational Thinking and Formative Assessment to Enhance the Learning Performance of Students in Virtual Programming Language
1 Introduction
2 Literature Review
2.1 Applications of Computational Thinking in Educational Research
2.2 Applications of Formative Assessment in Educational Research
3 Research Method
3.1 Participants
3.2 Visual Programming Formative Assessment System (VPFAS)
3.3 Applying Core Competencies of Computational Thinking to the App Inventor Programming Course
3.4 Experimental Process
3.5 Data Collection and Analysis
4 Results
5 Conclusion
References
Design and Framework of Learning Systems
FLINK: An Educator’s Tool for Linking Inaccurate Student Records
1 Introduction
2 Related Work
3 Student Management Use Cases
4 Name Variations
5 Overview of FLINK
6 Linking Algorithm
6.1 Experiences
7 Conclusions
References
Intercultural Collaborative Teaching and Learning in Online Environments – e-Quality in Global Media Education Case Study
1 Introduction
2 Theoretical Framework
2.1 Pillars of Online Pedagogy
2.2 Intercultural Communication
3 Methodology
3.1 Research Context
3.2 Research Data
3.3 Data Analysis
4 Results
5 Discussion
6 Conclusion
References
The Impact of Hands-on Activities Integrating Design Thinking on the Creative Self-efficacy and Learning Performance of Junior High School Students: A Case of Producing Solar Battery Charger
1 Introduction
2 Literature Review
2.1 Design Thinking
2.2 Creative Self-efficacy
3 Research Methodology
3.1 Participants in Research
3.2 Hands-on Activity: Design a Solar Battery Charger
3.3 Research Instrument
4 Results and Discussion
4.1 Results of Quantitative Data
5 Conclusion
5.1 Data Analysis Results
5.2 Implications for Practice and Theory
5.3 Suggestion
References
Key Principles Pertinent to User Experience Design for Conversational User Interfaces: A Conceptual Learning Model
1 Introduction
2 Background
2.1 User Experience (UX)
2.2 Conversational User Interface (CUI)
2.3 User Experience Principles for Conversational User Interfaces
3 Research Approach
4 Data Analysis and Findings
5 A Conversational User Interface Design Conceptual Learning Model
6 Conclusion
References
Integrating a Chatbot and the Concepts of Grit and Growth Mindset into a Mobile Game-Based Learning System
1 Introduction
1.1 Background
1.2 Goal
2 Literature Review
2.1 Growth Mindset
2.2 Grit
2.3 Mobile Game-Based Learning (MGBL)
2.4 Chatbot
2.5 Innovative Mobile Game Learning System Combining Chatbot, Grit, and Growth Mindset Concepts
3 Research Method
3.1 System Design
3.2 Problem-Solving Design
3.3 Fragmented Time Learning
3.4 Game Design and Cultivating Grit and Growth Mindset
3.5 Experimental Design
3.6 Experimental Procedure
4 Conclusion
References
A Guiding Cooperative Learning Approach in Mobile Learning Environments
1 Introduction
2 Related Studies
3 Guiding Kelly Repertory Grid Approach for Mobile Cooperative Learning
4 Experimental Design
4.1 Participants
4.2 Experimental Procedure
4.3 Measuring Tools
5 Results of the Experiment
5.1 Learning Achievement
5.2 Questionnaires
6 Discussion
7 Conclusion
References
The Designing Framework of Simulation Flipped Classroom to Enhance Analytical Thinking on the Topic of the Nervous System for Grade 11 Students
1 Introduction
2 Theoretical Framework
3 Methodology
3.1 Target Group
3.2 Researching Tools
3.3 Data Collection
3.4 Data Analysis
4 Result
5 Discussion
6 Suggestion
References
Pedagogies to Innovative Technologies and Learning
Learning Processes and Digital Transformation
1 Introduction
2 Learning Theory
2.1 Social Constructivism
3 Methodology
3.1 Focus Group Interviews
4 Findings
5 Analysis and Discussion
6 Concluding Remarks
References
Flipped Classroom Method in Higher Education: A Case of Kazakhstan
1 Introduction
2 Research Methodology
3 Results
3.1 Social Characteristics of Survey Participants
3.2 Understanding Flipped Classroom Method
3.3 The Use of the Flipped Classroom Method in Practice in Kazakhstani Universities
3.4 Difficulties in Using the Flipped Classroom Method
4 Discussion
5 Conclusions
References
Combining the AISAS Model and Online Collaborative Learning Features to Examine Learners’ We-Intention to Use Health-Related Applications
1 Introduction
2 Research Purpose and Question
3 Experimental Details
3.1 Research Model and Hypothesis
3.2 Sample and Data Collection
3.3 Measurement
3.4 Analytical Method
4 Results and Discussion
References
Building a Telecollaborative Community of Practice Among Pre-service English Teachers, In-Service Teachers, and International-School Teachers
1 Introduction
2 Literature Review
2.1 Telecollaboration in Language Teacher Education
2.2 Communities of Practice
2.3 Teacher Efficacy and Language Teacher Education
2.4 Telecollaboration and Intercultural Competence
3 Methodology
3.1 Research Design
3.2 Participants and Pedagogical Setting
3.3 Telecollaborative Project
4 Results and Discussion
4.1 Changes in Participants’ Teacher Efficacy
4.2 Changes in Participants’ Intercultural Communication Competence Measured by the Intercultural Sensitivity Survey (ISS)
5 Pedagogical Implication and Conclusion
References
A Pilot Study of Preservice Teachers Accepting and Using Chatbots to Enhance CFL Education
1 Introduction
2 Method
2.1 Participants
2.2 Instruments
2.3 Research Design and Procedure
3 Results
4 Conclusion
Appendix A
References
Enhancing Phonetics Learning in Online Language Courses: A Counterbalanced Study on CLD and SCA Methods for Intermediate CFL Japanese Students
1 Introduction
2 Literature Review
2.1 Online Language Learning
2.2 Japanese Learners of Phonetics
2.3 Teaching Methods for Phonetics
3 Method
3.1 Participants and Procedures
3.2 Test Design
3.3 Research Tools
4 Results
4.1 Test of Reliability
4.2 Effects of CLD Method
4.3 Effects of SCA Method
4.4 Different Effects Between CLD Method and SCA Method
5 Discussion and Conclusion
6 Limitations and Future Direction
References
Pedagogy for the Digital Infants: Perspectives on Multimedia Production as Teaching Method
1 Introduction
2 Study Design
3 Basic Concepts and Theory
4 Results
4.1 “The Planets”- A Multimedia Presentation in Fourth Grade
4.2 Podcast Production in Tenth Grade
4.3 When Students Struggle – Complex Methods and Adapted Education in Tenth Grade
5 Conclusion
References
Effects of Interactive E-books Based on Graduated-Prompting Strategies to Enhance Self-efficacy of Medical Radiologic Technologists
1 Introduction
1.1 A Subsection Sample
2 Literature Review
3 Method
3.1 Participants
3.2 Experiment Design
3.3 Measuring Tools
4 Experimental Results
5 Conclusions
References
Enhancing Mathematics Learning Outcomes through a Flipped Classroom Grouping Mechanism Informed by Self-study Habits: Utilizing iPad Screen Time Data
1 Introduction
2 Related Work
2.1 Math Flipped Learning
2.2 Grouping Format
3 Methodology
3.1 Design of a Grouping Mechanism for Flipped Learning in Mathematics
3.2 Quasi-experimental Design
4 Results
4.1 Compared to Traditional Grouping Methods, Can the Math-Flipped Learning Grouping Mechanism Proposed in This Study Improve Learners’ Math Score?
4.2 Compared to Traditional Grouping Methods, Can the Math-Flipped Learning Grouping Mechanism Proposed in This Study Enhance Group’s Performance?
5 Conclusion
References
The Implication of Project-Based Learning with CDIO, and Team Teaching on Business-Management Course
1 Introduction
2 Literature Review
2.1 Innovative Education and Innovative Teaching
2.2 Project-Based Learning (PBL) and Team Teaching
2.3 CDIO Engineering Education Model
2.4 Course Conduct Mode: CDIO + Project-Based Learning + Multi-course and Multi-teacher Collaborative Team Teaching
3 Method
4 Result
4.1 Collection and Presentation of Student Learning Outcomes
4.2 Student Learning Profiles
4.3 Multivariate Course Assessment
4.4 Course Final Teacher-Student Review Meeting
4.5 Teachers’ Personal Course Reflection and Teaching Team Discussion
4.6 Student Satisfaction with Learning and Teachers
5 Conclusion and Discussion
5.1 Conclusion
5.2 Discussion
References
How Can We Set Up Eye Trackers in a Real Classroom? Using Mobile Eye Trackers to Record Learners’ Visual Attention During Learning Statistical Graphs with Different Complex Levels
1 Introduction
2 Literature Review
2.1 Reading Text and Statistical Graphs with an Eye-Tracking Technique
2.2 Eye-Tracking Technique in the Real Classroom
2.3 The Current Study and Research Question
3 Method
3.1 Participants
3.2 Materials
3.3 Prior Ability Test
3.4 Learning Performance
3.5 Apparatus and Classroom Setup
3.6 Data Analysis
3.7 Procedures
4 Result
4.1 Prior Ability
4.2 Learning Performance
4.3 Eye Movement Measurement
5 Discussion
References
Are Our Students Ready? Students’ Online Learning Readiness in Higher Education Post-COVID Era
1 Introduction
2 Online Learning Readiness
3 Material and Methods
3.1 Participants
3.2 Structural Equation Modelling
4 Results and Discussion
4.1 Gender Differences
4.2 Model Validity and Reliability
4.3 Final PLS-SEM Model
4.4 Discussion
5 Conclusions and Future Work
References
Online Learning Model for Graduate Level to Support COVID-19 Pandemic
1 Introduction
1.1 A Subsection Sample
2 Introduction
2.1 Online Learning Model
2.2 Graduate Level
2.3 COVID-19 Pandemic
3 Methodology
3.1 Research Objectives
3.2 Scope of Research
3.3 Study Variables
3.4 Research Tools
3.5 Research Methodology
4 Results
5 Interpretation of Results
6 Discussion
References
Using Massive Open Online Courses (MOOCs) to Create Learning Spaces for Quality Lifelong Learning for All Communities Through Engaged Scholarship (ES)
1 Introduction
2 Engaged Scholarship (ES) and Lifelong Learning
3 Academic Access Through MOOCs
3.1 MOOC 1: Cyber Safety
3.2 MOOC 2: M1 (Robotics Basics)
3.3 MOOC 3: M2 (Robotics Next)
3.4 MOOC 4: M3 (Robotics Strategy)
4 Pre and Post Evaluation of Knowledge
4.1 MOOC 1
4.2 MOOC 2
4.3 MOOC 3 and MOOC 4
4.4 Lessons Learnt from the Presentation of the Four ES MOOCs
5 Proposed Framework
6 Conclusions
References
Applying Experiential Learning to Deliver Industry-Ready Business Analysts
1 Introduction
2 Background
2.1 The Profile of an Industry-Ready Graduate
2.2 The Profile of a BA in the World of Work
2.3 Experiential Learning as Pedagogical Approach
3 Research Method
4 Data Analysis
5 Experiential Learning Application
6 Conclusion
References
A Lecturer’s Perception of Blackboard Support for Collaborative Learning
1 Introduction and Background
2 Literature Review
2.1 Blackboard
2.2 Collaborative Learning
3 Method
4 Discussion
5 Conclusion
References
Two Experiences of Including Critical Thinking in Mathematics Courses
1 Introduction
2 Framework
3 Methodology
3.1 Case 1
3.2 Case 2
4 Results
4.1 Case 1
4.2 Case 2
5 Reflection on the Reported Cases
References
Design Science Research in Information Systems as Educational Technology in Teaching and Learning Environments: A Systematic Literature Review
1 Introduction
2 Design Science Research in IS
3 Educational Technology
4 Research Approach
4.1 Data Analysis
5 Discussion of Results and Findings
5.1 Administration
5.2 Learning Environment
5.3 Teaching and Learning
6 Theoretical Implication
7 Summary and Conclusion
Appendix A: Final Research Article Pool
References
Engaging Company Employees in Critical Thinking Through Game-Based Learning: A Qualitative Study
1 Introduction
2 Related Research
3 Methodology
3.1 The Game
3.2 The Training
3.3 Design and Participants
3.4 Data Collection
4 Results and Conclusions
References
Learning Effectiveness of Nursing Students in OSCE Video Segmentation Combined with Digital Scoring
1 Introduction
2 Literatures
2.1 E-OSCE
2.2 Segmentation Effect
2.3 Learning Effectiveness
3 Method
3.1 Participants
3.2 Experimental Design
4 Results and Discussion
4.1 Learning Effectiveness of Nursing Students
4.2 Learning Effectiveness of Male Nursing Students
4.3 Learning Effectiveness of Female Nursing Students
4.4 Comparison of Pre- and Post-Test Learning Effectiveness between Male and Female Nursing Students
5 Conclusions and Recommendations for Future Work
5.1 Research Limitations
5.2 Future Work
References
STEM/STEAM Education
Exploring the Learning Efficacy of Students’ STEM Education from the Process of Hands-On Practical Experience
1 Introduction
2 Research Purposes and Questions
3 Literature Review
4 Method
4.1 Participants
4.2 Research Design and Development of Instrument
4.3 Data Acquisition
5 Results and Discussion
5.1 STEM Learning Effectiveness
5.2 One-Way ANOVA
5.3 Analysis of Learning Interview
5.4 Responses to Learning Difficulties
6 Conclusion
References
Learning Analytics Based on Streamed Log Data from a Course in Logic
1 Introduction
2 The Temporal Aspect of Streamed Log Data
3 Measuring the Learning
4 The Effect of the Order of the Presentations During the Course
5 Types of Reasoning
6 Conclusion and Perspectives
References
The Effect of Chatbot Use on Students’ Expectations and Achievement in STEM Flipped Learning Activities: A Pilot Study
1 Introduction
2 Literature Review
2.1 Flipped Learning Education
2.2 Expectancy-Value Theory in STEM
3 Methodology
3.1 Chatbot Design for Flipped Classroom
3.2 Participants
3.3 Experimental Procedure
4 Experimental Results
4.1 Learning Achievement
4.2 Expectancy-Value
5 Discussion and Conclusions
References
Students’ Patterns of Interaction with E-Books in Estonian Basic Schools: A Sequence Analysis Study
1 Introduction
2 Literature Review
2.1 E-Book Use in K-12 Education
2.2 Sequence Analysis in the Social Sciences
3 Methodology
3.1 Data Collection
3.2 The Opiq Distance Learning Environment
3.3 Methods of Analysis
4 Results
5 Discussion
References
VR/AR/MR/XR in Education
Cultivating Creativity of High School Students in Cross-Cultural Learning Project Based on VR Technology
1 Introduction
2 Literature Review
3 Method
4 Results
5 Discussion and Conclusion
References
Creating an Engaging and Immersive Environmental, Social, and Governance Learning Experience through the Metaverse
1 Introduction
2 Research Motivation
3 Research Purposes
4 Literature Review
4.1 Environmental, Social, and Governance (ESG)
4.2 Metaverse
5 Production Method
5.1 System Development
6 Research Analysis
7 Conclusion
References
The Influence of Emotion in STEM Activity Based on Virtual Reality Learning Environment
1 Introduction
2 Related Work
2.1 Learning Anxiety
2.2 Learning Confidence
2.3 Learning in a Virtual Reality Environment
3 Method
3.1 Participants and Experimental Design
3.2 Materials and Measures
3.3 Questionnaires
3.4 Procedure
4 Results
4.1 Descriptive Results
4.2 Learning Outcomes
4.3 Confidence and Anxiety
5 Discussion
6 Conclusion
References
Developing an Immersive Virtual Reality-Assisted Learning System to Support Scanning Electron Microscopy Learning Activities
1 Introduction
2 Literature Review
2.1 Virtual Reality
2.2 Virtual Reality in SEM Education
3 Research Method
3.1 Design of an Immersive Virtual Reality Assisted Learning System
3.2 Procedure
4 Results
5 Conclusion
References
Systematic Literature Review of the Use of Virtual Reality in the Inclusion of Children with Autism Spectrum Disorders (ASD)
1 Introduction
2 Virtual Reality, Education and Autism
3 Method
3.1 Research Questions
3.2 Search Strategy
3.3 Inclusion and Exclusion Criteria
3.4 Document Selection
3.5 Data Extractions and Synthesis
3.6 Quality Assessments
4 Results
4.1 Quality Assessment Results
4.2 Overview of the Studies
5 Limitations
6 Conclusion
References
Application and Design of Innovative Learning Software
Tablet-Based Design Fluency Test: Taiwan Normative Data and Reliability and Validity Study
1 Introduction
2 Method
2.1 Participant and Procedure
2.2 Materials
2.3 Data Analysis
3 Result
3.1 Difference Testing
3.2 Reliability and Validity Testing
4 Discussion and Conclusion
References
The Openstudy Academy that Stimulates the Energy of Digital Learning for the Disabled Students
1 Students with Disabilities Learning Happily
2 The Origin of Accessible Digital Learning
3 Accessible Digital Learning Instructional Design and Activity Planning
3.1 Accessible Learning Platform
3.2 Learning Material Design
3.3 Accessible Learning Services
4 Difficulties and Challenges of Openstudy Academy
5 Conclusions
References
Empowering Learner-Centered Instruction: Integrating ChatGPT Python API and Tinker Learning for Enhanced Creativity and Problem-Solving Skills
1 Introduction
2 Literature Review
3 Methods and Implementation of the Study
4 Results
4.1 Implementing the ChatGPT Python API in the Classroom
4.2 Students Learning Performance and Evaluation
5 Discussion
6 Conclusion
References
The Impact of AI Chatbot-Based Learning on Students’ Motivation in English Writing Classroom
1 Introduction
2 Literature Review
2.1 AI Chatbots-Based Learning
2.2 Motivation
3 Methods
3.1 Participants
3.2 Experimental Procedure
3.3 Measurement
3.4 Data Analysis
4 Result
4.1 Intrinsic Motivation
4.2 Extrinsic Motivation
4.3 Task Value
5 Discussion
6 Conclusion
References
TAM Application in Investigating the Learning Behavior of FinTech Practitioners Towards Internet-Only Bank During the COVID-19 Lockdown: A Case Study of LINE Bank
1 Introduction
2 Literature Review
2.1 Technology Acceptance Model
2.2 Internet-Only Banks
3 Methodology
3.1 Proposed Participants and Procedures Design
3.2 Instrument
3.3 Data Collection and Data Analysis
4 Result and Discussion
4.1 Usage Incentives for Internet-Only Banks: A Survey of FinTech Practitioners
4.2 Statistical Analysis of SUS Scores to Evaluate Perceived Usefulness in TAM
4.3 Results of QUIS Analysis for TAM’S Perceived Ease of Use
4.4 Exploring Relationships Through Regression Analysis
5 Conclusion
References
An Analysis of Student Perceptions of Computational Thinking in Writing Classes
1 Introduction
2 Literature Review
2.1 CT
2.2 Students Perceptions
3 Methods
3.1 Participants and Data Collection
3.2 Experimental Procedure
3.3 Measurement
3.4 Data Analysis
4 Result
5 Conclusion
References
The Effect of Makey Makey Combined with Tangible Learning on Marine Conservation Outcomes with Attitude, and Learning Satisfaction of Rural Elementary School Students
1 Introduction
2 Literature Review
2.1 Game-Based Learning
2.2 Tangible Learning
2.3 Ocean Protection
3 Research Methods
3.1 Research Process
3.2 Participants
3.3 Research Tool
4 Results
4.1 Marine Conservation Efficacy Assessment
4.2 Marine Conservation Attitude Assessment
4.3 Marine Conservation Efficacy Assessment
4.4 Interview Analysis
5 Conclusion
References
A Study of Virtual Skills Training on Students’ Perceptions of Sense of Ownership and Sense of Agency
1 Introduction
2 Literature Review
2.1 Virtual Reality (VR)
2.2 The Sense of Ownership (SoO) and Sense of Agency (SoA)
3 Method
3.1 Participants
3.2 System Design
3.3 Experimental Procedure
3.4 Data Collection and Analysis
4 Results
4.1 Discussion of Homogeneity of Variance Test between Rural and Urban Elementary School Students
4.2 Analysis the Effect of Immersion on SoO and SoA
5 Conclusion
References
Enhancing English Writing Skills through Rubric-Referenced Peer Feedback and Computational Thinking: A Pilot Study
1 Introduction
2 Literature Review
2.1 Computational Thinking
2.2 Rubric-Referenced Peer Feedback for Writing Skill
3 Methods
3.1 Participants
3.2 Materials
3.3 Pilot Study
3.4 Data Analysis
4 Result and Discussion
4.1 The Usefulness of the Peer Feedback Activity
4.2 Factors Influencing the Perceived Usefulness of Peer Feedback
4.3 Students’ Perspectives on the Role of the Rubric
5 Conclusion and Future Research
References
The Research of Elementary School Students Apply Engineering Design Thinking to Scratch Programming on Social Sustainability
1 Introduction
2 Literature Review
2.1 Social Sustainability
2.2 The Relationship Between Design Thinking and Scratch
3 Research Method
3.1 Participants
3.2 Research Process
3.3 Research Tool
3.4 Data Collection and Analysis
4 Research Results and Discussion
4.1 A Subsection Sample
5 Conclusions and Future Directions
References
The Effects of Prior Knowledge on Satisfaction and Learning Effectiveness in Using an English Vocabulary Learning System
1 Introduction
2 Introduction to the English Vocabulary Learning System
2.1 System Structure
2.2 Functions and Interfaces of the System
3 Research Method
3.1 Research Structure
3.2 Research Instruments
3.3 Research Subjects
3.4 Experimental Procedure
4 Results and Discussion
4.1 Satisfaction Reliability Analysis
4.2 The Influence of Prior Knowledge on the Satisfaction in Using the System
4.3 The Influence on the Learning Effectiveness in Using the System
4.4 The Influence of Prior Knowledge on the Progress of Learning Effectiveness in Using the System
5 Conclusion and Future Study
References
Robot-Assisted Language Learning: A Case Study on Interdisciplinary Collaboration Design and Development Process
1 Introduction
2 Research Method
3 Results
3.1 Domain Knowledge in Cross-Disciplinary Collaboration Based on the TPACK Framework
3.2 Elements in Cross-Disciplinary Collaboration Based on the Activity Theory Framework
3.3 The Development Process of Cross-Disciplinary Collaboration Based on TPACK and Activity Theory Framework
4 Conclusion
References
Exploring the Effect of Educational Games Console Programming with Task Scaffolding on Students’ Learning Achievement
1 Introduction
2 Literature Review
2.1 Visual Programming
2.2 Cooperative Learning
3 Research Method
4 Result
5 Conclusion
References
Metacognitive-Based Collaborative Programming: A Novel Approach to Enhance Learning Performance in Programming Courses
1 Introduction
2 Literature Review
2.1 Computational Thinking and Collaborative Programming
2.2 Metacognition and Its Role in Collaborative Learning
3 The Metacognition-Based Collaborative Programming System
4 Methodology
4.1 Participants
4.2 Instrument
4.3 Experimental Procedure
5 Results
5.1 Learning Achievement
5.2 Computational Thinking Tendency
6 Discussion and Conclusions
References
Facial AI and Data Mining-Based Testing System in the Post-pandemic Era
1 Introduction of Affective Tutoring Testing System
2 Literature Review
3 System Architecture and Operation Mode
3.1 System Framework
3.2 AI Emotion Recognition Implementation Explanation
4 Narrative Statistics and Facial Emotion Data Mining
4.1 Emotion Expression Descriptive Statistics
4.2 Facial Emotion Data Mining
5 Conclusion and Further Work
5.1 Conclusion
5.2 Further Work
References
Author Index

LNCS 14099

Yueh-Min Huang Tânia Rocha (Eds.)

Innovative Technologies and Learning 6th International Conference, ICITL 2023 Porto, Portugal, August 28–30, 2023 Proceedings

Lecture Notes in Computer Science

Founding Editors
Gerhard Goos
Juris Hartmanis

Editorial Board Members
Elisa Bertino, Purdue University, West Lafayette, IN, USA
Wen Gao, Peking University, Beijing, China
Bernhard Steffen, TU Dortmund University, Dortmund, Germany
Moti Yung, Columbia University, New York, NY, USA

14099

The series Lecture Notes in Computer Science (LNCS), including its subseries Lecture Notes in Artificial Intelligence (LNAI) and Lecture Notes in Bioinformatics (LNBI), has established itself as a medium for the publication of new developments in computer science and information technology research, teaching, and education. LNCS enjoys close cooperation with the computer science R & D community, the series counts many renowned academics among its volume editors and paper authors, and collaborates with prestigious societies. Its mission is to serve this international community by providing an invaluable service, mainly focused on the publication of conference and workshop proceedings and postproceedings. LNCS commenced publication in 1973.

Yueh-Min Huang · Tânia Rocha Editors

Innovative Technologies and Learning 6th International Conference, ICITL 2023 Porto, Portugal, August 28–30, 2023 Proceedings

Editors
Yueh-Min Huang, National Cheng Kung University, Tainan City, Taiwan
Tânia Rocha, University of Trás-os-Montes and Alto Douro, Vila Real, Portugal

ISSN 0302-9743 ISSN 1611-3349 (electronic) Lecture Notes in Computer Science ISBN 978-3-031-40112-1 ISBN 978-3-031-40113-8 (eBook) https://doi.org/10.1007/978-3-031-40113-8 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

The International Conference of Innovative Technologies and Learning (ICITL) provides a platform for those who are working on educational technology to get together and exchange experiences. Benefiting from the use of a variety of emerging innovative technologies, the e-learning environment has become highly diversified along the way. Diversified innovative technologies have fueled the creation of advanced learning environments by adopting appropriate pedagogies. Moreover, those technologies not only facilitate learning but also actively help students reach maximized learning performances. However, due to the rapid evolution of new technologies, how to make use of those technologies by complying with effective pedagogies to create adaptive or smart learning environments has always been in question. Therefore, this conference intended to provide a platform for researchers in education, computer science, and educational technology to share experiences of effectively applying cutting-edge technologies to learning and to further spark brightening prospects. It is hoped that the findings of the works presented at the conference stimulate relevant researchers and education practitioners to create more effective learning environments. ICITL is always ready to share its work with the public.

This year’s conference (ICITL 2023) was held at the Porto Palacio Hotel in Porto, Portugal. Porto, the second-largest city in Portugal, has a long history and cultural tradition. This year, we received 147 submissions from 23 countries worldwide. Each submitted article was assigned to three reviewers, and 64 papers were selected after a rigorous single-blind review process, for an acceptance rate of 44%. These contributions covered the latest findings in areas including: 1) Artificial Intelligence in Education; 2) Computational Thinking in Education; 3) Design and Framework of Learning Systems; 4) Pedagogies to Innovative Technologies and Learning; 5) STEM/STEAM Education; 6) VR/AR/MR/XR in Education; and 7) Application and Design of Innovative Learning Software.

Moreover, ICITL 2023 featured two keynote presentations and two invited plenary presentations by renowned experts and scholars. Gwo-Dong Chen and Sylvester Arnab gave us insights into the keynote topics “From Textbooks to Digital Reality: The Future of Learning in the Digital Twin Theater” and “GameChangers: Empathic Experiences and Purposeful Game Design Creation”. The plenary topic “Applying Artificial Intelligence Technologies in STEM Education” was presented in detail by Chin-Feng Lai. The other plenary presentation was given by António Coelho.

We would like to thank the Organizing Committee for their efforts and the time spent to ensure the success of the conference. We would also like to express our gratitude to the Program Committee members for their timely and helpful reviews. Last but not least, we would like to thank all the authors for their contribution in maintaining a high-quality conference – we count on your continued support in playing a significant role in the Innovative Technologies and Learning community in the future.

August 2023

Yueh-Min Huang
Joao Barroso
Wu-Yuin Hwang
Frode Eika Sandnes
Shu-Chen Cheng
Tânia Rocha
Yu-Ping Cheng

Organization

Honorary Chair

Yueh-Min Huang – National Cheng Kung University, Taiwan

Conference Co-chairs

Joao Barroso – University of Trás-os-Montes e Alto Douro, Portugal
Wu-Yuin Hwang – National Central University, Taiwan
Frode Eika Sandnes – Oslo Metropolitan University, Norway

Technical Program Co-chairs

Shu-Chen Cheng – Southern Taiwan University of Science and Technology, Taiwan
Tânia Rocha – University of Trás-os-Montes e Alto Douro, Portugal
Yu-Ping Cheng – National Cheng Kung University, Taiwan

Finance Chair

Ting-Ting Wu – National Yunlin University of Science & Technology, Taiwan

Program Committee

Ana Balula – University of Aveiro, Portugal
Andreja Istenic Starcic – University of Ljubljana, Slovenia
António Coelho – University of Porto, Portugal
Arsênio Reis – University of Trás-os-Montes e Alto Douro, Portugal
Celine Jost – Paris 8 University Vincennes-Saint-Denis, France
Chantana Viriyavejakul – King Mongkut’s Institute of Technology Ladkrabang, Thailand
Charuni Samat – Khon Kaen University, Thailand
Chin-Feng Lai – National Cheng Kung University, Taiwan
Claudia Motta – Federal University of Rio de Janeiro, Brazil
Constantino Martins – Polytechnic Institute of Porto, Portugal
Danial Hooshyar – Tallinn University, Estonia
Daniela Pedrosa – University of Aveiro, Portugal
Elmarie Kritzinger – University of South Africa, South Africa
George Ghinea – Brunel University London, UK
Grace Qi – Massey University, New Zealand
Gwo-Dong Chen – National Central University, Taiwan
Hana Mohelska – University of Hradec Kralove, Czech Republic
Hanlie Smuts – University of Pretoria, South Africa
Hugo Paredes – University of Trás-os-Montes e Alto Douro, Portugal
Janne Väätäjä – University of Lapland, Finland
João Pedro Gomes Moreira Pêgo – University of Porto, Portugal
José Cravino – University of Trás-os-Montes e Alto Douro, Portugal
José Alberto Lencastre – University of Minho, Portugal
Jui-Long Hung – Boise State University, USA
Jun-Ming Su – National University of Tainan, Taiwan
Leonel Morgado – Universidade Aberta, Portugal
Lisbet Ronningsbakk – UiT The Arctic University of Norway, Norway
Manuel Cabral – University of Trás-os-Montes e Alto Douro, Portugal
Margus Pedaste – University of Tartu, Estonia
Mu-Yen Chen – National Cheng Kung University, Taiwan
Pao-Nan Chou – National Pingtung University of Science and Technology, Taiwan
Paula Catarino – University of Trás-os-Montes e Alto Douro, Portugal
Paulo Martins – University of Trás-os-Montes e Alto Douro, Portugal
Qing Tan – Athabasca University, Canada
Ru-Chu Shih – National Pingtung University of Science and Technology, Taiwan
Rustam Shadiev – Nanjing Normal University, China
Satu-Maarit Frangou – University of Lapland, Finland
Synnøve Thomassen Andersen – UiT The Arctic University of Norway, Norway
Tacha Serif – Yeditepe University, Turkey
Tien-Chi Huang – National Taichung University of Science and Technology, Taiwan
Ting-Sheng Weng – National Chiayi University, Taiwan
Tor-Morten Grønli – Kristiania University College, Norway
Yi-Shun Wang – National Changhua University of Education, Taiwan
Yuping Wang – Griffith University, Australia

Organizers

Sponsored by

Contents

Artificial Intelligence in Education Intelligent (Musical) Tutoring System: The Strategic Sense for Deep Learning? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Michele Della Ventura

3

Nurturing Artificial Intelligence Literacy in Students with Diverse Cultural Backgrounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Siu Cheung Kong, Satu-Maarit Korte, and William Man-Yin Cheung

13

The Course of Precision Measurements from the Incorporation of Precision Machinery and Artificial Intelligence and the Learning Effects of Its Learning Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dyi-Cheng Chen, Ying-Tsung Chen, Kuo-Cheng Wen, Wei-Lun Deng, Shang-Wei Lu, and Xiao-Wei Chen

22

Concerns About Using ChatGPT in Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shu-Min Lin, Hsin-Hsuan Chung, Fu-Ling Chung, and Yu-Ju Lan

37

Comparing Handwriting Fluency in English Language Teaching Using Computer Vision Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chuan-Wei Syu, Shao-Yu Chang, and Chi-Cheng Chang

50

Redefining Customer Service Education in Taiwan’s Convenience Store Sector: Implementing an AI-Driven Experiential Training Approach . . . . . . . . . . Kuan-Yu Chen, Ming-Yu Chiang, and Tien-Chi Huang

57

The Combination of Recognition Technology and Artificial Intelligence for Questioning and Clarification Mechanisms to Facilitate Meaningful EFL Writing in Authentic Contexts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wu-Yuin Hwang, Rio Nurtantyana, Yu-Fu Lai, I-Chin Nonie Chiang, George Ghenia, and Ming-Hsiu Michelle Tsai

67

Solving the Self-regulated Learning Problem: Exploring the Performance of ChatGPT in Mathematics . . . Pin-Hui Li, Hsin-Yu Lee, Yu-Ping Cheng, Andreja Istenič Starčič, and Yueh-Min Huang

77

Opportunities and Challenges for AI-Assisted Qualitative Data Analysis: An Example from Collaborative Problem-Solving Discourse Data . . . . . . . . . . . . Leo A. Siiman, Meeli Rannastu-Avalos, Johanna Pöysä-Tarhonen, Päivi Häkkinen, and Margus Pedaste

87

Computational Thinking in Education Exploring the Development of a Teaching Model Based on the TPACK Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tai-Ping Hsu, Mu-Sheng Chen, and Ting-Chia Hsu

99

Cultivating Data Analyst Skills and Mindfulness in Higher Education . . . . . . . . . 109 Jessica H. F. Chen Students Learning Performance and Engagement in a Visual Programming Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 Fu-Hsiang Wen, Tienhua Wu, and Wei-Chih Hsu Applying Computational Thinking and Formative Assessment to Enhance the Learning Performance of Students in Virtual Programming Language . . . . . . 130 Yu-Ping Cheng, Shu-Chen Cheng, Ming Yang, Jim-Min Lin, and Yueh-Min Huang Design and Framework of Learning Systems FLINK: An Educator’s Tool for Linking Inaccurate Student Records . . . . . . . . . . 143 Frode Eika Sandnes Intercultural Collaborative Teaching and Learning in Online Environments – e-Quality in Global Media Education Case Study . . . . . . . . . . . . 153 Satu-Maarit Korte, Mari Maasilta, Chaak Ming Lau, Lixun Wang, and Pigga Keskitalo The Impact of Hands-on Activities Integrating Design Thinking on the Creative Self-efficacy and Learning Performance of Junior High School Students: A Case of Producing Solar Battery Charger . . . . . . . . . . . . . . . . 163 Pin-Hsiang Tseng, Chen-Yin Xu, and Chi-Cheng Chang Key Principles Pertinent to User Experience Design for Conversational User Interfaces: A Conceptual Learning Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174 Amore Rossouw and Hanlie Smuts

Integrating a Chatbot and the Concepts of Grit and Growth Mindset into a Mobile Game-Based Learning System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187 Su-Hang Yang, Yao-En Chen, Jen-Hang Wang, and Gwo-Dong Chen A Guiding Cooperative Learning Approach in Mobile Learning Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197 Hsin-Chin Chen, Meng-Chang Tsai, Jan-Pan Hwang, and Yi-Zeng Hsieh The Designing Framework of Simulation Flipped Classroom to Enhance Analytical Thinking on the Topic of the Nervous System for Grade 11 Students . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207 Kiattisak Rampuengchit and Charuni Samat Pedagogies to Innovative Technologies and Learning Learning Processes and Digital Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223 Synnøve Thomassen Andersen Flipped Classroom Method in Higher Education: A Case of Kazakhstan . . . . . . . 232 Gulbarshin Baigunissova, Zhanargul Beisembayeva, Rustam Shadiev, Akzhan Abdykhalykova, Assel Amrenova, Narzikul Shadiev, and Mirzaali Fayziev Combining the AISAS Model and Online Collaborative Learning Features to Examine Learners’ We-Intention to Use Health-Related Applications . . . . . . . 242 Ming Yuan Ding and Wei-Tsong Wang Building a Telecollaborative Community of Practice Among Pre-service English Teachers, In-Service Teachers, and International-School Teachers . . . . . 250 Min-Hsun Liao A Pilot Study of Preservice Teachers Accepting and Using Chatbots to Enhance CFL Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260 Yu-Ju Lan, Fu-Ling Chung, and Maiga Chang Enhancing Phonetics Learning in Online Language Courses: A Counterbalanced Study on CLD and SCA Methods for Intermediate CFL Japanese Students . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268 I-Lin Kao, Chi-Cheng Chang, and Wan-Hsuan Yen Pedagogy for the Digital Infants: Perspectives on Multimedia Production as Teaching Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279 Lisbet Rønningsbakk

Effects of Interactive E-books Based on Graduated-Prompting Strategies to Enhance Self-efficacy of Medical Radiologic Technologists . . . . . . . . . . . . . . . 289 Yu-Ju Chao and Han-Yu Sung Enhancing Mathematics Learning Outcomes through a Flipped Classroom Grouping Mechanism Informed by Self-study Habits: Utilizing iPad Screen Time Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295 Hsin-Yu Lee, Chia-Nan Huang, Chih-Yu Tsai, Shin-Ying Huang, and Yueh-Min Huang The Implication of Project-Based Learning with CDIO, and Team Teaching on Business-Management Course . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304 Chih-Huang Lin How Can We Set Up Eye Trackers in a Real Classroom? Using Mobile Eye Trackers to Record Learners’ Visual Attention During Learning Statistical Graphs with Different Complex Levels . . . . . . . . . . . . . . . . . . . . . . . . . . 315 Zheng-Hong Guan, Sunny S. J. Lin, and Jerry N. C. Li Are Our Students Ready? Students’ Online Learning Readiness in Higher Education Post-covid Era . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326 Bernadett Sarro-Olah and Szabina Fodor Online Learning Model for Graduate Level to Support COVID-19 Pandemic . . . 336 Chantana Viriyavejakul Using Massive Open Online Courses (MOOCs) to Create Learning Spaces for Quality Lifelong Learning for All Communities Through Engaged Scholarship (ES) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345 Patricia Gouws and Elmarie Kritzinger Applying Experiential Learning to Deliver Industry-Ready Business Analysts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356 Lizette Weilbach, Hanlie Smuts, and Marié Hattingh A Lecturer’s Perception of Blackboard Support for Collaborative Learning . . . . 367 Komla Pillay Two Experiences of Including Critical Thinking in Mathematics Courses . . . . . . 375 Paula Catarino and Maria M. Nascimento Design Science Research in Information Systems as Educational Technology in Teaching and Learning Environments: A Systematic Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385 Sunet Eybers

Engaging Company Employees in Critical Thinking Through Game-Based Learning: A Qualitative Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403 Juri Mets and Merja Bauters Learning Effectiveness of Nursing Students in OSCE Video Segmentation Combined with Digital Scoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409 Yi-Chen Lu, Yen-Hsun Lu, and Ting-Ting Wu STEM/STEAM Education Exploring the Learning Efficacy of Students’ STEM Education from the Process of Hands-On Practical Experience . . . . . . . . . . . . . . . . . . . . . . . . 421 King-Dow Su and Hsih-Yueh Chen Learning Analytics Based on Streamed Log Data from a Course in Logic . . . . . . 430 Peter Øhrstrøm, Steinar Thorvaldsen, and David Jakobsen The Effect of Chatbot Use on Students’ Expectations and Achievement in STEM Flipped Learning Activities: A Pilot Study . . . . . . . . . . . . . . . . . . . . . . . . 441 Ting-Ting Wu, Chia-Ju Lin, Margus Pedaste, and Yueh-Min Huang Students’ Patterns of Interaction with E-Books in Estonian Basic Schools: A Sequence Analysis Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451 Yaroslav Opanasenko, Margus Pedaste, and Leo A. Siiman VR/AR/MR/XR in Education Cultivating Creativity of High School Students in Cross-Cultural Learning Project Based on VR Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463 Rustam Shadiev, Suping Yi, Chuanwen Dang, Narzikul Shadiev, Mirzaali Fayziev, Dilshod Atamuratov, Gulbarshin Baigunissova, Assel Amrenova, and Natalya Tukenova Creating an Engaging and Immersive Environmental, Social, and Governance Learning Experience through the Metaverse . . . . . . . . . . . . . . . . 473 Ting-Sheng Weng, Chien-Kuo Li, and Yuka Kawasaki The Influence of Emotion in STEM Activity Based on Virtual Reality Learning Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484 Wei-Sheng Wang, Margus Pedaste, and Yueh-Min Huang Developing an Immersive Virtual Reality-Assisted Learning System to Support Scanning Electron Microscopy Learning Activities . . . . . . . . . . . . . . . 494 Chia-Ching Lin, Bo-Yuan Cheng, and Ru-Chu Shih

Systematic Literature Review of the Use of Virtual Reality in the Inclusion of Children with Autism Spectrum Disorders (ASD) . . . . . . . . . . . . . . . . . . . . . . . . 501 Rui Manuel Silva, Diana Carvalho, Paulo Martins, and Tânia Rocha Application and Design of Innovative Learning Software Tablet-Based Design Fluency Test: Taiwan Normative Data and Reliability and Validity Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513 Pei-Zhen Chen, Ching-Lin Wu, Li-Yun Chang, and Hsueh-Chih Chen The Openstudy Academy that Stimulates the Energy of Digital Learning for the Disabled Students . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523 Yao-ming Yeh and Yen-fang Lai Empowering Learner-Centered Instruction: Integrating ChatGPT Python API and Tinker Learning for Enhanced Creativity and Problem-Solving Skills . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531 Yun-Cheng Tsai The Impact of AI Chatbot-Based Learning on Students’ Motivation in English Writing Classroom . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542 Lusia Maryani Silitonga, Santhy Hawanti, Feisal Aziez, Miftahul Furqon, Dodi Siraj Muamar Zain, Shelia Anjarani, and Ting-Ting Wu TAM Application in Investigating the Learning Behavior of FinTech Practitioners Towards Internet-Only Bank During the COVID-19 Lockdown: A Case Study of LINE Bank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550 Yu-Chieh Chen and Kuo-Hao Lin An Analysis of Student Perceptions of Computational Thinking in Writing Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560 Astrid Tiara Murti, Listyaning Sumardiyani, and Ting-Ting Wu The Effect of Makey Makey Combined with Tangible Learning on Marine Conservation Outcomes with Attitude, and Learning Satisfaction of Rural Elementary School Students . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567 Yu-Hsuan Lin, Jie-Yu Rong, and Hao-Chiang Koong Lin A Study of Virtual Skills Training on Students’ Perceptions of Sense of Ownership and Sense of Agency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577 Li-Wen Lu, Tao-Hua Wang, Koong Hao-Chiang Lin, Fan-Chi Liu, and Wen-Ju Li

Enhancing English Writing Skills through Rubric-Referenced Peer Feedback and Computational Thinking: A Pilot Study . . . . . . . . . . . . . . . . . . . . . . 587 Sri Suciati, Elsa, Lusia Maryani Silitonga, Jim-Min Lin, and Ting-Ting Wu The Research of Elementary School Students Apply Engineering Design Thinking to Scratch Programming on Social Sustainability . . . . . . . . . . . . . . . . . . 597 Wei-Shan Liu, Hsueh-Cheng Hsu, and Ting-Ting Wu The Effects of Prior Knowledge on Satisfaction and Learning Effectiveness in Using an English Vocabulary Learning System . . . . . . . . . . . . . . . . . . . . . . . . . . 606 Jui-Chi Peng, Yen-Ching Kuo, and Gwo-Haur Hwang Robot-Assisted Language Learning: A Case Study on Interdisciplinary Collaboration Design and Development Process . . . . . . . . . . . . . . . . . . . . . . . . . . . 618 Hsuan Li and Nian-Shing Chen Exploring the Effect of Educational Games Console Programming with Task Scaffolding on Students’ Learning Achievement . . . . . . . . . . . . . . . . . . 625 Po-Han Wu, Wei-Ting Wu, Ming-Chia Wu, and Tosti H. C. Chiang Metacognitive-Based Collaborative Programming: A Novel Approach to Enhance Learning Performance in Programming Courses . . . . . . . . . . . . . . . . . 635 Wei Li, Judy C. R. Tseng, and Li-Chen Cheng Facial AI and Data Mining-Based Testing System in the Post-pandemic Era . . . . 644 Ihao Chen, Yueh-Hsia Huang, Hao-Chiang Lin, and Chun-Yi Lu Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655

Artificial Intelligence in Education

Intelligent (Musical) Tutoring System: The Strategic Sense for Deep Learning?

Michele Della Ventura
Department of Music Technology, Music Academy “Studio Musica”, Treviso, Italy
[email protected]

Abstract. This article intends to contribute to the reflections on the effectiveness of the use of the Intelligent Tutoring System (ITS) in music learning. The intent is to promote the search for specific indicators for monitoring the student’s learning process, which integrate the student’s ability to make decisions and to know how to act and react appropriately in specific situations, according to the paradigms of citizenship: autonomy and responsibility. This paper presents an educational experience based on the use of the web platform EarMaster (equipped with an ITS), in which a set of learning indicators has been set up. The case study shows that indicators can provide a good guide for monitoring the learning process and that an ITS can be a good tool to enhance students’ interest in learning and thus improve their skills.

Keywords: artificial intelligence · e-learning indicators · intelligent tutoring system · learning process · skills

1 Introduction

The development of new technologies and the unstoppable growth of increasingly flexible and effective teaching/learning models, based on the use of Artificial Intelligence (AI), are determining the birth of a new society that we can define as the “cognitive society” [1]. The role of the teacher and the student, within the educational process, changes radically [2]: on the one hand, the teacher must acquire new and complex skills related to teaching, as well as the role of guiding the student’s learning process; on the other hand, the student takes on an active role which allows him/her to become a real protagonist in the creation of new knowledge and new ideas. In this context, the figure of the tutoring system assumes significant importance, the task of which is to support, stimulate, and accompany the student in the training path [3]. Tutoring promises to be a large and fertile ground for educational research. The scientific roots of the proposed theme are investigated starting from the Latin etymology of the verb “tueor”, through its various historical evolutions, up to modernity and its most recent anglicized versions declined in coaching, counselling, mentoring, tutorship and peer-tutoring. Recent research progress in the field of AI has allowed the development of Intelligent Tutoring Systems (ITS) [4]: software systems designed, mainly, to support the learning
activity in an individualized way. The basic feature of these systems is to consider each student as unique, creating a student model capable of recording preferences [5] and progress during the cognitive process [6]. In addition, what increases the effectiveness of teaching is its ability to adapt to the characteristics of the student [7], providing a set of learning materials based on the student’s level of understanding [8]. This makes it possible to modify a student’s learning experience based on individual characteristics such as subject knowledge and learning styles [9] and to provide immediate feedback based on his or her past actions. ITS have been developed for individual and collaborative learning [10], and game-based tutoring systems have also been applied to increase the student’s pleasure in the learning process [11]. In such a learning environment, where the student is supported in the learning process by an intelligent tutoring system, the systematic collection of data/information on specific process indicators is essential to provide the teacher and the student with indications on the extent of progress and the achievement of objectives. This study aims to propose an approach to monitor a musical teaching activity carried out through the use of the EarMaster web platform, equipped with an ITS, used to teach students a small subdomain of music theory that is musically important and difficult to learn: ear training and sense of rhythm. The proposed solution consists of a series of process indicators that allow the teacher to analyze the behavior of the student and of the ITS and to classify effective and less effective tutoring sessions. The structure of this paper has been organized as follows. In the Introduction, the context of this study is presented, followed by a review of related studies on Intelligent Tutoring Systems and an analysis of the characteristics that these systems present in order to define the research goals. Section 2 explores the concept of skills, which is the basis of current learning processes. This is followed (Sect. 3) by a description of the online platform EarMaster and (Sect. 4) the formulation of a set of indicators that could be useful for the teacher to monitor the student’s learning process while using the EarMaster platform. Section 5 shows a case study that illustrates the effectiveness of the proposed method. Finally, in Sect. 6 the paper ends with concluding remarks on the current issues and future research possibilities with respect to the efficient enhancement of educational practices and technologies.

2 Intelligent Tutoring Systems vs Competence Development The concept of competence is, in today’s school, the fundamental regulatory criterion. In other words, the learning process must teach the student to use his/her knowledge and skills to effectively manage a variety of situations, understanding them, facing them and reflecting on his/her own work to adapt it to unforeseen and changing conditions [12]. The application of the knowledge acquired at school to real problems is not automatic, but it is the result of an appropriate training action that insists on the autonomy and responsibility of the student [13]. Demonstrating autonomy means knowing how to make decisions and act independently, detaching oneself, if and when necessary, from the models taken as a reference and reflecting critically on them. Demonstrating responsibility means knowing how to foresee and evaluate the consequences of one’s own interpretations and actions

and respond to them by justifying them through plausible arguments. Responsibility implies the capacity for judgment and choice, but also for taking on precise commitments and completing them by showing tenacity and perseverance [14]. Autonomy does not mean doing things alone, but knowing how to decide when it is time to ask for help and how [15]; responsibility does not mean running away from risks, but taking controlled risks, the result of personal and conscious choices [16]. The learning process must allow the student to develop the skills useful for the specific discipline. Precisely in this context, research in the field of AI has led to the development of ITS. The teacher remains an essential figure, without whom teaching cannot take place; the ITS takes the form of a companion, a process assistant who can do nothing but support the teacher’s work, making it even more effective (deep learning) [17]. Intelligent tutoring systems revolutionize the way students learn and interact with educational material. Instead of traditional methods, such as textbooks or lectures, intelligent tutoring systems provide multimedia content and feedback to the student. Students can therefore learn in an interactive and adaptive way, according to their individual learning needs [18]. Furthermore, these systems are designed to diagnose each student’s strengths and weaknesses and tailor the individual lesson accordingly, ensuring that each learner gets the most out of their study [17]. What are the strengths and weaknesses of ITS for skills development? An ITS is able to provide personalized learning experiences for each student, adapting to the individual student’s learning style and stimulating him/her [19]. In fact, it provides immediate feedback to the student, allowing him/her to identify mistakes and learn how to find a solution [19]; it can make learning more engaging and interactive [20] (through the use of multimedia and interactive elements it can keep students interested and motivated, making learning a more enjoyable experience). However, it must be considered that a learning environment equipped with an ITS lacks human interaction [21]: the system is unable to provide emotional support or develop personal relationships with students. An ITS can only teach what it was programmed to do and lacks the ability to think creatively or to improvise (which is something that human teachers are capable of doing) and thus to accommodate individual needs or special requests. This work aims to investigate whether the ITS is able to help the student to develop the necessary skills in the study of musical rhythm (solfège), to allow him/her to independently solve complex rhythmic patterns. The research was carried out using the EarMaster web platform, which allows teachers to monitor the learning process through a wide range of information relating to the activities proposed and carried out by the student.

3 Why EarMaster?

This section describes an intelligent and individualized tutoring system designed for the music field: EarMaster. EarMaster is a learning environment for ear training, sight-singing practice and rhythm training. It represents a stimulating, efficient and interactive educational tool: students see, hear, but above all they put music theory into practice.

This ITS can support a concurrent learning environment and dynamic lesson planning for each student. The student’s behavior is examined within learning structures called teaching units, capable of evaluating the progress achieved and consequently providing the student with lessons appropriate to his level of preparation. In EarMaster the domain of knowledge is partitioned into small parts which are called “concepts” and each domain can be represented by a finite set of concepts (see Fig. 1).

Fig. 1. The domain of knowledge.
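To make the idea of Fig. 1 concrete, the following minimal sketch (in Python) shows one possible way to represent a domain as a finite set of concepts linked by prerequisite relations, the relation described in the next paragraph. The concept names and the code are purely illustrative assumptions and do not reflect EarMaster’s internal data model.

# Illustrative sketch only (not EarMaster's actual data model): a domain
# represented as a finite set of concepts, each listing the prerequisite
# concepts whose knowledge is required for a better understanding of it.
PREREQUISITES = {
    "note values": [],
    "rests": ["note values"],
    "simple metres": ["note values"],
    "dotted rhythms": ["note values", "simple metres"],
    "compound metres": ["simple metres"],
    "irregular groups": ["dotted rhythms", "compound metres"],
}

def ready_to_study(mastered):
    """Concepts not yet mastered whose prerequisites are all mastered."""
    mastered = set(mastered)
    return [c for c, prereqs in PREREQUISITES.items()
            if c not in mastered and all(p in mastered for p in prereqs)]

print(ready_to_study({"note values", "simple metres"}))
# -> ['rests', 'dotted rhythms', 'compound metres']

A tutoring session could then restrict the exercises it proposes to the concepts returned by such a function, which is one simple way of providing lessons appropriate to the student’s current level of preparation.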

Learning is accomplished by establishing relationships between these pieces of knowledge, in the sense that two concepts of the domain are related if knowledge of the first is required for a better understanding of the second. EarMaster is able to offer the student specific activities (already prepared internally, but which can be integrated by the teacher) to help him/her improve his/her sense of rhythm (Rhythm Training). Its tutor, after having graphically proposed a rhythmic structure, is able to listen to its execution by the student (via the computer microphone) and to verify its correctness: in case of inaccuracies, it plays the correct version and suggests similar new structures. In order to adapt to the needs of the students, EarMaster is able to perform the following tasks:

• present the learner with any content or skill that he/she wants to learn, in a way that suits his/her learning style;
• suggest to the student, when requested or necessary, how best to learn the content or skills;
• work with the student to monitor the learning process;
• provide an analysis of what the student is processing, to allow the teacher to help in real time.

In a learning process supported by an ITS, the student cannot and must not be left alone: the teacher must always monitor this process in order to achieve the set objectives. In the next section, some process objectives are proposed and, for each of them, the related indicators that could be considered in order to verify the effectiveness of the ITS in the process of developing skills and therefore the student’s autonomy. There is a circular relationship between the characteristics of the student and his/her degree of autonomy: as autonomy increases, the skills and motivation of the subject also increase, and he/she will therefore be able to achieve his/her learning objectives with ever

greater effectiveness and with less need for support. The teacher must carry out continuous and constant monitoring of the level of communication and reception of educational contents by the student, and of the level of empathy and relationality with which these contents are transmitted, shared and evaluated [22].
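As a purely illustrative sketch of the kind of check described above, in which the tutor compares a notated rhythmic structure with the student’s execution captured through the microphone, the following function matches expected note onsets against performed onsets within a timing tolerance. The onset values, the tolerance and the assumption that onsets have already been extracted from the audio are all hypothetical; EarMaster’s actual verification algorithm is not documented here.

# Hypothetical rhythm check: compare expected onset times (in seconds,
# derived from the notated rhythm and the tempo) with the onsets detected
# in the student's performance.
def check_rhythm(expected_onsets, performed_onsets, tolerance=0.08):
    """Return per-note results and the fraction of correctly placed notes."""
    results = []
    for i, expected in enumerate(expected_onsets):
        if i < len(performed_onsets):
            performed = performed_onsets[i]
            ok = abs(performed - expected) <= tolerance
        else:
            performed, ok = None, False  # the note was not played
        results.append((expected, performed, ok))
    correct = sum(1 for _, _, ok in results if ok)
    return results, correct / len(expected_onsets)

# Example: one bar of 4/4 at 60 BPM (one onset per beat), second beat late.
expected = [0.0, 1.0, 2.0, 3.0]
performed = [0.02, 1.25, 1.98, 3.05]
details, accuracy = check_rhythm(expected, performed)
print(f"accuracy = {accuracy:.0%}")  # -> accuracy = 75%

When the accuracy falls below some threshold, the tutor can replay the correct version and propose similar structures, as described above.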

4 Learning Process Monitoring Indicators

Monitoring is the tool with which the teacher (and the student) follows the progress of the learning process, with the aim of acquiring useful information to develop intervention strategies. Monitoring can be done through a series of indicators prepared on the basis of the objective to be achieved, understood as the expected terminal behavior of the student [23], or as a result (a knowledge/skill/competence) that the student is expected to achieve through teaching activities [24]. More generally, the goal to be pursued is the improvement of the results of the learning process. Inaccurate or generic learning objectives (especially on cognitive action) can create difficulties in identifying process indicators and therefore in constructing the training path [25]. Indicators are a key element of the monitoring system. The set of indicators associated with the objectives must be characterized by:

• precision, or significance, understood as the ability of an indicator or set of indicators to really and exactly measure the degree of achievement of an objective. Among the many possible indicators, it is therefore necessary to select those that best represent the results to be achieved;
• completeness, that is, the ability of the indicator system to represent the main variables that determine the results. Also in this case, the impact and effectiveness on the user are a guiding element, to be associated with the efficiency and effectiveness of the processes or projects that lead to a better or worse performance for users.

Incompleteness and poor precision have implications for both the planning phase and the measurement and evaluation phase. In the planning phase, in fact, they can lead to a wrong choice of the most effective operating methods to adopt to achieve the objective. In the measurement and evaluation phase, on the other hand, they can lead to an incorrect assessment of the degree of achievement of objectives and to the failure to correctly identify the reasons for a deviation between expected target values and actual results. Table 1 shows what could be the goals of a learning process developed through the use of a learning environment equipped with an ITS, and the related process indicators.

Table 1. Learning process indicators.

Objective: Understanding of the contents of the discipline
Indicators: 1) level of improvement (in relation to the starting situation) of individual behaviors, knowledge and specific skills acquired; 2) comparison of learning outcomes between students involved and not involved in the specific activity; 3) ability to organize learning; 4) ability to reflect on what knowledge has been gained and what has not yet been acquired

Objective: Reworking of the contents of the discipline (sub-goals: develop critical, analytical and reasoning skills; analytical and argumentative skills)
Indicators: 5) development of the student's cognitive and creative potential; 6) level of participation/involvement of the student; 7) acquisition of information and inferences across the various disciplinary fields; 8) ability to come up with alternatives

Objective: Acquisition of the specific vocabulary
Indicators: 9) improvement of students' self-efficacy; 10) comparison of learning outcomes between students involved and not involved in the specific activity

Objective: Ability to apply the acquired knowledge
Indicators: 11) changes in the student's self-esteem; 12) reduction of failure; 13) improvement of students' self-assessment capacity

Objective: Ability to deal with problems using the tools provided during the course (sub-goals: ability to understand how to solve the problem; ability to reason on the subject)
Indicators: 14) operational autonomy; 15) quantity and quality of resources owned and mobilized, in terms of knowledge, skills, and personal and methodological competences; 16) changes in learning outcomes; 17) changes in the student's self-esteem; 18) increased student motivation; 19) reduction of failure; 20) reduction of the risk of error; 21) ability to use resources optimally; 22) ability to find information relevant to the solution of a problem without guidance

5 Application and Analysis: Research Method

This study aims to propose an approach to monitoring (through a series of process indicators) a musical teaching activity carried out through the use of EarMaster, and to verify the effectiveness of the ITS in learning the sense of musical rhythm (solfège).

The research was conducted over a period of 10 weeks (from October 2022 to January 2023) and was divided into three stages:

1) preparation: definition of the goals and development of the monitoring tools (process indicators): see the previous section;
2) practical: collection of information using observation, interviews, tests, and analysis of lessons and exercises;
3) analytical: analysis of the collected data and elaboration of recommendations and suggestions for the future.

In order to conduct the research, two groups of students (aged 15 to 16, with the same level of initial knowledge) were formed. The students were selected on the basis of their willingness to take part in educational monitoring, following a workshop during which they were informed about the fundamental principles of the didactic activity under investigation.

2nd Stage (practical). Solfège is a practice that consists of reading the musical score taking into account the note values, the rests and the tempo. Through the rhythmic scanning of the notes and the rests, the student learns to divide the time, and therefore to understand a score, to read it and to play it, even without ever having heard it. Thanks to solfège it is also possible to work out bars with irregular and difficult tempos. The latter represent the major obstacle for music students, who must acquire autonomy in recognizing and solving (i.e. performing correctly) rhythmic structures. The two groups of students addressed the same topics in the same weeks. All students attended lectures in the classroom, where the teacher explained the theoretical concepts. Each group then had to carry out exercises to consolidate the concepts explained. The first group (10 students: 5 boys and 5 girls) was assigned only the exercises in the textbook adopted by the teacher (which also came with a CD of the correct executions of the individual exercises in the book); the second group (10 students: 5 boys and 5 girls) was assigned the exercises proposed by EarMaster. The teacher's role in all of this was complex: it involved being aware of the dynamics that could arise in EarMaster, observing student behavior and updating (if necessary) the teaching material to accommodate those who participated only peripherally. An important part of educational monitoring was the estimation of the students' level of learning-cognitive abilities. In order to assess it, the teacher had to observe the students in the process of solving learning-cognitive problems.

3rd Stage (analytical). At the end of the activities, the two groups of students carried out a final test, the results of which demonstrated the effectiveness and importance of monitoring the learning process (see Table 2).

Table 2. Results of the learning process.

Competency levels | Group 1, before the new activities | Group 1, after the new activities | Group 2, before the new activities | Group 2, after the new activities
Advanced          | 1 | 1 | 1 | 2
Intermediate      | 4 | 5 | 4 | 6
Beginner          | 5 | 4 | 5 | 2

Students of Group 1 did not show significant changes, unlike the students of Group 2, among whom there was a marked improvement of the beginner-level students. On the basis of these results, it is possible to state that monitoring presupposes the continuous observation of a learning process in order to reveal its compliance with a desired result or with an initial assumption. Monitoring is aimed at the constant screening of the phenomena that occur in the learning environment and during the learning process. An important aspect of the proposed learning path is that students should be made familiar with a learning environment equipped with an ITS (such as EarMaster) before starting a learning path. In fact, in order to evaluate the effectiveness of the ITS, it was necessary to teach students how to use EarMaster so that they could carry out the activities independently (process indicator 14) and without losing motivation (process indicator 11): disorientation in using EarMaster could affect their motivation and the results of the learning process. The analysis of the results of the various exercises carried out by the students, and therefore of any errors they made, was important: mistakes could demotivate students and lead them to low self-esteem and educational failure (process indicators 17, 18). The teacher could support the student with targeted examples and supplementary explanations, because the way in which students perceive their competences is an important motivational precursor of learning results [26] (process indicators 19, 20). EarMaster allows the student to break down complex rhythmic structures into simpler ones in order to interpret them correctly: this allows the teacher to understand whether the student has acquired previous concepts (process indicators 5, 7, 8, 12, 13, 14, 15, 20), whether he/she was able (or not) to solve the problem (process indicators 16, 21, 22), and whether he/she can reflect on what knowledge has been gained and what has not yet been acquired (process indicators 3, 4). During the activity under investigation, the teacher proposed in the classroom, at regular intervals, some exercises (without evaluation) to verify the skills/abilities of the students of the two groups (process indicator 10) and how they helped classmates in difficulty (process indicators 1, 2, 6, 9, 16). The teacher's monitoring of the questions that the students asked during the classroom lessons was also important: questions relating to the use of EarMaster (to be sure of using the work tools correctly) (process indicator 14), and questions relating to theoretical concepts (process indicators 14, 15); in the latter case, the teacher asked the other students of Group 2 to answer the questions (process indicators 6, 9, 11, 15, 21, 22). The students of Group 2 also had to fill in a self-evaluation questionnaire at the end of each week of activity (process indicators 11, 13, 17, 18), which provided the teacher with useful information for possible support.

6 Discussion and Conclusions

In this study, a model of indicators useful for monitoring the learning process in an environment equipped with an Intelligent Tutoring System was outlined. Such an environment can be, on the one hand, a risky place and, on the other, a playful one, provided that one is able to find the appropriate language to transmit the contents of the discipline according to the needs of the individual student. Such a learning environment must allow the development of the student's skills, considering that a skill arises from reproducing procedures that are already known and derived from experience, adapts by conforming to different situations (and therefore to different learning environments), and is characterized by personal specificity. The monitoring and analysis of the learning process therefore become fundamental and must focus attention not only on the performance (the object of the competence) but also on the person (the subject of the competence). The model outlined in this article is intended as a set of criteria for assessing the value of an Intelligent Tutoring System and for guiding the development of teaching activity. The research results offer promising opportunities for monitoring a learning process focused on new technologies, improving current practice and also generating various future research opportunities. However, the skills and know-how acquired and perfected by students in practice are not always measurable by computer-aided testing, and this point needs to be further investigated.

References

1. Scott-Phillips, T., Nettle, D.: Cognition and society: prolegomenon to a dialog. Cogn. Sci. 46, e13162 (2022)
2. Aubrey, K., Riley, A.: Understanding and Using Educational Theories. SAGE Publications Ltd., London (2016)
3. Kenny, C., Pahl, C.: Intelligent and adaptive tutoring for active learning and training environments. Learn. Environ. 17(2), 181–195 (2009)
4. Burns, H.L., Capps, C.G.: Foundations of intelligent tutoring systems: an introduction. In: Polson, M.C., Richardson, J.J. (eds.) Foundations of Intelligent Tutoring Systems, pp. 1–19. Lawrence Erlbaum, London (1988)
5. Gitinabard, N., Heckman, S., Barnes, T., Lynch, C.F.: What will you do next? A sequence analysis on the student transitions between online platforms in blended courses. arXiv:1905.00928 (2019)
6. VanLehn, K.: Student modeling. In: Polson, M.C., Richardson, J.J. (eds.) Foundations of Intelligent Tutoring Systems, chapter 3, pp. 55–78. Lawrence Erlbaum Associates (1988)
7. VanLehn, K.: The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educ. Psychol. 46, 197–221 (2011)
8. Das, B.K., Pal, S.: A framework of Intelligent Tutorial System to incorporate adaptive learning and assess the relative performance of adaptive learning system over general classroom learning (2011)
9. Han, J., Zhao, W., Jiang, Q., Oubibi, M., Hu, X.: Intelligent tutoring system trends 2006–2018: a literature review. In: Proceedings of the 2019 8th International Conference of Educational Innovation through Technology, EITT 2019, October 2019, pp. 153–159 (2019). https://doi.org/10.1109/EITT.2019.00037
10. Katsaris, I., Vidakis, N.: Adaptive e-learning systems through learning styles: a review of the literature. In: Advances in Mobile Learning Educational Research, vol. 1, no. 2, pp. 124–145 (2021)
11. Wongwatkit, C.: An online web-based adaptive tutoring system for university exit exam on IT literacy. In: International Conference on Advanced Communication Technology, ICACT, April 2019, vol. 2019-February, pp. 563–568 (2019)
12. Auteri, G., Di Francesco, G.: La certificazione delle competenze. Innovazione e sostenibilità. Franco Angeli, Milano (2000)
13. Benadussi, L., Di Francesco, G.: Formare per competenze. Un percorso innovativo tra istruzione e formazione. Ed. Tecnodid (2002)
14. Pellerey, M.: Competenze, conoscenze, abilità, atteggiamenti. Ed. Tecnodid (2010)
15. Le, H., Jia, J.: Design and implementation of an intelligent tutoring system in the view of learner autonomy. Interact. Technol. Smart Educ. 19(4), 510–525 (2022)
16. Hooshyar, D., Ahmad, R.B., Yousefi, M., Fathi, M., Horng, S.J., Lim, H.: SITS: a solution-based intelligent tutoring system for students' acquisition of problem-solving skills in computer programming. Innov. Educ. Teach. Int. 55(3), 325–335 (2018)
17. Koti, M.S., Kumta, S.D.: Role of intelligent tutoring system in enhancing the quality of education. Int. J. Adv. Stud. Sci. Res. 3, 330–334 (2018)
18. Della Ventura, M.: A self-learning musical tool to support the educational activity. In: Arai, K. (ed.) IntelliSys 2022. LNNS, vol. 543, pp. 49–67. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-16078-3_3
19. Schez-Sobrino, S., Gmez-Portes, C., Vallejo, D., Glez-Morcillo, C., Redondo, M.A.: An intelligent tutoring system to facilitate the learning of programming through the usage of dynamic graphic visualizations. Appl. Sci. 10, 1518 (2020)
20. Akyuz, Y.: Effects of intelligent tutoring systems (ITS) on personalized learning (PL). Creative Educ. 11(06), 953–978 (2020). https://doi.org/10.4236/ce.2020.116069
21. Paviotti, G., Rossi, P.G., Zarka, S.: Intelligent Tutoring Systems: An Overview. Pensa MultiMedia Editore (2012)
22. Toala, R., Durães, D., Novais, P.: Human-computer interaction in intelligent tutoring systems. In: Herrera, F., Matsui, K., Rodríguez-González, S. (eds.) DCAI 2019. AISC, vol. 1003, pp. 52–59. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-23887-2_7
23. Della Ventura, M.: Exploring the impact of artificial intelligence in music education to enhance the dyslexic student's skills. In: Uden, L., Liberona, D., Sanchez, G., Rodríguez-González, S. (eds.) LTEC 2019. CCIS, vol. 1011, pp. 14–22. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-20798-4_2
24. Pellerey, M.: Dirigere il proprio apprendimento. La Scuola, Brescia (2006)
25. Coggi, C., Ricchiardi, P.: Progettare la ricerca empirica in educazione. Ed. Carocci (2005)
26. Della Ventura, M.: Monitoring the learning process to enhance motivation by means of learning by discovery using Facebook. In: Ma, W.W.K., Chan, W.W.L., Cheng, C.M. (eds.) Shaping the Future of Education, Communication and Technology. ECTY, pp. 117–128. Springer, Singapore (2019). https://doi.org/10.1007/978-981-13-6681-9_9

Nurturing Artificial Intelligence Literacy in Students with Diverse Cultural Backgrounds

Siu Cheung Kong1,2, Satu-Maarit Korte3, and William Man-Yin Cheung2(B)

1 Department of Mathematics and Information Technology, The Education University of Hong Kong, 10 Lo Ping Road, Tai Po, New Territories, Hong Kong
[email protected]
2 Centre for Learning, Teaching and Technology, The Education University of Hong Kong, 10 Lo Ping Road, Tai Po, New Territories, Hong Kong
[email protected]
3 Media Education Hub, Faculty of Education, University of Lapland, P.O. Box 122, 96101 Rovaniemi, Finland
[email protected]

Abstract. Recent advances in artificial intelligence have demonstrated the importance of nurturing artificial intelligence literacy among citizens of different backgrounds to shape future society. Distinct from the traditional approach focusing on programming, a module emphasising artificial intelligence concepts was delivered to 29 students at a Finnish university in the academic years 2021–2022 and 2022–2023. The students came from various European and Asian countries, with 76% of them reporting not knowing programming. The results of pre- and post-module surveys on artificial intelligence literacy and concepts tests demonstrated that through the module, the students acquired statistically significant improvements in their understanding of artificial intelligence concepts and their self-perceived level of artificial intelligence literacy. Through reflective writing before and after completing the module, the students shared a change in perception: from being uncertain about the relevance of artificial intelligence to daily life and inappropriately equating it with robotics before completing the module, to understanding more about the working principles behind machine learning after completing the module. In addition to the favourable evaluations from the students, this article reports the positive results of a concept-focused approach to nurturing artificial intelligence literacy among university students from multiple cultural backgrounds.

Keywords: Artificial Intelligence Literacy · Machine Learning · Cultural Background

1 Introduction

The recent boom in the application of artificial intelligence (AI) in a wide range of areas has prompted much effort to nurture AI literacy among citizens from different cultural backgrounds [1]. People must understand foundational AI concepts and become AI literate to be ready for a future society and working life where AI will be even more ubiquitous and powerful [2]. A shift from the more computer science-oriented approach of teaching, which relies heavily on coding [3], to an emphasis on the acquisition of foundational concepts would be necessary so that people from different cultural backgrounds can develop an understanding of the working principles behind AI. The aim of this study was to address two research questions: (1) Can university students with non-technical backgrounds from various countries develop AI concepts without taking coding as a prerequisite? (2) Can a concept-focused module nurture AI literacy among university students with non-technical backgrounds from various countries?

2 Literature Review

2.1 Artificial Intelligence

AI refers to the field of computer science that focuses on creating intelligent machines capable of performing tasks that typically require human intelligence. Hence, AI systems are not intrinsically intelligent but are designed to learn to mimic certain aspects of human cognitive abilities [9]. To learn, AI employs various techniques, algorithms, and methodologies that enable intelligent machines to perceive their environment and learn from data and experiences, constantly improving their performance. AI can also learn to interact with humans and other machines. These capabilities are achieved through the use of various AI subfields such as machine learning, computer vision, natural language processing, and robotics [9–11]. AI has found applications in numerous domains, such as healthcare and transportation, and has the potential to revolutionise working lives through, for example, automating repetitive tasks in industries. It can also enhance learning environments by employing intelligent tutoring systems that tailor the learning content to the needs and capabilities of learners [11]. However, AI raises ethical, legal, and philosophical questions pertaining to privacy, bias, accountability, and its impact on the labour market [9], pointing to the need for AI literacy.

2.2 AI Literacy

AI literacy is defined as individuals' knowledge, understanding, and competency related to AI concepts. Furthermore, it encompasses one's ability to observe, ethically use, and critically evaluate AI applications and AI-produced content and their implications without necessarily having programming knowledge [1, 10]. It can also entail familiarity with AI applications in various domains such as health care and customer service. Being AI literate also means that one understands the capabilities and limitations of AI technologies. Hence, it includes the ability to critically evaluate AI systems and algorithms. It involves understanding how AI models are trained and the importance of high-quality data, and being able to identify biases and potential challenges in AI systems [9, 10]. To better define the areas that AI literacy education should address, the conceptual framework of AI literacy [12] further defines AI literacy in terms of cognitive, affective, and sociocultural dimensions. The cognitive dimension refers to the provision of education on AI concepts such as machine learning, algorithms, data mining, neural networks, natural language processing, and deep learning; understanding them; and developing adequate competencies to use them. The affective dimension refers to the empowerment of learners to be confident in digital environments and to believe in their own abilities to engage with AI. Lastly, the sociocultural dimension concerns learners' ethical awareness and ethical use of AI, so that one understands the potential risks and benefits associated with AI adoption [12]. Overall, AI literacy is crucial for individuals to become informed citizens, responsible users of AI technologies, and active participants in shaping the future of AI and its impact on society. Furthermore, it enables individuals to understand how AI systems work, empowering them to actively participate in discussions, policymaking, and decision-making processes related to AI [1, 9, 10].
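To make the machine learning concepts listed under the cognitive dimension more concrete, the sketch below contrasts a supervised classifier with an unsupervised clustering step using scikit-learn; k-nearest neighbours is an algorithm often summarised by the phrase "birds of a feather flock together". The dataset, model choices, and parameter values here are illustrative assumptions only and are not material from the module described in this paper.

```python
# Illustrative sketch: supervised vs. unsupervised learning on a toy dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised learning: the algorithm is given labelled examples (X, y).
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5)  # k-nearest neighbours
clf.fit(X_train, y_train)
print("Supervised test accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: the algorithm only sees X and looks for structure.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Unsupervised cluster sizes:", [int((clusters == k).sum()) for k in range(3)])
```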

3 Methodology

A module on AI literacy that focused on machine learning, owing to its wide applications in the current decade, was delivered to 29 international students in the academic years 2021–2022 and 2022–2023. The 29 students were either exchange students visiting a university in Finland or Finnish students studying at the same university. The students were from Belgium, China, Czech Republic, Finland, France, Germany, Hungary, Italy, Japan, Poland, Spain, Switzerland, and Uzbekistan. The module was part of the Global Media Education course, which belongs to the curriculum of the master's degree programme in Media Education. The teachers of the course were from Brazil, Finland, Germany, Hong Kong, and Italy. Distinct from many AI curricula that start by teaching programming [3], the module focuses on equipping students with the foundational concepts underlying machine learning. This module adopted a flipped classroom learning approach; that is, learning materials were first shared with students to study, followed by two synchronous online workshops for more in-depth discussions on what AI is and its underlying concepts. The module amounts to 9 hours of work, including self-learning by students before the workshops, with each workshop being 1.5–2 hours long. This module is an extract from the first of four parts of the material for an AI literacy programme that was developed for university students of all backgrounds. The programme starts by equipping students with AI concepts in machine learning and deep learning, followed by students applying the concepts to solve real-life problems and reflecting on the related ethical considerations. The last part of the programme covers Python coding and more recent developments in AI to further nurture students' AI literacy.

3.1 Instruments

A survey questionnaire, which assessed students' self-perceived level of AI literacy, was administered before and after the module. It consisted of 10 statements covering the 3 domains of AI literacy [5]. The statement, 'I know how to provide data for the computer to learn,' for example, assesses students' confidence in their mastery of 'AI concepts'. On the other hand, the statement, 'I can tell if a technological product used artificial intelligence (AI) or not,' is an example of measuring students' self-perception of 'using AI concepts for evaluation'. An example statement for evaluating students' confidence in 'using AI concepts to understand the world' is 'I know how to use AI to solve real-life problems'. The survey questionnaire used a 5-point Likert scale, with 1 corresponding to 'strongly disagree' and 5 corresponding to 'strongly agree'. It also collected the students' demographic information, such as their gender and whether they knew programming. To evaluate the students' acquisition of the AI concepts covered, a concepts test was also conducted before and after the module. The test was composed of 15 multiple-choice questions, each with 4 answer options. The questions were designed to assess students' conceptual understanding rather than technical details. For example, the question, 'Which algorithm for supervised learning involves the concept of "birds of a feather flocking together"?' assessed whether students could identify the appropriate machine learning algorithm to adopt under a particular scenario. Before and after completing the module, the students were also asked to write a 100- to 200-word reflection in English on their understanding of AI. To assess their opinions on the module, the students were asked to rate the statement, 'I understand more about artificial intelligence (AI) after attending the workshops,' on a 5-point Likert scale.

3.2 Student Demographics

Overall, the students exhibited cultural diversity, coming from several countries in Europe and Asia. Regarding their study background, they were undergraduate and master's degree students in education. Table 1 summarises the students' demographic information collected through the survey.

Table 1. Students' demographics

                         | Number of Students in 2021 | Number of Students in 2022 | Combined
Level of Study
  Bachelor               | 10 | 14 | 24
  Master                 | 3  | 2  | 5
  Total                  | 13 | 16 | 29
Gender
  Female                 | 11 | 13 | 24
  Male                   | 2  | 3  | 5
  Total                  | 13 | 16 | 29
Do you know programming
  Yes                    | 2  | 5  | 7
  No                     | 11 | 11 | 22
  Total                  | 13 | 16 | 29

In summary, 17% of the students were at the master's level, 83% were female, and 76% reported that they did not know programming.

4 Results and Discussion

4.1 Development of AI Concepts

The results of the AI concepts test are summarised in Table 2. The paired t-test revealed that the students showed statistically significant improvements in their understanding of the AI concepts through the module, even though they were from the education discipline and most of them did not know programming. This is consistent with the findings of previous studies [4–6] that evaluated AI literacy courses offered to students from various study backgrounds at an Asian university. In comparison, the students in the present study exhibited a wider cultural diversity (coming from various countries in Europe and Asia) and had a less technical background (all from education). By contrast, 31.7% of the students in the study of Kong et al. [4] came from computer science-related disciplines such as information and communication technology, and mathematics. In another study, statistically significant improvements in the acquisition of AI concepts were observed when the module was delivered to senior secondary students (equivalent to grades 10–12) [7].

Table 2. Statistical results on the AI concepts test before and after the workshops

Year the AI concepts test was taken | Before the workshops (max. 15): M (SD) | After the workshops (max. 15): M (SD) | Paired t-test score | N
2021     | 6.78 (1.79) | 9.89 (2.80) | 5.52**   | 9
2022     | 4.75 (1.69) | 9.00 (2.48) | 5.84***  | 16
Combined | 5.48 (1.96) | 9.32 (2.58) | 7.498*** | 25

*p < .05; **p < .01; ***p < .001

4.2 Improvement in Self-perceived AI Literacy

Table 3 presents a comparison of the results of the AI literacy survey before and after the workshops. It shows a statistically significant increase in the level of self-perceived AI literacy among the university students, who had non-technical backgrounds and came from various countries.

Table 3. Statistical results of the AI literacy survey before and after the workshops

Year of the AI literacy survey | Before the workshops (max. 5): M (SD) | After the workshops (max. 5): M (SD) | Paired t-test score | n
2021     | 2.55 (0.58) | 3.74 (0.52) | 5.40*** | 11
2022     | 2.91 (0.54) | 3.65 (0.46) | 5.49*** | 16
Combined | 2.77 (0.58) | 3.69 (0.48) | 7.36*** | 27

*p < .05; **p < .01; ***p < .001

4.3 Students' Reflections and Feedback

Table 4 tabulates the students' ratings of the workshops. Overall, they rated the workshops positively (4.14/5 when all responses were combined) in terms of enabling them to understand more about AI.

Table 4. Students' evaluation

Statement: I understand more about artificial intelligence (AI) after attending the workshops

          | M (max. 5) | SD   | n
Year 2021 | 4.15       | 0.55 | 13
Year 2022 | 4.13       | 0.50 | 16
Combined  | 4.14       | 0.52 | 29

The reflective writing submitted by the students before and after the workshops helped to show the qualitative changes in their understanding and perception of AI. Table 5 presents excerpts from the reflections of three students. The students revealed that they had not known much about AI before the workshops and that they often related AI to robotics. This is also consistent with the misconceptions about AI shown by Finnish students in the fifth and sixth grades [8]. After the workshops, the students expressed that they had acquired a greater understanding of AI (e.g. the concepts of 'supervised learning' and 'unsupervised learning'). They also gained more appreciation of the importance of AI in their daily lives.

Table 5. Quotes from students' reflective writing before and after the workshops

S1
Before the workshops: Artificial intelligences are intelligences built on algorithms that can help humans. They are created with programmes, and the computer thereby learns to predict or support behaviours. Especially in the advertising field, AIs have a very big impact, as they suggest products to people that they may have been looking for all along
After the workshops: Before these classes, I knew the word AI, but I had never been interested in it. After these classes, I have been able to see how we find AI in our daily lives and how we could use it in a useful way ourselves. On the other hand, I had always been 'scared' of how computers are increasingly supplanting people, and after these classes, I have realised that we are still far from being replaced, and actually, feelings, emotions, etc. can never be replaced

S2
Before the workshops: I think AI is a really important and necessary field nowadays. However, I do not have any knowledge of artificial intelligence, and I hope in this workshop, I will learn new things about it
After the workshops: Firstly, I was scared about these lectures because I did not know anything about artificial intelligence. However, I think these workshops were very useful for understanding some new concepts and knowing more about this topic. Now, I can say I know more about AI; for example, the difference between weak and strong artificial intelligence, supervised and unsupervised learning, or the tree diagram on machine learning

S3
Before the workshops: I do not know much about artificial intelligence, but I can say that it is related to the creation of machines with the same capacities and abilities of humans
After the workshops: At the beginning of the workshops, I didn't understand what 'artificial intelligence' was, and I was not aware about the great importance to learn about it in order to be able to use it. Currently, I am able to see AI as something different than just robots, as I used to think before taking these workshops. I am also able to recognise and distinguish 'supervised learning' and 'unsupervised learning' and the different kinds of problems within these two types of learning. I cannot forget how important the quality of the data is to get good results when building a model

5 Conclusion

AI technologies have been increasingly used globally in all domains and rapidly integrated into our daily lives, necessitating the development of appropriate curricula at all levels of education to achieve the necessary AI literacy competencies and enable learners to become active citizens in the future. This article reports the results of efforts to develop AI literacy among university students from widely varied cultural backgrounds. The study findings can therefore contribute to initiatives that promote the preparation of students for AI learning irrespective of their backgrounds or field of study. In spite of their diverse backgrounds and general lack of prior experience in programming, the students demonstrated improvements in their AI concept acquisition and levels of self-perceived AI literacy after completing a concept-focused module on AI literacy. Furthermore, the students expressed positive attitudes towards the module and a better understanding of AI. The primary contribution of this study is the demonstrated enhancement of the students' awareness and competency in AI literacy and the confirmation that no prior knowledge of coding is needed to understand AI concepts. The limitations of this study are the small number and varied age groups of participants. Owing to these limitations, we could not obtain more generalisable results. Hence, research possibilities within this topic could include comparative research with different population samples. Considering the wide diversity of students' cultural backgrounds, how AI literacy can be nurtured under different cultural norms can also be studied further through more in-depth focus-group interviews in the future.

Acknowledgements. This research was supported by the international cooperative project Global Media Education through the Development of Online Teaching, which involves the University of Lapland, Finland, and the Education University of Hong Kong, China, and was financed by the Finnish National Agency for Education, Team Finland Knowledge. The project involves the UNITWIN/UNESCO Network on Teacher Education for Social Justice and Diversity; the University of Lapland, Finland; the UNESCO Chair programme; and the Faculty of Humanities of the Education University of Hong Kong, China. More information on the project is available at https://glomed.webnode.fi. The authors also acknowledge funding support for this project from the Li Ka Shing Foundation and a grant from the Research Grants Council, University Grants Committee of the Hong Kong Special Administrative Region, China (Project No. EdUHK CB302), particularly in developing the AI course material and evaluation tools.

References

1. Laupichler, M.C., Aster, A., Schirch, J., Raupach, T.: Artificial intelligence literacy in higher and adult education: a scoping literature review. Comput. Educ. Artif. Intell. 3, 100101 (2022). https://doi.org/10.1016/j.caeai.2022.100101
2. Yi, Y.: Establishing the concept of AI literacy: focusing on competence and purpose. Eur. J. Bioethics 12(2), 353–368 (2021). https://doi.org/10.21860/j.12.2.8
3. Mishra, A., Siy, H.: An interdisciplinary approach for teaching artificial intelligence to computer science students. In: Proceedings of the 21st Annual Conference on Information Technology Education (SIGITE 2020), p. 344. Association for Computing Machinery, New York (2020). https://doi.org/10.1145/3368308.3415440
4. Kong, S.-C., Cheung, W.M.-Y., Zhang, G.: Evaluation of an artificial intelligence literacy course for university students with diverse study backgrounds. Comput. Educ. Artif. Intell. 2, 100026 (2021). https://doi.org/10.1016/j.caeai.2021.100026
5. Kong, S.-C., Cheung, W.M.-Y., Zhang, G.: Evaluating artificial intelligence literacy courses for fostering conceptual learning, literacy and empowerment in university students: refocusing to conceptual building. Comput. Hum. Behav. Rep. 7, 100223 (2022). https://doi.org/10.1016/j.chbr.2022.100223
6. Kong, S.-C., Cheung, W.M.-Y., Zhang, G.: Evaluating an artificial intelligence literacy programme for developing university students' conceptual understanding, literacy, empowerment and ethical awareness. Educ. Technol. Soc. 26(1), 16–30 (2023). https://www.jstor.org/stable/48707964
7. Kong, S.-C., Cheung, W.M.-Y., Tsang, O.: Evaluating an artificial intelligence literacy programme for empowering and developing concepts, literacy and ethical awareness in senior secondary students. Educ. Inf. Technol. 28, 4703–4724 (2023). https://doi.org/10.1007/s10639-022-11408-7
8. Mertala, P., Fagerlund, J., Calderon, O.: Finnish 5th and 6th grade students' pre-instructional conceptions of artificial intelligence (AI) and their implications for AI literacy education. Comput. Educ. Artif. Intell. 3, 100095 (2022). https://doi.org/10.1016/j.caeai.2022.100095
9. UNESCO (United Nations Educational, Scientific and Cultural Organization): AI and education: guidance for policy-makers (2021). https://unesdoc.unesco.org/ark:/48223/pf0000376709
10. Ng, D.T.K., Leung, J.K.L., Chu, S.K.W., Qiao, M.S.: Conceptualizing AI literacy: an exploratory review. Comput. Educ. Artif. Intell. 2, 100041 (2021). https://doi.org/10.1016/j.caeai.2021.100041
11. Salas-Pilco, S.Z., Xiao, K., Hu, X.: Artificial intelligence and learning analytics in teacher education: a systematic review. Educ. Sci. 12(8), 569 (2022). https://doi.org/10.3390/educsci12080569
12. Kong, S.-C., Zhang, G.: A conceptual framework for designing artificial intelligence literacy programmes for educated citizens. In: Kong, S.-C., Wang, Q., Huang, R., Li, Y., Hsu, T.C. (eds.) Conference Proceedings (English Paper) of the 25th Global Chinese Conference on Computers in Education (GCCCE 2021), pp. 11–15. The Education University of Hong Kong, Hong Kong (2021)

The Course of Precision Measurements from the Incorporation of Precision Machinery and Artificial Intelligence and the Learning Effects of Its Learning Materials

Dyi-Cheng Chen(B), Ying-Tsung Chen, Kuo-Cheng Wen, Wei-Lun Deng, Shang-Wei Lu, and Xiao-Wei Chen

Department of Industrial Education and Technology, National Changhua University of Education, No. 1, Jin De Road, Changhua 500, Taiwan
[email protected], [email protected], {m0831005,m0931019,m0931007}@gm.ncue.edu.tw

Abstract. The course planning for the Precision Measurements from the Incorporation of Precision Machinery and Artificial Intelligence course is a major task in vocational education. This study applied the DISCOVER model in course planning and divided the students into two groups: teacher trainees and non-teacher trainees. It further revised and optimized the teaching units and strategies by implementing courses and tests based on the prepared teaching strategies, and applied DISCOVER paired t-tests to analyze differences in learning effects. There were significant pre- to post-test differences in learning effects for chapters 1, 2, 5, 6, and 7 for the whole class, which indicates that these chapters were taught effectively. The effects were not significant in either group for chapters 3 and 4. Combining precision measurement, AI, and precision machinery enables mechanical engineering undergraduates to consolidate their professional competence. After integrating AI into professional knowledge, building on a base of mechanical knowledge, students will be more interested in mechanical engineering. In this way, in an environment where teaching benefits teachers and students, a win-win core value can be jointly created by teachers and students. This study emphasizes course spirit, teaching objectives, teaching implementation, teaching evaluation, and multi-dimensional evaluation. It encourages students to develop diverse capabilities and improves the teaching effectiveness of future teacher trainees. This will improve the overall educational level and accelerate the educational development process.

Keywords: The Course of Machining Technology · Artificial Intelligence · DISCOVER Model · Precision Machinery


1 Introduction

1.1 Motivation

The course planning for the "Precision Measurements from the Incorporation of Precision Machinery and Artificial Intelligence" course is significant in vocational education. It allows teachers to show teaching effectiveness and individual creativity. Teaching modes are changing with technological developments (Chang, 2019) [1]. We can think about the relationship between artificial intelligence (AI) technology and education starting from "B2 Technical Information and Media Literacy" of the 108 Curriculum Guidelines: the ability to use technology, information, and various media, and to develop the related ethics and media literacy, in order to analyze, speculate on, and critique the relationships between people and technology, between information and media, and other topics (Chen et al., 2021) [2]. Therefore, the precision machinery-AI course planning attempts to apply the DISCOVER model. This course planning provides a continuous and dynamic teaching and learning interaction, allows students to become interested in AI and precision machining, and lets them study the course at their own pace.

1.2 Purpose

This study opened up a new field of precision machining for applied education methods. At the beginning of the systematic teaching of AI, an organization should provide horizontal and underlying support, with the backing of students and teachers. Teaching strategies and ability indicators can help students improve their learning initiative during this establishment process. The introduction and support of experimental facilities can increase students' academic knowledge (Tsai, 2018) [3]. Mahmoud (2019) [4] suggested that the qualities of engineering graduates should meet national capacity needs and employers' needs. Students should shorten the career exploration period and establish a correct working attitude to expand their employment opportunities.

2 Literature Review

2.1 Artificial Intelligence

Data science technology, AI, machine learning, and deep learning have become the hottest topics in computer science. Research topics in AI include deduction, inference and problem-solving, knowledge representation, planning and learning, natural language processing, machine perception, social robotics, and creativity. AI education has spread from computer science to various engineering, mathematical, and scientific disciplines, and from large universities to various other educational institutions and programs (Goel, 2017) [5]. AI-based education focuses on personalized learning, reflecting the great value of vocational education. It enhances students' creativity, multidisciplinary thinking, critical thinking, and problem-solving ability. It allows teachers to analyze cases and problems during teaching and to shift the focus from skill and knowledge education to wisdom and spirit education. Vocational education should reform its teaching content and faculty to achieve the objective of talent training in the AI era (Ma, 2019) [6].

2.2 DISCOVER Model

The "Discovering Intellectual Strengths and Capabilities while Observing Varied Ethnic Responses (DISCOVER)" model was established by June Maker at Arizona State University in 1987. It is based on multiple intelligences and a problem-solving architecture. The DISCOVER model was originally used to observe the responses of different ethnic groups in order to identify intellectual strengths and potential. Maker (2013) [7] found that dominant intelligences can be evaluated by observing the quality and quantity of individuals' problem-solving strategies; in turn, learning through dominant intelligences helps improve problem-solving skills and overall learning ability. Using the DISCOVER course model, teachers can demonstrate various processes (involving different intelligences and content fields) and provide students with opportunities to use these processes. The problem continuum originally comprised two types, from which students could choose a specific method. Maker and Zimmerman (2015) proposed four further types of problem continuum based on the original two, allowing students to select, create, and apply their own methods [8]. In a DISCOVER course, students are encouraged to find the learning style that suits them. Table 1 lists the six types of problems. Teachers act as guides or consultants, encouraging students to study the selected role progressively in depth and to progressively resolve the complications that arise (Tsai, 2013) [9].

Table 1. Comparison of six types of problems in DISCOVER (Guo, 2013) [10]

Problem Type | Problem Specificity | Problem Solving Approach | Whether Students Know the Approach | Answer Type
Type I   | Specific     | Only one approach              | Yes | With a standard answer
Type II  | Specific     | Only one approach              | No  | With a standard answer
Type III | Specific     | Multiple approaches            | No  | With a standard answer
Type IV  | Specific     | Multiple approaches            | No  | Without a standard answer
Type V   | Not specific | Approaches created by students | No  | Without a standard answer
Type VI  | Not specific | Approaches created by students | No  | Without a standard answer

3 Research Methods

3.1 Research Design

This study implemented teaching using a quasi-experimental design. It focused on the mechanical engineering students in the Department of Industrial Education and Technology at the National Changhua University of Education. The experimental group (X1) consisted of 20 students and included teacher education students for mechanical engineering, cartography, sheet metalworking, automobile, and living technology in secondary education. The control group consisted of 20 students, giving a total sample size of 40 students. The grouping was based on the basic mechanical practice and programming scores in the pre-tests. The students were grouped into teacher education students and non-teacher education students. The DISCOVER model was adopted, and the experiment lasted 16 weeks. Students in the experimental group were provided with a DISCOVER teaching unit for thinking and design every two weeks. DISCOVER divides problems into six levels, with a higher level representing a higher degree of openness of the problem; the levels comprise Type I to Type VI. Pre- and post-tests were conducted every two weeks, for a total of six pre- and post-tests. Table 2 lists the approach adopted in the experiment.

Table 2. Experimental teaching model

Group                   | Pretest             | Module Teaching (Treatment) | Posttest
Experimental group (G1) | O1 (Type I–Type VI) | X1                          | O2 (Type I–Type VI)
Experimental group (G2) | O1 (Type I–Type VI) | X2                          | O2 (Type I–Type VI)

Experimental groups X1, X2: DISCOVER model.

Teaching Methods. This study performed pre-tests on precision measurement competence and basic AI concepts, with heterogeneous grouping of the students in the mechanical engineering department. It designed teaching units (for the Precision Measurements from the Incorporation of Precision Machinery and Artificial Intelligence course of the mechanical engineering department) and teaching strategies (the DISCOVER model) and implemented experimental teaching. Afterwards, it analyzed the pre- and post-tests as a summative evaluation of the differences among students in their learning. The teaching units, objectives, and teaching strategies were then revised and finalized, and promotion and advocacy were strengthened.

3.2 Research Steps

Research Topics. This study developed a teaching satisfaction scale for review and pre-testing, and then used the developed competence scale for the pre-tests. The students were grouped into teacher trainees and non-teacher trainees. Based on the developed course objectives, teaching units, and teaching strategies, this study designed course evaluation indicators, and experts were invited to revise and finalize them. Experimental teaching was then implemented; during this period, students and teachers could communicate at any time, so that the teaching strategies and units could be revised. Post-tests and summative evaluations were then implemented. Finally, experts and scholars were again invited to revise and finalize the course objectives, teaching units, and teaching strategies.

Research Method. This study also took qualitative approaches, such as observation and interviews. Observing and interviewing students helped ensure systematic execution. A research survey tool was used to interview teachers, and detailed interview records were made during data collection. The detailed records helped in cross-checking and making necessary revisions. During the teaching process, the researchers observed and recorded students' learning situations in a participatory manner.

Research Methods and Tools. This study applied SPSS for statistical analysis after the experimental teaching and post-tests to understand the improvement in learning effects. Considering Type I and Type II errors, this study set the significance level α at 0.05. The statistical analysis methods applied in this study include:

1. One-way analysis of covariance. The learning effect pre-test was used as a covariate to analyze differences in the learning effect post-tests among different teaching methods. Upon reaching the significance level, multivariate covariance analysis and post hoc tests were implemented. Before implementing the one-way multivariate analysis of covariance, in addition to ensuring that the basic assumptions of variance analysis were met, this study verified that the test of homogeneity of regression was not significant and checked for multicollinearity.

2. Paired t-tests. The DISCOVER teaching model used paired t-tests to analyze the six types of pre- and post-test experimental designs, comparing differences in the learning results and attitudes of students among the different types.

Based on the preceding process, this program completed 8 work items, as listed in Table 3.
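As a rough illustration of the two analyses described above, the following Python sketch fits an ANCOVA-style model (post-test as outcome, pre-test as covariate, group as factor) and runs a paired t-test at α = 0.05 on hypothetical data. The variable names, group labels, and scores are assumptions made for illustration; the study itself used SPSS, and this is a minimal sketch of the procedure, not its actual analysis script.

```python
# Minimal sketch (hypothetical data): pre-test as covariate + paired t-test.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 20
df = pd.DataFrame({
    "group": ["teacher"] * n + ["non_teacher"] * n,   # teaching-method factor
    "pre":   rng.normal(60, 10, 2 * n),                # pre-test scores
})
df["post"] = df["pre"] + rng.normal(8, 5, 2 * n)       # post-test scores

# 1) ANCOVA-style model: post-test explained by the pre-test (covariate) and group.
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))                 # F-tests for pre and group

# 2) Paired t-test on pre- vs post-test scores within one group.
teacher = df[df["group"] == "teacher"]
t_stat, p_value = stats.ttest_rel(teacher["pre"], teacher["post"])
print(f"paired t = {t_stat:.3f}, p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
```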


4 Results and Discussions

4.1 Paired t-Tests on Competence Scale Tests

Competence scale tests were performed before and after classes. There were a total of 40 (multiple-choice) questions. The samples were grouped into teacher trainees (23) and non-teacher trainees (13) for the paired t-tests. Based on the analysis results, neither the teacher trainees (.365) nor the non-teacher trainees (.487) reached the significance level, as listed in Table 4.

Table 4. Paired t-test results on competence scale tests

                                            | Mean     | SD       | SE of Mean | 95% CI of the Difference | t     | df | Sig. (two-tailed)
Pre- and post-tests of teacher trainees     | −2.61364 | 13.23949 | 2.82267    | [−8.48370, 3.25642]      | −.926 | 21 | .365
Pre- and post-tests of non-teacher trainees | 3.21429  | 16.79711 | 4.48922    | [−6.48408, 12.91265]     | .716  | 13 | .487

4.2 Paired t-Tests on the DISCOVER Tests

The DISCOVER tests were performed six times and included both multiple-choice and essay questions. The first pair consisted of pre- and post-tests on chapters 1 and 2, the second of pre- and post-tests on chapters 3 and 4, and the third of pre- and post-tests on chapters 5 to 7. According to the paired t-tests on the three pre- and post-test pairs, chapters 1 and 2 reached the significance level (.035), chapters 3 and 4 did not (.247), and chapters 5 to 7 did (.011). The students were then grouped into teacher and non-teacher trainees for in-depth analysis, as listed in Table 5.

Table 5. Paired t-test results on three tests

                                      | Mean     | SD       | SE of Mean | 95% CI of the Difference | t      | df | Sig. (two-tailed)
Pre- and post-tests on chapters 1 and 2 | −9.25000 | 25.23051 | 4.20508 | [−17.78678, −.71322]  | −2.200 | 35 | .035
Pre- and post-tests on chapters 3 and 4 | −4.38889 | 22.36742 | 3.72790 | [−11.95694, 3.17916]  | −1.177 | 35 | .247
Pre- and post-tests on chapters 5 to 7  | −9.88889 | 21.99971 | 3.66662 | [−17.33252, −2.44526] | −2.697 | 35 | .011

4.3 Paired t-Tests on Teacher Trainees

The pre- and post-tests completed by the teacher trainees were analyzed using paired t-tests. According to the results, chapters 1 and 2 did not reach the significance level (.051), chapters 3 and 4 did not reach it either (.102), and chapters 5 to 7 reached it (.005), as listed in Table 6.

Table 6. Paired t-test results of teacher trainees on chapters 1 to 7

                                      | Mean      | SD       | SE of Mean | 95% CI of the Difference | t      | df | Sig. (two-tailed)
Pre- and post-tests on chapters 1 and 2 | −11.30682 | 25.65951 | 5.47063 | [−22.68361, .06997]   | −2.067 | 21 | .051
Pre- and post-tests on chapters 3 and 4 | −7.72727  | 21.16417 | 4.51222 | [−17.11094, 1.65639]  | −1.713 | 21 | .102
Pre- and post-tests on chapters 5 to 7  | −12.81818 | 19.39474 | 4.13497 | [−21.41733, −4.21904] | −3.100 | 21 | .005

The tests on chapters 1 and 2 were classified into six categories by DISCOVER type. This study conducted paired t-tests on the pre- and post-test results after computing the mean scores of the teacher trainees for the six types. According to the results, Type I, Type II, Type V, and Type VI did not reach the significance level, whereas Type III (.014) and Type IV (.017) did, as listed in Table 7.

Table 7. DISCOVER paired t-test results of teacher trainees on chapters 1 and 2

                       | Mean     | SD       | SE of Mean | 95% CI of the Difference | t      | df | Sig. (two-tailed)
Type I pre-/post-test  | −2.95455 | 7.18117  | 1.53103 | [−6.13850, .22941]   | −1.930 | 21 | .067
Type II pre-/post-test | −3.40909 | 10.95297 | 2.33518 | [−8.26537, 1.44718]  | −1.460 | 21 | .159
Type III pre-/post-test | −2.06818 | 3.60322 | .76821  | [−3.66576, −.47060]  | −2.692 | 21 | .014
Type IV pre-/post-test | −2.11364 | 3.80455  | .81113  | [−3.80048, −.42680]  | −2.606 | 21 | .017
Type V pre-/post-test  | .27273   | 3.73203  | .79567  | [−1.38196, 1.92742]  | .343   | 21 | .735
Type VI pre-/post-test | −1.03409 | 4.98468  | 1.06274 | [−3.24417, 1.17599]  | −.973  | 21 | .342

The tests on chapters 3 and 4 were classified into six categories by DISCOVER type. This study conducted paired t-tests on the pre- and post-test results after computing the mean scores of the teacher trainees for the six types. According to the results, none of Type I to Type VI reached the significance level, as listed in Table 8.

Table 8. DISCOVER paired t-test results of teacher trainees on chapters 3 and 4

                       | Mean     | SD       | SE of Mean | 95% CI of the Difference | t      | df | Sig. (two-tailed)
Type I pre-/post-test  | −6.22727 | 18.55733 | 3.95644 | [−14.45513, 2.00059] | −1.574 | 21 | .130
Type II pre-/post-test | −.45455  | 2.57695  | .54941  | [−1.59710, .68801]   | −.827  | 21 | .417
Type III pre-/post-test | .00000  | 1.60357  | .34188  | [−.71098, .71098]    | .000   | 21 | 1.000
Type IV pre-/post-test | .09091   | 2.20193  | .46945  | [−.88537, 1.06719]   | .194   | 21 | .848
Type V pre-/post-test  | −.36364  | 1.36436  | .29088  | [−.96856, .24129]    | −1.250 | 21 | .225
Type VI pre-/post-test | −.31818  | 1.08612  | .23156  | [−.79974, .16338]    | −1.374 | 21 | .184

The tests on chapters 5 to 7 were classified into six categories by DISCOVER type. This study conducted paired t-tests on the pre- and post-test results after computing the mean scores of the teacher trainees for the six types. According to the results, Type II, Type V, and Type VI did not reach the significance level, whereas Type I (.029), Type III (.000), and Type IV (.024) did, as listed in Table 9.


Table 9. DISCOVER paired t-test results of teacher trainees on chapters 5 to 7

Pair | Mean | Standard Deviation | Mean Standard Error | 95% CI Lower | 95% CI Upper | t | Degrees of Freedom | Sig. (two-tailed)
Type I pre- and post-tests | −4.77273 | 9.57144 | 2.04064 | −9.01647 | −.52899 | −2.339 | 21 | .029
Type II pre- and post-tests | −1.13636 | 8.71941 | 1.85898 | −5.00233 | 2.72961 | −.611 | 21 | .548
Type III pre- and post-tests | −4.63636 | 4.63471 | .98812 | −6.69128 | −2.58145 | −4.692 | 21 | .000
Type IV pre- and post-tests | −2.09091 | 4.03448 | .86015 | −3.87970 | −.30212 | −2.431 | 21 | .024
Type V pre- and post-tests | −.40909 | .95912 | .20449 | −.83434 | .01616 | −2.001 | 21 | .059
Type VI pre- and post-tests | −.45455 | 1.47122 | .31367 | −1.10685 | .19776 | −1.449 | 21 | .162

4.4 Paired t-Tests on Non-teacher Trainees

Pre- and post-tests completed by non-teacher trainees were analyzed using paired t-tests. According to the results, chapters 1 and 2 did not reach the significance level (.387), chapters 3 and 4 did not reach the significance level (.896), and chapters 5 to 7 did not reach the significance level (.455), as listed in Table 10.

Table 10. Paired t-test results of non-teacher trainees on chapters 1 to 7

Pair | Mean | Standard Deviation | Mean Standard Error | 95% CI Lower | 95% CI Upper | t | Degrees of Freedom | Sig. (two-tailed)
Pre- and post-tests on chapters 1 and 2 | −6.01786 | 25.13702 | 6.71815 | −20.53154 | 8.49583 | −.896 | 13 | .387
Pre- and post-tests on chapters 3 and 4 | .85714 | 23.97710 | 6.40815 | −12.98682 | 14.70111 | .134 | 13 | .896
Pre- and post-tests on chapters 5 to 7 | −5.28571 | 25.65280 | 6.85600 | −20.09720 | 9.52577 | −.771 | 13 | .455


Tests on chapters 1 and 2 were classified into six categories by DISCOVER type. This study conducted paired t-tests on the pre- and post-test results after computing the mean scores of the six types for the non-teacher trainees. According to the results, Type I, Type IV, Type V, and Type VI did not reach the significance level, while Type II (.027) and Type III (.014) reached the significance level, as listed in Table 11.

Table 11. DISCOVER paired t-test results of non-teacher trainees on chapters 1 and 2

Pair | Mean | Standard Deviation | Mean Standard Error | 95% CI Lower | 95% CI Upper | t | Degrees of Freedom | Sig. (two-tailed)
Type I pre- and post-tests | −4.61538 | 8.02640 | 2.22612 | −9.46569 | .23492 | −2.073 | 12 | .060
Type II pre- and post-tests | −2.30769 | 3.30113 | .91557 | −4.30254 | −.31284 | −2.521 | 12 | .027
Type III pre- and post-tests | −2.07692 | 2.61284 | .72467 | −3.65585 | −.49800 | −2.866 | 12 | .014
Type IV pre- and post-tests | −.92308 | 2.84199 | .78823 | −2.64048 | .79432 | −1.171 | 12 | .264
Type V pre- and post-tests | −.15385 | 2.70327 | .74975 | −1.78742 | 1.47973 | −.205 | 12 | .841
Type VI pre- and post-tests | .82692 | 4.86632 | 1.34967 | −2.11377 | 3.76761 | .613 | 12 | .552

Tests on chapters 3 and 4 were classified into six categories by the DISCOVER type. This study conducted paired t-tests on the pre- and post-test results after calculating the mean scores of the six types for the non-teacher trainees. According to the results, Type I to Type VI did not reach the significance level, as listed in Table 12. Tests on chapters 5 to 7 were classified into six categories by the DISCOVER type. This study conducted paired t-tests on the pre- and post-test results after calculating the mean scores of the six types for the non-teacher trainees. According to the results, Type I to Type VI did not reach the significance level, as listed in Table 13.

Table 12. DISCOVER paired t-test results of non-teacher trainees on chapters 3 and 4

Pair | Mean | Standard Deviation | Mean Standard Error | 95% CI Lower | 95% CI Upper | t | Degrees of Freedom | Sig. (two-tailed)
Type I pre- and post-tests | 2.00000 | 18.42657 | 4.92471 | −8.63918 | 12.63918 | .406 | 13 | .691
Type II pre- and post-tests | −.07143 | 3.75119 | 1.00255 | −2.23730 | 2.09444 | −.071 | 13 | .944
Type III pre- and post-tests | −.28571 | 1.43734 | .38414 | −1.11561 | .54418 | −.744 | 13 | .470
Type IV pre- and post-tests | −1.07143 | 3.07507 | .82185 | −2.84692 | .70406 | −1.304 | 13 | .215
Type V pre- and post-tests | −.21429 | 1.80506 | .48242 | −1.25650 | .82792 | −.444 | 13 | .664
Type VI pre- and post-tests | .64286 | 1.44686 | .38669 | −.19254 | 1.47825 | 1.662 | 13 | .120

Table 13. DISCOVER paired t-test results of non-teacher trainees on chapters 5 to 7

Pair | Mean | Standard Deviation | Mean Standard Error | 95% CI Lower | 95% CI Upper | t | Degrees of Freedom | Sig. (two-tailed)
Type I pre- and post-tests | −3.92857 | 12.11778 | 3.23861 | −10.92517 | 3.06803 | −1.213 | 13 | .247
Type II pre- and post-tests | −.35714 | 7.95765 | 2.12677 | −4.95175 | 4.23747 | −.168 | 13 | .869
Type III pre- and post-tests | .71429 | 3.85164 | 1.02940 | −1.50959 | 2.93816 | .694 | 13 | .500
Type IV pre- and post-tests | −1.14286 | 4.55492 | 1.21735 | −3.77279 | 1.48707 | −.939 | 13 | .365
Type V pre- and post-tests | −.35714 | 1.33631 | .35714 | −1.12870 | .41442 | −1.000 | 13 | .336
Type VI pre- and post-tests | −.57143 | 1.50457 | .40211 | −1.44014 | .29729 | −1.421 | 13 | .179

4.5 Independent t-Test on the Satisfaction Questionnaire

Based on the independent t-test on the satisfaction questionnaire, the mean scores were mostly greater than 4. This indicates high satisfaction among the students, whether teacher trainees or non-teacher trainees. Most students learned something, as reflected in their performance. Students also learned about the combination of precision measurements and AI in the first-year program. Students summarized what they had learned through written and oral reports and queried related data for extended learning, maximizing the benefit of the course. Students also learned knowledge beyond textbooks. Table 14 lists the independent t-test data.

Table 14. Independent t-test on the satisfaction questionnaire (T = teacher trainees, NT = non-teacher trainees)

Item | Mean (T) | Mean (NT) | Standard Deviation (T) | Standard Deviation (NT) | Mean Standard Error (T) | Mean Standard Error (NT) | 95% CI Lower | 95% CI Upper | t | Degrees of Freedom | Sig. (two-tailed)
Question 1 | 4.04 | 4.08 | .767 | .760 | .160 | .211 | −.573 | .506 | −.126 | 34 | .900
Question 2 | 4.13 | 4.46 | .757 | .519 | .158 | .144 | −.812 | .150 | −1.398 | 34 | .171
Question 3 | 4.13 | 4.13 | .757 | .506 | .158 | .140 | −.733 | .225 | −1.078 | 34 | .288
Question 4 | 3.91 | 3.91 | .668 | .480 | .139 | .133 | −.824 | .035 | −1.869 | 34 | .070
Question 5 | 4.17 | 4.17 | .650 | .801 | .136 | .222 | −.479 | .519 | .082 | 34 | .935
Question 6 | 4.48 | 4.46 | .511 | .519 | .106 | .144 | −.345 | .379 | .094 | 34 | .926
Question 7 | 4.43 | 4.23 | .507 | .725 | .106 | .201 | −.214 | .622 | .991 | 34 | .329
Question 8 | 4.22 | 4.00 | .736 | .707 | .153 | .196 | −.294 | .729 | .863 | 34 | .394
Question 9 | 4.09 | 3.85 | .733 | .801 | .153 | .222 | −.293 | .775 | .916 | 34 | .366
Question 10 | 3.74 | 4.07 | .810 | .760 | .169 | .211 | −.897 | .221 | −1.228 | 34 | .228
Question 11 | 4.09 | 3.85 | 4.09 | .801 | .733 | .191 | −.293 | .775 | .916 | 34 | .366
Question 12 | 4.26 | 4.15 | 4.26 | .689 | .752 | .201 | −.408 | .622 | .422 | 34 | .675
Question 13 | 4.26 | 4.23 | .619 | .725 | .129 | .178 | −.434 | .494 | .132 | 34 | .896
Question 14 | 4.13 | 4.08 | .757 | .641 | .158 | .183 | −.453 | .560 | .215 | 34 | .831
Question 15 | 4.22 | 4.46 | .671 | .660 | .140 | .175 | −.715 | .226 | −1.054 | 34 | .299
Question 16 | 4.30 | 4.30 | .703 | .630 | .147 | .175 | −.482 | .475 | −.014 | 34 | .989
Question 17 | 4.57 | 4.00 | .590 | .577 | .123 | .160 | .152 | .978 | 2.783 | 34 | .009
Question 18 | 4.26 | 4.15 | .689 | .555 | .144 | .154 | −.348 | .562 | .478 | 34 | .635
Question 19 | 4.04 | 4.00 | .767 | .707 | .160 | .196 | −.483 | .570 | .168 | 34 | .868
Question 20 | 4.09 | 4.09 | .668 | .660 | .139 | .183 | −.844 | .095 | −1.622 | 34 | .114
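Unlike the paired comparisons above, Table 14 compares the two groups item by item with an independent-samples t-test. A short sketch of that computation is given below; the ratings are illustrative placeholders, not the study's questionnaire responses.

```python
# Minimal sketch of an independent-samples t-test comparing two groups' ratings
# on one questionnaire item (illustrative data; requires numpy and scipy).
import numpy as np
from scipy import stats

teacher_trainees = np.array([4, 5, 4, 3, 4, 5, 4, 4, 5, 4, 3,
                             4, 5, 4, 4, 5, 4, 3, 4, 4, 5, 4])   # n1 = 22
non_teacher_trainees = np.array([4, 4, 5, 4, 3, 4, 4,
                                 5, 4, 4, 3, 4, 5, 4])           # n2 = 14

# Pooled-variance test: df = n1 + n2 - 2 = 34, matching Table 14.
t_stat, p_value = stats.ttest_ind(teacher_trainees, non_teacher_trainees, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```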

4.6 Qualitative Interview Records

Interviews were conducted with five students who studied the course (three teacher trainees and two non-teacher trainees). There were five questions in total, as listed in Table 15. Based on the answers, students generally benefited from the course. Students also pointed out aspects of the course that need improvement. Improving the course can better meet students' needs, keep up with the times, reduce the gap between learning and application, and meet corporate needs.


Table 15. Interview outline

Topic | Outline | Expected Data to Be Obtained
Topic 1 | What is the difference between the AI-integrated precision measurement course and a traditional precision measurement course? | Understand whether students have a certain perception of the course content and the difference between the AI-integrated precision measurement course and a traditional precision measurement course
Topic 2 | How many points out of 10 will you give to the AI-integrated precision measurement course? Why? | Understand the evaluation of students on the course from the scores they have given and what they have learned from the course
Topic 3 | What new knowledge have you learned from the AI-integrated precision measurement course? | Understand whether students have learned knowledge about AI and combined it with previous experience as AI was integrated into the course
Topic 4 | Does the content of the AI-integrated precision measurement course meet your expectations? Why? | Understand the expectations of students about the course as they had certain expectations about the course content before studying it
Topic 5 | What constructive suggestions would you like to give on the AI-integrated precision measurement course? | Understand the aspects of similar courses to be improved in the future through students' feedback after studying the course for a semester

5 Conclusions

This study grouped students into teacher trainees and non-teacher trainees. It further revised and optimized teaching units and strategies by implementing courses and tests based on the prepared teaching strategies and applying DISCOVER paired t-tests to analyze differences in learning effects. According to the pre- and post-tests on the competence scale, there was no significant difference in the two groups' paired t-test results, possibly due to the wide scope of the test: it covered chapters 1 to 7 and consisted of many questions, and some question types were supplementary questions that were not mentioned during class, adding difficulty for students.

According to the DISCOVER paired t-tests, there were significant differences in learning effects between the pre- and post-tests on chapters 1 and 2 and chapters 5 to 7 for the whole class. This indicates that these chapters were taught effectively. Paired t-tests were conducted on teacher trainees and non-teacher trainees on the DISCOVER questions in chapters 1 to 7 to verify whether there were differences in Types I to VI between the two groups. For chapters 1 and 2, teacher trainees reached the significance level in Types III and IV, and non-teacher trainees reached the significance level in Types II and III. For chapters 3 and 4, neither group reached the significance level. For chapters 5 to 7, teacher trainees reached the significance level in Types I, III, and IV; in comparison, non-teacher trainees did not reach the significance level.

The DISCOVER types ranged from Type I to Type VI, with a higher type representing a higher degree of question openness. Students in both groups were good at Type III (multiple solutions with a standard answer). Teacher trainees were also good at Type IV (multiple solutions with no standard answer) in chapters 1 and 2 and chapters 5 to 7, which indicates that teacher trainees were better at open-ended questions. Generally, students were not good at closed-ended questions after studying the course.

Acknowledgments. I am grateful to the National Science and Technology Council, Taiwan, for its support and funding for this research with Project Number MOST 110-2511-H-018-004.



Concerns About Using ChatGPT in Education

Shu-Min Lin1, Hsin-Hsuan Chung2, Fu-Ling Chung2(B), and Yu-Ju Lan3

1 Tamkang University, New Taipei City, Taiwan
2 University of North Texas, Denton, USA
[email protected]
3 National Taiwan Normal University, Taipei, Taiwan

Abstract. This study aims to explore the concerns about using ChatGPT in education that have been investigated by researchers from a variety of disciplines. This study conducted a bibliometric analysis of 47 existing studies from the Scopus database, applied VOS viewer to construct and visualize the thematic analysis, and discussed three major educational concerns when using ChatGPT: (1) Ethics, (2) Plagiarism, and (3) Academic integrity. Several potential solutions to address these concerns were also mentioned. This study concluded that it cannot be denied that ChatGPT can help users brainstorm and provide personalized services in many fields. However, if scholars and educators over-rely on it, they may lose the originality and novelty of their academic work and face plagiarism problems. Keywords: ChatGPT · Education · Concerns · VOS Viewer

1 Introduction

Recently, the extensive worldwide acceptance of ChatGPT has showcased its remarkable versatility in various applications and scenarios, such as software development and testing, essays, business letters, and contracts (Reed 2022; Tung 2023). ChatGPT is a free and conversational AI chatbot software application using natural language processing (NLP) launched by OpenAI on November 30, 2022 (Rudolph et al. 2023; Tlili et al. 2023). The human-like conversations that ChatGPT provides allow users to ask questions, make requests, and receive responses in seconds (Rudolph et al. 2023). Also, ChatGPT can answer follow-up questions, challenge incorrect premises, and admit mistakes (Zhai 2022). The powerful functions that ChatGPT supports make it the most advanced chatbot in the world so far because it has received much attention for engaging in conversation naturally and intuitively with users (Rudolph et al. 2023).

However, it is not easy to distinguish the text generated by ChatGPT or humans, so higher education teachers have difficulty assessing students (Rudolph et al. 2023). In addition, Lin et al. (2023) indicated that more and more people applied NLP to build conversational chatbots, but the system lacks human cognition capability when the chatbot is a companion in learning. Kasneci et al. (2023) pointed out the importance of critical thinking, competencies, and literacies necessary to understand the technology and its unexpected frangibility since there still are risks and biases of AI applications. Therefore, this study aims to answer the following research questions: (1) What are the concerns about ChatGPT in education? (2) What are some ways educators could utilize ChatGPT from different disciplines? (3) What are the potential solutions to solve the concerns about using ChatGPT in education? The following sections briefly describe the research method, present results, and discuss study implications and conclusions.

2 Methods

The selection of existing literature is a critical step in conducting a literature analysis. As shown in Fig. 1, in stage 1 the keywords "ChatGPT" and "education*" were used to perform an initial search to retrieve related articles from the Scopus database, which returned 340 documents. In stage 2, 243 documents were removed because their titles and abstracts did not contain the keywords "ChatGPT" and "education*," leaving 97 documents for further screening. In stage 3, in order to maintain the consistency of the research quality, the document type was limited to Article and the language was limited to English. Finally, 47 eligible articles, retrieved by May 22, 2023, were the main interest of this analysis.

Fig. 1. The research design.
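As a rough sketch of the three-stage screening shown in Fig. 1 (assuming the Scopus results were exported to a CSV file; the file name and column names are assumptions, not the actual export schema):

```python
# Rough sketch of the three-stage screening described above (assumed CSV export
# with Title, Abstract, Document Type and Language columns; requires pandas).
import pandas as pd

records = pd.read_csv("scopus_export.csv")                       # stage 1: 340 documents retrieved

def mentions_both(row):
    text = f"{row['Title']} {row['Abstract']}".lower()
    return "chatgpt" in text and "education" in text             # loose match for the education* truncation

screened = records[records.apply(mentions_both, axis=1)]         # stage 2: title/abstract screening
eligible = screened[(screened["Document Type"] == "Article") &   # stage 3: articles only,
                    (screened["Language"] == "English")]         #          English only
print(len(records), len(screened), len(eligible))
```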

The next step was to conduct a bibliometric analysis of the existing literature and apply VOS viewer to construct and visualize bibliometric networks that can identify and cluster key terms to reveal the leading directions of future research (Ali and Gölgeci 2019). The Co-occurrence type was chosen in VOS viewer with the All keywords unit of analysis to build the co-occurrence keyword network. After combining similar keywords, setting two as the minimum number of occurrences of a keyword, and removing the searching keywords, 33 of 311 keywords were retained for closer discussion.
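VOS viewer performs the keyword counting and clustering itself; purely as an illustration of the underlying co-occurrence logic described above (the keyword lists below are hypothetical, not the 47 analyzed articles), the thresholding and pair counting amount to the following:

```python
# Illustrative sketch of keyword co-occurrence counting: keywords appearing in the
# same article co-occur, and keywords below the minimum-occurrence threshold or
# matching the search terms are dropped (hypothetical keyword lists).
from collections import Counter
from itertools import combinations

articles = [
    ["chatgpt", "higher education", "plagiarism"],
    ["chatgpt", "academic integrity", "large language models"],
    ["chatgpt", "plagiarism", "academic integrity"],
]

occurrences = Counter(kw for kws in articles for kw in set(kws))
kept = {kw for kw, n in occurrences.items() if n >= 2}            # minimum number of occurrences = 2
kept -= {"chatgpt"}                                               # remove the searching keywords

cooccurrence = Counter()
for kws in articles:
    for a, b in combinations(sorted(set(kws) & kept), 2):
        cooccurrence[(a, b)] += 1                                 # link strength between keyword pairs

print(kept, dict(cooccurrence))
```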

3 Results and Discussion

This section reports the concerns mentioned in the target articles and the co-occurrence network result from VOS viewer, and presents the thematic analysis.


3.1 Concerns Mentioned in Target Articles

This study identified some major and minor concerns mentioned in the 47 target articles, as shown in Table 1. The results showed that 43 (91%) of the articles indicated concerns about using ChatGPT in the education field, including ethical concerns, cheating, academic misconduct, and incorrect information. These concerns take up a large portion of the discussion of ChatGPT in the education field; that is, educators cannot ignore these concerns when using ChatGPT.

Table 1. Concerns mentioned in the 47 target articles.

Authors | Title | Concerns
Abdel-Messih et al. (2023) | ChatGPT in Clinical Toxicology | X
Alafnan et al. (2023) | ChatGPT as an Educational Tool: Opportunities, Challenges, and Recommendations for Communication, Business Writing, and Composition Courses | Ethical concerns; Human unintelligence and unlearning
Alnaqbi and Fouda (2023) | Exploring the Role of ChatGPT and social media in Enhancing Student Evaluation of Teaching Styles in Higher Education Using Neutrosophic Sets | Practical and ethical concerns
Cascella et al. (2023) | Evaluating the Feasibility of ChatGPT in Healthcare: An Analysis of Multiple Clinical and Research Scenarios | Ethical concerns
Choi et al. (2023) | Chatting or cheating? The impacts of ChatGPT and other artificial intelligence language models on nurse education | Cheating on assignments and examinations
Cingillioglu (2023) | Detecting AI-generated essays: the ChatGPT challenge | Academic integrity
Cooper (2023) | Examining Science Education in ChatGPT: An Exploratory Study of Generative Artificial Intelligence | Ethical concerns
Corsello and Santangelo (2023) | May Artificial Intelligence Influence Future Pediatric Research?—The Case of ChatGPT | Ethical concerns
Cotton et al. (2023) | Chatting and cheating: Ensuring academic integrity in the era of ChatGPT | Academic integrity and honesty; Plagiarism
Crawford et al. (2023) | Leadership is needed for ethical ChatGPT: Character, assessment, and learning using artificial intelligence (AI) | Plagiarism and academic integrity
Dwivedi et al. (2023) | "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy | Disruptions to practices; Threats to privacy and security; Consequences of biases, misuse, and misinformation
Elfaki et al. (2023) | Revolutionizing Social Robotics: A Cloud-Based Framework for Enhancing the Intelligence and Autonomy of Social Robots | X
Farrokhnia et al. (2023) | A SWOT analysis of ChatGPT: Implications for educational practice and research | Ethical concerns and cheating
Fergus et al. (2023) | Evaluating Academic Answers Generated Using ChatGPT | Contain errors; Provide incorrect answers
Frith (2023) | ChatGPT: Disruptive Educational Technology | The erosion of students' accountability to learn
Geerling et al. (2023) | ChatGPT has Aced the Test of Understanding in College Economics: Now What? | Academic dishonesty
Gilson et al. (2023) | How Does ChatGPT Perform on the United States Medical Licensing Examination? The Implications of Large Language Models for Medical Education and Knowledge Assessment | Insufficient information
Grünebaum et al. (2023) | The exciting potential for ChatGPT in obstetrics and gynecology | Plagiarism
Gupta et al. (2023) | Utilization of ChatGPT for Plastic Surgery Research: Friend or Foe? | X (no full-text)
Halaweh (2023) | ChatGPT in education: Strategies for responsible implementation | Concerns stem from text generation and ideas generation
Hallsworth et al. (2023) | Scientific novelty beyond the experiment | Reinforces concerns about slowing innovative activity
Huh (2023) | Are ChatGPT's knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study | The value of the assessments and the overall quality of the university program diminished
Humphry and Fuller (2023) | Potential ChatGPT Use in Undergraduate Chemistry Laboratories | No copyright on any of the text it generates
Hwang and Chen (2023) | Editorial Position Paper: Exploring the Potential of Generative Artificial Intelligence in Education: Applications, Challenges, and Future Research Directions | Ethical concerns; Misuse and effectiveness problems
Ibrahim et al. (2023) | Rethinking Homework in the Age of Artificial Intelligence | Ethical concerns
Iskender (2023) | Holy or Unholy? Interview with Open AI's ChatGPT | Lack of originality and novelty; Students' critical thinking reduced
Ivanov and Soliman (2023) | Game of algorithms: ChatGPT implications for the future of tourism education and research | The validity of works; The worth of academic degrees
Jeon and Lee (2023) | Large language models in education: A focus on the complementary relationship between human teachers and ChatGPT | Inappropriate or unethical student behavior
Johinke et al. (2023) | Reclaiming the technology of higher education for teaching digital writing in a post-pandemic world | Students' autonomy and literacy skills; The ability of teachers to hear student voices
Karaali (2023) | Artificial Intelligence, Basic Skills, and Quantitative Literacy | Ethical concerns; Relevance problem
Khan et al. (2023) | ChatGPT-Reshaping medical education and clinical management | Plagiarism and cheating
Kooli (2023) | Chatbots in Education and Research: A Critical Examination of Ethical Implications and Solutions | Ethical concerns; Lack of empathy
Lecler et al. (2023) | Revolutionizing radiology with GPT-based models: Current applications, future possibilities and limitations of ChatGPT | Lack domain expertise; Unreliable results; Inconsistent or nonsensical answers
Lim et al. (2023) | Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators | Academic misconduct (unethical and dishonest practices and behaviors)
Lo (2023) | The CLEAR path: A framework for enhancing information literacy through prompt engineering | X
Masters (2023) | Ethical use of artificial intelligence in health professions education: AMEE Guide No. 158 | Ethical concerns
Pavlik (2023) | Collaborating With ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education | Ethical issues; The question of accountability
Perkins (2023) | Academic Integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond | Academic integrity
Santandreu-Calonge et al. (2023) | Can ChatGPT improve communication in hospitals? | Ethical concerns; Provide seemingly credible but inaccurate responses
Seney et al. (2023) | Using ChatGPT to Teach Enhanced Clinical Judgment in Nursing Education | Plagiarism; Students' abilities for synthesizing evidence into their own words underdeveloped
Sevgi et al. (2023) | The role of an open artificial intelligence platform in modern neurosurgical education: a preliminary study | Absence of citations for scientific queries
Shoufan (2023) | Exploring Students' Perceptions of ChatGPT: Thematic Analysis and Follow-Up Survey | The accuracy of given answers
Strzelecki (2023) | To use or not to use ChatGPT in higher education? A study of students' acceptance and use of technology | Without a formal review process
Su and Yang (2023) | Unlocking the Power of ChatGPT: A Framework for Applying Generative AI in Education | The untested effectiveness of the technology; Limitations in the quality of data; Ethical and safety concerns
Sun and Hoelscher (2023) | The ChatGPT Storm and What Faculty Can Do | Academic integrity
Tlili et al. (2023) | What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education | Cheating, honesty, and truthfulness; Privacy misleading; Manipulation
Yan (2023) | Impact of ChatGPT on learners in a L2 writing practicum: An exploratory investigation | Academic honesty; Educational equity

3.2 Co-occurrence Network

This study adopted a software tool, VOS viewer, to construct and visualize the data. In Fig. 2, there are three clusters: red, green, and blue. Cluster 1, in red, includes 16 keywords: adult, ai, article, chatbot, education, educational status, ethics, follow up, human experiment, humans, knowledge, language, male, nursing, nursing education, and nursing student. Cluster 2, in green, includes 14 keywords: academic integrity, clinical practice, conversational agent, data analysis, generative ai, internet, large language models, machine learning, medical education, nlp, openai, performance, teacher, and technology. Cluster 3, in blue, includes 3 keywords: educational technologies, higher education, and plagiarism.

Fig. 2. All keywords co-occurrence network diagram for ChatGPT in education.

3.3 Thematic Analysis

For the first and second research questions, this section presents the concerns about using ChatGPT in education and how educators utilize ChatGPT in different disciplines. Since the study focused on the educational concerns about ChatGPT, this section discusses the major concern in each of the three clusters in Fig. 2. The major concern in Cluster 1 (red) was Ethics, the major concern in Cluster 2 (green) was Academic integrity, and the major concern in Cluster 3 (blue) was Plagiarism.

Some scholars mentioned several concerns about ethics in using ChatGPT. Dwivedi et al. (2023) indicated that ChatGPT benefits people in information technology industries, banking, hospitality, and tourism. Still, users need to recognize the limitations of ChatGPT, including threats to privacy and security, disruptions to practices, misuse, misinformation, and consequences of biases. Kasneci et al. (2023) highlighted several challenges of AI, such as users having to keep oversight and avoid potential bias. People should pay attention to preventing the misuse of AI so that it is used in a responsible and ethical manner in education. Iskender (2023) applied ChatGPT as an interviewee to examine its impact on higher education and academic publishing in the hospitality and tourism industry. ChatGPT can help teachers convey tasks, help students brainstorm ideas, and help the tourism and hospitality industry provide personalized services and create advertising content. However, if users over-rely on ChatGPT, they are likely to have lower critical thinking ability and lose the originality and novelty of their academic work (Iskender 2023). Tlili et al. (2023) conducted a case study to examine ChatGPT in educational settings. The authors called people's attention to the use of ChatGPT because of ethical issues such as cheating, privacy misleading, manipulation, and the honesty and truthfulness of ChatGPT. Cascella et al. (2023) pointed out that although ChatGPT, trained on a massive dataset of text for dialogue, can bring benefits and impress people with its capabilities, users are still concerned about ethical issues and the performance of ChatGPT in real-world scenarios, especially in healthcare and medicine, which require high-level and complex thinking.

Some scholars mentioned several concerns about plagiarism in using ChatGPT. Cotton et al. (2023) suggested that AI tools like ChatGPT can increase learners' accessibility, collaboration, and engagement, but it is hard to detect academic honesty and plagiarism in higher education. Lim et al. (2023) pointed out that ChatGPT has popularized generative AI, an educational game-changer, and they suggested that people should adopt generative AI rather than avoid it in the future of education. Although ChatGPT can generate text for multiple processing tasks, such as translating languages, summarizing text, and creating dialogue systems, people worry about plagiarism and cheating issues when using ChatGPT (Khan et al. 2023).

Some scholars mentioned several concerns about academic integrity in using ChatGPT. Cascella et al. (2023) assessed the practicality of ChatGPT in clinical practice, scientific production, misuse in medicine, and reasoning about public health topics, and the results suggested that people should recognize the drawbacks of AI-based tools and apply them in a proper way in medical education. The role of ChatGPT in medical education and clinical management includes automated scoring, teaching or research assistance, creating content to facilitate learning, personalized learning, decision support, and patient communication; still, it cannot replace humans' knowledge and capability (Khan et al. 2023). Thurzo et al. (2023) pointed out that AI applications in dental education started in 2020, and most dental educators were not trained to use AI technology.

For the third research question, on the potential solutions to solve the concerns about using ChatGPT in education, this study summarized several suggestions for educational organizations to address the concerns about Ethics, Plagiarism, and Academic integrity. Tlili et al. (2023) suggested that people in education should carefully apply chatbots, especially ChatGPT, in safe settings and take responsibility for the adoption.
Also, universities have to ensure the ethical and responsible use of AI tools with suitable strategies, such as providing training, developing policies, and discovering approaches to detect and prevent cheating (Cotton et al. 2023). Teachers have to model and use ChatGPT responsibly and prioritize critical thinking to avoid the risk of copyright infringement, potential environmental impact, and issues related to content moderation when science teachers design units, rubrics, and quizzes (Cooper 2023). Nevertheless, ChatGPT still has advantages in certain fields. Pavlik (2023) promoted the potential of AI for journalism and media education because it generates content with high-quality written expression. ChatGPT is free to use and allows users to input text prompts and receive fast text responses, the result of machine learning on Internet data.

4 Conclusion

This study conducted a bibliometric analysis to analyze the existing literature on ChatGPT in education and applied the VOS viewer to construct and visualize the thematic analysis. It focused on understanding the current status of this field, including the concerns about using ChatGPT in education, the ways educators could utilize ChatGPT in different disciplines, and the potential solutions to address the concerns about using ChatGPT in education. Since ChatGPT in education is an ongoing hot topic in many fields, the results of this study may support researchers in developing and designing their research in the future.

Since 43 of the 47 (91%) target articles indicated concerns about using ChatGPT in the education field, the authors provided a general view of the concerns about using ChatGPT in education from three aspects: (1) Ethics, (2) Plagiarism, and (3) Academic integrity. ChatGPT can be applied in many fields, such as the information technology industries, medicine, and education. In the information technology industries, users have to consider the issues of privacy, security, and misinformation. Also, when users ask ChatGPT to make decisions, they may face ethical or legal problems, since AI applications have limited critical thinking ability. In the medical field, ChatGPT is used for clinical practice, scientific production, and reasoning about public health topics, but it still cannot replace humans' knowledge and capability. Not to mention, most dental educators have not been trained to use ChatGPT. In the education field, ChatGPT can help users brainstorm and provide personalized services. However, if people over-rely on ChatGPT, they may lose the originality and novelty of their academic work and run into plagiarism issues.

To sum up, ChatGPT is currently a popular AI tool used in most fields to support people's work. However, as educators, our priority is to make people aware of its advantages and disadvantages. ChatGPT can indeed aid people in generating ideas and making decisions more quickly and easily, but users should still take responsibility for ethical and legal issues.

Acknowledgement. We thank the National Science and Technology Council, Taiwan, ROC, under grant numbers MOST 110-2511-H-003-038-MY3 and MOST 111-2410-H-003-006-MY3 for financially supporting this research.

References

Abdel-Messih, M.S., Boulos, M.N.K.: ChatGPT in clinical toxicology. J. Med. Internet Res. 9, e46876 (2023)
AlAfnan, M.A., Dishari, S., Jovic, M., Lomidze, K.: ChatGPT as an educational tool: opportunities, challenges, and recommendations for communication, business writing, and composition courses. J. Artif. Intell. Technol. 3(2), 60–68 (2023)
Ali, I., Gölgeci, I.: Where is supply chain resilience research heading? A systematic and cooccurrence analysis. Int. J. Phys. Distrib. Logist. Manag. 49(8), 793–815 (2019)
Alnaqbi, N.M., Fouda, W.: Exploring the role of ChatGPT and social media in enhancing student evaluation of teaching styles in higher education using neutrosophic sets. Int. J. Neutrosophic Sci. 20(4), 181–190 (2023)
Cascella, M., Montomoli, J., Bellini, V., Bignami, E.: Evaluating the feasibility of ChatGPT in healthcare: an analysis of multiple clinical and research scenarios. J. Med. Syst. 47(1), 1–5 (2023)
Choi, E.P.H., Lee, J.J., Ho, M.H., Kwok, J.Y.Y., Lok, K.Y.W.: Chatting or cheating? The impacts of ChatGPT and other artificial intelligence language models on nurse education. Nurse Educ. Today 125, 105796 (2023)
Cingillioglu, I.: Detecting AI-generated essays: the ChatGPT challenge. Int. J. Inf. Learn. Technol. 40(3), 259–268 (2023)
Cooper, G.: Examining science education in ChatGPT: an exploratory study of generative artificial intelligence. J. Sci. Educ. Technol. 1–9 (2023)
Corsello, A., Santangelo, A.: May artificial intelligence influence future Pediatric research?—the case of ChatGPT. Children 10(4), 757 (2023)
Cotton, D.R.E., Cotton, P.A., Shipway, J.R.: Chatting and cheating: ensuring academic integrity in the era of ChatGPT. Innov. Educ. Teach. Int. (ahead-of-print) 1–12 (2023)
Crawford, J., Cowling, M., Allen, K.A.: Leadership is needed for ethical ChatGPT: character, assessment, and learning using artificial intelligence (AI). J. Univ. Teach. Learn. Pract. 20(3), 02 (2023)
Dahmen, J., et al.: Artificial intelligence bot ChatGPT in medical research: the potential game changer as a double-edged sword. Knee Surgery Sports Traumatol. Arthroscopy 1–3 (2023)
Dwivedi, Y.K., et al.: "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 71, 102642 (2023)
Elfaki, A.O., et al.: Revolutionizing social robotics: a cloud-based framework for enhancing the intelligence and autonomy of social robots. Robotics 12(2), 48 (2023)
Farrokhnia, M., Banihashem, S.K., Noroozi, O., Wals, A.: A SWOT analysis of ChatGPT: implications for educational practice and research. Innov. Educ. Teach. Int. 1–15 (2023)
Fergus, S., Botha, M., Ostovar, M.: Evaluating academic answers generated using ChatGPT. J. Chem. Educ. 100(4), 1672–1675 (2023)
Frith, K.H.: ChatGPT: disruptive educational technology. Nurs. Educ. Perspect. 44(3), 198–199 (2023)
Geerling, W., Mateer, G.D., Wooten, J., Damodaran, N.: ChatGPT has aced the test of understanding in college economics: now what? Am. Econ. 05694345231169654 (2023)
Gilson, A., et al.: How does ChatGPT perform on the United States medical licensing examination? The implications of large language models for medical education and knowledge assessment. JMIR Med. Educ. 9(1), e45312 (2023)
Grünebaum, A., Chervenak, J., Pollet, S.L., Katz, A., Chervenak, F.A.: The exciting potential for ChatGPT in obstetrics and gynecology. Am. J. Obstet. Gynecol. 228(6), 696–705 (2023)
Gupta, R., Herzog, I., Weisberger, J., Chao, J., Chaiyasate, K., Lee, E.S.: Utilization of ChatGPT for plastic surgery research: friend or foe? J. Plast. Reconstr. Aesthet. Surg. 80, 145–147 (2023)
Halaweh, M.: ChatGPT in education: strategies for responsible implementation. Contemp. Educ. Technol. 15(2), ep421 (2023)
Hallsworth, J.E., et al.: Scientific novelty beyond the experiment. Microb. Biotechnol. 16, 1131–1173 (2023)
Huh, S.: Are ChatGPT's knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study. J. Educ. Eval. Health Prof. 20(1) (2023)
Humphry, T., Fuller, A.L.: Potential ChatGPT use in undergraduate chemistry laboratories. J. Chem. Educ. 100(4), 1434–1436 (2023)
Hwang, G.J., Chen, N.S.: Editorial position paper. Educ. Technol. Soc. 26(2) (2023)
Ibrahim, H., Asim, R., Zaffar, F., Rahwan, T., Zaki, Y.: Rethinking homework in the age of artificial intelligence. IEEE Intell. Syst. 38(2), 24–27 (2023)
Iskender, A.: Holy or unholy? Interview with Open AI's ChatGPT. Eur. J. Tourism Res. 34, 3414 (2023)
Ivanov, S., Soliman, M.: Game of algorithms: ChatGPT implications for the future of tourism education and research. J. Tourism Futures 9(2), 214–221 (2023)
Jeon, J., Lee, S.: Large language models in education: a focus on the complementary relationship between human teachers and ChatGPT. Educ. Inf. Technol. 1–20 (2023)
Johinke, R., Cummings, R., Di Lauro, F.: Reclaiming the technology of higher education for teaching digital writing in a post-pandemic world. J. Univ. Teach. Learn. Pract. 20(2), 01 (2023)
Karaali, G.: Artificial intelligence, basic skills, and quantitative literacy. Numeracy 16(1), 9 (2023)
Kasneci, E., et al.: ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individual Differ. 103, 102274 (2023)
Khan, R.A., Jawaid, M., Khan, A.R., Sajjad, M.: ChatGPT - reshaping medical education and clinical management. Pak. J. Med. Sci. 39(2), 605–607 (2023)
Kooli, C.: Chatbots in education and research: a critical examination of ethical implications and solutions. Sustainability 15(7), 5614 (2023)
Lecler, A., Duron, L., Soyer, P.: Revolutionizing radiology with GPT-based models: current applications, future possibilities and limitations of ChatGPT. Diagn. Interv. Imaging 104(6), 269–274 (2023)
Lim, W.M., Gunasekara, A., Pallant, J.L., Pallant, J.I., Pechenkina, E.: Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. Int. J. Manag. Educ. 21(2), 100790 (2023)
Lin, C.C., Huang, A.Y., Yang, S.J.: A review of AI-driven conversational chatbots implementation methodologies and challenges (1999–2022). Sustainability 15(5), 4012 (2023)
Lo, L.S.: The CLEAR path: a framework for enhancing information literacy through prompt engineering. J. Acad. Librariansh. 49(4), 102720 (2023)
Masters, K.: Ethical use of artificial intelligence in health professions education: AMEE Guide No. 158. Med. Teach. 45(6), 574–584 (2023)
Pavlik, J.V.: Collaborating with ChatGPT: considering the implications of generative artificial intelligence for journalism and media education. J. Mass Commun. Educ. 78(1), 84–93 (2023)
Perkins, M.: Academic integrity considerations of AI large language models in the post-pandemic era: ChatGPT and beyond. J. Univ. Teach. Learn. Pract. 20(2), 07 (2023)
Qadir, J.: Engineering education in the era of ChatGPT: promise and pitfalls of generative AI for education. TechRxiv Preprint (2022)
Reed, L.: ChatGPT for Automated Testing: From conversation to code. Sauce Labs (2022)
Rudolph, J., Tan, S., Tan, S.: ChatGPT: bullshit spewer or the end of traditional assessments in higher education? J. Appl. Learn. Teach. 6(1) (2023)
Santandreu-Calonge, D., Medina-Aguerrebere, P., Hultberg, P., Shah, M.A.: Can ChatGPT improve communication in hospitals? Profesional de la información 32(2) (2023)
Seney, V., Desroches, M.L., Schuler, M.S.: Using ChatGPT to teach enhanced clinical judgment in nursing education. Nurse Educ. 48(3), 124 (2023)
Sevgi, U.T., Erol, G., Doğruel, Y., Sönmez, O.F., Tubbs, R.S., Güngor, A.: The role of an open artificial intelligence platform in modern neurosurgical education: a preliminary study. Neurosurg. Rev. 46(1), 86 (2023)
Shoufan, A.: Exploring students' perceptions of CHATGPT: thematic analysis and follow-up survey. IEEE Access 11, 38805–38818 (2023)
Strzelecki, A.: To use or not to use ChatGPT in higher education? A study of students' acceptance and use of technology. Interact. Learn. Environ. 1–14 (2023)
Su, J., Yang, W.: Unlocking the power of ChatGPT: a framework for applying generative AI in education. ECNU Rev. Educ. 20965311231168423 (2023)
Sun, G.H., Hoelscher, S.H.: The ChatGPT storm and what faculty can do. Nurse Educ. 48(3), 119–124 (2023)
Thurzo, A., Strunga, M., Urban, R., Surovková, J., Afrashtehfar, K.I.: Impact of artificial intelligence on dental education: a review and guide for curriculum update. Educ. Sci. 13(2), 150 (2023)
Tlili, A., et al.: What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn. Environ. 10(1), 1–24 (2023)
Tung, L.: ChatGPT can write code. Now researchers say it's good at fixing bugs, too. ZDNet (2023)
Yan, D.: Impact of ChatGPT on learners in a L2 writing practicum: an exploratory investigation. Educ. Inf. Technol. 1–25 (2023)
Zhai, X.: ChatGPT user experience: implications for education. Available at SSRN 4312418 (2022)

Comparing Handwriting Fluency in English Language Teaching Using Computer Vision Techniques

Chuan-Wei Syu1, Shao-Yu Chang2, and Chi-Cheng Chang1(B)

1 National Taiwan Normal University, Taipei City 106, Taiwan
[email protected]
2 University of Illinois at Urbana-Champaign, Urbana, IL 61801-3633, USA

Abstract. Educational materials play a vital role in effectively conveying information to learners, with the readability and legibility of written text serving as crucial factors. This study investigates the influence of font selection on educational materials and explores the relationship between handwriting fluency and cognitive load. By identifying challenges in written expression, such as reduced working memory capacity, text organization difficulties, and content recall issues, the study sheds light on the significance of neat handwriting. The research emphasizes the relevance of neat handwriting in critical examinations, including college entrance exams, academic English exams, and job interviews, where the fluency of one’s handwriting can impact the decision-making process of interviewers. This highlights the value of handwriting fluency beyond educational contexts. Advancements in computer science and machine vision present new opportunities for automating font evaluation and selection. By employing machine vision algorithms to objectively analyze visual features of fonts, such as serifs, stroke width, and character spacing, the legibility and readability of fonts used in English language teaching materials are assessed. In this study, machine vision techniques are applied to score fonts used in educational materials. The OpenCV computer vision library is utilized to extract visual features of fonts from images, enabling the analysis of their legibility and readability. The primary objective is to provide educators with an automated and objective tool for scoring handwriting, reducing visual fatigue, and ensuring impartial evaluations. This research contributes to enhancing the quality of educational materials and provides valuable insights for educators, researchers, and font designers. Keywords: Handwriting fluency · Legibility · Readability · Machine vision · OpenCV

1 Introduction

Educational materials, such as textbooks, workbooks, and teaching aids, have a crucial role in conveying information to learners. In this context, the readability and legibility of the written text are of utmost importance. Research has widely recognized that font selection significantly affects the effectiveness of these materials in facilitating learning and comprehension. Cognitive load theory suggests a close relationship between disfluency and cognitive load [1].

Despite significant advancements in computer and typographic layout technologies, writing continues to play a pivotal role in learning. Supporting this notion, an experimental study [2] conducted with 30 undergraduate students revealed challenges in various aspects of written expression. These challenges encompassed reduced working memory capacity while typing, difficulty in organizing structured texts, and recalling written content. Interestingly, the study demonstrated that writing with a pen yielded superior results in all aspects. According to a systematic review [3] conducted on the research related to automated writing evaluation systems, it has been found that such studies have been conducted over a period of two decades. While numerous studies have focused on enhancing the quality of written texts produced by learners, automated writing evaluation systems, as opposed to improving their handwriting fluency, remain dominant [4]. However, is handwriting fluency only pertinent to students who have received formal education as writers or pursued careers as wordsmiths? We contend that this is not the case. Proficient writing in English holds significant value in critical examinations, including college entrance exams, academic English exams for graduate studies, and English proficiency exams for job hunting [5]. Moreover, evidence suggests that even in professional settings, the fluency of one's handwriting during an interview can influence the interviewer's decision-making process [6]. This highlights the broader significance of handwriting fluency beyond educational contexts.

Recent advancements in computer science and machine vision present new opportunities for automating the font evaluation and selection process. Machine vision, a field within computer science, enables computers to interpret and analyze visual information from images and videos. Leveraging machine vision algorithms, visual features of fonts, such as serifs, stroke width, and character spacing, can be identified and extracted. These features can then be used to score and rank fonts based on their legibility and readability. In this study, our aim is to explore the application of machine vision techniques in scoring fonts used in English language teaching materials. To achieve this objective, we utilize the open-source computer vision library, OpenCV, to extract visual features of fonts from images and analyze their legibility and readability. Our primary goal is to provide educators with an automated and objective tool for scoring handwriting. This tool can alleviate visual fatigue experienced by reviewers due to handwritten submissions and offer an impartial scoring mechanism.

The structure of this paper is as follows. The next section presents a comprehensive literature review incorporating relevant studies and research gaps related to test requirements and machine vision. Subsequently, we outline the methodology employed in our study, including data collection, image processing, and feature extraction. Following that, we present the results of our experiments, which demonstrate the effectiveness of our proposed font evaluation method. Finally, we conclude with a discussion of the implications of our study and highlight potential avenues for future research.


2 Literature Review

2.1 Test Requirements and Handwriting Fluency

English proficiency tests, such as GEPT (General English Proficiency Test), TOEIC (Test of English for International Communication), and TOEFL (The Test of English as a Foreign Language), are widely recognized as graduation thresholds for colleges and universities in Taiwan [7]. These tests are also commonly used by corporations as part of their recruitment process, especially after the academic year concludes. While recent shifts towards online testing have occurred due to the COVID-19 pandemic, it is possible that in-person testing may return in the future. Through a study [8] that analyzed a substantial sample of 2,996 examinations, it was observed that computer-generated essays received higher ratings in comparison to handwritten essays. This outcome is consistent with a prior investigation [9], which identified that handwriting may impact the initial impression of an essay scorer. Specifically, some evaluators perceive handwriting as an indicator of the writer's attitude and literacy level, and as a result, view it as a potential hindrance to the readability of the text.

Fig. 1. Comparison of handwriting fluency and neatness (Left: fluent, right: scribbled)

The significance of neat handwriting in English essays is clearly demonstrated in the provided image (see Fig. 1). Studies have shown that handwriting fluency is closely linked to meta-cognition, and neatness of handwriting is likely to be related to students' time allocation and planning abilities during exams [10].

2.2 Machine Vision and OpenCV

Machine vision, an interdisciplinary field combining computer science, mathematics, and physics, utilizes computer algorithms to interpret and understand images and video. By extracting information from images, it has various applications, such as quality control, robotics, and medical imaging. Among the challenging tasks in machine vision is scene text recognition, which has gained considerable attention from researchers in recent years [11]. One potential application of machine vision in handwriting recognition is in the grading of handwritten documents, such as essays or exams.

Handwriting recognition is challenging due to the natural variability in handwriting styles and the complexity of the writing process. Traditional methods of handwriting recognition rely on stroke direction, curvature, and spacing to identify characters, but these methods have limitations in capturing the full range of variability in handwriting styles. While early work in this area focused on print, the rise of deep learning has enabled the utilization of large amounts of handwritten text data for machine vision. Techniques have evolved from early convolutional neural networks (CNNs) to today's graph convolutional networks (GCNs) [12] for text recognition. At the application level, there are suites of tools available, such as OpenCV, Tesseract, and Kraken, among others, that lower the threshold for the use of existing technologies. By leveraging computer algorithms to analyze pixel-level features, machine vision can provide more accurate and objective evaluations of handwriting.

3 Methodology

3.1 Dataset and Data Processing

This academic article aims to present the process and findings of a study that examined the ten highest-scoring essays from the 2022 Subject Aptitude Test in Taiwan. These essays constitute the primary dataset for our research and will be referred to as the "student Dataset" (Fig. 2, top). To analyze the data, we employed the Optical Character Recognition (OCR) function of a software package to convert the Datasets into text files in the .txt format. Subsequently, we conducted an analysis of the frequency of occurrence for each character within the dataset, and the results are presented in a table.

Fig. 2. Student dataset description and preliminary processing flow
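A minimal sketch of this preprocessing step is given below. The OCR backend (pytesseract), file names, and punctuation handling are assumptions, since the paper only refers to the OCR function of "a software package".

```python
# Sketch of the preprocessing described above: OCR the scanned essays into plain
# text, then count word frequencies (placeholder file names; pytesseract is one
# possible OCR backend, assumed here).
import string
from collections import Counter

import pytesseract
from PIL import Image

essay_files = ["essay_01.png", "essay_02.png"]        # scanned pages of the student Dataset
texts = [pytesseract.image_to_string(Image.open(path)) for path in essay_files]

def normalize(token):
    return token.strip(string.punctuation).lower()

word_counts = Counter()
for text in texts:
    word_counts.update(normalize(w) for w in text.split() if normalize(w))

# These frequencies later serve as weights: frequent words are treated as more
# representative of the texts when building the teacher (training) dataset.
for word, freq in word_counts.most_common(20):
    print(word, freq)
```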


3.2 Training

To ensure the completeness of our study's training set, it would be ideal to include an equal amount of font data from teachers. However, we recognize the difficulty of this task given the substantial size of the student dataset used in this study, which contains a total of 3524 words. Thus, we understand that asking an average English teacher to undertake this process is not feasible. In the preceding phase of our study, we compiled a list of 329 words and their corresponding frequency of occurrence across all the texts. These frequency values were then utilized as weights for the student Dataset. We presume that the higher the frequency of occurrence, the more representative the word is of the texts. With this in mind, we aim to streamline the process without compromising the quality of our results.

To create the training set for our study, we engaged teachers to transcribe all of the keywords several times (five times in this study). Subsequently, we subjected the transcribed words to various procedures, such as noise removal, graying, and enhancement, to produce the training set, which we referred to as the "teacher Dataset." Finally, we utilized keras training to develop a post-training model in Extensible Markup Language (.xml) format (see Fig. 3).

Fig. 3. Teacher dataset to the end of training
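The paper does not specify the network architecture, image size, or label encoding used in the Keras training step, so the following is only a sketch of what training a keyword-image classifier could look like under those assumptions; placeholder arrays stand in for the transcribed and preprocessed teacher Dataset, and the model file name is hypothetical.

```python
# Sketch (assumptions: 32x128 grayscale word patches, one class per weighted
# keyword, five transcriptions per keyword). Requires numpy and TensorFlow/Keras.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

num_keywords = 329                                               # weighted keywords from the frequency list
x_train = np.random.rand(num_keywords * 5, 32, 128, 1).astype("float32")  # placeholder images
y_train = np.repeat(np.arange(num_keywords), 5)                  # placeholder labels, 5 copies each

model = keras.Sequential([
    layers.Input(shape=(32, 128, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_keywords, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32)
model.save("teacher_model.h5")   # Keras's native format; the paper reports exporting an .xml model file
```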

4 Results and Discussion

4.1 Results

Through the utilization of the trained model, our research proposes to incorporate grayscale and binarization into the input data (see Fig. 4 left), with the objective of expediting the elimination of the underlying line and enhancing the program's capacity to filter and identify matching keywords. Our research findings suggest that achieving a certain degree of scoring effect does not require the inclusion of all keywords as training sets. Specifically, we observed that certain words, such as "park", are present in nearly all texts and trigger the machine to visually mark the corresponding box, which subsequently returns a probability value.


Fig. 4. Input adjustment and identification

To obtain a score, the average of these probability values can be used, although it may be appropriate to weight them in practice (Table 1).

Table 1. Probability mean and training set size reference table.

The number of words in the training set | Text length: short (about 100 words) | Text length: medium (half) | Text length: long (full text)
Top 20 with the most weight | .98 | .55 | .66
Different number of transcriptions according to weight | .63 | .77 | .82
All words copied five times | .71 | .81 | .86
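A minimal sketch of this scoring rule, assuming the recognizer returns one probability per detected keyword and that the earlier frequency counts serve as optional weights; all numbers below are made up for illustration.

```python
# Score an essay from per-keyword recognition probabilities.
# Probabilities and weights are illustrative placeholders.
probs = {"park": 0.91, "family": 0.76, "weekend": 0.64}
weights = {"park": 9, "family": 5, "weekend": 3}   # e.g. corpus frequencies

plain_score = sum(probs.values()) / len(probs)                      # simple mean
weighted_score = (sum(probs[w] * weights[w] for w in probs)
                  / sum(weights[w] for w in probs))                 # weighted mean

print(f"mean = {plain_score:.2f}, weighted mean = {weighted_score:.2f}")
```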

Based on the table presented above, it can be observed that a larger training set leads to more accurate and stable text judgments. Specifically, a greater amount of text data results in the system providing more valid scores, whereas shorter text data produces less consistent outcomes. Overall, our investigation revealed that acceptable results can be achieved by copying words with a weight of 5 or greater five times and adding the remaining words to the training set only once. This level of effort is akin to producing an essay, and we contend that it is both feasible and meaningful.

4.2 Discussion and Conclusion

In the realm of English essay grading, the consideration of handwriting as a grading factor, or at the very least the explicit inclusion of grading criteria, is of utmost importance. Handwriting fluency and the quality of written content have been identified as influential factors in exam performance. Our experiment has shed light on the current limitations of technology in this domain. Specifically, students with illegible handwriting are more prone to having sections of their text rendered unreadable, potentially leading to unintended higher scores for students with similarly messy writing. Looking ahead, as Optical Character Recognition (OCR) technology continues to advance, it may become possible for teachers to rely solely on electronic compositions for grading purposes. Moreover, it may become feasible for the system to provide a reliable score for handwriting fluency based on teachers' writing samples. In conclusion, this study emphasizes the significance of handwriting fluency in various important exams and job interviews. Machine vision techniques have been employed to develop an automated and objective tool for scoring handwriting in English language teaching materials. The proposed font evaluation method has been demonstrated to be effective, offering benefits to educators, researchers, and font designers. Future research could focus on the development of more advanced machine vision techniques for handwriting recognition and explore their application in other domains, such as medical handwriting recognition.

References

1. Seufert, T., Wagner, F., Westphal, J.: The effects of different levels of disfluency on learning outcomes and cognitive load. Instr. Sci. 45(2), 221–238 (2016). https://doi.org/10.1007/s11251-016-9387-8
2. Bouriga, S., Olive, T.: Is typewriting more resources-demanding than handwriting in undergraduate students? Read. Writ. 34(9), 2227–2255 (2021)
3. Nunes, A., Cordeiro, C., Limpo, T., Castro, S.L.: Effectiveness of automated writing evaluation systems in school settings: a systematic review of studies from 2000 to 2020. J. Comput. Assist. Learn. 38(2), 599–620 (2021)
4. Limpo, T., Graham, S.: The role of handwriting instruction in writers' education. Br. J. Educ. Stud. 68(3), 311–329 (2019)
5. Huang, H., Curle, S.: Higher education medium of instruction and career prospects: an exploration of current and graduated Chinese students' perceptions. J. Educ. Work. 34(3), 331–343 (2021)
6. Mundell, J.: Characteristics of Notes Taken during the Employment Interview and Their Impact on Organizational Outcomes. University of Missouri-Saint Louis, St. Louis (2020)
7. Hsieh, C.N.: The Case of Taiwan: Perceptions of College Students about the Use of the TOEIC® Tests as a Condition of Graduation. Technical report, ETS Research Report Series (2017)
8. Canz, T., Hoffmann, L., Kania, R.: Presentation-mode effects in large-scale writing assessments. Assess. Writ. 45, 100470 (2020)
9. Mei, Y., Cheng, L.: Scoring fairness in large-scale high-stakes English language testing: an examination of the national matriculation English test. In: Coniam, D. (ed.) English Language Education and Assessment, pp. 171–187. Springer, Singapore (2014). https://doi.org/10.1007/978-981-287-071-1_11
10. Reber, R., Greifeneder, R.: Processing fluency in education: how metacognitive feelings shape learning, belief formation, and affect. Educ. Psychol. 52(2), 84–103 (2016)
11. Long, S., He, X., Yao, C.: Scene text detection and recognition: the deep learning era. Int. J. Comput. Vis. 129(1), 161–184 (2020)
12. Zhang, S., Zhou, C., Li, Y., Zhang, X., Ye, L., Wei, Y.: Irregular scene text detection based on a graph convolutional network. Sensors 23(3), 1070 (2023)

Redefining Customer Service Education in Taiwan's Convenience Store Sector: Implementing an AI-Driven Experiential Training Approach

Kuan-Yu Chen, Ming-Yu Chiang, and Tien-Chi Huang(B)

National Taichung University of Science and Technology, Taichung City, Taiwan
[email protected]

Abstract. Taiwan has seen a significant increase in the number of convenience stores, with 619,741 stores in 2021, making it the country with the second-highest density of convenience stores in the world. This growth, coupled with the Ministry of Education adding a retail service group to the centralized special education classes curriculum in the “Curriculum Guidelines for the Service Group of Junior High School and Senior Vocational High School Special Education Classes,” indicates a rising demand for professional retail personnel. In 2022, 14 senior vocational high schools included this subject in their curriculum, further emphasizing the need for enhanced training and education of retail personnel. Customer service and response training often require significant time and manpower costs, presenting a challenge for businesses and the education sector to efficiently train new retail personnel. To address this, this study proposes a customer communication training model based on experiential learning theory and conversational AI technology. By collecting and incorporating common customer communication responses and questions into simulated training, the model aims to reduce risks for businesses during new employee training and effectively improve customer consumption experience and satisfaction. This training model not only provides customers with a tangible service experience but also strengthens the training and communication skills of retail personnel when facing diverse customers, thus equipping them with communication skills that meet corporate standards more quickly. Keywords: Experiential Learning Theory · ChatBot · Convenience store · Communication skills

1 Introduction

According to the Taiwan Ministry of Economic Affairs' Annual Economic Statistics Report, the number of convenience stores in Taiwan increased from 598,754 in 2019 to 604,328 in 2021 (Table 1). Concurrently, the Ministry of Education has incorporated a retail service group into the curriculum of its centralized special education classes, as outlined in the "Curriculum Guidelines for the Service Group of Junior High School
and Senior Vocational High School Special Education Classes." In 2022, a total of 14 senior vocational high schools included this subject in their curriculum. These trends indicate a rapidly growing demand for retail personnel in Taiwan, driven by the expanding retail industry and fast-developing economy. The Ministry of Education is actively promoting relevant educational reforms to cultivate more professional retail service talent. However, the challenge of effectively training retail personnel in customer communication skills persists within the industry's education and training. Retail staff must navigate a variety of tasks, from answering customers' basic inquiries to actively engaging and assisting customers with enthusiasm, ensuring a warm and welcoming service experience. While fundamental language and greeting training can be provided, honing customer communication skills through hands-on experience and continuous on-site adjustment proves to be more effective. Nevertheless, this training method may pose risks, such as customer complaints or negative customer experiences, which could ultimately tarnish the store's image and impact revenue, despite its practicality.

Table 1. Number of stores requiring communication with customers from 2019 to 2020.

Industry/Year | 2019 | 2020
Wholesale and Retail Trade | 487,302 | 487,125
Accommodation and Catering Services | 85,034 | 90,346
Support Services | 26,418 | 26,857
Total | 598,754 | 604,328

In light of the challenges described above, this study proposes a customer communication training model that leverages experiential learning theory and conversational AI technology. The model is designed to address the growing demand for new retail personnel while rapidly equipping them with communication skills that align with corporate standards. By gathering common customer communication responses and questions, and incorporating them into simulated training scenarios, this model aims to minimize risks for businesses during the onboarding of new employees, and effectively enhance customer satisfaction and consumption experience. The primary objective of the training model is to foster warm and personalized service experiences for customers and to strengthen the communication skills of retail personnel when interacting with diverse clientele. By utilizing this innovative approach to training, businesses can efficiently prepare retail staff to handle various customer scenarios, ultimately improving overall service quality and customer satisfaction.


2 Literature

2.1 Application of Experiential Learning Theory in Communication Skills Training

According to Kolb's Experiential Learning Theory (ELT) [1], the experiential learning process is divided into four stages: Concrete Experience, Reflective Observation, Abstract Conceptualization, and Active Experimentation. Through repeated cycles of experiential learning, learners can acquire practical experience in informal situations and integrate their knowledge and skills through the exchange of experiences, thereby reducing learning obstacles. Many studies have explored the feasibility of applying ELT to communication skills training; Almalag et al. [2] found that using the experiential learning model can significantly improve emotional communication and problem-solving abilities between communicating groups. In summary, communication skills training based on experiential learning theory is an effective training method. Therefore, this study uses ELT to train retail personnel's communication skills and enhance training effectiveness (Fig. 1).

Fig. 1. The Experiential Learning Theory (ELT) model: Concrete Experience (gain experience by actively participating) → Reflective Observation (reflect on experience and propose ideas) → Abstract Conceptualization (synthesize theories and integrate concepts) → Active Experimentation (apply the learned experience).

2.2 Integration of Chatbots in Communication Skills Training

In recent years, many conversational robots have emerged in daily life, providing services such as consultation, retrieval, and e-commerce. Conversational robots interpret and mimic human dialogue through natural language processing [3]. Currently, there are two types of conversational robots: "declarative" conversational robots that provide specific services or functions based on user questions, and "dialogue" conversational robots that generate predictive responses based on past experience, such as ChatGPT developed by OpenAI. There is currently limited research on incorporating dialogue conversational robots into educational training. Instead, many rely on declarative conversational robots to
search for information stored in databases, reducing the time and effort required by educators during training [4]. However, when facing real-world scenarios where customers' needs constantly change, relying on rules to provide simple answers may not suffice. GPT-4 is the latest generation of large-scale language models developed by OpenAI; it has broader general knowledge and problem-solving abilities than its predecessor GPT-3 and has demonstrated human-level performance in multiple professional and academic benchmark tests [5]. Previous studies have shown significant improvements in learning effectiveness and attitudes from incorporating large language models into English dialogue learning systems [6]. Therefore, this study adopts a dialogue conversational robot integrated with the GPT-4 model to train communication skills, aiming to interact with users during training using language that mimics real-world conversations as closely as possible.

2.3 Retail Personnel Service Training Standards and Assessment

Traditionally, retail personnel's practical skills have been trained through an apprenticeship model, in which senior employees guide newcomers by demonstrating tasks while explaining the process. Newcomers first observe and then practice, with senior employees assisting and correcting them to develop their skills. The SERVQUAL scale, based on Total Quality Management (TQM) theory, is used to evaluate service quality in the service industry and consists of five major assessment indicators (Table 2) and 22 specific items [7]. Many studies have reported that service industries, such as finance and tourism, use the SERVQUAL scale to assess service standards [8]. This study adopts the SERVQUAL scale to evaluate retail personnel's customer communication service quality, providing a consistent reference standard for training. Furthermore, retail personnel must maintain good emotional control and respect differences to ensure high-quality communication. This study therefore uses the hybrid DNN-HMM, a speech emotion recognition model based on Hidden Markov Models (HMMs) and Deep Neural Networks, for emotional assessment. This model outperforms traditional speech emotion recognition models in various respects [9], enabling retail personnel to identify and improve areas of weakness or problematic emotional responses. By incorporating these strategies, this study aims to develop an effective communication training model for retail personnel, enhancing their ability to provide warm, customer-centric service and adapt to the diverse communication demands of various clients. By integrating experiential learning theory, chatbot technology, and comprehensive evaluation methods, this research contributes to the development of innovative communication training practices in the retail industry.


Table 2. SERVQUAL's five dimensions.

Dimension | Definition
Responsiveness | The willingness to help customers and provide prompt service
Assurance | The knowledge and courtesy of the employees and the ability to convey trust and confidence
Tangibles | The physical facilities, equipment, and appearance of personnel
Empathy | The caring, individualized attention given to customers
Reliability | The ability to provide the promised service dependably and accurately

3 Integrating Chatbot and Experiential Learning Theory in Communication Training Building upon the research motivation and literature review discussed earlier, this study proposes an educational platform that integrates a web platform with OpenAI’s natural language processing model, GPT-4, Deep Neural Network-Hidden Markov Model (DNN-HMM), and Microsoft Azure Speech to Text API. The purpose of this platform is to simulate training scenarios and improve retail personnel’s communication skills when interacting with customers. The proposed platform will leverage GPT-4’s advanced natural language processing capabilities to generate realistic and varied customer interactions, allowing retail personnel to practice responding to different types of inquiries and situations. Additionally, the DNN-HMM model will be utilized to recognize speech patterns and provide accurate transcriptions, enabling the platform to evaluate trainees’ verbal communication skills effectively. Lastly, the Microsoft Azure Speech to Text API will be integrated into the platform to convert spoken responses into text, allowing for seamless interactions between the trainee and the AI-generated customer scenarios. By combining these technologies, the educational platform will provide a comprehensive and immersive training experience for retail personnel, effectively enhancing their communication abilities and preparing them for real-life customer interactions. This innovative approach to training will help businesses improve customer service quality and overall customer satisfaction. 3.1 Training Strategy Drawing from the four learning stages in Kolb’s Experiential Learning Theory - Concrete Experience, Reflective Observation, Abstract Conceptualization, and Active Experiment - the study designs a training strategy (Table 3).

Table 3. Training strategies and curriculum.

Stages of experiential learning | Training strategies in this study
Concrete Experience | New employees learn by observing the example experiences provided by educational trainers and, through the feedback received after the simulated practical communication tests, they identify areas for improvement in their learning content
Reflective Observation | New employees observe and reflect on the differences between their past experiences in communicating with others, the example experiences provided by educational trainers, and the feedback received after the simulated practical communication tests, subsequently documenting their observations
Abstract Conceptualization | The new employees summarize and integrate their observations with their prior experiences and record the results of their synthesis
Active Experimentation | The new employees use their synthesis to develop communication strategies that are suitable for themselves and apply these strategies in simulated practical communication tests to verify whether the results of their synthesis meet customer expectations

3.2 Platform Features

The proposed platform is divided into two sections: one for new employees and another for educational personnel, catering to their specific needs. New employees can access learning materials, interactive scenarios, and quizzes to develop communication skills for retail settings. Educational personnel can contribute insights, monitor progress, and adjust training content based on individual needs. The platform's modules and system architecture integrate GPT-4, DNN-HMM, and Microsoft Azure Speech to Text API technologies, ensuring an effective training experience and improved customer satisfaction (Fig. 2).

Fig. 2. Function Modules and System Architecture of a Communication Skills Training Platform

New Employee Section: In this system, new employees engage in training for four major practical scenarios: counter checkout, product inquiries, customer complaints, and comprehensive situations. The training consists of three main modules: the Example Experience-Based Learning Module, the Self-Observation and Reflection Module, and the Simulated Practical Communication Test Module. New employees learn from examples provided by training personnel and use the Self-Observation and Reflection Module to identify differences between their observations and reflections, helping them deduce an optimal communication style. They can then take the simulated practical communication test to evaluate their style and improve their communication abilities with customers through experiential learning.

Example Experience-Based Learning Module. In this module, new employees can learn about the four major common practical scenarios (counter checkout, product inquiries, customer complaints, and comprehensive situations) through the demonstration content provided by educational training personnel. This allows them to understand related response methods, quality service performance, and details that need attention when communicating with customers, providing examples for observation.

Self-observation and Reflection Module. New employees can learn from the example experiences presented by educational trainers or participate in simulated communication tests. They can record observed communication methods, service performances, and details, and reflect on their own feelings towards these contents. By comparing and combining with past experiences, new hires can deduce similar concepts and develop unique communication skills.

Simulated Practical Communication Test Module. New employees take the test through a conversational AI. During the conversation, the user's voice is first analyzed for emotional cues by the DNN-HMM and then converted to text by the Microsoft Azure Speech to Text API. Once completed, the system sends the emotion recognition results and conversation records to GPT-4 for scoring based on the five major indicators of the SERVQUAL scale, ultimately obtaining test scores and suggestions. New employees can use this test to re-observe and reflect on their communication style, verify
whether it meets the standard, and achieve the goal of improving customer satisfaction (Fig. 3 & Fig. 4).

Fig. 3. Diagram of Simulated Practical Communication Test

Fig. 4. Diagram of Feedback for Simulated Practical Communication Test Results
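The implementation is not published with the paper, so the following is only a rough sketch of the test flow described above, assuming the azure-cognitiveservices-speech and openai Python SDKs; the emotion-recognition step is a placeholder standing in for the DNN-HMM model, and the prompt wording, file name, and environment-variable names are assumptions.

```python
# Sketch of the simulated communication test: speech -> text -> GPT-4 scoring.
# A placeholder emotion recognizer stands in for the paper's DNN-HMM model.
import os
import azure.cognitiveservices.speech as speechsdk
from openai import OpenAI

def recognize_emotion(wav_path: str) -> str:
    # Placeholder for the DNN-HMM speech emotion model used in the study.
    return "neutral"

def transcribe(wav_path: str) -> str:
    config = speechsdk.SpeechConfig(subscription=os.environ["SPEECH_KEY"],
                                    region=os.environ["SPEECH_REGION"])
    audio = speechsdk.audio.AudioConfig(filename=wav_path)
    recognizer = speechsdk.SpeechRecognizer(speech_config=config, audio_config=audio)
    return recognizer.recognize_once().text

def score_reply(transcript: str, emotion: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        "You are evaluating a convenience-store trainee's reply to a customer.\n"
        f"Detected emotion: {emotion}\nReply: {transcript}\n"
        "Rate it 1-5 on each SERVQUAL dimension (responsiveness, assurance, "
        "tangibles, empathy, reliability) and give one improvement suggestion."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

wav = "trainee_reply.wav"  # hypothetical recording from the test module
print(score_reply(transcribe(wav), recognize_emotion(wav)))
```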

Educational Personnel Section: In the educational personnel section, two modules are available: the “Experience-Based Learning Management Module” and the “Learning

And Assessment Status Analysis Module. Through the “Experience-Based Learning Management Module,” educational training personnel can organize past communication methods with customers, good service performance, and details when responding to customers into informational educational training content, and incorporate reflection and observation stages to help new employees internalize and deduce their communication styles. At the same time, by using the “ Learning And Assessment Status Analysis Module,” the scores of new employees in the self-observation and reflection stage, and the simulated practical communication test, can be analyzed, revealing their training status in customer communication skills. Educational training personnel can adjust the learning content of the “Example Experience-based Learning Management Module” accordingly to make it more suitable. Experience-Based Learning Management Module. In the Experience Learning Management Module, educational training personnel can add past communication methods with customers, good service performance, and details of customer interactions, and organize them into informational educational training content for new employees to reference and learn while developing customer communication skills. Learning and Assessment Status Analysis Module. As new employees progress through the Example Experience Learning Module and engage in the process of learning customer communication, they combine their observation abilities and reflection of past experience differences based on the learning stage content and record the results. Simultaneously, they take the simulated practical communication test with the conversational AI, uploading their test scores to the system. Educational training personnel can understand the learning status of new employees and adjust the example experience learning content accordingly to meet the company’s service standards.

4 Conclusion In recent years, Taiwan’s industries have faced a widespread shortage of labor, particularly in the manufacturing and service sectors. This situation not only negatively impacts productivity but also hinders overall industry development. Therefore, effectively addressing the labor shortage has become a major challenge for Taiwan’s industries. Currently, many countries and companies, including Taiwan, have established strategies to import foreign labor to address the issue. However, as of January 2023, despite the number of migrant workers in Taiwan reaching 730,000 [10], the labor shortage problem persists. Therefore, in recent years, there has been increasing attention on automation technology as a solution to the labor shortage problem. Through automation technology, productivity can be significantly improved while reducing dependence on human resources. At the same time, using automation technology to train new employees can help address the practical dilemma of the shortage of master-apprentice system. This study aims to propose a forward-looking and innovative training model that has significant implications for addressing Taiwan’s current labor shortage problem. The proposed training model is expected to bring substantial benefits to related industries. This study primarily focuses on communication skills training, and with the advancement and development of technology, the training model that combines conversational

AI and experiential learning theory is expected to become a widely adopted trend, further promoting industry development and innovation.

References

1. Kolb, D.A.: Experiential Learning: Experience as the Source of Learning and Development. Prentice Hall, Hoboken (1984)
2. Almalag, H.M., et al.: Evaluation of a multidisciplinary extracurricular event using Kolb's experiential learning theory: a qualitative study. J. Multidiscip. Healthc. 15, 2957–2967 (2022)
3. Reshmi, S., Balakrishnan, K.: Implementation of an inquisitive chatbot for database supported knowledge bases. Sādhanā 41(10), 1173–1178 (2016). https://doi.org/10.1007/s12046-016-0544-1
4. Chang, C.Y., Hwang, G.J., Gau, M.L.: Promoting students' learning achievement and self-efficacy: a mobile chatbot approach for nursing training. Br. J. Edu. Technol. 53, 171–188 (2022)
5. OpenAI: GPT-4 is OpenAI's most advanced system, producing safer and more useful responses. https://openai.com/product/gpt-4. Accessed 30 May 2023
6. Li, K.C., Chang, M., Wu, K.H.: Developing a task-based dialogue system for English language learning. Educ. Sci. 10, 306 (2020)
7. Parasuraman, A., Berry, L.L., Zeithaml, V.A.: SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality. J. Retail. 64, 12–40 (1988)
8. Buttle, F.: SERVQUAL: review, critique, research agenda. Eur. J. Mark. 30, 8–32 (1996)
9. Mao, S., Tao, D., Zhang, G., Ching, P.C., Lee, T.: Revisiting hidden Markov models for speech emotion recognition. In: 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2019), pp. 6715–6719 (2019)
10. Ministry of Labor: The Number of Industrial and Social Welfare Migrant Workers. https://statfy.mol.gov.tw/. Accessed 30 May 2023

The Combination of Recognition Technology and Artificial Intelligence for Questioning and Clarification Mechanisms to Facilitate Meaningful EFL Writing in Authentic Contexts

Wu-Yuin Hwang1,6, Rio Nurtantyana1,2(B), Yu-Fu Lai1, I-Chin Nonie Chiang3, George Ghenia4, and Ming-Hsiu Michelle Tsai5

1 Graduate Institute of Network Learning Technology, National Central University, Taoyuan City, Taiwan
[email protected]
2 School of Computing, Telkom University, Bandung, Indonesia
3 Department of Humanities, National Open University, New Taipei City, Taiwan
4 Department of Computer Science, Brunel University, London, UK
5 Department of Foreign Languages and Literature, Feng Chia University, Taichung City, Taiwan
6 Department of Computer Science and Information Engineering, College of Science and Engineering, National Dong Hwa University, Hualien County, Taiwan

Abstract. Most studies of English as a Foreign Language (EFL) writing usually used grammar checking to help EFL learners to check writing errors. However, it is not enough since EFL learners have to learn how to create more meaningful content, particularly using their surroundings in authentic contexts. Therefore, we develop one app, Ubiquitous English (UEnglish), with recognition technology, particularly Image-to-Text Recognition (ITR) texts to provide the vocabulary and description from authentic pictures, and generative-AI that can generate meaningful questions and clarifications to trigger EFL learners to write more. In addition, EFL learners need to answer the question from generative-AI before they receive the clarification. Hence, we proposed a Smart Questioning-AnsweringClarification (QAC) mechanism to help EFL writing. A total of 35 participants were assigned into two groups, experimental groups (EG) with 19 learners and control groups (CG) with 16 learners with/without Smart QAC mechanism support, respectively. In this study, the quasi-experiment was conducted over five weeks and we used quantitative analysis methods. The results revealed that the EG with ITR-texts and Smart QAC had a significant difference with CG in the learning behaviors and post-test. Furthermore, EG could write more meaningful words in the assignments. Therefore, the Smart QAC mechanism could facilitate EFL learners to enhance their EFL writing in authentic contexts. Keywords: EFL writing · recognition technology · ITR texts · generative-AI · authentic context



1 Introduction

EFL writing plays an important role in learning a foreign language. A previous study indicated that EFL writing is challenging for EFL learners because, beyond grammar feedback, they lack related lexical resources to inspire them to write essays more meaningfully [1]. Most studies of EFL writing have used grammar checking or similar mechanisms to help EFL learners detect writing errors such as spelling and grammatical mistakes [2]. However, this is not enough for university learners, especially for essay writing, because EFL learners have to write essays with more meaningful words and creativity related to authentic contexts [2, 3]. On the other hand, recognition technology that supports EFL writing, such as Image-to-Text Recognition (ITR-texts), which can provide vocabulary and descriptions from pictures taken by EFL learners, has been shown to be beneficial for EFL learners [1, 3]. EFL learners can get inspiration and more ideas from their surroundings to improve their writing content [1]. In addition, the rise of generative-AI that can provide sample sentences should be taken into account, since it can inspire EFL learners to write more content in their essays [1, 4]. Therefore, this study proposes a system that provides meaningful questions and clarifications as feedback to EFL learners based on authentic contextual information from ITR texts. By doing so, EFL learners are triggered to think and complete their essays based on the question and the clarification provided by the generative-AI.

Based on the above, we propose a system with a Smart Questioning, Answering, and Clarifying (QAC) mechanism to help EFL learners write meaningful essays based on authentic contexts. The research questions of this study are as follows: 1. Are there any significant differences in learning achievement between the experimental group (EG) with the Smart QAC mechanism support and the control group (CG) without the Smart QAC mechanism support? 2. Are there any significant differences in the learning behaviors between EG and CG? 3. What are the relationships between learning achievement and behaviors in EG?

2 Literature Review

2.1 EFL Writing in Authentic Contexts

Prior studies showed that EFL writing in authentic contexts could significantly enhance EFL learners' learning achievement and motivation [5, 6]. Authentic contextual learning plays an important role in EFL writing because it can inspire learners to write meaningful and useful content about their surroundings. Furthermore, a previous study stated that authentic contextual learning could be integrated with recognition technology, which can recognize learners' surroundings and provide text suggestions, such as vocabulary and descriptions, that help EFL learners write meaningful essays [1]. EFL writing can be evaluated along several dimensions, such as reasoning, communication, organization, and cohesion. Hence, it is important to provide suggestions that trigger EFL learners to write more meaningful essays along these dimensions [3, 6].


2.2 Technology-Supported EFL Writing in Authentic Contexts

Previous work showed that several recognition technologies can be applied to facilitate EFL learning in authentic contexts [1]. One example of recognition technology that supports EFL writing is Image-to-Text Recognition (ITR), which can provide vocabulary and descriptions from pictures taken by EFL learners [1–3]. ITR therefore offers an opportunity to support EFL writing by providing related words based on authentic contexts. In a traditional EFL learning environment, learners usually learn from textbooks or paper-based writing exercises. However, recent years have seen a rise in the use of AI in various fields, particularly generative-AI with GPT-3 [1, 7]. Hence, ITR output can be used as input to generative-AI to generate sample sentences [2], which is useful for inspiring EFL learners to write more meaningful content while drafting their essays [8]. In this study, generative-AI such as GPT-3.5 by OpenAI is used not only to generate sentences, as in the previous study [2], but also to generate questions and clarifications that inspire and trigger EFL learners to write more comprehensive essays.

2.3 The Smart Questioning and Clarification Mechanism for EFL Writing

A study showed that question-posing is a powerful tool in the teaching process to trigger EFL learners to think more deeply [9]. Furthermore, previous work mentioned that the reciprocal teaching method can facilitate deep thinking between learners and teachers with four basic strategies: summarizing, questioning, clarifying, and predicting (SQCP) [9]. The SQCP method has been widely used in flipped classrooms at the university level [10]. In the SQCP method, the questions and clarifications can enhance learners' critical thinking [11], and answering questions related to authentic contexts is also considered critical thinking. Hence, inspired by the SQCP method, we provide a questioning, answering, and clarifying (QAC) mechanism for EFL learners. The QAC mechanism can trigger EFL learners to create more writing content related to their surroundings by answering the questions and referencing the clarifications. Furthermore, generative-AI can be applied to generate meaningful questions and clarifications related to authentic contexts [2, 10]. In the detailed implementation of QAC, EFL learners receive questions from the AI based on authentic contexts, then answer the questions, and afterwards receive a clarification from the AI. Hence, EFL learners with the Smart QAC mechanism empowered by generative-AI obtain more meaningful QAC triggers that can enhance their EFL writing skills.


3 System Design

We have developed a Ubiquitous English (UEnglish) app for the Android platform. It implements recognition technology (ITR-texts) and generative-AI to provide Smart QAC mechanism support for EFL writing. The QAC mechanism comprises questioning, answering, and clarification. Questioning means that UEnglish provides AI-generated questions based on the authentic context. Answering means that EFL learners need to think while answering the questions. Clarification means that UEnglish provides an AI-generated clarification to help EFL learners clarify their answers. In detail, EFL learners take photos of their surroundings, and the ITR-texts module recognizes them and provides a sentence and vocabulary, as shown in Fig. 1. After EFL learners modify the ITR-texts, they can generate the Smart QAC, as shown in Fig. 2. The Smart QAC mechanism thus includes three aspects: questioning (Q), answering (A), and clarification (C). EFL learners can use the words from the ITR-texts and the answers or clarifications from the QAC to enhance their essays, allowing them to write and revise their essays more meaningfully based on authentic contexts. We also provide a general QAC mechanism for some learners, which draws questions from a question bank designed by an English expert. Hence, the difference between the general and Smart QAC mechanisms is that the general QAC mechanism provides questions and clarifications from a human, without AI.

Fig. 1. The EFL learners received the ITR-texts based on authentic contexts.


Fig. 2. The Smart QAC mechanism can inspire and trigger EFL learners based on the information from authentic contexts: (a) generate new QAC and (b) enhance the essay by adding the words from ITR-text and QAC.
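The exact prompts used in UEnglish are not reported; under that caveat, the sketch below illustrates how ITR text from a learner's photo could be turned into one question and one clarification with the openai SDK (the model name, prompt wording, and sample ITR text are assumptions).

```python
# Sketch of the Smart QAC generation step: ITR text in, question + clarification out.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_qac(itr_text: str) -> dict:
    prompt = (
        "A learner photographed their surroundings and the image-to-text module "
        f"produced this description: \"{itr_text}\".\n"
        "1) Ask one short question that pushes the learner to write more about this scene.\n"
        "2) After the question, give a one-sentence clarification with useful vocabulary."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return {"itr_text": itr_text, "qac": resp.choices[0].message.content}

# Hypothetical ITR output for a photo taken on campus.
print(generate_qac("a crowded cafeteria with students ordering bubble tea"))
```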

4 Methodology

A total of 35 first-year learners from the business college of a university participated in this experiment. The learners were randomly divided into two groups: the experimental group (EG), with 19 learners who learned EFL writing with the Smart QAC mechanism support, and the control group (CG), with 16 learners who learned EFL writing with the General QAC mechanism support. Smart QAC mechanism support means that the EG learners used generative-AI to generate meaningful questions and clarifications, whereas the General QAC mechanism means that the CG learners used a question bank and clarification bank provided by the teacher. The quasi-experiment was conducted over five weeks, as shown in Fig. 3. In the first week, the teacher gave the pretest; from the second until the fourth week, the EFL learners completed essay assignments on different topics based on the course curriculum. During class, they could explore their surroundings to gather pictures related to the topic. In the fifth week, the EFL learners took the post-test.


Fig. 3. The experimental procedure

Table 1 shows the research variables for this study, namely learning achievement and the learning behaviors of the assignments. In this study, we combined the QAC from three dimensions into one variable. The essay scores of the pretest, the three assignments, and the post-test have a significant inter-rater correlation between the two raters (ICC > .80), indicating absolute agreement between the two raters on the essay scores. We used several analysis methods: ANOVA to examine the difference between EG and CG in the pretest, ANCOVA to examine the difference between EG and CG in the post-test with the pretest as covariate, an independent t-test to compare the means of the learning behaviors between the two groups, and Pearson correlation to examine the correlation between learning achievement and the learning behaviors.

Table 1. The research variables.

Variables | Description
Learning achievement: 1. The post-test score | The mean score of the EFL writing test, based on the scoring rubric and evaluated by two experts
Learning behaviors of the assignments: 1. The number of words | The total number of words written in the essays
2. The number of ITR used in essays | The total number of words from the ITR-texts that were used in the essays
3. The number of QAC used in essays | The total number of words from the Answer and Clarification that were used in the essays
4. The assignment scores | The mean of the three assignment scores
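As a concrete, hypothetical illustration of this analysis pipeline (ANCOVA on the post-test with the pretest as covariate, an independent t-test on a behavior variable, and a Pearson correlation), the sketch below uses pandas, statsmodels, and scipy on randomly generated placeholder scores; the numbers are not the study's data.

```python
# Sketch of the statistical analyses: ANCOVA, independent t-test, Pearson r.
# Scores below are fabricated placeholders for illustration only.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["EG"] * 19 + ["CG"] * 16,
    "pre":   np.r_[rng.normal(9.2, 1.7, 19), rng.normal(8.6, 2.1, 16)],
    "post":  np.r_[rng.normal(11.9, 1.6, 19), rng.normal(10.0, 2.3, 16)],
    "words": np.r_[rng.normal(449, 186, 19), rng.normal(362, 178, 16)],
})

# ANCOVA: post-test by group with the pretest as covariate.
ancova = sm.stats.anova_lm(ols("post ~ pre + C(group)", data=df).fit(), typ=2)
print(ancova)

# Independent t-test on a learning-behavior variable.
eg, cg = df[df.group == "EG"], df[df.group == "CG"]
print(stats.ttest_ind(eg.words, cg.words, equal_var=False))

# Pearson correlation between words written and post-test score (EG only).
print(stats.pearsonr(eg.words, eg.post))
```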


5 Results and Discussion

The ANOVA results showed no significant difference between the two groups in the pretest, as shown in Table 2 (F = .096, p = .333; p > .05), indicating that the two groups did not differ statistically in their prior knowledge of EFL writing. In detail, the mean score of the EG (M = 9.184) was slightly higher than that of the CG (M = 8.562). For learning achievement, the pretest score was used as a covariate in the ANCOVA analysis of the post-test. The homogeneity assumption was tested and allowed us to conduct the ANCOVA analysis (p = .164; p > .05). The ANCOVA results in Table 2 show a significant difference between the two groups in the post-test (F = 52.542; p < .01); in detail, the mean score of the EG (M = 11.86) was higher than that of the CG (M = 9.96). This indicates that the EG learners, helped by recognition technology and AI providing the Smart QAC mechanism, could enhance their learning achievement. They received more inspiration from their surroundings to write meaningful essays for each assignment, and practicing essay writing in the assignments helped them enhance their learning achievement [2].

Table 2. The ANCOVA analysis results of the learning achievement between the two groups.

Group | N | Pre-test Mean | Pre-test SD | ANOVA F | Post-test Mean | Post-test SD | ANCOVA F
EG | 19 | 9.18 | 1.668 | .967 | 11.86 | 1.597 | 52.542***
CG | 16 | 8.56 | 2.072 | | 9.96 | 2.276 |
Note. ***p < .01

Furthermore, the independent t-test results for the learning behaviors in the assignments showed a significant difference in the assignment score between the two groups (t = 3.302, p < .01), as shown in Table 3. The number of ITR-texts used in essays also differed significantly between the two groups, even though both groups used ITR-texts (t = 2.964; p < .01): the EG used the ITR-texts more (M = 53.684) than the CG (M = 16.687) to inspire their writing by adding words from the ITR-texts to their essays. Likewise, the number of QAC used in the essays differed significantly between the two groups (t = 3.918, p < .01). The EG benefited from and was inspired by the Smart QAC mechanism roughly three times as much (M = 185.947) as the CG (M = 60.062), since the Smart QAC mechanism for EG learners uses AI to generate the questions and clarifications. This means the EG learners added more words from the QAC mechanism to enrich the content of their essays, because the question (Q) matched the surrounding context and could therefore provide a meaningful clarification (C). The CG learners, in contrast, received general teacher-designed questions that might not be related to the context. This is in line with the previous study showing that questioning techniques can trigger learners to think more [11].


Table 3. The independent t-test analysis results of the learning behaviors between the two groups.

Variables | Group | N | Mean | SD | t
The number of words | EG | 19 | 448.842 | 186.381 | 1.401
 | CG | 16 | 362.000 | 178.151 |
The number of ITR-texts used in essays | EG | 19 | 53.684 | 48.342 | 2.964**
 | CG | 16 | 16.687 | 13.133 |
The number of QAC used in essays | EG | 19 | 185.947 | 123.842 | 3.918**
 | CG | 16 | 60.062 | 36.314 |
The assignment score | EG | 19 | 13.184 | 2.339 | 3.302**
 | CG | 16 | 10.208 | 2.991 |
Note. **p < .01.

Further Pearson correlation analysis showed significant correlations between learning achievement and the learning behaviors in the EG, as shown in Table 4. The ITR-texts (r = .509, p = .026) and the Smart QAC mechanism (r = .542, p = .017) were significantly correlated with the assignment score, indicating that ITR-texts based on recognition technology and the Smart QAC mechanism based on AI can help EG learners write more, and more meaningful, words in the essay assignments. In addition, the number of words (r = .480, p = .038) and the ITR-texts (r = .478, p = .039) were significantly correlated with learning achievement on the post-test. However, further investigation of the Smart QAC mechanism by individual dimension, rather than the total number of QAC words used, is needed to understand these correlations.

Table 4. The Pearson correlation analysis results of the learning behaviors in the EG.

Variables | 1 | 2 | 3 | 4 | 5
Learning behaviors
1. The number of words | 1 | | | |
2. The number of ITR used in essays | .567* | 1 | | |
3. The number of Smart QAC used in essays | .489* | .818** | 1 | |
4. The assignment score | .896** | .509* | .542* | 1 |
Learning achievement
5. The posttest score | .480* | .478* | .363 | .277 | 1
Note. *p < .05. **p < .01


6 Conclusion

In this study, the integration of ITR and generative-AI in the UEnglish app to provide the Smart QAC mechanism has demonstrated its potential to support EFL writing in authentic contexts. Regarding the first research question, there was a significant difference in learning achievement between the two groups. EG learners could carefully answer (A) the questions from the AI (Q) and then learn from the clarification (C) related to the context to enhance their essays. By learning from the ITR-texts and the Smart QAC mechanism, EG learners improved their writing scores in the assignments compared with the CG, and practicing essay writing in the assignments helped them write more meaningfully in the post-test. Regarding the second research question, the number of ITR-texts and Smart QAC words used in the essay assignments and the assignment score differed statistically between the two groups; the EG learners therefore performed better than the CG in the three assignments. Regarding the third research question, the ITR-texts and the Smart QAC mechanism were significantly correlated with the assignment score, while the number of words and the ITR-texts were significantly correlated with learning achievement. This suggests that the ITR-texts and Smart QAC mechanism helped learners in the assignments, so that they could perform well in the post-test by writing more meaningful words.

This study has some limitations. The experiment time each week was limited to 50 minutes to complete the QAC and write an essay for both groups, and only three essay topics based on authentic contexts were used. In addition, the analysis of the QAC mechanism used a single variable; a deeper analysis of the benefits of the QAC mechanism for EFL learning is needed. In the future, we will analyze the Smart QAC mechanism for each dimension (i.e., reasoning, communication, and organization) to understand its correlations and effects on learning achievement more deeply. The duration of the experiment could also be extended to make EFL learners more familiar with the system and the QAC mechanism. In addition, we will continue to provide other inputs to the generative-AI for generating meaningful Smart QAC, such as providing the learner's essay as input besides the ITR texts.

Acknowledgments. This work was partly supported by the National Science and Technology Council, Taiwan, under grants NSTC 110-2511-H-035-002-MY2, NSTC 111-2410-H-008-061-MY3, and NSTC 109-2511-H-008-009-MY3.

References

1. Hwang, W.-Y., Nguyen, V.-G., Purba, S.W.D.: Systematic survey of anything-to-text recognition and constructing its framework in language learning. Educ. Inf. Technol. 27(9), 12273–12299 (2022)
2. Hwang, W.-Y., Nurtantyana, R., Purba, S.W.D., Hariyanti, U., Indrihapsari, Y., Surjono, H.D.: AI and recognition technologies to facilitate English as Foreign Language writing for supporting personalization and contextualization in authentic contexts. J. Educ. Comput. Res. (2023)
3. Shadiev, R., Wu, T.-T., Huang, Y.-M.: Using image-to-text recognition technology to facilitate vocabulary acquisition in authentic contexts. ReCALL 32(2), 195–212 (2020)
4. Gayed, J.M., Carlon, M.K.J., Oriola, A.M., Cross, J.S.: Exploring an AI-based writing assistant's impact on English language learners. Comput. Educ.: Artif. Intell. 3 (2022)
5. Shadiev, R., Hwang, W.-Y., Chen, N.-S., Huang, Y.-M.: Review of speech-to-text recognition technology for enhancing learning. Educ. Technol. Soc. 17(4), 65–84 (2014)
6. Nguyen, T.-H., Hwang, W.-Y., Pham, X.-L., Pham, T.: Self-experienced storytelling in an authentic context to facilitate EFL writing. Comput. Assist. Lang. Learn. 35(4), 666 (2022)
7. Hwang, W.-Y., Nurtantyana, R.: The integration of multiple recognition technologies and artificial intelligence to facilitate EFL writing in authentic contexts. In: 6th International Conference on Information Technology (InCIT), Thailand, pp. 379–383. IEEE (2022)
8. Hwang, W.-Y., Nurtantyana, R.: X-education: education of all things with AI and edge computing – one case study for EFL learning. Sustainability 14(19), 12533 (2022)
9. Tofade, T., Elsner, J., Haines, S.T.: Best practice strategies for effective use of questions as a teaching tool. Am. J. Pharm. Educ. 77(7), 155 (2013)
10. Hwang, W.-Y., Nurtantyana, R., Hariyanti, U.: Collaboration and interaction with smart mechanisms in flipped classrooms. Data Technol. Appl. (2023)
11. Etemadzadeh, A., Seifi, S., Far, H.R.: The role of questioning technique in developing thinking skills: the ongoing effect on writing skill. Procedia – Soc. Behav. Sci. 1024–1031 (2013)

Solving the Self-regulated Learning Problem: Exploring the Performance of ChatGPT in Mathematics

Pin-Hui Li1, Hsin-Yu Lee1, Yu-Ping Cheng1, Andreja Istenič Starčič2,3, and Yueh-Min Huang1(B)

1 Department of Engineering Science, National Cheng Kung University, Tainan City, Taiwan
[email protected]
2 Faculty of Education, University of Primorska, Koper, Slovenia
3 Faculty of Civil and Geodetic Engineering, University of Ljubljana, Ljubljana, Slovenia

Abstract. In flipped math classrooms, chatbots are commonly used to assist students and provide personalized learning to improve self-regulation issues that students face when learning through online resources at home. ChatGPT, the state-of-the-art natural language model, has been tested in this study to explore its ability to impact middle school students’ math learning since middle school math is a crucial stage that affects future math learning success. The study tested ChatGPT’s accuracy by using it to answer questions from Taiwan’s past education examinations, and the accuracy rate was found to be as high as 90% (A+). Moreover, compared to most studies that developed chatbots for a single unit or course, this study found that ChatGPT’s accuracy in each of the six major areas of mathematics education in Taiwan exceeded 80% (A). The results indicate that ChatGPT is an excellent learning tool that can improve students’ self-regulation issues and has the potential to impact middle school math education. Keywords: ChatGPT · self-regulated learning · mathematics

1 Introduction

With the progress of information technology, teaching methods have been constantly changing. The traditional classroom-style education leads to a lack of interest in learning among students. Flipped classrooms not only improve the shortcomings of traditional teaching methods and enhance students' learning motivation, but also increase their thinking ability [1]. Therefore, flipped classrooms often replace traditional classrooms as a new form of teaching, where students use online learning resources to preview course content at home, and teachers guide them in deeper discussions or various learning activities during class. The student-centered flipped classroom moves knowledge acquisition to before class, creating opportunities for reflection and active learning, and making students no longer passive learners, resulting in positive effects [2]. Most studies mention the help of flipped classrooms in students' learning of mathematics. For example, López Belmonte, Fuentes
Cabrera, López Núñez and Pozo Sánchez [3] pointed out that flipped learning can improve middle school students' motivation and participation in learning mathematics. Rojas-Celis and Cely-Rojas [4] adopted a flipped classroom teaching approach for their mathematics course, and the results showed that students had a higher level of subject understanding, which in turn improved their mathematics grades. However, while flipped learning creates opportunities for student self-learning and self-reflection, it also presents various challenges in learning [5, 6]. A systematic literature review by Rasheed, Kamsin, Abdullah, Kakudi, Ali, Musa and Yahaya [7] indicated that self-regulation is a challenge in flipped learning. Self-regulation challenges arise from students' high level of autonomy, leading to poor behaviors such as procrastination and engagement in irrelevant activities, resulting in reduced learning time. Lai and Hwang [8] also noted that to improve the effectiveness of a flipped math classroom, a self-regulation mechanism must be provided, as most students exhibit lower self-regulatory behavior when learning math without appropriate guidance or help. Therefore, self-regulation challenges are a major problem in flipped learning. Mathematics has always been an important subject at any learning stage, and due to its coherence, the ninth-grade math curriculum is the most critical stage, affecting the success of subsequent more difficult courses [9]. Additionally, several studies have shown that improving students' self-regulation problems in learning math can affect their learning motivation and exam results [10, 11]. Therefore, solving self-regulation learning problems that students face in a flipped math classroom can enhance their learning effectiveness. For a long time, large language models (LLMs) have been used to develop suitable teaching tools to assist students in learning. These models analyze problems and generate effective sentences, and then converse with students and provide positive answers. They have a great auxiliary function in learning, can improve students' self-regulated learning problems, and enhance learning motivation. For example, Hew, Huang, Du and Jia [12] used a chatbot in flipped learning to improve students' interest in learning, and most students found the chatbot to be helpful for learning. Chat Generative Pre-trained Transformer (ChatGPT), developed by OpenAI in 2022, is currently the most well-known and powerful large language model. ChatGPT has been trained with billions of parameters to achieve good performance. Many studies have confirmed the potential benefits of ChatGPT in promoting teaching. Baidoo-Anu and Owusu Ansah [13] pointed out that ChatGPT can provide personalized learning and complete formative assessments to improve education and support students' learning. Kung, Cheatham, Medenilla, Sillos, De Leon, Elepaño, Madriaga, Aggabao, Diaz-Candido and Maningo [14] found that ChatGPT achieved a 60% correct answer rate in the United States Medical Licensing Examination (USMLE), demonstrating its ability to help students in medical education. Until now, most chatbots developed by researchers have been only applicable to a single unit or class. If the versatility of chatbots can be improved, they can provide more comprehensive answers to students.
Therefore, this study investigates the ability of ChatGPT in various fields based on Taiwan’s curriculum guidelines of 12-year basic education for mathematics, which includes six major areas: numbers and quantity, space

and shape, coordinate geometry, algebra, functions, and data and uncertainty. In addition, given the performance of ChatGPT, this study will investigate whether ChatGPT’s performance is sufficient to handle Taiwan’s high school mathematics entrance exams, with the hope of improving students’ self-regulation problems in learning mathematics. The research questions proposed in this study are as follows: 1. Can ChatGPT master all the questions in the six major areas of mathematics in junior high school? 2. Does the effective response generated by ChatGPT have the potential to impact junior high school students’ learning in mathematics courses?
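The querying procedure is not described in detail in the paper; the sketch below merely illustrates, under stated assumptions, how per-domain accuracy could be computed by sending past exam items to a chat-completion API and comparing the returned option with the answer key. The two sample items, the prompt, and the model name are fabricated for illustration.

```python
# Sketch: query a chat model with past exam items and compute per-domain accuracy.
# The two sample items and their answer keys are fabricated for illustration.
from collections import defaultdict
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

questions = [
    {"domain": "algebra",
     "text": "Solve 2x + 3 = 11. (A) 3 (B) 4 (C) 5 (D) 7", "answer": "B"},
    {"domain": "data and uncertainty",
     "text": "A fair die is rolled once. P(even)? (A) 1/6 (B) 1/3 (C) 1/2 (D) 2/3", "answer": "C"},
]

correct, total = defaultdict(int), defaultdict(int)
for q in questions:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": q["text"] + "\nAnswer with the letter of the correct option only."}],
    )
    predicted = resp.choices[0].message.content.strip()[:1].upper()
    total[q["domain"]] += 1
    correct[q["domain"]] += int(predicted == q["answer"])

for domain in total:
    print(f"{domain}: {correct[domain] / total[domain]:.0%}")
```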

2 Related Work

2.1 Self-regulated Learning

Self-regulated learning (SRL) refers to the use of appropriate learning strategies by learners to improve learning outcomes and achieve learning goals. Zimmerman [15] divides the SRL process into three stages: (1) the forethought stage, in which learners adjust their motivation to complete tasks, design tasks, and set goals to stimulate their learning motivation; (2) the performance stage, in which learners adjust their learning behavior based on mistakes in classroom activities and work toward the intended learning outcomes; and (3) the reflection stage, in which learners evaluate their own learning process and outcomes and engage in reflection. The learner's reflection results affect the first stage of the next cycle, making the process cyclical [15]. Many studies also indicate the importance of implementing SRL strategies for student learning, in order to help students develop useful knowledge and skills [16]. For example, [17] used a concept map-based SRL method to improve students' self-regulation abilities and help them learn STEM skills. Diary writing has also been used to optimize cognitive load in self-regulated learning and promote teaching outcomes.

2.2 Large Language Model

Training a large language model (LLM) requires billions of parameters or weights. It is this massive scale that allows the model to process, understand, generate, and ultimately learn human-readable natural language [18]. Currently, most large language models are developed based on the Transformer architecture, with the two main models being BERT [19] and GPT [20]. Large-scale training makes these models perform well, and they are commonly used for natural language processing (NLP) problems. For example, Nugroho, Sukmadewa and Yudistira [21] used BERT to classify a massive amount of news with an accuracy of up to 91%, while Chiu, Collins and Alexander [22] used GPT-3 to identify hate texts against marginalized groups and classify them into two categories: gender discrimination and racial discrimination. Large language models based on Transformer are also commonly used in the education field. Li, Lam and See [23] trained an AI chatbot to teach medical students anatomy, while Bathija, Agarwal, Somanna and Pallavi [24] developed a BERT-based chatbot with an interactive and friendly interface to interact with learners, immediately addressing any misunderstandings or mistakes and improving the effectiveness of online learning.

2.3 Chatbots in Self-regulated Learning

Because online courses provided in flipped learning often lack personalized design, the use of chatbots can improve student learning outcomes by providing personalized assistance, promoting discussion and collaboration, and providing feedback and assessment [25]. Therefore, using chatbots in flipped learning can alleviate the problems caused by the self-regulation difficulties that students face. A systematic review by Wollny, Schneider, Di Mitri, Weidlich, Rittberger and Drachsler [26] showed that (1) chatbots can improve learning efficiency, learning motivation, and usability, and (2) chatbots are mainly applied to self-regulated learning and life skills. In summary, using chatbots to address students' self-regulation difficulties is useful. However, whether math teaching chatbots are designed for instruction or counseling, research has usually focused on a single unit or problem [27, 28]. Therefore, this study uses ChatGPT to test and calculate the average correct answer rate for the Taiwan Comprehensive Assessment Program for Junior High School Students, which includes questions from six major domains, in order to investigate whether ChatGPT has the potential to influence mathematics education in junior high schools.
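As a concrete illustration of the evaluation just described, the sketch below shows how an average correct-answer rate could be computed per domain and per difficulty band once ChatGPT's answers to CAP items have been graded against the official key. This is a minimal Python sketch with made-up rows; the column names and values are assumptions for illustration, not data from this study.

import pandas as pd

# Hypothetical grading log: one row per CAP item attempted with ChatGPT.
results = pd.DataFrame({
    "domain": ["Numbers and Quantity", "Algebra", "Functions", "Space and Shape"],
    "difficulty": ["easy", "moderate", "difficult", "moderate"],
    "correct": [1, 1, 0, 1],   # 1 = ChatGPT's answer matched the official answer key
})

# Average correct-answer rate overall, per domain, and per difficulty band.
overall = results["correct"].mean()
by_domain = results.groupby("domain")["correct"].mean()
by_difficulty = results.groupby("difficulty")["correct"].mean()
print(f"overall accuracy: {overall:.2f}")
print(by_domain)
print(by_difficulty)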

3 Mathematics Test Questions

This study selected the math test questions from the Comprehensive Assessment Program for Junior High School Students (CAP), a standardized assessment held by the Ministry of Education for ninth-grade students in Taiwan's national junior high schools. The purpose of the CAP is to measure students' understanding of the knowledge they have learned in junior high school. The CAP math section covers six major areas based on the 12-year national basic education curriculum guidelines: Number and Quantity, Space and Shape, Coordinate Geometry, Algebra, Functions, and Data and Uncertainty. Junior high school math is an important part of learning mathematics and lays a good foundation for high school courses [9]. Therefore, this study selected 60 questions from the CAP math tests of the past nine years, as shown in Fig. 1, and categorized the difficulty level of each question by its pass rate into easy (>0.70, answered correctly by more than 70% of examinees), moderate (0.45–0.70), and difficult (<0.45).

Participants' demographics did not differ between these two groups. The average age of the experimental and control groups was 20.51 (SD = 1.20) and 20.81 (SD = 1.44), respectively, and no statistically significant difference existed between the two groups (t = −1.30, p = 0.20 > 0.05). The chi-square test result for gender in these two groups was χ2 = 3.12 (p = 0.08 > 0.05), suggesting no gender difference between the two groups.
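The group-comparability checks reported above (an independent-samples t-test on age and a chi-square test on gender) can be reproduced with standard tools. The sketch below is illustrative only: the per-student ages are simulated around the reported summary statistics and the gender counts are hypothetical, so the printed statistics will not match the paper's values exactly.

import numpy as np
from scipy import stats

# Hypothetical per-student records; only summary statistics are reported in the paper.
age_experimental = np.random.default_rng(1).normal(20.51, 1.20, 68)
age_control = np.random.default_rng(2).normal(20.81, 1.44, 64)

t, p = stats.ttest_ind(age_experimental, age_control)
print(f"age: t = {t:.2f}, p = {p:.2f}")   # a non-significant result suggests comparable groups

# Gender counts per group (rows: groups, columns: male/female); values are invented.
gender_table = np.array([[30, 38],
                         [38, 26]])
chi2, p_gender, dof, _ = stats.chi2_contingency(gender_table)
print(f"gender: chi2 = {chi2:.2f}, p = {p_gender:.2f}")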


3.2 Procedures

Each intervention lasted ten weeks, and the learners participated three hours a week. One week (Week 1) was used for the pre-test, practical orientation, and an introduction to the intervention tools; eight weeks (Weeks 2–9) were used for learning four programming concepts; and one week (Week 10) was used for the post-test and questionnaires. The intervention content covered the four computational concepts of variables, sequences, loops, and conditionals in Scratch and text-based language classes with the same instructor. Participants were given a paper-based quiz after each topic lesson, which helped the teacher evaluate students' understanding of the topics and provide additional technical support if necessary.

3.3 Measures

The post-test on basic programming concepts was assessed using a numeric 0–100 scale. Two experienced programming education teachers developed the test, which comprised twenty multiple-choice and five short-answer questions. Learners' engagement was measured prior to the start of the intervention and immediately after it. The engagement questionnaire followed [19] and used a seven-point Likert scale where 1 corresponded to "strongly disagree" and 7 corresponded to "strongly agree." The student engagement subscales were behavioral (7 items, Cronbach's α = 0.94, factor loadings 0.76–0.91), emotional (6 items, Cronbach's α = 0.95, factor loadings 0.77–0.90), cognitive (5 items, Cronbach's α = 0.94, factor loadings 0.84–0.93), and agentic engagement (6 items, Cronbach's α = 0.90, factor loadings 0.69–0.87). The Cronbach's α and factor loading values are close to or higher than the suggested level of 0.7, suggesting that the reliability and validity of the measures are acceptable [28]. The same conditions and evaluation criteria were used for the two groups.
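For readers who want to verify subscale reliability on their own data, the following minimal sketch computes Cronbach's alpha for a block of Likert items. The data frame here is randomly generated and purely illustrative; it does not reproduce the reliability values reported above.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Cronbach's alpha for a set of Likert items (one column per item).
    items = items.dropna()
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to seven behavioral-engagement items on a 1-7 scale.
# Random responses give an alpha near zero; real, correlated items would not.
rng = np.random.default_rng(0)
behavioral = pd.DataFrame(rng.integers(1, 8, size=(132, 7)),
                          columns=[f"beh_{i}" for i in range(1, 8)])
print(f"alpha = {cronbach_alpha(behavioral):.2f}")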

4 Analyses and Results

This study performed an analysis of covariance (ANCOVA) on the post-test scores and a multivariate analysis of covariance (MANCOVA) on the four types of engagement. Participants' age was used as a covariate because some intervention effects may be age-specific [11]. This study also employed regression analysis to determine which types of engagement predicted student performance in the experimental group when all four types of engagement were included. The analysis tool was SPSS 20.

4.1 ANCOVA Analysis on Learning Performance

Table 1 presents the ANCOVA results on the post-test performance. The experimental group scored significantly higher than the control group, providing the answer to RQ1. The F value of the interaction between the independent variable and the covariate was 0.05 (p = 0.83 > 0.05), confirming the homogeneity of the regression coefficients. The ANCOVA revealed a significant effect on post-test learning performance with a small effect size (F = 6.26, p < 0.05, η2 = 0.05).


Table 1. ANCOVA results on the post-test performance.

Performance  Groups                 Mean   SD     F     p       η2
Post-test    Experimental (n = 68)  73.94  8.98   6.26  0.014*  0.05
             Control (n = 64)       70.19  10.21

* p < 0.05
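A minimal sketch of the analysis reported in Table 1, assuming a per-student data table with group, age, and post-test columns; the values below are simulated, not the study's data, so the printed F and p values will differ from those reported. It first checks the homogeneity of regression slopes via the group-by-covariate interaction and then fits the ANCOVA model with age as the covariate.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical per-student table; the real raw data are not available here.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "group": ["scratch"] * 68 + ["text"] * 64,
    "age": np.r_[rng.normal(20.5, 1.2, 68), rng.normal(20.8, 1.4, 64)],
})
df["posttest"] = 70 + 4 * (df["group"] == "scratch") + rng.normal(0, 9, len(df))

# Step 1: homogeneity of regression slopes -- the group x covariate interaction
# should be non-significant before the ANCOVA is interpreted.
slopes = smf.ols("posttest ~ C(group) * age", data=df).fit()
print(anova_lm(slopes, typ=2))

# Step 2: ANCOVA proper -- effect of group on post-test scores with age as covariate.
ancova = smf.ols("posttest ~ C(group) + age", data=df).fit()
print(anova_lm(ancova, typ=2))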

4.2 MANCOVA Analysis on Student Engagement

Table 2 shows the MANCOVA results on the subscales of student engagement. The Wilks' Lambda test value was 1.88 (p = 0.12 > 0.05), which suggests that the homogeneity assumption of the regression coefficients was fulfilled. The MANCOVA results showed significant effects of the programming intervention on student engagement between the two groups: behavioral engagement with a medium effect size (F = 10.30, p < 0.01, η2 = 0.07), emotional engagement with a medium effect size (F = 15.45, p < 0.001, η2 = 0.11), cognitive engagement with a medium effect size (F = 10.58, p < 0.01, η2 = 0.08), and agentic engagement with a large effect size (F = 27.12, p < 0.001, η2 = 0.17). Participants in the Scratch intervention group had significantly higher levels of behavioral, emotional, cognitive, and agentic engagement than participants in the textual programming language group. The MANCOVA results, confirming the significant engagement differences between the two groups, answered RQ2.

Table 2. MANCOVA results on the engagements.

Engagements  Groups                 Mean  SD    F      p         η2
Behavioral   Experimental (n = 68)  5.09  0.96  10.30  0.002**   0.07
             Control (n = 64)       4.55  1.13
Emotional    Experimental (n = 68)  5.23  0.99  15.45  0.000***  0.11
             Control (n = 64)       4.51  1.27
Cognitive    Experimental (n = 68)  5.19  0.99  10.58  0.001**   0.08
             Control (n = 64)       4.68  0.96
Agentic      Experimental (n = 68)  4.93  0.97  27.12  0.000***  0.17
             Control (n = 64)       4.14  0.86

** p < 0.01, *** p < 0.001

4.3 Regression Analysis on the Relationship Between Performance and Student Engagement in the Scratch Intervention

Table 3 shows the regression results considering the four engagement subscales simultaneously to determine how strongly engagement predicted student performance in the Scratch intervention environment. The regression results revealed that student engagement significantly predicted performance (F = 62.61, p < 0.001) and accounted for 79% of the variance (adjusted R2 = 0.79). Notably, among the four dimensions, only emotional (β = 0.43, t = 2.85, p < 0.01) and cognitive (β = 0.32, t = 2.03, p < 0.05) engagement were significant predictors of performance. The variance inflation factors of the analysis ranged from 5.56 to 7.67, which are lower than 10 and therefore acceptable [28]. The simultaneous multiple regression results provided answers to RQ3.

Table 3. Regression results on the performance scores of Scratch participants.

Engagements  M     SD    β     t       F         R2
Behavioral   5.09  0.96  0.11  0.86    62.61***  0.79
Emotional    5.23  0.99  0.43  2.85**
Cognitive    5.19  0.99  0.32  2.03*
Agentic      4.93  0.97  0.07  0.54

* p < 0.05, ** p < 0.01, *** p < 0.001
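The simultaneous regression and collinearity check reported in Table 3 can be sketched as follows, assuming a hypothetical data frame of engagement scores and post-test scores for the Scratch group; the simulated data will not reproduce the reported coefficients, and the variable names are assumptions.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical engagement/performance data for the Scratch group (n = 68).
rng = np.random.default_rng(7)
df = pd.DataFrame(rng.normal(5, 1, size=(68, 4)),
                  columns=["behavioral", "emotional", "cognitive", "agentic"])
df["score"] = 60 + 3 * df["emotional"] + 2 * df["cognitive"] + rng.normal(0, 5, 68)

# Standardize everything so the coefficients are comparable to reported beta weights.
z = (df - df.mean()) / df.std(ddof=1)
X = sm.add_constant(z[["behavioral", "emotional", "cognitive", "agentic"]])
model = sm.OLS(z["score"], X).fit()
print(model.summary())

# Collinearity check: variance inflation factors for the four predictors.
for i, name in enumerate(X.columns[1:], start=1):
    print(name, round(variance_inflation_factor(X.values, i), 2))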

5 Discussion

Although participants' post-test scores increased in both the experimental and control groups, the ANCOVA results indicated that performance differed between the groups receiving different instruction. The Scratch implementation helped participants in the treatment group comprehend programming logic better than those in the text-based instruction group. The results correspond to studies showing that Scratch can improve students' programming learning achievements [3, 5, 6, 10]. In line with prior research, this study suggests that the Scratch blocks language likely helps students reduce cognitive load in learning (e.g., by avoiding syntax issues or supporting an intuitive experience) while focusing on core aspects of programming knowledge [2, 5, 7, 11], leading to the desired programming learning outcomes.

The MANCOVA results indicated that participants perceived Scratch activities as more engaging (behaviorally, emotionally, cognitively, and agentically) than textual programming instruction. Students receiving Scratch instruction were engaged behaviorally (e.g., listening carefully, paying attention); responded to the Scratch intervention with emotional engagement such as curiosity, interest, and enjoyment; were willing to devote more effort and time to learning challenges; and showed proactiveness and attempts to contribute to the flow of the visual programming instruction they received [13, 16, 19]. The MANCOVA results concerning the engagement differences between the groups help show how learners' behaviors vary during academic progress under different instruction [14, 15, 17, 18]. [19] suggested that the more students are engaged, the better their achievement. Accordingly, the significant engagement differences provide better knowledge of the educational contexts in which visualization tools serve as an effective means to improve student learning achievement in a multidimensionally engaged way [12, 13].


The four engagement subscales entered into the simultaneous multiple regression model explained 79% of the variance, suggesting a close, significant relationship between Scratch participants' engagement and performance. Concerning how engagement predicts performance in the experimental setting, only emotional and cognitive engagement were significantly positive predictors of post-test scores. This result is consistent with previous studies, which suggest that emotions aroused by a visual, media-rich Scratch environment, such as playfulness, enjoyment, relief, and decreased anxiety, help learners change their negative beliefs about programming difficulty and boredom, thereby achieving better performance [4, 6, 8]. Furthermore, the Scratch blocks language may be more accessible to students than text-based languages for reading, understanding, and creating code [6, 11]. This functionality may encourage students to engage cognitively with programming difficulties and complex ideas through extra thoughtfulness and willingness, and thus gain academic success [3, 12]. The findings imply that student perceptions of interventions, the classroom environment, and academic challenge affect student engagement and the subsequent learning outcomes [13, 15, 24]. This study confirmed that emotional and cognitive engagement promoted participants' knowledge and skills in programming, deepening the understanding of the mechanisms behind the effectiveness of visualization [11].

The regression results did not show effects of behavioral and agentic engagement on academic achievement. The reason may be that the simplicity and visualization of Scratch support intuitive, self-directed learning and make the algorithmic process comprehensible and entertaining [3, 6]. Consequently, Scratch participants did not need to be more engaged behaviorally, such as by paying attention to and asking instructors questions, nor to engage more constructively in the learning process. This study, therefore, suggests further research into this issue.

6 Conclusion and Contributions

This study investigated whether a visual programming language environment such as Scratch can promote students' learning performance and engagement. One hundred and thirty-two undergraduates participated in a quasi-experimental study. Participants in the Scratch experimental group had higher performance scores and engagement than those who received text-based programming language instruction. These results answered RQ1 and RQ2. Additionally, the regression results indicated that student engagement was a crucial predictor of positive academic outcomes. Emotional and cognitive engagement were identified as two significantly positive indicators of learners' increased performance within the Scratch intervention, which answered RQ3.

This study contributed to theory by using a rigorous research design to examine the effect of visualization on programming learning, as [11] suggested that a pretest-posttest intervention design provides an accurate picture of the actual intervention effect. Second, this study extended the student engagement construct into the programming education domain. The levels of engagement revealed students' varying perceptions, nuanced behaviors, and consequences in the visual and textual programming environments, providing empirical evidence by adding engagement factors to further explain the mechanism behind specific interventions [11–13] and within learning environments such as e-learning [9, 26]. Last, emotional and cognitive engagement predicted the outcomes of Scratch participants, contributing to the research domain of programming instruction by emphasizing learners' affective reactions and academic effort in the learning process [3].

Concerning practical implications, although [11] noted that there is no evidence for the superiority of specific interventions in programming education, the results suggest the potential of visualization tools such as Scratch to increase students' emotional and cognitive engagement and thereby enhance their learning performance. Moreover, measuring engagement helped educators understand students' perceptions and behaviors, providing a basis for addressing the pedagogy, process, and context needed to promote engagement and the intended outcomes of novice programmers [3, 11].

This study has limitations. An undergraduate sample performing a short introductory programming concept learning period may not generalize to other, more advanced programming tasks. Another limitation is that this study mainly focused on the Scratch tool. Future research should explore advanced programming tasks, extend the experimental period, and employ other VPE tools. Additionally, [12] indicated that the individual types of engagement have yet to be studied in combination or interaction. Future research could address this issue in the programming education domain.

References 1. Cheng, G.: Exploring factors influencing the acceptance of visual programming environment among boys and girls in primary schools. Comput. Hum. Behav. 92, 361–372 (2019) 2. Papadakis, S., Kalogiannakis, M.: Evaluating a course for teaching introductory programming with Scratch to pre-service kindergarten teachers. Int. J. Technol. Enhanced Learn. 11(3), 231–246 (2019) 3. Tsai, C.Y.: Improving students’ understanding of basic programming concepts through visual programming language: the role of self-efficacy. Comput. Hum. Behav. 95, 224–232 (2019) 4. Yukselturk, E., Altiok, S.: An investigation of the effects of programming with Scratch on the preservice IT teachers’ self-efficacy perceptions and attitudes towards computer programming. Br. J. Edu. Technol. 48(3), 789–801 (2017) 5. Wang, X.M., Hwang, G.J., Liang, Z.Y., Wang, H.Y.: Enhancing students’ computer programming performances, critical thinking awareness and attitudes towards programming: an online peer-assessment attempt. J. Educ. Technol. Soc. 20(4), 58–68 (2017) 6. Erol, O., Kurt, A.A.: The effects of teaching programming with scratch on pre-service information technology teachers’ motivation and achievement. Comput. Hum. Behav. 77, 11–18 (2017) 7. Resnick, M., et al.: Scratch: programming for all. Commun. ACM 52(11), 60–67 (2009) 8. Chang, C.K.: Effects of using Alice and Scratch in an introductory programming course for corrective instruction. J. Educ. Comput. Res. 51(2), 185–204 (2014) 9. Marcelino, M.J., Pessoa, T., Vieira, C., Salvador, T., Mendes, A.J.: Learning computational thinking and scratch at distance. Comput. Hum. Behav. 80, 470–477 (2018) 10. Papadakis, S., Kalogiannakis, M., Zaranis, N., Orfanakis, V.: Using Scratch and App Inventor for teaching introductory programming in secondary education. A case study. Int. J. Technol. Enhanced Learn. 8(3–4), 217–233 (2016) 11. Scherer, R., Siddiq, F., Viveros, B.S.: A meta-analysis of teaching and learning computer programming: effective instructional approaches and conditions. Comput. Hum. Behav. 109, 106349 (2020)


12. Fredricks, J.A., Blumenfeld, P.C., Paris, A.H.: School engagement: potential of the concept, state of the evidence. Rev. Educ. Res. 74(1), 59–109 (2004) 13. Reeve, J., Tseng, C.M.: Agency as a fourth aspect of students’ engagement during learning activities. Contemp. Educ. Psychol. 36(4), 257–267 (2011) 14. Skinner, E.A., Zimmer-Gembeck, M.J., Connell, J.P., Eccles, J.S., Wellborn, J.G.: Individual differences and the development of perceived control. Monogr. Soc. Res. Child Dev. 63(2/3), i-231 (1998) 15. Trowler, V.: Student engagement literature review. High. Educ. Acad. 11(1), 1–15 (2010) 16. Huang, B., Hew, K.F., Lo, C.K.: Investigating the effects of gamification-enhanced flipped learning on undergraduate students’ behavioral and cognitive engagement. Interact. Learn. Environ. 27(8), 1106–1126 (2019) 17. Junco, R.: The relationship between frequency of Facebook use, participation in Facebook activities, and student engagement. Comput. Educ. 58(1), 162–171 (2011) 18. Kuh, G.D.: What student affairs professionals need to know about student engagement. J. Coll. Stud. Dev. 50(6), 683–706 (2009) 19. Zainuddin, Z., Shujahat, M., Haruna, H., Chu, S.K.W.: The role of gamified e-quizzes on student learning and engagement: an interactive gamification solution for a formative assessment system. Comput. Educ. 145, 103729 (2020) 20. Baldwin, L.P., Kuljis, J.: Visualisation techniques for learning and teaching programming. J. Comput. Inf. Technol. 8(4), 285–291 (2000) 21. Brito, M.A., de Sá-Soares, F.: Assessment frequency in introductory computer programming disciplines. Comput. Hum. Behav. 30, 623–628 (2014) 22. Lorås, M., Sindre, G., Trætteberg, H., Aalberg, T.: Study behavior in computing education—a systematic literature review. ACM Trans. Comput. Educ. (TOCE) 22(1), 1–40 (2021) 23. Astin, A.W.: Student involvement: a developmental theory for higher education. J. Coll. Stud. Pers. 25(4), 297–308 (1984) 24. Carini, R.M., Kuh, G.D., Klein, S.P.: Student engagement and student learning: testing the linkages. Res. High. Educ. 47, 1–32 (2006). https://doi.org/10.1007/s11162-005-8150-9 25. Çakıro˘glu, Ü., Ba¸sıbüyük, B., Güler, M., Atabay, M., Memi¸s, B.Y.: Gamifying an ICT course: influences on engagement and academic performance. Comput. Hum. Behav. 69, 98–107 (2017) 26. Hussain, M., Zhu, W., Zhang, W., Abidi, S.M.R.: Student engagement predictions in an e-learning system and their impact on student course assessment scores. Comput. Intell. Neurosci. 2018, 6347186 (2018) 27. Ortiz Rojas, M.E., Chiluiza, K., Valcke, M.: Gamification in computer programming: effects on learning, engagement, self-efficacy and intrinsic motivation. In: 11th European Conference on Game-Based Learning (ECGBL), pp. 507–514 (2017) 28. Hair, J.F., Black, W.C., Babin, B.J., Anderson, R.E., Tatham, R.L.: Multivariate Data Analysis. Prentice-Hall, Englewood Cliff (2006)

Applying Computational Thinking and Formative Assessment to Enhance the Learning Performance of Students in Virtual Programming Language

Yu-Ping Cheng1, Shu-Chen Cheng2, Ming Yang3, Jim-Min Lin4, and Yueh-Min Huang1(B)

1 Department of Engineering Science, National Cheng Kung University, Tainan, Taiwan
[email protected]
2 Department of Computer Science and Information Engineering, Southern Taiwan University of Science and Technology, Tainan, Taiwan
[email protected]
3 Department of Geomatics, National Cheng Kung University, Tainan, Taiwan
[email protected]
4 Department of Information Engineering and Computer Science, Feng Chia University, Taichung City, Taiwan
[email protected]

Abstract. Computational thinking (CT) is considered to be one of the core competencies of the 21st century, and many scholars have explored the feasibility of CT in different subjects. However, traditional programming languages have a certain degree of difficulty, and it is hard for learners to learn and understand the structure and logic of syntax in this process. Visual programming languages improve upon these conditions and constraints. Additionally, formative assessment (FA) has been shown to increase students' motivation and interest in programming courses while improving their performance. Therefore, this study proposes the application of CT core competencies together with a visual programming formative assessment system (VPFAS) and explores whether this can enhance the learning performance of students in virtual programming. A total of 52 students were recruited for this 10-week experiment. The results show that the experimental group significantly enhanced their learning performance in virtual programming by applying the CT core competencies and the VPFAS. This means that the experimental group could not only solve programming problems through the CT core competencies but also improve their knowledge of programming concepts and objects through the VPFAS. Thus, this study confirms that applying CT and FA can enhance student learning performance in virtual programming.

Keywords: Computational Thinking · Formative Assessment · Visual Programming · Learning Performance

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Y.-M. Huang and T. Rocha (Eds.): ICITL 2023, LNCS 14099, pp. 130–139, 2023. https://doi.org/10.1007/978-3-031-40113-8_13


1 Introduction The traditional instructional method typically involves teachers imparting knowledge to students through lectures. However, this approach may limit the knowledge and content that students can obtain, and it also faces various limitations and challenges [1, 2]. Studies have shown that students can acquire more knowledge and content through elearning, which has become a solution to address the limitations of traditional teaching [3]. Yildiz Durak [4] further points out that digital textbooks and multimedia resources can help students improve their learning outcomes in the classroom. At the same time, students can also improve their learning motivation, thereby enhancing their learning effectiveness [5, 6]. Cheng, Cheng and Huang [3] enabled students to use digital learning tools to enhance their knowledge in various subjects by reading massive open educational resources in the classroom through online learning. Additionally, some studies have begun to introduce various instructional strategies, theories, and learning platforms in teaching to help students acquire helpful knowledge and concepts from the learning process, such as computational thinking [7], concept maps [3], and immersive virtual reality environments [8]. Computational thinking (CT) is one of the problem-solving skill that enables individuals to better understand and solve complex problems [9]. Programming is a popular method for introducing CT into education. Lye and Koh [10] noted that students can develop CT through programming concepts. In recent years, visual programming has been an effective tool for programming education and curriculum [11]. It uses graphical building blocks to represent various program components and concepts, enabling users to program by dragging and dropping building blocks. However, programming involves complex and abstract concepts and beginners can face many challenges in learning. To help teachers effectively monitor student learning and improve academic performance, Cañadas [12] suggested that formative assessment (FA) can be effective in improving student performance. Moreover, some studies have confirmed that conducting FA after learning can enhance students’ course performance [13]. While some research highlights the benefits of CT or FA in education, few studies have explored student learning performance in visual programming through CT core competencies and FA. Thus, this study develops a visual programming formative assessment system (VPFAS) that provides online tests for students and applies CT core competencies to the App Inventor programming course to determine if it can enhance learning performance of students in virtual programming.

2 Literature Review 2.1 Applications of Computational Thinking in Educational Research In the definition and discussion of CT, Wing [9] describes CT as an essential skill for system design, understanding and problem-solving human behavior through fundamental concepts of computer science. Additionally, Wing [9] notes that CT is not a program but a type of conceptual thinking. Thus, CT is viewed as a process of simplifying, embedding, transforming, and simulating complex and difficult problems into easy-to-solve solution [9].


Many scholars have proposed and defined different core competencies and characteristics in CT. For example, Wing [9] divided CT into four core competencies, namely decomposition, pattern recognition, generalization and abstraction, and algorithm design. The study argues that CT should not only enable people to think like computer scientists about how to program, but more importantly, be able to think about problems at multiple levels of abstraction. Brennan and Resnick [14] proposed a CT framework, defining key features of CT from three different dimensions, computational concepts, computational practices, and computational perspectives. Computational concepts are programming concepts and concepts that people need to use when writing programs, such as sequences, conditions, loops, operators, etc. Computational practices are working syntax that can be further practiced and developed by people using interactive media while programming. Computational perspectives are people’s perspectives on technological developments and attempts to create projects through interactive media. Shute, Sun and Asbell-Clarke [15] proposed six core competencies of CT. This research highlights that CT can become an effective problem-solving method through specific skill demonstrations. Therefore, Wei, Lin, Meng, Tan and Kong [11] view CT as a problem-solving approach. However, Wing [16] has stated that CT is inherently abstract, which creates obstacles and difficulties for beginners learning and developing CT. To address this issue, Brennan and Resnick [14] proposed an interactive media tool to allow students to develop programming fundamentals and practical CT concepts. This research shows that Scratch can be used to practice and develop CT in students. Zhao, Liu, Wang and Su [17] argue that visual programming is a suitable learning tool for beginners to develop CT. App Inventor, Alice, Scratch, and other familiar visual programming languages have been studied and found to effectively cultivate students’ CT [4, 18]. Additionally, programming ability is self-evidently important in the field of computer science and information engineering. However, students often encounter difficulties while learning to program in college and university programming courses. If students cannot comprehend the logic and syntax of programming from the course, they may lose motivation and achieve less. To enable students to effectively comprehend the logic and syntax of programming, this study uses the definition of CT proposed by Wing [9], with decomposition, pattern recognition, generalization and abstraction, and algorithm design as its core competencies. 2.2 Applications of Formative Assessment in Educational Research Formative assessment (FA) is a teaching and learning evaluation method that enables educators to understand students’ mastery of specific topics and skills [19]. Through FA, educators can monitor students’ learning progress, identify learning challenges and misconceptions, and provide timely and corrective feedback [20, 21]. Through this method, teachers can adjust their teaching methods and materials to help students achieve their learning objectives [22]. Numerous studies have shown that the use of FA can significantly improve teaching outcomes and students’ academic performance [13, 23]. In programming courses, FA can be used to evaluate students’ engagement and performance in various programming activities. For example, Veerasamy, Laakso and


D’Souza [24] used FA tasks to document students’ participation and engagement in programming and predict their learning outcomes. They found that students’ participation in FA tasks correlates with their performance on programming tests, which can be used to identify at-risk students. Hooshyar, Ahmad, Yousefi, Fathi, Horng and Lim [25] applied online game-based formative assessment to explore students’ problem-solving abilities and learning performance in programming. Their findings suggest that the system can help students acquire and promote their problem-solving skills, improve their learning interest, and enhance their degree of technology acceptance. Despite the well-documented benefits of using FA, few studies have examined its effectiveness in promoting students’ learning performance in visual programming courses that incorporate CT core competencies. To fill this gap, this study develops the VPFAS that includes a programming concept test and applies the CT core competencies to the App Inventor programming course to investigate students’ learning performance in virtual programming.

3 Research Method 3.1 Participants This study adopted a quasi-experimental design method and recruited 52 university students, of which 27 students were assigned to the experimental group and 25 students were assigned to the control group. All participants were between the ages of 20 and 21. Before the experimental activities, none of the students had participated in the App Inventor class taught by the teacher. 3.2 Visual Programming Formative Assessment System (VPFAS) The VPFAS comprising 111 items related to the App Inventor concepts is developed by this study. Figure 1 shows the user interface of the VPFAS for students to take the test. All tests are true-false questions that students must answer based on combinations of questions and corresponding visual program blocks. Therefore, students in the experimental group can use VPFAS to conduct tests after learning about the weekly course progress to help them review and check their learning. The results are displayed on the screen immediately, and students can review their learning according to the answer scores. Moreover, these test items are automatically generated randomly, and students can take online tests repeatedly to reinforce their concepts and knowledge in virtual programming.


Fig. 1. The user interface of the VPFAS for students to conduct the tests.
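A minimal sketch of the test-delivery logic described in Sect. 3.2, under the assumption that items are stored as true-false statements with an answer key. The item bank, statements, and function names below are illustrative inventions, not the actual VPFAS implementation.

import random

# Hypothetical item bank: each entry is a true/false statement about an
# App Inventor block plus its answer key (the real VPFAS holds 111 such items).
ITEM_BANK = [
    {"id": 1, "statement": "'when Button1.Click' is an event handler block.", "answer": True},
    {"id": 2, "statement": "The 'set global x to' block defines a new procedure.", "answer": False},
    {"id": 3, "statement": "The 'for each number from/to' block repeats its body.", "answer": True},
]

def draw_test(n_items: int) -> list[dict]:
    # Randomly draw a fresh true/false test so repeated attempts differ.
    return random.sample(ITEM_BANK, k=min(n_items, len(ITEM_BANK)))

def score_test(test: list[dict], responses: dict[int, bool]) -> float:
    # Return the percentage of correct answers for immediate on-screen feedback.
    correct = sum(responses.get(item["id"]) == item["answer"] for item in test)
    return 100.0 * correct / len(test)

test = draw_test(3)
print(round(score_test(test, {1: True, 2: False, 3: False}), 1))  # 66.7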

3.3 Applying Core Competencies of Computational Thinking to the App Inventor Programming Course This study adopts the four core competencies of CT proposed by Wing [9], namely decomposition, pattern recognition, generalization and abstraction, and algorithm design. Decomposition can be expressed as decomposing a complex overall problem into several sub-problems or small problems. Pattern recognition can be expressed as identifying the regularity and repetition of several sub-problems and finding rules with the same pattern. Generalization and abstraction can be expressed as inducing rules with the same schema and generating their corresponding schemas or functions. Algorithm design can be expressed as designing a set of algorithmic processes that can correctly execute each generation pattern or function step by step, so as to formulate a solution to the entire problem. In addition, teachers integrate formative assessment tests into each unit, allowing students to use the four core competencies of CT to solve problems in formative assessment tests, thereby enhancing learning performance of students in virtual programming. Table 1 presents the descriptions of students in the experimental group applying CT core competencies to solve problems in formative assessment tests.


Table 1. Application of CT core competencies in formative assessment tests, taking the factorial program as an example.

Decomposition
  Definition: Decomposing a complex overall problem into several sub-problems or small problems.
  Example: Students need to break down the factorial problem into several sub-problems that need to be solved, including inputting the number, multiplying it by a decreasing series of numbers, outputting the result, and making conditional judgments.

Pattern recognition
  Definition: Identifying the regularity and repetition of several sub-problems and finding rules with the same pattern.
  Example: Students need to recognize the regularity of these sub-problems, such as multiplication and conditional judgment; they should identify where multiplication is used and when conditional judgments are necessary.

Generalization and abstraction
  Definition: Inducing rules with the same schema and generating their corresponding schemas or functions.
  Example: Students need to summarize the rules of the same pattern and generate patterns or functions; they should recognize when loops or functions can be used and how to implement them.

Algorithm design
  Definition: Designing a set of algorithmic processes that can correctly execute each generated pattern or function step by step, so as to formulate a solution to the entire problem.
  Example: Students need to design a set of algorithms that can solve the problem; for example, they should determine the appropriate inputs and design functions that include loops and conditional statements, with parameters used as inputs and outputs for these functions.
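To make Table 1 concrete, the short sketch below implements the factorial task and annotates each part with the CT core competency it exercises. The code is an illustrative text-based equivalent of what students assemble with App Inventor blocks, not part of the course materials.

# Illustrative factorial sketch annotated with the four CT core competencies.
def factorial(n: int) -> int:
    # Decomposition: the task is split into input checking, repeated
    # multiplication, and returning the result.
    if n < 0:
        raise ValueError("factorial is defined for non-negative integers")
    # Pattern recognition: every step multiplies the running product by the
    # next integer, so one rule covers all steps.
    result = 1
    # Generalization and abstraction: the rule is captured as a loop that
    # works for any n, not just one concrete number.
    for k in range(2, n + 1):
        # Algorithm design: the steps execute in a fixed, correct order.
        result *= k
    return result

print(factorial(5))  # 120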


3.4 Experimental Process

Figure 2 shows the experimental process of this study. Fifty-two university students in Tainan, Taiwan were recruited for a ten-week experimental activity. An experimental group of 27 students applied the CT core competencies in the App Inventor programming course and used the VPFAS to take online tests. A control group of 25 students was taught the App Inventor course through traditional lectures. Before the experiment started, the teacher introduced the steps of the experiment to all students and conducted a pre-test. From the second week to the ninth week, the teacher explained the basic principles and concepts of programming in the course. Unlike the control group, the experimental group used the VPFAS to test their understanding of programming objects and concepts. The teacher also guided the students in the experimental group in learning how to use the four core competencies of CT and provided practice questions during each class, allowing the experimental group to apply the CT core competencies (as shown in Table 1) to complete tasks in the App Inventor visual programming environment. Finally, all students took a post-test at the end of the tenth week to measure their learning performance in virtual programming.

Fig. 2. The experimental process of this study.

3.5 Data Collection and Analysis To examine the learning performance of students in virtual programming, the pre-test and post-test papers include programming basic grammar, logic, conditions, functions, and other question types. All test papers are scored out of 100 points. A total of 52 test papers were collected in this study. Additionally, this study used IBM SPSS Statistics software to analyze the differences in learning performance between the two groups of students. To analyze the learning performance, this study used a one-way analysis of covariance (One-way ANCOVA) to determine whether there was a significant difference in the pre-test and post-test scores between the two groups.


4 Results

This study collected 52 effective test papers for the pre-test and post-test, including 27 test papers for the experimental group and 25 test papers for the control group. The aim was to investigate whether there was a difference in virtual programming learning performance between the two groups after implementing different instructional activities in the App Inventor course.

On the pre-test, the mean and standard deviation of the experimental group are 40.07 and 8.66, and those of the control group are 39.8 and 12.88. Table 2 displays the results of the within-group homogeneity regression test for ANCOVA. The interaction between group and pre-test does not reach a significant difference (F = 0.5, p > .05). This is consistent with the within-group homogeneity regression assumption and indicates that there is no significant difference between the two groups in programming knowledge and concepts before the implementation of the experimental activity.

Table 3 shows the ANCOVA post-test results for the two groups. The mean and standard deviation of the experimental group are 85 and 13.46, and those of the control group are 55.56 and 12.67. After excluding the influence of the pre-test by ANCOVA, the adjusted mean and standard error of the experimental group are 84.96 and 2.47, and those of the control group are 55.6 and 2.57. The experimental group and the control group reached a significant difference in the post-test results (F = 67.795, p < 0.001), and the effect size is large (η2 = 0.58). This indicates that using the VPFAS and applying the CT core competencies to programming enabled the experimental group to learn abstract concepts and knowledge of programming, understand the logic and conditions of program execution and operation, and enhance their learning performance in virtual programming.

Table 2. The result of the within-group homogeneity regression test for ANCOVA.

Source           SS        df  F     p
Pre-test         552.79    1   3.32  0.075
Groups           285.55    1   1.71  0.197
Groups*Pre-test  83.68     1   0.5   0.482
Error            8004.03   48
Corrected Total  19808.77  51

Table 3. The post-test results of ANCOVA for two groups.

Group               N   Mean   SD     Adjusted Mean  SE    F          p        η2
Experimental group  27  85     13.46  84.96          2.47  67.795***  < 0.001  0.58
Control group       25  55.56  12.67  55.6           2.57

*** p < 0.001.

Table 3. The ANOVA analysis results between SoA and SoO

           df   SS      MS      F      Pr(>F)
Class      1    0.1     0.113   0.039  0.844151
variable   9    100.6   11.178  3.837  0.000107 ***
Residuals  459  1337.1  2.913

In all three analyses above, the p-values for Class (rural and urban elementary schools) were greater than 0.05, which means there is no significant difference between the mean values of different classes (rural and urban elementary schools).
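An analysis of the kind summarized in Table 3 could be reproduced roughly as follows, assuming the questionnaire ratings are arranged in long format with one row per participant and item and a Class column coding rural versus urban schools. The data below are simulated and the column names are assumptions; the printed values will not match the reported table.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format ratings: 47 participants x 10 questionnaire items.
rng = np.random.default_rng(3)
classes = rng.choice(["rural", "urban"], size=47)
rows = [{"Class": c, "variable": f"item_{v}", "rating": rng.normal(4, 1.7)}
        for c in classes for v in range(10)]
ratings = pd.DataFrame(rows)

# Two-factor ANOVA on the ratings; the output lists df, sum_sq, F, and Pr(>F)
# per factor, mirroring the layout of Table 3.
model = smf.ols("rating ~ C(Class) + C(variable)", data=ratings).fit()
print(anova_lm(model, typ=1))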

5 Conclusion

The results of this study indicate that there is no significant difference in device operation and activity implementation between rural and urban elementary school students when using VR, which may reassure educators that VR is broadly applicable in education. This study also confirmed that SoA has a significant effect on SoO. When participants felt that their movements were controlled through the virtual hand, their SoO changed in accordance with the movements of the virtual body. These results are consistent with previous studies [7, 8, 30].

After completing tasks in the immersive VR skills training system, students significantly perceived the occurrence of SoO and SoA. This means that the immersion of VR technology can make students aware of their SoO and SoA. These results are consistent with previous studies [26, 31]. We can therefore create a more realistic and immersive experience by controlling SoA and adjusting SoO. However, it should be noted that students develop SoO over the virtual avatar. If a student's illusion is too strong, even a short-term experience or training session in an immersive virtual environment may leave the student unable to dissociate from the virtual body image, consistent with previous studies [34, 38, 39], which may lead to anxiety, insecurity, or a loss of the sense of reality. Therefore, out of consideration for children's mental health, teachers need to be aware of these potential risks in VR applications to ensure that the expected teaching and learning effects can be achieved in a safe environment.

Acknowledgements. This research is partially supported by the National Science and Technology Council, Taiwan, R.O.C. under Grant No. MOST 109-2511-H-024-005-MY3 and MOST 111-2410-H-024-001-MY2.

References 1. Pellas, N., Dengel, A., Christopoulos, A.: A scoping review of immersive virtual reality in STEM education. IEEE Trans. Learn. Technol. 13, 748–761 (2020) 2. Wang, W.S., Pedaste, M., Huang, Y.M.: Designing STEM learning activity based on virtual reality. In: Huang, Y.M., Cheng, S.C., Barroso, J., Sandnes, F.E. (eds.) Innovative Technologies and Learning. ICITL 2022. Lecture Notes in Computer Science, vol. 13449, pp. 88–96. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-15273-3_10 3. Araiza-Alba, P., Keane, T., Kaufman, J.: Are we ready for virtual reality in K–12 classrooms? Technol. Pedagogy Educ. 31, 471–491 (2022) 4. Slavova, Y., Mu, M.: A comparative study of the learning outcomes and experience of VRin education. In: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 685– 686. IEEE (2018) 5. Lin, Y.-H., Lin, H.-C., Liu, H.-L.: Using STEAM-6E model in AR/VR maker education teaching activities to improve high school students’ learning motivation and learning activity satisfaction. In: Huang, Y.-M., Lai, C.-F., Rocha, T. (eds.) ICITL 2021. LNCS, vol. 13117, pp. 111–118. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-91540-7_13 6. Hussein, M., Nätterdal, C.: The benefits of virtual reality in education-A comparison study, Bachelor dissertation. Chalmers University of Technology. University of Gothenburg, Sweden (2015) 7. Kokkinara, E., Kilteni, K., Blom, K.J., Slater, M.: First person perspective of seated participants over a walking virtual body leads to illusory agency over the walking. Sci. Rep. 6, 28879 (2016) 8. Petersen, G.B., Petkakis, G., Makransky, G.: A study of how immersion and interactivity drive VR learning. Comput. Educ. 179, 104429 (2022). https://doi.org/10.1016/j.compedu. 2021.104429 9. Huang, C.Y., Lou, S.J., Cheng, Y.M., Chung, C.C.: Research on teaching a welding implementation course assisted by sustainable virtual reality technology. Sustainability 12(23), 10044 (2020). https://doi.org/10.3390/su122310044 10. Chung, C.C., Tung, C.C., Lou, S.J.: Research on optimization of VR welding course development with ANP and satisfaction evaluation. Electronics 9(10), 1673 (2020)


11. Burdea Grigore, C., Coiffet, P.: Virtual Reality Technology. Wiley, London (1994) 12. Freina, L., Ott, M.: A literature review on immersive virtual reality in education: state of the art and perspectives. In: Proceedings of eLearning and Software for Education (eLSE) Bucharest (2015) 13. Çakiro˘glu, Ü., Göko˘glu, S.: Development of fire safety behavioral skills via virtual reality. Comput. Educ. 133, 56–68 (2019) 14. Xie, B., et al.: A review on virtual reality skill training applications. Front. Virtual Real. 2, 645153 (2021). https://doi.org/10.3389/frvir.2021.645153 15. Piccione, J., et al.: Virtual skills training: the role of presence and agency. Heliyon 5(11), e02583 (2019) 16. Lai, C., Chang, Y.-M.: Improving the skills training by mixed reality simulation learning. In: Wu, T.-T., Huang, Y.-M., Shadieva, R., Lin, L., Starˇciˇc, A.I. (eds.) ICITL 2018. LNCS, vol. 11003, pp. 18–27. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99737-7_2 17. Chang, Y., Lai, C.L.: Exploring the experiences of nursing students in using immersive virtual reality to learn nursing skills. Nurse. Educ. Today. 97, 1–7 (2020) 18. Harrison, A., Derwent, G., Enticknap, A., Rose, F.D., Attree, E.A.: The role of virtual reality technology in the assessment and training of inexperienced powered wheelchair users. Disabil. Rehabil. 24, 599–606 (2002). https://doi.org/10.1080/09638280110111360 19. Gallagher, I.I.: Philosophical conceptions of the self: implications for cognitive science. Trends Cogn. Sci. 4(1), 14–21 (2000). https://doi.org/10.1016/s1364-6613(99)01417-5. PMID: 10637618 20. Botvinick, M., Cohen, J.: Rubber hands ‘feel’ touch that eyes see. Nature 391, 756 (1998). https://doi.org/10.1038/35784 21. Chen, W.Y., Huang, H.C., Lee, Y.T., et al.: Body ownership and the four-hand illusion. Sci. Rep. 8, 2153 (2018). https://doi.org/10.1038/s41598-018-19662-x 22. Liang, C., Lin, W.H., Chang, T.Y., et al.: Experiential ownership and body ownership are different phenomena. Sci. Rep. 11, 10602 (2021). https://doi.org/10.1038/s41598-021-900 14-y 23. Jo, D., et al.: The impact of avatar-owner visual similarity on body ownership in immersive virtual reality. In: Spencer, S.N. (ed.)Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST, vol. Part F131944. Association for Computing Machinery (2017). https://doi.org/10.1145/3139131.3141214 24. Latoschik, M.E., Roth, D., Gall, D., Achenbach, J., Waltemate, T., Botsch, M.: The effect of avatar realism in immersive social virtual realities. In: Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology, pp. 1–10 (2017) 25. Schwind, V., Knierim, P., Tasci, C., Franczak, P., Haas, N., Henze, N.: These are not my hands! Effect of gender on the perception of avatar hands in virtual reality. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp.1577–1582 (2017) 26. Kong, G., He, K., Wei, K.: Sensorimotor experience in virtual reality enhances sense of agency associated with an avatar. Conscious. Cogn. 52, 115–124 (2017) 27. Li, S., Gu, X., Yi, K., Yang, Y., Wang, G., Manocha, D.: Self-Illusion: a study on cognition of role-playing in immersive virtual environments. IEEE Trans. Vis. Comput. Graph. 28(8), 3035–3049 (2022) 28. Willumsen, E.C.: Is my avatar my avatar? Character autonomy and automated avatar actions in digital games. In: DiGRA Conference (2018) 29. van Gisbergen, M.S., Sensagir, I., Relouw, J.: How real do you see yourself in VR? 
The effect of user-avatar resemblance on virtual reality experiences and behaviour. In: Jung, T., tom Dieck, M.C., Rauschnabel, P.A. (eds.) Augmented Reality and Virtual Reality. Progress in IS, pp. 401–409. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-37869-1_32 30. Castronova, E.: Theory of the Avatar. Available at SSRN 385103 (2003)


31. Gonzalez-Franco, M., Peck, T.C.: Avatar embodiment. Towards a standardized questionnaire. Front. Robot. AI 5, 74 (2018) 32. Lugrin, J.L., et al.: Any “body” there? Avatar visibility effects in a virtual reality game. In: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 17–24 (2018) 33. Braun, N., et al.: The senses of agency and ownership: a review. Front. Psychol. 9, Article 535 (2018). https://doi.org/10.3389/fpsyg.2018.00535 34. Yee, N., Bailenson, J.N., Ducheneaut, N.: The proteus effect: implications of transformed digital self-representation on online and offline behavior. Commun. Res. 36(2), 285–312 (2009). https://doi.org/10.1177/0093650208330254 35. Reinhard, R., Shah, K.G., Faust-Christmann, C.A., Lachmann, T.: Acting your avatar’s age: effects of virtual reality avatar embodiment on real life walking speed. Media Psychol. 23(2), 293–315 (2020) 36. Oyanagi, A., et al.: The possibility of inducing the proteus effect for social VR users. In: Chen, J.Y.C., Fragomeni, G., Degen, H., Ntoa, S. (eds.) HCI International 2022 – Late Breaking Papers: Interacting with eXtended Reality and Artificial Intelligence. HCII 2022. Lecture Notes in Computer Science. vol. 13518, pp. 143–158. Springer, Cham(2022). https://doi.org/ 10.1007/978-3-031-21707-4_11 37. Praetorius, A.S., Görlich, D.: The proteus effect: how avatars influence their users’ selfperception and behaviour. In: tom Dieck, M.C., Jung, T.H., Loureiro, S.M.C. (eds.) Augmented Reality and Virtual Reality. PI, pp. 109–122. Springer, Cham (2021). https://doi.org/ 10.1007/978-3-030-68086-2_9 38. Guegan, J., Nelson, J., Lamy, L., Buisine, S.: Actions speak louder than looks: the effects of avatar appearance and in-game actions on subsequent prosocial behavior. Cyberpsychology. J. Psychosoc. Res. Cyberspace 14(4) (2020) 39. Cadet, L.B., Chainay, H.: How preadolescents and adults remember and experience virtual reality: the role of avatar incarnation, emotion, and sense of presence. Int. J. Child-Comput. Interact. 29, 100299 (2021) 40. IJsselsteijn, W., de Kort, Y., Poels, K.: The game experience questionnaire. Eindhoven: Tech. Univ. Eindhoven 3–9 (2013)

Enhancing English Writing Skills through Rubric-Referenced Peer Feedback and Computational Thinking: A Pilot Study

Sri Suciati1, Elsa2, Lusia Maryani Silitonga1,2, Jim-Min Lin3, and Ting-Ting Wu2(B)

1 Universitas PGRI Semarang, Central Java 50232, Indonesia
2 National Yunlin University of Science and Technology, Douliu 64002, Taiwan
[email protected]
3 Feng Chia University, Taichung 40724, Taiwan

Abstract. This study highlights the significance of language and foreign language proficiency in today’s globalized world, where English is the most commonly spoken language. However, students who learn English as a second or foreign language often struggle to master writing skills, which are crucial for effective communication. To address this issue, computational thinking (CT) has been proposed as a method for enhancing English language learning. This study aims to explore students’ perceptions of the usefulness of peer feedback activities in CT-integrated writing courses that utilize rubrics. The research employed a descriptive qualitative approach to analyze the interview transcripts of six participants. The findings indicate that these activities positively impacted participants’ writing skills in terms of content, organization, vocabulary, language use, and mechanics. This study provides valuable insights for educators seeking to improve their English language instruction by incorporating rubric-referenced peer feedback activities into their curriculum. Keywords: Computational Thinking · English Writing · Rubric · Peer-feedback

1 Introduction Language is the fundamental tool used in human communication, allowing us to share our ideas and concepts with others. Since English is the language that is spoken the most frequently all over the world, its significance cannot be downplayed or ignored in today’s interconnected and globalized world. However, it is difficult and stressful for students who view English as a second language (ESL) or a foreign language (EFL) to master the language [1]. Students learning English as a foreign language are influenced by the first and second languages they learn. They must sometimes be instructed in their native language to comprehend the meaning of four skills: writing, listening, reading, and speaking [2]. Effective communication requires strong writing skills. Good writing skills allow us to communicate with clarity and ease to a vastly larger audience than through face-to-face or telephone conversations. Writing is important in language production, which is employed for global knowledge mediation [3]. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Y.-M. Huang and T. Rocha (Eds.): ICITL 2023, LNCS 14099, pp. 587–596, 2023. https://doi.org/10.1007/978-3-031-40113-8_58


However, writing requires complex cognitive skills [4], and EFL students often have writing issues that hinder their writing ability [5]. Grammar, cohesiveness, coherence, paragraph organization, diction, and spelling are among these issues [6]. Furthermore, [7] examined the writing abilities of Indonesian EFL students by assigning them writing tasks to complete in a set amount of time. Regarding the aforementioned writing problems, an innovative strategy was recommended to improve students' English writing skills. Recent studies show that computational thinking (CT) has been proposed as a method for enhancing English language learning [2, 8, 9]. Since CT provides a structured approach to the writing process, it can help to enhance English writing skills. Students learn to think computationally by engaging in a mental process that helps them segment large problems into simpler ones [10, 11]. By breaking down the system into manageable pieces, students are able to understand the process, determine how it works, and devise solutions and procedures that can be applied to other problems with similar characteristics [2]. However, whether the strategy is beneficial to students remains to be determined. Recent research on peer feedback in L2 writing [12] explored the comparative value of self- and peer-feedback for draft revision and writing quality development among students. To fill this research gap, the researchers investigated students' perspectives on the application of computational thinking to English writing skills. The purpose of this study is to respond to the following query: How do students perceive writing activities that utilize the computational thinking process?

2 Literature Review 2.1 Computational Thinking The CT process is a problem-solving process that involves a number of thinking skills and dispositions [13]. CT, a way of thinking that supports problem-solving and critical thinking, is a 21st-century skill like reading, writing, and arithmetic, as it involves computing-based problem-solving, system design, and the understanding of human behavior [10]. A CT process that is applied effectively can encourage students to study English [14]. This process enables students to approach English writing systematically. [15] identified five core components of the computational thinking process—abstraction, decomposition, algorithmic thinking, evaluation, and generalization—that apply in all situations involving problem-solving. According to [16], the decomposition dimension evaluates an individual's propensity to divide a problem into smaller parts. The abstraction dimension evaluates an individual's ability to grasp a problem's main ideas rather than its specifics. The algorithmic thinking dimension evaluates an individual's propensity to methodically solve a problem. The evaluation dimension measures an individual's disposition to assess and compare various problem solutions. The generalization dimension measures an individual's propensity to generalize the solution to other problem-solving contexts with similar characteristics. 2.2 Rubric-Referenced Peer Feedback for Writing Skill Peer feedback is deemed beneficial to students' writing because it encourages active consideration of task-specific processes and criteria [17]. Studies show that students who


receive feedback from a wide range of peers outperform those who receive feedback only from an instructor or subject-matter expert in terms of writing growth [18]. Students who give and receive feedback from their classmates gain experience in problem detection, may become more aware of different kinds of writing problems, and may become more familiar with a variety of approaches to revision [19]. [20] discovered that having students receive feedback from their peers led to an improvement in the quality of their research reports, observable in both the straightforward and more complicated aspects of the students' reports. According to [21], there are three aspects of students' responses to peer feedback: usefulness, factors influencing the perceived usefulness of peer feedback, and students' perspectives on the role of the rubric. To support students' peer feedback activity, providing them with appropriate assessment tools helps them develop their learning processes [22]. Utilizing a rubric is a reliable method for evaluating writing skills and grading written expression papers: the rater knows which criteria to use when grading the student's paper, thereby increasing the reliability of scoring [23].

3 Methods 3.1 Participants Six students were interviewed for this study. They were fourth- and sixth-semester students of the English Department who volunteered to take part in this pilot study. 3.2 Materials Participants were instructed to write on four different topics in pre-writing I, pre-writing II, pre-writing III, and the final writing. After each topic, they reviewed each other's writing by giving scores based on the rubric (Fig. 1). The scored writings were then returned to be discussed and revised. The peer feedback ratings and comments followed the rubric provided by the teacher. The teacher employed a modified version of the rubric, shown in Fig. 1, with a 4-point scale to evaluate various aspects of EFL composition [14]. The rubric consists of five aspects of EFL writing: (1) Content; (2) Organization; (3) Vocabulary; (4) Language use; (5) Mechanics. This study used a semi-structured interview adapted, with some modifications, from [24], which has a research focus comparable to this one: examining students' reflections on their use of peer feedback. The interview questions were as follows:
1. What do you think about your peer's feedback on this essay?
2. What are the aspects of EFL writing that peer feedback can help you improve?
3. What are the aspects of EFL writing that peer feedback cannot help you improve?
4. What factors may influence the usefulness of peer feedback for draft revision?
5. What are your perceptions of the rubric's role in peer feedback practice?


Fig. 1. Student’s peer feedback rubric.

3.3 Pilot Study This pilot study was held over 8 meetings, as shown in Fig. 2. The initial meeting introduced the academic writing training course. In the second meeting, the teachers spent 40 min opening the session by discussing the results of the video assignment test; the assignment and material review had been given one day before the class began. The writing activities were allocated 60 min after the teacher feedback. In the third meeting, the teacher gave feedback on the students' homework from the previous meeting and then paired the students so they could exchange their writings. After collecting feedback from their partners on their drafts, the teachers had them double-check their writings. The participants were encouraged to use Indonesian, their mother tongue, in the peer feedback discussions so they could exchange ideas clearly. The second- and third-meeting activities were repeated twice. In the last meeting, participants wrote their final composition. After completing all activities, participants were interviewed individually in their mother tongue to help them express themselves better. The interviews were carried out in one-on-one sessions conducted by the researchers themselves. 3.4 Data Analysis The data source in this research consists of the interviews, which were recorded and then transcribed. Content analysis was applied in this study. The texts from the transcripts were coded using MAXQDA and categorized based on [21]'s framework: usefulness of the peer feedback activity (content revision, organization revision, vocabulary revision, sentence structures, and mechanic use); factors influencing the perceived usefulness of peer feedback (shift of foci, boredom, limited English proficiency, and insufficient


Fig. 2. Class Activity.

topical knowledge); and students' perspectives on the role of the rubric in peer feedback practice (a guide for feedback practice, a guide for scoring, a rigid criterion, doubts about its applicability, and internalization of the rubric).
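To make the counting step behind the percentages reported in the next section concrete, the following is a minimal sketch of how coded interview segments could be tallied into category percentages after coding. The segment labels and counts are hypothetical placeholders, not the study's data; the actual coding was done in MAXQDA, and this only illustrates the tallying step, not the authors' analysis pipeline.

```python
from collections import Counter

# Hypothetical list of code labels, one per coded interview segment
# (in the study, coding was done in MAXQDA; these labels are placeholders).
coded_segments = [
    "content revision", "content revision", "mechanic use",
    "vocabulary revision", "sentence structures", "organization revision",
    "content revision", "mechanic use",
]

counts = Counter(coded_segments)
total = sum(counts.values())

# Percentage of coded segments falling into each category, as reported in Fig. 3.
for category, n in counts.most_common():
    print(f"{category}: {n} segments ({100 * n / total:.1f}%)")
```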

4 Results and Discussion 4.1 The Usefulness of the Peer Feedback Activity The participants described the usefulness of the peer feedback activity. The results are shown in Fig. 3. The benefits were ranked as follows: content (29.7%), mechanics (21.6%), vocabulary (18.9%), sentence structures (16.2%), and organization (13.5%). For content revision, the students said that after the activity they were able to discuss the given topics and revise their drafts so that they were relevant to those topics. For mechanics, the participants stated that they were more aware of spelling, punctuation, capitalization, and paragraphing. Furthermore, the participants considered that the course provided them with an opportunity to expand their academic vocabulary. Regarding sentence structures or language use, the participants agreed that the activity allowed them to use tenses, number, word order/function, articles, pronouns, and prepositions more precisely. They stated that the course had taught them valuable techniques that can be applied to other academic courses. In general, the course utilizing CT was highly beneficial in enhancing their writing skills and preparing them for academic writing in the future [25]. 4.2 Factors Influencing the Perceived Usefulness of Peer Feedback Responses regarding factors influencing the perceived usefulness of peer feedback were analyzed (Fig. 4). As described by the participants, they were ranked as follows:


Fig. 3. Usefulness of the peer feedback activity.

insufficient topical knowledge (64.3%), limited English proficiency (21.4%), shift of foci (7.1%), and boredom (7.1%). In terms of topical knowledge, participants agreed that individual differences in knowledge and understanding of the content could influence the quality of feedback provided to peers. The context of the written work is also a crucial factor to consider, as divergent interpretations can lead to misunderstandings and confusion; if others do not comprehend the content, even technically accurate writing may put the participants at a disadvantage. Regarding English proficiency level, the students said that proofreaders might not fully grasp the writer's intended meaning when the writing is ambiguous. This may arise from the proofreaders' different proficiency levels, so certain words that are appropriate for academic writing may not be recognized as such by some individuals. The shift of foci and boredom received the same percentage of responses. In sum, topical knowledge and proficiency levels affected the accuracy of the writing assessments, and this variability should be considered when evaluating feedback [21].

Fig. 4. Factors influencing usefulness of the peer feedback activity.


4.3 Students' Perspectives on the Role of the Rubric According to the responses, students' perspectives on the role of the rubric were analyzed (Fig. 5). As described by the learners, the roles were ranked as follows: a guide for scoring (44.4%), a guide for feedback practice (27.8%), internalization of the rubric (5.6%), and a rigid criterion (5.6%). For the role of the rubric as a scoring guide, the interviewees stated that evaluating written work with a rubric is essential as it establishes clear standards for assessment. Without a rubric, they considered that evaluations might be subjective and based solely on personal opinions, which can lead to inconsistent evaluations and unfair treatment. In terms of the rubric as a guide for feedback practice, the participants said that a rubric provides feedback that is actionable and enables them to identify aspects they had improved and aspects that still needed work. Moreover, they opined that a rubric could be a source of internalization in learning, so individuals with less understanding of a particular topic or context could use it as a benchmark to improve their understanding and knowledge. In addition, they considered that a rubric could be used as a fixed standard when evaluating written work. Overall, rubrics are an indispensable tool for evaluating performance [26] and providing constructive feedback in a transparent and objective manner.

Fig. 5. Students’ perspectives on the role of the rubric

5 Conclusion and Future Research After the participants completed a writing training course utilizing the computational thinking process and took part in interview sessions, the results demonstrate that this teaching strategy can be implemented. Several studies have pointed out the relationship between computational thinking and education [27]. Integrating computational thinking into writing is significantly beneficial for participants' skills in revising content, organization, vocabulary, language use, and mechanics. However, all participants recognized that topical knowledge and understanding could affect peer editing feedback. The participants agreed that rubrics are essential for objective, transparent assessment and for providing actionable feedback. Teachers play an essential role in delivering the computational thinking process in writing activities and in guiding students to ensure the learning process is carried out well.


The study's implications are that rubric-referenced peer feedback activities, when integrated with computational thinking, can be an effective method for enhancing students' writing skills in terms of content, organization, vocabulary, language use, and mechanics. This finding has important implications for English language instruction, as it suggests that incorporating these activities into the curriculum can help students improve their writing abilities. Additionally, the study highlights the importance of providing opportunities for students to practice their language skills in a supportive and collaborative environment. By engaging in peer feedback activities, students can receive constructive criticism and learn from their peers' writing styles. Finally, the study underscores the potential benefits of integrating computational thinking into language learning beyond writing skills. As a small pilot study with a small and non-diverse sample, its findings may not be representative of, or applicable to, all English language learners. To establish that the rubric-referenced peer-feedback intervention works for English language learners, it would be beneficial for future research to conduct larger-scale studies with more diverse participant groups to determine whether the findings of this study can be replicated. Additionally, future research could explore how computational thinking can be integrated into other aspects of language instruction beyond writing skills. Finally, it would be interesting to investigate how different types of rubrics and feedback methods impact students' writing skills and their perceptions of their own abilities. Acknowledgment. This research is partially supported by the Ministry of Science and Technology, Taiwan, R.O.C. under Grant No. NSTC110-2511-H-035-002-MY2.

References 1. Rintaningrum, R.: Investigating reasons why listening in English is difficult: voice from foreign language learners. Asian EFL J. 20, 6–15 (2018) 2. Dijaya, S., Wang, G., Makbul, D.S.: Innovation in English language teaching for EFL context: students' perceptions toward writing story activity using computational thinking process. In: Innovation in English Language Teaching for EFL Context: Students' Perceptions Toward Writing Story Activity Using Computational Thinking Process, vol. 8, pp. 72–78 (2017) 3. Fareed, M., Ashraf, A., Bilal, M.: ESL learners' writing skills: problems, factors and suggestions. J. Educ. Soc. Sci. 4, 81–92 (2016) 4. Shahid Farooq, M., Uzair-Ul-Hassan, M., Wahid, S.: Opinion of second language learners about writing difficulties in English language. South Asian Stud. 27, 183–194 (2012) 5. Toba, R., Noor, W.N., Sanu, L.O.: The current issues of Indonesian EFL students' writing skills: ability, problem, and reason in writing comparison and contrast essay. Dinamika Ilmu 19, 57–73 (2019). https://doi.org/10.21093/di.v19i1.1506 6. Ariyanti, A., Fitriana, R.: EFL students' difficulties and needs in essay writing. In: International Conference on Teacher Training and Education 2017 (ICTTE 2017), pp. 32–42 (2017) 7. Hasan, J., Marzuki, M.: An analysis of student's ability in writing at Riau University Pekanbaru - Indonesia. Theory Pract. Lang. Stud. 7, 380–388 (2017). https://doi.org/10.17507/tpls.0705.08


8. Parsazadeh, N., Cheng, P.Y., Wu, T.T., Huang, Y.M.: Integrating computational thinking concept into digital storytelling to improve learners' motivation and performance. J. Educ. Comput. Res. 59, 470–495 (2021). https://doi.org/10.1177/0735633120967315 9. Hsu, T.C., Liang, Y.S.: Simultaneously improving computational thinking and foreign language learning: interdisciplinary media with plugged and unplugged approaches. J. Educ. Comput. Res. 59, 1184–1207 (2021). https://doi.org/10.1177/0735633121992480 10. Wing, J.M.: Computational thinking (2006). https://doi.org/10.1145/1118178.1118215 11. Kale, U., et al.: Computational what? Relating computational thinking to teaching. TechTrends 62(6), 574–584 (2018). https://doi.org/10.1007/s11528-018-0290-9 12. Mawlawi Diab, N.: Assessing the relationship between different types of student feedback and the quality of revised writing. Assess. Writ. 16, 274–292 (2011). https://doi.org/10.1016/j.asw.2011.08.001 13. Yadav, A., Zhou, N., Mayfield, C., Hambrusch, S., Korb, J.T.: Introducing computational thinking in education courses. In: Proceedings of the 42nd ACM Technical Symposium on Computer Science Education (2011) 14. Nurhayati, N., Silitonga, L.M., Subiyanto, A., Murti, A.T., Wu, T.T.: Computational thinking approach: its impact on students' English writing skills. In: Huang, Y.M., Cheng, S.C., Barroso, J., Sandnes, F.E. (eds.) Innovative Technologies and Learning. ICITL 2022. Lecture Notes in Computer Science, vol. 13449, pp. 423–432. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-15273-3_47 15. Selby, C.C., Woollard, J.: Computational thinking: the developing definition (2010) 16. Tsai, M.J., Liang, J.C., Lee, S.W.Y., Hsu, C.Y.: Structural validation for the developmental model of computational thinking. J. Educ. Comput. Res. 60, 56–73 (2021). https://doi.org/10.1177/07356331211017794 17. Huisman, B., Saab, N., van Driel, J., van den Broek, P.: Peer feedback on academic writing: undergraduate students' peer feedback role, peer feedback perceptions and essay performance. Assess. Eval. High. Educ. 43, 955–968 (2018). https://doi.org/10.1080/02602938.2018.1424318 18. Kaufman, J.H., Schunn, C.D.: Students' perceptions about peer assessment for writing: their origin and impact on revision work. Instr. Sci. 39, 387–406 (2011). https://doi.org/10.1007/s11251-010-9133-6 19. Patchan, M.M., Schunn, C.D.: Understanding the benefits of providing peer feedback: how students respond to peers' texts of varying quality. Instr. Sci. 43, 591–614 (2015) 20. Tan, J.S., Chen, W.: Peer feedback to support collaborative knowledge improvement: What kind of feedback feed-forward? Comput. Educ. 187, 104467 (2022). https://doi.org/10.1016/j.compedu.2022.104467 21. Wang, W.: Students' perceptions of rubric-referenced peer feedback on EFL writing: a longitudinal inquiry. Assess. Writ. 19, 80–96 (2014). https://doi.org/10.1016/j.asw.2013.11.008 22. Hasan, A.A.A.: Effectiveness of brain-based teaching strategy on students' achievement and score levels in heat energy. J. Innov. Educ. Cult. Res. 3, 20–29 (2022). https://doi.org/10.46843/jiecr.v3i1.45 23. Kahveci, N., Şentürk, B.: A case study on the evaluation of writing skill in teaching Turkish as a foreign language (2021) 24. Andrade, H., Du, Y.: Student responses to criteria-referenced self-assessment. Assess. Eval. High. Educ. 32, 159–181 (2007). https://doi.org/10.1080/02602930600801928 25. Yadav, A., Stephenson, C., Hong, H.: Computational thinking for teacher education. Commun. ACM 60, 55–62 (2017). https://doi.org/10.1145/2994591


26. Chan, Z., Ho, S.: Good and bad practices in rubrics: the perspectives of students and educators. Assess. Eval. High. Educ. 44, 533–545 (2019). https://doi.org/10.1080/02602938.2018.1522528 27. Wing, J.M.: Computational thinking's influence on research and education for all (Influenza del pensiero computazionale nella ricerca e nell'educazione per tutti). Ital. J. Educ. Technol. 25, 7–14 (2017). https://doi.org/10.17471/2499-4324/922

The Research of Elementary School Students Apply Engineering Design Thinking to Scratch Programming on Social Sustainability Wei-Shan Liu, Hsueh-Cheng Hsu, and Ting-Ting Wu(B) Graduate School of Technological and Vocational Education, National Yunlin University of Science and Technology, Douliou, Taiwan [email protected]

Abstract. Computational thinking is one of the important skills of the 21st century. It gives students the ability to organize information and to identify and solve problems. In recent years, the United Nations has also proposed indicators for a sustainable global society, encouraging school-age children to explore and sustain the world in which they live. The purpose of this research is to study how learners apply Scratch and the engineering design thinking process: learners used block-based programming built around the concept of earth sustainability to create sustainability-themed games, and the study analyzed students' satisfaction with the course. The research participants were 47 middle-grade students who took the information course in the 2022 school year at an elementary school in Yunlin County, Taiwan. The results show that students' satisfaction with the learning activities was very high. Most students enjoyed learning the Scratch course, liked creating their own Scratch games, and were willing to share and teach their works to peer students. The scale also indicates that, given more time, students could use Scratch to achieve better performance. These findings can be used to improve learning design in the future. Keywords: Scratch programming · Engineering design thinking · social sustainability

1 Introduction In today's rapidly advancing world, the development of various technologies has changed many aspects of work systems and of the technologies that promote and support human work. In recent years, global sustainability issues have received increasing attention, and school education has also begun to focus on cultivating students' sustainable thinking abilities [1]. Within this trend, combining engineering design thinking with Scratch programming can help students better understand and explore social sustainability issues [2]. The interdisciplinary education model aims to cultivate learners' technological literacy, focusing on developing their abilities to use, manage, evaluate, and apply technology. The ability to solve problems is considered an important long-term


life skill, because problem-solving skills develop as learners search for rational solutions until decisions are made that guide action toward answers, through technological research and development, scientific methods, and scientific inquiry [3]. Learners must have the opportunity to apply their knowledge to design methods or processes that solve problems related to everyday life; this is the product of the engineering design process. The engineering design process is a problem-solving method that uses fundamental scientific, mathematical, and engineering concepts to design optimal solutions, plan and implement them, test, evaluate, and improve them, and report the resulting outcomes [4]. Our study examined students' satisfaction with the engineering design thinking course, including the problems and obstacles encountered by students. The results of this study will be used to design and develop a learning environment curriculum and to formulate teaching strategies for future engineering design thinking courses. The concept of sustainable urban-rural development emphasizes the importance of integrating urban and rural areas, protecting natural resources, and promoting economic development and social progress [5]. This requires collaboration and effort among governments, businesses, social organizations, and individuals. Achieving sustainable urban-rural development requires not only professional knowledge and skills but also a global perspective and a spirit of continuous learning [6]. With the continuous advancement of technology, students' information literacy has become essential knowledge and skill, and learning Scratch is therefore one of the necessary skills. By using Scratch to create projects, students can apply engineering design thinking to various creations. Scratch is programming software developed by the MIT Media Lab that can help students learn programming, logical thinking, and creativity [7]. Integrating engineering design thinking into Scratch learning can help students understand how to apply sustainability concepts to practical innovative solutions, such as games, simulators, and interactive web pages. In these projects, students can explore and solve social sustainability issues; for example, they can develop a waste sorting game to promote waste reduction and resource recycling [8]. This article explores the concept and practice of sustainable urban and rural development, introducing its importance and challenges as well as the strategies and methods for promoting it. It is hoped that this will inspire readers to think and act on sustainable urban and rural development, promoting the sustainability and harmony of urban and rural development. With the advancement of globalization, sustainable development in urban and rural areas is increasingly affected. This study applies the engineering design thinking process, combined with an information technology curriculum and Scratch programming, to help students explore social sustainability issues and improve their programming and sustainable thinking skills. This will help cultivate future innovators and leaders and promote more environmentally friendly, fair, and sustainable social development.


2 Literature Review 2.1 Social Sustainability The concept of "sustainable development" includes three principles: Fairness, Sustainability, and Commonality. At the societal level, it advocates for fair distribution to meet the basic needs of present and future generations [9]. At the economic level, it advocates for sustainable economic growth based on the protection of the earth's natural systems. At the natural ecology level, it advocates for harmonious coexistence between humans and nature [10]. The journey towards sustainability has no end point, and continuous communication is the key to success. Due to challenges such as climate change, economic growth, social equity, and wealth inequality, in 2015 the United Nations announced the "2030 Sustainable Development Goals" (SDGs), which include 17 goals such as eliminating poverty, mitigating climate change, and promoting gender equality. These SDGs guide global efforts towards sustainability, with 193 countries agreeing to work towards achieving the 17 goals by 2030 [11, 12]. Comprehensive research on sustainable development generally encompasses various aspects, including the concept of sustainable development, conservation of water resources, land conservation, conservation of marine resources, biodiversity, global change and sustainable development, energy and economic development, land use planning, urban and rural development, environmental management and corporate operation, and sustainable society [13]. Some studies explore the Action Plan for Nature Conservation in the New Century from perspectives such as humans and nature, biological resources, habitat management, and organizational systems, while others examine environmental protection policies from the perspectives of environmental conservation and pollution prevention [14]. Therefore, by incorporating the spirit of sustainable urban and rural development into the field of information technology education, we can enhance primary school students' understanding of social sustainability and its practical applications. Through learning Scratch programming, students can develop their interest in and concepts related to social sustainability. 2.2 The Relationship Between Design Thinking and Scratch Engineering design thinking is a cross-disciplinary combination of the engineering design process and design thinking. It is a fundamental skill in the engineering field and should be promoted in school education [15]. Engineering design is often described as an ill-structured form of design problem-solving; alternative methods and multiple solution approaches are typical in engineering design. Engineering design is a learning management model that encourages students to develop creative thinking and problem-solving abilities [16]. Learning to program with Scratch can help improve students' computational thinking and sustainability awareness and can inspire their creativity and problem-solving abilities [17]. The engineering design process uses holistic thinking and progressive action to solve problems or meet needs. In each step of the engineering design process, students are encouraged to use creative and holistic thinking to gather data and information that may be used


and to articulate concepts in order to explain and communicate ideas to others [18].

3 Research Method 3.1 Participants The research participants in this study were elementary school students from three schools in southern Taiwan. They were all third- and fourth-grade students between the ages of 8 and 10. The study included four classes with a total of 47 students: 15, 11, 12, and 9 students per class. Among them, there were 26 boys and 21 girls. The students used their information course time this semester to learn the concept of sustainable urban-rural development, which was incorporated into the Scratch programming course. All of the students participating in this study were learning Scratch programming for the first time and had no prior experience in this field. 3.2 Research Process

Fig. 1. The learning process


Fig. 2. The students’ work

This study utilized a one-semester information course, with a total of 18 weeks and one 40-min class per week. The research process is shown in Fig. 1. From the first to the fifth week, students learned about the Scratch interface and its operation and were able to write simple Scratch programs. Next, students were introduced to the concept and spirit of sustainable cities and towns, using picture books and videos to help them understand why we need to learn about them. In weeks five and six, students analyzed and made sense of the relevant information and used their understanding of sustainable cities and towns to think about how to incorporate the concept into Scratch game programming. In weeks seven and eight, we introduced the concept of engineering design thinking, allowing students to begin learning about solution design. Students discussed with their groups


how to use their current Scratch programming abilities to create games that reflect the ideas of sustainable cities and towns, and the teacher provided basic programming concepts for them to apply. Then, from weeks nine to twelve, students began planning and developing, using their Scratch programming abilities to create small games that reflect the idea of sustainable cities and towns. During the development process, students had many creative ideas. From weeks thirteen to sixteen, students tested and evaluated their creations and made improvements: they could add more Scratch programming blocks to create more interesting gameplay and design different appearances to make the games more engaging. Finally, in weeks 17–18, each group presented its self-made product, shared its learning experiences, outcomes, and feelings, and completed the course feedback questionnaire. Figure 2 shows the students' work, and Fig. 3 shows the students' learning status throughout the course.

Fig. 3. The students’ learning status

3.3 Research Tool In the last week, students were asked to fill out a course feedback questionnaire to understand their feelings about the course after learning. The questionnaire was in the form of a Likert scale with options of “strongly agree”, “agree”, “neutral”, “disagree”, and “strongly disagree”, scored as 5, 4, 3, 2, and 1, respectively. There were a total of 23 questions. 3.4 Data Collection and Analysis Basic statistical analysis was used to analyze the data, using mean and standard deviation. The problems, obstacles, and suggestions in teaching and learning management were sorted and analyzed using the method of descriptive explanation.


4 Research Results and Discussion 4.1 Course Feedback Questionnaire Analysis The course feedback questionnaire consists of 23 questions, among which questions 8, 9, and 11 are reverse-scored. Therefore, after the students completed the questionnaire, questions 8, 9, and 11 were reversed before the statistical analysis (a small sketch of this step is given after Table 1). The research results show that students expressed high overall agreement in their feedback on the course combining engineering design thinking and Scratch programming (M = 4.06, SD = 0.45), as shown in Table 1.

Table 1. The course feedback questionnaire.

Num | Question | M | SD | Agree Scale
Q1 | I find it easy to operate Scratch | 3.59 | 1.15 | neutral
Q2 | I think Scratch is a fun software | 4.57 | 0.80 | agree
Q3 | I believe the concepts learned from Scratch can be applied to other subjects | 3.68 | 1.00 | neutral
Q4 | I like to share my Scratch projects with classmates | 4.19 | 1.05 | agree
Q5 | I am afraid of being ridiculed by classmates for my Scratch projects | 3.12 | 1.43 | neutral
Q6 | I think Scratch allows me to unleash my creativity | 4.53 | 0.65 | agree
Q7 | I feel a sense of accomplishment when I learn programming concepts through Scratch | 4.44 | 0.65 | agree
Q8 | I prefer it when the teacher or classmates directly tell me how to write the code rather than figuring it out myself | 3.46 | 1.26 | neutral
Q9 | I find Scratch boring and it is difficult to capture my interest | 4.06 | 1.32 | agree
Q10 | If I had more time, I could improve my Scratch skills | 4.23 | 1.12 | agree
Q11 | I prefer fixed topics rather than creating my own | 3.36 | 1.20 | neutral
Q12 | I find it helpful to observe classmates' projects or the teacher's coding approach | 4.34 | 0.96 | agree
Q13 | I think participating in activity-based lessons can help me improve my learning ability | 4.34 | 0.81 | agree
Q14 | When I see a response on the Scratch stage screen, I feel more accomplished | 4.48 | 0.74 | agree
Q15 | Having program instructions as an aid makes learning programming more interesting | 4.48 | 0.74 | agree
Q16 | If a classmate needs help, I would proactively assist them | 4.00 | 1.00 | agree
Q17 | I prefer to think and create on my own rather than discussing with others | 3.17 | 1.05 | neutral
Q18 | I think using each student's expertise to form groups can make the team more efficient | 4.25 | 0.98 | agree
Q19 | I believe discussing with classmates can help me create better Scratch projects | 4.38 | 0.82 | agree
Q20 | I think everyone needs me when working in a group project | 3.65 | 1.29 | neutral
Q21 | I feel this activity can increase my communication opportunities with classmates | 4.19 | 0.92 | agree
Q22 | In class, I proactively ask the teacher when I have questions | 4.42 | 0.87 | agree
Q23 | I think the teaching materials provided by the teacher are rich in content | 4.57 | 0.85 | agree
Overall course feedback | | 4.06 | 0.45 | agree
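As a minimal sketch of the scoring step described at the start of this section, the code below reverse-scores the negatively worded items (questions 8, 9, and 11) on the 5-point scale and computes each item's mean and standard deviation. The response data are made-up placeholders; only the reverse-scoring rule (6 minus the raw rating) and the descriptive statistics reflect the procedure described in the text, and the paper does not state whether the sample or population standard deviation was used.

```python
import statistics

REVERSED_ITEMS = {8, 9, 11}  # negatively worded items reversed before analysis

# Hypothetical raw responses: {question number: list of 1-5 Likert ratings}.
raw_responses = {
    8: [2, 1, 3, 2, 2],
    9: [1, 2, 1, 1, 2],
    10: [5, 4, 5, 4, 5],
}

for item, ratings in raw_responses.items():
    # Reverse-score negatively worded items so that 5 always means a favourable response.
    scores = [6 - r if item in REVERSED_ITEMS else r for r in ratings]
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)  # sample standard deviation (assumption)
    print(f"Q{item}: M = {mean:.2f}, SD = {sd:.2f}")
```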

5 Conclusions and Future Directions This study aimed to explore the application of engineering design thinking in Scratch programming among primary school students, with a focus on socially sustainable themes. The research found that applying engineering design thinking to Scratch programming can enhance students' problem-solving, creativity, and teamwork skills, while also enabling them to gain a deeper understanding of socially sustainable issues. Specifically, this study designed a series of Scratch programming activities to teach students how to use the engineering design thinking process for problem-solving and innovative design. These activities covered socially sustainable themes such as designing environmentally friendly cities, reducing waste, and conserving energy. In these activities, students had to design their own solutions and work collaboratively to transform them into feasible Scratch programs. The results showed that these activities significantly improved students' problem-solving and creativity skills, allowing them to apply the engineering design thinking process more effectively to real-world problems. Additionally, students demonstrated good teamwork skills, which are crucial for future learning and work. Acknowledgments. This work is partially supported by the Ministry of Science and Technology, Taiwan under grants MOST 110-2511-H-224-003-MY3 and MOST 111-2628-H-224-001-MY3.

References 1. Tang, K.H.D.: Correlation between sustainability education and engineering students’ attitudes towards sustainability. Int. J. Sustain. High. Educ. 19(3), 459–472 (2018)


2. Topalli, D., Cagiltay, N.E.: Improving programming skills in engineering education through problem-based game projects with Scratch. Comput. Educ. 120, 64–74 (2018) 3. Nebel, S., Schneider, S., Rey, G.D.: Mining learning and crafting scientific experiments: a literature review on the use of minecraft in education and research. J. Educ. Technol. Soc. 19(2), 355–366 (2016) 4. Dunlap, B.U., et al.: Heuristics-based prototyping strategy formation: development and testing of a new prototyping planning tool. In: ASME International Mechanical Engineering Congress and Exposition, vol. 46606, p. V011T14A019. American Society of Mechanical Engineers (2014) 5. Edström, K., Kolmos, A.: PBL and CDIO: complementary models for engineering education development. Eur. J. Eng. Educ. 39(5), 539–555 (2014) 6. Kanadli, S.: A meta-summary of qualitative findings about STEM education. Int. J. Instr. 12(1), 959–976 (2019) 7. Smirnova, E.V., Clark, R.P., (eds.).: Handbook of Research on Engineering Education in a Global Context. IGI Global, Pennsylvania (2018) 8. Blikstein, P.: Gears of our childhood: constructionist toolkits, robotics, and physical computing, past and future. In: Proceedings of the 12th International Conference on Interaction Design and Children, pp. 173–182 (2013) 9. Bantider, A., Haileslassie, A., Alamirew, T., Zeleke, G.: Soil and water conservation and sustainable development. In: Leal Filho, W., Azul, A.M., Brandli, L., Lange Salvia, A., Wall, T. (eds.) Clean Water and Sanitation. ENUNSDG, pp. 551–563. Springer, Cham (2022). https://doi.org/10.1007/978-3-319-95846-0_138 10. Locke, H., et al.: Three global conditions for biodiversity conservation and sustainable use: an implementation framework. Natl. Sci. Rev. 6(6), 1080–1082 (2019) 11. Xu, Z., et al.: Assessing progress towards sustainable development over space and time. Nature 577(7788), 74–78 (2020) 12. Huan, Y., Liang, T., Li, H., Zhang, C.: A systematic method for assessing progress of achieving sustainable development goals: a case study of 15 countries. Sci. Total Environ. 752, 141875 (2021) 13. Kumari, J., Behura, A.K., Kar, S.: Women’s attitude towards environment sustainability through natural preservation. Probl. Ekorozwoju 15(1), 103–107 (2020) 14. Karayel, D., Sarauskis, E.: Environmental impact of no-tillage farming. Environ. Res. Eng. Manag. 75(1), 7–12 (2019) 15. Nagel, J.K., Rose, C., Beverly, C., Pidaparti, R.: Bio-inspired design pedagogy in engineering. In: Schaefer, D., Coates, G., Eckert, C. (eds.) Design education today. Springer, Cham, pp. 149–178 (2019). https://doi.org/10.1007/978-3-030-17134-6_7 16. Moallem, M.: Effects of PBL on learning outcomes, knowledge acquisition, and higher-order thinking skills. The Wiley Handbook of Problem-Based Learning, pp. 107–133 (2019) 17. Caeli, E.N., Yadav, A.: Unplugged approaches to computational thinking: a historical perspective. TechTrends 64(1), 29–36 (2020) 18. Zhao, L., Liu, X., Wang, C., Su, Y.S.: Effect of different mind mapping approaches on primary school students’ computational thinking skills during visual programming learning. Comput. Educ. 181, 104445 (2022)

The Effects of Prior Knowledge on Satisfaction and Learning Effectiveness in Using an English Vocabulary Learning System Jui-Chi Peng1(B) , Yen-Ching Kuo2 , and Gwo-Haur Hwang3 1 Holistic Education Center, Cardinal Tien Junior College of Healthcare and Management,

Taipei City, Taiwan [email protected] 2 Department of Health and Hospitality, Cardinal Tien Junior College of Healthcare and Management, Taipei City, Taiwan 3 Graduate School of Intelligent Data Science, National Yunlin University of Science and Technology, Douliu, Taiwan

Abstract. In Taiwan, learning English vocabulary is never easy for learners of English as a second language. In fact, for students learning English, vocabulary acquisition and development can be one of the most challenging and frustrating experiences. Additionally, the learners' previous learning experience and prior knowledge can also impact learning effectiveness. This study aims to explore the relationship between satisfaction and learning effectiveness when using an English vocabulary learning system with junior college freshmen at the beginning of their school year. The research subjects consisted of 233 students from a junior college in northern Taiwan who participated in a 4-week empirical research study. The research yielded positive results, demonstrating progress in English vocabulary learning through the use of the vocabulary learning system. Furthermore, it was observed that learners with lower levels of prior knowledge expressed higher levels of satisfaction and improvement compared to those with higher levels of prior knowledge. Based on these research findings, the authors provide conclusions and teaching suggestions for future reference in language teaching. Keywords: Prior Knowledge · English Vocabulary Learning System · Learning Effectiveness

1 Introduction As the lingua franca, English plays a crucial role in international communication and presents both challenges and opportunities for educational contexts worldwide [1]. Many researchers indicate that vocabulary is a core and fundamental component of language learning. Vocabulary size determines the level of communication and information comprehension [2]. Other studies have also shown a significant correlation between students' vocabulary knowledge and their reading comprehension abilities [3]. Compared to native language learners, ESL (English as a Second Language) learners rely more profoundly


on vocabulary knowledge in reading [4]. However, learners often struggle to remember the vocabulary they have learned. In an effort to catch up with the teaching schedule, teachers often provide students with long vocabulary lists to memorize, neglecting to instruct them on learning strategies and techniques for memorizing new words [5], such as phonics rules for spelling or how to use a dictionary. Nation [6] pointed out that the majority of learners struggle to remember a new word upon initial encounter. It typically takes at least five exposures for individuals to grasp the meaning and usage of a word. There exists a correlation between vocabulary learning and word exposure frequency. The more frequently learners are exposed to a word, the longer it remains in their memory. Furthermore, the higher the frequency of exposure to a word, the faster and more effectively learners can comprehend and process it [7]. This results in more efficient learning and reduces learners’ anxiety towards memorizing words. Technology serves as an ideal tool for ESL learners. Utilizing emails, word processors, and reading software enables learners to engage in a secure and interactive environment at their own pace [8]. Information technology has been integrated into foreign language teaching to enhance word exposure for learners [9]. By immersing themselves in the target language’s context, learners can naturally acquire the language. Additionally, integrating information technology in teaching enables teachers to create an environment rich in language input for learners. Consequently, the Global Learning & Assessment Development (GLAD) has developed an English vocabulary practice and testing system to assess learners’ understanding of words. Learners possess prior experiences, skills, and cognitive abilities before acquiring new knowledge. This prior knowledge exists in the form of schemata [10]. Consistent with previous studies, it has been observed that prior knowledge impacts learning effectiveness. Chou and Tsai’s [11] findings suggest that learners with higher prior knowledge benefit more from mobile vocabulary learning. Furthermore, both prior knowledge and learning approach significantly influence the learning performance of e-learners, with prior knowledge having a stronger impact on performance than the learning approach [12]. In other words, when utilizing this type of learning system, learners demonstrate varying responses based on their levels of prior knowledge. Consequently, this research aims to examine the influence of prior knowledge on satisfaction and learning effectiveness when utilizing the English vocabulary learning system.

2 Introduction to the English Vocabulary Learning System 2.1 System Structure This research adopted the English vocabulary learning system developed by GLAD (the Global Learning and Assessment Development). The structure, as shown in Fig. 1, was divided into two sections: the learners and the teacher. The learners section included the following six learning categories: read the Chinese and spell the English (RCSE), read the English and choose the Chinese (RECC), listen to the English and choose the Chinese (LECC), listen to the English and choose the English (LECE), read the Chinese and choose the pronunciation (RCCP), and read the English and choose the pronunciation


(RECP). Furthermore, an archive was available for users to review their own learning history, and the teacher could examine individual learning history and records as well. To effectively memorize vocabulary, learners needed more than just spelling strategies. Therefore, the category of RECC aimed to train students to be able to “read” English words. The two categories, LECC and LECE, provided two-way training designed to strengthen “English listening” and “Chinese reading” abilities. The next two categories, RCCP and RECP, encouraged students to pronounce the English words on their own, reinforcing their response abilities instead of relying on traditional cramming learning styles. Finally, students could use the first category, RCSE, to assess their spelling abilities. By integrating Chinese, English, and pronunciation into a unit, learners could familiarize themselves with the meaning and pronunciation of words before spelling and reading (Reading, Listening, Spelling, and Speaking, RLSS) [13]. As a result, an intuitive English response ability was developed, followed by appropriate English learning and application abilities.

Fig. 1. System Structure

2.2 Functions and Interfaces of the System The learners would enter the homepage after logging in, as shown in Fig. 2, and then they could choose the appropriate level to practice, as shown in Fig. 3. After that, they could select the intended level to practice based on Table 1. Next, the learners would choose among the six categories, as shown in Fig. 4, and proceed with vocabulary practice and tests, as shown in Fig. 5. At the end of the test, the learners could immediately view their test report card, as shown in Fig. 6. If the learners wanted to know the answers to their mistakes, they could review the test results, as shown in Fig. 7, in order to improve their future test performance.


The system assessed the learners' vocabulary size by setting different levels and categories, as shown in Table 1. The testing methodology is outlined in Table 2. At the beginning of the English learning process, if learners gave up because of the difficulties they encountered, a wider gap would develop between them and their peers. However, it was important to establish a successful learning experience and maintain a high success rate for these low achievers [14]. Hence, the system aimed to offer different levels and vocabulary sizes for learners to choose from based on their learning objectives. Learners could select an appropriate level to practice according to their abilities, and as they made progress, their confidence in English learning would be steadily reinforced. In addition to English/Chinese recognition, the system employed a listening test to enhance listening and speaking abilities, which determined whether learners could understand the targeted vocabulary. Compared to traditional pen-and-paper tests, the system performed better in examining learners' vocabulary comprehension. The practice hours and results were recorded, allowing learners to set their own vocabulary learning pace and objectives. Teachers could also regularly monitor learners' learning progress. This mechanism enabled learners to conduct regular self-testing, track their vocabulary size, and achieve their learning objectives. Furthermore, teachers could instruct learners to practice and test according to their individual levels or vocabulary size. Appropriate teaching materials or lesson plans could be arranged based on learners' vocabulary size.

Fig. 2. Homepage


Fig. 3. Graded Frame

Fig. 4. Learning Categories

Fig. 5. Test-RECC


Fig. 6. Test Report Card

Fig. 7. Examining Test Results

Table 1. Vocabulary Levels

Level | Vocabulary Size | Recommended to Students in
A | 7000, 8000, 9000, 10000 words | college
B | 5000, 6000, 7000, 8000 words | junior college
C | 3000, 4500, 6000 words | high school
D | 2600, 3300, 4000 words | vocational high school
E | 1350, 1700, 2000 words | junior high school
F | 250, 500, 750, 1000 words | elementary school


Table 2. Learning Categories

Category | Content | Type | Number of Questions | Total Score | Time Limit (minutes)
1 | RCSE | Writing | 100 | 100 | 20
2 | RECC | Reading | 100 | 100 | 10
3 | LECC | Listening | 100 | 100 | 10
4 | LECE | Listening/Reading | 100 | 100 | 10
5 | RCCP | Listening | 100 | 100 | 10
6 | RECP | Reading | 100 | 100 | 10
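The test configuration in Table 2 can be summarized as a small data structure. The sketch below is not part of the GLAD system; it only illustrates, under that assumption, how the category settings could be represented and queried.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestCategory:
    code: str            # short name used in the paper, e.g. "RCSE"
    skill: str           # skill type exercised by the category
    questions: int       # number of questions per test
    total_score: int     # maximum score per test
    time_limit_min: int  # time limit in minutes

# Values transcribed from Table 2 above.
CATEGORIES = [
    TestCategory("RCSE", "Writing", 100, 100, 20),
    TestCategory("RECC", "Reading", 100, 100, 10),
    TestCategory("LECC", "Listening", 100, 100, 10),
    TestCategory("LECE", "Listening/Reading", 100, 100, 10),
    TestCategory("RCCP", "Listening", 100, 100, 10),
    TestCategory("RECP", "Reading", 100, 100, 10),
]

# Example query: points and time available per question in each category.
for c in CATEGORIES:
    print(f"{c.code}: {c.total_score / c.questions:.1f} point(s) per question, "
          f"{c.time_limit_min * 60 / c.questions:.0f} s per question")
```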

3 Research Method 3.1 Research Structure As shown in Fig. 8, the research studied how English learning satisfaction and learning effectiveness were influenced by different levels of prior knowledge. Hypothesis 1: Prior knowledge influences satisfaction with using the English vocabulary learning system. Hypothesis 2: Prior knowledge influences learning effectiveness when using the English vocabulary learning system.

Fig. 8. Research Structure

3.2 Research Instruments The research instruments used in this study included the English vocabulary learning system developed by GLAD. Both the pretests and posttests were conducted using this system. To ensure the comparability of the test results, the vocabulary size was set at 1,350 words for both the pretests and posttests, and the test questions were randomly ordered by the system to keep the difficulty consistent.


The learning satisfaction scale used in this study was adapted from the studies of Deng [15] and Zheng and Wang [16]. The scale was constructed as a Likert scale [17] consisting of five points ranging from "strongly agree" to "strongly disagree," with each point assigned a corresponding value from 5 to 1, respectively. All survey questions were formulated as positive, closed-ended items. The total score obtained by summing the points on the questionnaire represented the level of satisfaction; a higher score indicated higher satisfaction. 3.3 Research Subjects The research subjects consisted of 233 nursing students from a nursing junior college in northern Taiwan. Eight invalid survey samples were excluded, resulting in a total of 225 nursing students who participated in the study. The experimental subjects were junior college freshmen who had recently completed the Comprehensive Assessment Program for Junior High School Students. Accordingly, the reading and listening test questions for the English subject were aligned with Appendix 3 of the Curriculum Guidelines of 12-Year Basic Education, which lists 1,200 words and was issued by the Ministry of Education in March 2006. Considering the English abilities of the subjects, the instructor had the learners practice with a vocabulary level of 1,350 words, which was also used for the pretest and posttest. 3.4 Experimental Procedure The research procedure began by inviting all 233 learners to participate in the experiment. First, the learners were asked to complete personal information forms, followed by a pretest. Next, the learners logged onto the English vocabulary learning system, and a 4-week-long teaching experiment was conducted. Finally, the learners took a posttest and completed the satisfaction survey, as depicted in Fig. 9.

Fig. 9. Experimental Procedure


4 Results and Discussion 4.1 Satisfaction Reliability Analysis The Cronbach's Alpha value of the satisfaction questionnaire was 0.944, indicating that the reliability of the scale was acceptable. 4.2 The Influence of Prior Knowledge on the Satisfaction in Using the System To examine the impact of prior knowledge on satisfaction when using the English vocabulary learning system, the research conducted an independent sample t-test on the satisfaction survey responses of learners with high- and low-level prior knowledge (a computational sketch of this analysis step follows Table 3). The results are presented in Table 3, indicating that learners with low-level prior knowledge expressed significantly higher satisfaction compared to those with high-level prior knowledge. In other words, the English vocabulary learning system was found to be more suitable for learners with low-level prior knowledge.

Table 3. Independent sample t-test of prior knowledge on satisfaction with the system

Dimensions | Prior Knowledge | Samples | Average | Standard Deviation | t Value
Satisfaction | High | 116 | 3.76 | 0.74 | −4.047***
Satisfaction | Low | 109 | 4.15 | 0.68
*** p < .001
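A minimal sketch of these two analysis steps is given below: a Cronbach's alpha function for the satisfaction items and an independent-samples t-test between the two prior-knowledge groups. The arrays are randomly generated placeholders that will not reproduce the reported values, numpy and scipy are assumed to be available, and details the paper does not report (such as the equal-variance assumption of the t-test) are left at the library defaults.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)

# Placeholder satisfaction data: 30 respondents x 10 items (random, so alpha is not meaningful here).
item_matrix = rng.integers(3, 6, size=(30, 10)).astype(float)
print(f"Cronbach's alpha: {cronbach_alpha(item_matrix):.3f}")

# Placeholder per-learner satisfaction means for the two groups (group sizes and moments as in Table 3).
high_prior = rng.normal(3.76, 0.74, size=116)
low_prior = rng.normal(4.15, 0.68, size=109)

# Independent-samples t-test comparing the two groups' satisfaction.
t, p = stats.ttest_ind(high_prior, low_prior)
print(f"t = {t:.3f}, p = {p:.4f}")
```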

4.3 The Influence on the Learning Effectiveness in Using the System To assess the influence of the system on learners’ learning effectiveness, the research conducted a paired sample t-test using the pre and posttests. The tests were divided into three parts: reading, listening, and total score (the sum of listening and reading tests). The purpose was to examine the progress between the pre and posttests, and the results are presented in Table 4. Across all three parts of the pre and posttests, there was a noticeable improvement in test performance, indicating positive learning effectiveness when using the system.


Table 4. Paired sample t-test of the pretest and posttest scores

Paired Sample | Mean | Numbers | Standard Deviation | Mean Difference (Pretest − Posttest) | t Value
Reading: Pretest | 57.68 | 225 | 13.08 | −6.45 | −12.266***
Reading: Posttest | 64.12 | 225 | 10.95
Listening: Pretest | 59.45 | 225 | 11.64 | −6.04 | −12.246***
Listening: Posttest | 65.50 | 225 | 10.41
Total: Pretest | 117.13 | 225 | 23.03 | −12.49 | −11.113***
Total: Posttest | 129.62 | 225 | 19.97
*** p < .001
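The paired comparison described above can be sketched as follows. The score arrays are placeholders for each learner's pretest and posttest results, and scipy.stats.ttest_rel is used as one standard way to run a paired-samples t-test; the paper does not state which software was actually used.

```python
import numpy as np
from scipy import stats

# Placeholder reading scores for the same five learners before and after the 4-week experiment.
pretest = np.array([55.0, 60.0, 48.0, 62.0, 57.0])
posttest = np.array([61.0, 66.0, 55.0, 65.0, 64.0])

# Paired-samples t-test: each learner's posttest is compared with their own pretest.
t, p = stats.ttest_rel(pretest, posttest)
print(f"mean difference (pretest - posttest) = {np.mean(pretest - posttest):.2f}")
print(f"t = {t:.3f}, p = {p:.4f}")
```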

4.4 The Influence of Prior Knowledge on the Progress of Learning Effectiveness in Using the System

To determine the impact of prior knowledge on learning effectiveness when using the system, the study divided the samples into two groups: learners with high-level prior knowledge and learners with low-level prior knowledge. First, the researcher calculated the sums of the reading, listening, and total scores separately for the pretests and posttests. Then, the difference between the posttest sum and the corresponding pretest sum was calculated for each part; these differences indicated the progress made. The results, presented in Table 5, were analyzed using an independent sample t-test to compare the level of progress between the two groups. The table demonstrates that learners with low-level prior knowledge made greater progress than those with high-level prior knowledge. It also suggests that the system had a beneficial effect on learners with low-level prior knowledge.

Table 5. Independent sample t-test of prior knowledge on the progress of learning effectiveness

Dimensions | Prior Knowledge | Numbers | Average | Standard Deviation | t Value
Progress in Reading | High | 116 | 3.52 | 5.42 | −6.129***
Progress in Reading | Low | 109 | 9.57 | 8.87 |
Progress in Listening | High | 116 | 4.55 | 5.99 | −3.150**
Progress in Listening | Low | 109 | 7.63 | 8.40 |
Progress in Total | High | 116 | 8.07 | 7.02 | −7.117***
Progress in Total | Low | 109 | 17.20 | 11.54 |

** p < .01, *** p < .001
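
The gain-score comparison described above can be sketched in the same way; the only new step is constructing the progress scores (posttest minus pretest) before comparing the two prior-knowledge groups. Column names are again illustrative assumptions.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("vocab_tests.csv")  # hypothetical file reused from the previous sketch
for part in ["reading", "listening"]:
    df[f"{part}_gain"] = df[f"{part}_post"] - df[f"{part}_pre"]
df["total_gain"] = df["reading_gain"] + df["listening_gain"]

# Compare the progress of high- vs. low-prior-knowledge learners on each part
for part in ["reading", "listening", "total"]:
    high = df.loc[df["prior"] == "high", f"{part}_gain"]
    low = df.loc[df["prior"] == "low", f"{part}_gain"]
    print(part, stats.ttest_ind(high, low))
```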


5 Conclusion and Future Study

The study conducted an experiment to examine the impact of the English vocabulary learning system developed by GLAD on learners’ satisfaction and learning effectiveness. The overall findings indicate that using this learning system improved learners’ ability to learn English vocabulary. The researchers compared the scores of learners with low-level and high-level prior knowledge. The study revealed that learners with low-level prior knowledge made significantly greater gains in reading, listening, and overall scores when using the English vocabulary learning system than those with high-level prior knowledge. Notably, in the reading section, learners with low-level prior knowledge demonstrated considerably greater improvement than their counterparts with high-level prior knowledge.

Furthermore, the study examined the satisfaction levels of learners with different levels of prior knowledge. Learners with low-level prior knowledge expressed significantly higher satisfaction with the English vocabulary learning system than those with high-level prior knowledge. The researchers propose that this disparity may be attributed to the system’s straightforward operation and the clear categorization of learning activities: learners were able to engage in repeated practice in areas where they needed more reinforcement, a feature that proved particularly beneficial for learners with low-level prior knowledge.

In terms of future directions, the study aims to address the limitations of the initial experiment and make the English vocabulary learning system accessible to learners of varying proficiency levels. By doing so, the researchers hope to facilitate new discoveries and advancements in English vocabulary learning and to ensure the system’s suitability for learners across different knowledge levels.

References

1. Graddol, D.: English as a global language: implications for education. Lang. Teach. 52(1), 36–45 (2019)
2. Zhang, D., Zheng, Y.: The impact of vocabulary size on reading comprehension among Chinese English as a foreign language learners. J. Lang. Teach. Res. 11(6), 659–666 (2020)
3. Jiang, M., Nekrasova-Beker, Y.: The relationship between vocabulary size and reading comprehension of Chinese EFL learners. Asia Pac. Educ. Res. 29(3), 223–233 (2020)
4. Amano, S., Fushiki, N.: Vocabulary size, reading strategies, and reading comprehension of EFL learners. J. Lang. Teach. Res. 11(4), 327–336 (2020)
5. Teng, F., Zhang, Y.: Vocabulary learning strategies in second language acquisition: a systematic review. Front. Psychol. 11, 1579 (2020)
6. Nation, I.S.P.: Learning Vocabulary in Another Language, 2nd edn. Cambridge University Press, Cambridge (2013)
7. Chang, A., Webb, S.: Second language vocabulary growth. RELC J. 43(1), 33–51 (2012)
8. Son, J.B., Park, H.: The effects of mobile-assisted language learning on learners’ English proficiency: a meta-analysis. Comput. Assist. Lang. Learn. 31(3), 263–290 (2018)


9. Hsu, H.Y., Wang, C.Y.: Enhancing EFL vocabulary learning through mobile-assisted language learning: a comparative study. Comput. Educ. 138, 83–97 (2019)
10. Hidi, S., Renninger, K.A.: The four-phase model of interest development. Educ. Psychol. 41(2), 111–127 (2006)
11. Chou, C.Y., Tsai, C.C.: Effects of prior knowledge and task type on vocabulary learning in a mobile learning environment. Educ. Technol. Soc. 21(1), 211–224 (2018)
12. Chang, C.C., Tseng, S.S.: The effects of prior knowledge and learning approach on the learning performance of e-learners. Comput. Educ. 78, 141–149 (2014)
13. GLAD: GLAD Official Site (2022). http://www2.gladworld.net/gladworldtest/CHT_PVQC.php
14. Hattie, J., Timperley, H.: The power of feedback. Rev. Educ. Res. 77(1), 81–112 (2007)
15. Deng, Z.Z.: The study of students’ learning satisfaction scale with physical education selections in Ilan institute of technology. J. Ilan Inst. Technol. 4, 117–123 (2000)
16. Wang, M.Z., Zheng, J.G.: The constructing and development of the learning satisfaction inventory on the students in institute. J. Chin. Inst. Technol. 36, 427–442 (2007)
17. Likert, R.: A technique for the measurement of attitudes. Arch. Psychol. 140, 1–55 (1932)

Robot-Assisted Language Learning: A Case Study on Interdisciplinary Collaboration Design and Development Process

Hsuan Li1 and Nian-Shing Chen2(B)

1 National Quemoy University, Kinmen County, Taiwan R.O.C.
[email protected]
2 National Taiwan Normal University, Taipei City, Taiwan R.O.C.
[email protected]

Abstract. This study is a case study that focuses on the interdisciplinary collaboration design and development process of a robot-assisted Chinese daily-life measure words learning course, which involves collaboration between three parties: the CSL teachers, the technology experts, and the educational and user experience design experts. We use Technological Pedagogical Content Knowledge (TPACK) and Activity Theory (AT) as the framework to analyze the knowledge needed and the processes. Cross-disciplinary collaboration is the integration of professional knowledge from different fields. Each field has different perspectives, and the process requires communication, adjustment, and re-adjustment, especially in the conceptual design period. The knowledge and division of labor of each domain can be used as a reference for future cross-disciplinary development teams.

Keywords: Chinese as a second language (CSL) · Robot-assisted language learning (RALL) · Chinese measure word · Technological Pedagogical Content Knowledge (TPACK) · Activity Theory (AT)

1 Introduction

Measure words are a unique feature of the Sino-Tibetan language family. Learners of Chinese as a second language (CSL) whose mother tongue is not in this language family often find them confusing. This confusion can lead to incorrect or inappropriate usage of measure words or even avoiding their use altogether [1, 2]. In a CSL environment, the frequency of using measure words in daily life is high, and correct usage can improve communication effectiveness [1]. Currently, measure words instruction in CSL is mainly based on textbooks [1, 3] or delivered through task-based teaching, such as setting up a dumpling-making activity [4] or learning through specific games [5]. However, there is a lack of an independent, daily-life-context-based measure words instruction curriculum. Using robots as tutors to assist CSL learners in learning Chinese daily-life measure words can provide them with more opportunities for learning and practice outside of regular courses and materials.


Robot-assisted language learning (RALL) is an emerging technology that involves collaboration among experts from different fields. In recent research, three collaboration models have been used: 1) collaboration between language teachers and technology experts (e.g., [6]), 2) collaboration between language teachers and educational technology teachers (e.g., [7]), and 3) collaboration among language teachers, technology experts, and educational technology teachers (e.g., [4]). Mishra and Koehler (2006) proposed the Technological Pedagogical Content Knowledge (TPACK) framework, which suggests that knowledge in the digital age should integrate content knowledge, pedagogical knowledge, and technological knowledge from different fields [8]. Based on the TPACK framework, when developing a RALL course, a three-way collaboration approach better meets the needs of interdisciplinary knowledge integration. The aim of this study is to establish a cross-disciplinary team consisting of CSL teachers, robot technology experts, and educational and user experience design professionals to collaborate on the development and implementation of a robot-assisted Chinese daily-life measure words learning course. The team will utilize the TPACK framework as a collaborative model to plan and execute the program. This study also aims to analyze the development process and the different needs, challenges, and problem-solving methods that arise when experts from various fields collaborate. The research question is “What are the necessary knowledge, work distribution, and interaction processes required for developing a robot-assisted Chinese measure words learning system using a cross-disciplinary collaborative design model?”.

2 Research Method

This study is a case study that focuses on the development process of a robot-assisted Chinese measure words learning course, which involves collaboration between three parties: the technology experts, the educational and user experience design experts, and the CSL teachers. We use Technological Pedagogical Content Knowledge (TPACK) and Activity Theory (AT) as the framework to analyze the knowledge and the processes. TPACK emphasizes the integration of cross-disciplinary knowledge, which is applicable to the different fields and interactive cooperation in this study, and defines the necessary content of each field of knowledge and of the intersectional knowledge. AT provides a complete analytical framework that can analyze the context of individual and group activities, offer an explanatory theoretical framework for the relationship between behavior, tools, and the environment, and support a behavioral analysis of the development process in this study. The core of AT is that “contradictions” arise from the internal and external influence of elements in the system during the course of activities; by identifying the contradictions in the activity, the difficulties and challenges encountered in the development process of this study can be analyzed [9]. The research methods employed in this study involve participant observation and content analysis. Various research tools were utilized, including multiple versions of lesson conceptual design plans, scripts, and meeting logs, which were recorded following each meeting.


3 Results

The study has completed the overall planning of the curriculum for daily-life measure words, targeting foreign residents living in Taiwan with basic Chinese proficiency. The teaching sections focus on cognitive and situational teaching methods, selecting measure words and designing instructional materials based on the context of common measure words used in daily life situations in Taiwan. The nouns paired with the measure words mainly consist of high-frequency nouns that appear in the given context. The learning activities are designed using common strategies in RALL and technology-enhanced language learning (TELL), such as embodied learning, multimodal learning, and game-based learning, and are integrated with cooperative learning and competitive elements across different units to make the learning experience more dynamic and diverse. The Kebbi Air S, with the R&T system developed by the technology expert team, is used to assist in the learning process. The robot is designed to function as an independent tutor. The development of the robot teaching material for Unit 1 has been completed, and the team is working on the development and testing of the APK. The following sections discuss the knowledge, work, cross-team collaboration, and development process analysis of each team.

3.1 Domain Knowledge in Cross-Disciplinary Collaboration Based on the TPACK Framework

TPACK emphasizes the integration of interdisciplinary knowledge. When developing a Chinese measure words learning curriculum led by a robot, three teams corresponding to the three elements of the TPACK framework were formed. The three elements and four boundary elements are:

1. Technological Knowledge (TK): corresponds to the technology expert team (T), including the education robot experts, program designers, and system developers. They are primarily responsible for robot technology expertise and APK development (including programming and testing).
2. Pedagogical Knowledge (PK): corresponds to the educational and user experience design expert team (P), including the education design and user experience design experts. They are primarily responsible for robot-assisted language teaching design strategies and user experience design.
3. Content Knowledge (CK): corresponds to the CSL teacher team (C), including expert teachers and prospective teachers of Chinese as a second or foreign language. They are primarily responsible for Chinese measure word professional knowledge and teaching knowledge.
4. Pedagogical Content Knowledge (PCK): jointly developed by the P and C teams, who are responsible for robot-assisted daily-life measure word learning strategy design, the human-machine interface, and human-machine interaction design.
5. Technological Pedagogical Knowledge (TPK): jointly developed by the T and P teams, who are responsible for aligning the design requirements of robot-assisted daily-life measure word learning activities with technology use.


6. Technological Content Knowledge (TCK): jointly developed by the T and C teams, who are responsible for the materials and scripts used in robot-assisted daily-life measure word learning activity design.
7. Technological Pedagogical Content Knowledge (TPACK): jointly developed by the three teams; it produces the finished product of the robot-assisted daily-life measure word learning course.

3.2 Elements in Cross-Disciplinary Collaboration Based on the Activity Theory Framework

Based on the AT framework, this study depicts the activity system of interdisciplinary collaboration as shown in Fig. 1, with developing a robot-assisted Chinese daily-life measure words learning course through interdisciplinary collaboration as the main activity, in order to learn how to conduct interdisciplinary collaboration from the process of collaboration and development.

Fig. 1. The elements of developing a robot-assisted Chinese daily-life measure words learning course based on the Activity Theory

The six elements of AT are analyzed as follows:

1. Subject: the subject of AT in this study is team T, P, or C. Each of these agents interacts with different communities. These three subjects correspond to the three main elements of the TPACK framework illustrated in Fig. 1.
2. Object: to develop a robot-assisted Chinese daily-life measure words learning course.
3. Tools: includes knowledge of Chinese measure words, CSL teaching, robotics, R&T system expertise, and pedagogical methods.


4. Community: the T, P, and C teams, and the teaching and learning community formed among them. The subject needs to interact with different communities.
5. Rules: includes regular community discussions and documentation, as well as the development of general rules. Regular discussions include internal team discussions and cross-team discussions. The rules include guidelines for cloud file sharing, file naming conventions, standardized files for robot expressions and action lists, and formats for meeting minutes and testing records. Each team follows these rules to develop, check scripts, and carry out program writing and process recording.
6. Division of labor: the three teams, T, P, and C, each complete different tasks to ensure the smooth progress of the activity. The division of labor corresponds to the required knowledge and job content of each field under the TPACK framework, and the division-of-labor process is shown in Fig. 2. The development process is a continuous cycle of discussion and revision.

3.3 The Development Process of Cross-Disciplinary Collaboration Based on TPACK and Activity Theory Framework

The process runs from identifying the learning difficulty of measure words to the completion of a robot-assisted measure words teaching unit, and includes the ideation, collaboration, conceptual design, script writing, coding and developing, and testing stages (see Fig. 2). The conceptual design of the curriculum is a process of iterative confirmation between “content”, “learning activities”, and “teaching/learning theory-strategy, learning path”, and requires the joint collaboration of the T, P, and C teams. The responsibilities of each team are as follows: C is responsible for the design concept of the overall and unit contents and for drafting the design document of the curriculum; P is responsible for confirming which robot teaching activities can truly assist students’ learning; T is responsible for determining which technology supports which part of the activity. The conceptual design of the curriculum and the script writing are overlapping and iterative processes.

Based on the concept of AT, the main core is to create imbalances between the internal and external elements of the system through the activities in the system, which leads to contradictions. Contradictions create troubles and obstacles for the activity system. To cope with these obstacles, the activity system forms assisting strategies to resolve the difficulties. The emergence of contradictions gives the activity system characteristics of change, flow, and continuous derivation [9]. By examining the meeting records based on the concept of activity theory, the following contradictions can be observed:

1. Based on the time spent on each task, the collaboration between teams P and C in conceptual design, including “developing teaching materials and designing activities” and “script writing and revisions”, accounts for a large proportion of the total time and is concentrated in the early stage.
2. The experts from different fields have different perspectives on the strategy and technical functions of robot-assisted measure words teaching. The CSL teachers focus on teaching efficiency, while the education design and user experience design experts emphasize multimodal, embodied, personalized, or cooperative learning.


Fig. 2. Cross-disciplinary Collaboration Process based on the TPACK and the Activity Theory Framework

4 Conclusion

This study is a case study of the cross-disciplinary development of a robot-assisted language learning curriculum. TPACK can be used as a framework for cross-disciplinary knowledge collaboration, with three different expertise teams working together: the technology expert team (T), the educational and user experience design expert team (P), and the CSL teacher team (C). Activity Theory can be used to clarify and analyze the development process. Cross-disciplinary collaboration is the integration of professional knowledge from different fields. Each field has different perspectives, and the process requires communication, adjustment, and re-adjustment. The work allocation and collaboration process under the TPACK framework, as well as the activity theory, can be used as a reference for future cross-disciplinary development teams. However, which technologies to use for teaching and how to combine them to make learning efficient, effective, and able to maintain interest over the long term can be explored in further research. It is hoped that in the future, a list of suitable technologies and teaching activities for teaching different language skills can be compiled for reference in lesson planning.

References

1. Chen, Y.-M.: A study on the categorization of measure words in mandarin teaching. Master’s thesis, Graduate Institute of Teaching Chinese as a Second Language, National Kaohsiung Normal University, Taipei (2013)
2. Yeh, Q.-S.: A study of measure words in mandarin teaching materials: focusing on three mandarin textbooks. Master’s thesis, In-Service Master’s Program in Teaching Chinese as a Second Language, National Pingtung University, Pingtung (2020)
3. Liao, J.Y.: An analysis and teaching application of common measure words in mandarin teaching materials. Master’s thesis, Department of Teaching Chinese as a Second Language, National Taiwan Normal University, Taipei (2021)
4. Hsu, F.-H., Hsueh, Y.-T., Chang, W.-L., Lin, Y.-T., Lan, Y.-J., Chen, N.-S.: An innovative multimodal learning system based on robot and tangible objects for Chinese numeral-classifier-noun phrase learning. In: Chang, M., Chen, N.-S., Dascalu, M., Sampson, D.G., Tlili, A., Trausan-Matu, S. (eds.) Proceedings - 2022 International Conference on Advanced Learning Technologies (ICALT 2022), pp. 221–223 (2022). https://doi.org/10.1109/ICALT55010.2022.00072
5. Ko, C.-S.: The effectiveness of learning Chinese measure words through board games: a case study of Indonesian migrant workers. Master’s thesis, Department of Teaching Chinese as a Second Language, National Taiwan Normal University, Taipei (2021)
6. Cheng, Y.-W., Wang, Y., Yang, Y.-F., Yang, Z.-K., Chen, N.-S.: Designing an authoring system of robots and IoT-based toys for EFL teaching and learning. Comput. Assist. Lang. Learn. 34(1–2), 6–34 (2021). https://doi.org/10.1080/09588221.2020.1799823
7. Li, H., Tseng, C.-C.: TPACK-based teacher training course on robot-assisted language learning: a case study. In: Chang, M., Chen, N.-S., Dascalu, M., Sampson, D.G., Tlili, A., Trausan-Matu, S. (eds.) Proceedings - 2022 International Conference on Advanced Learning Technologies (ICALT 2022), pp. 253–255 (2022). https://doi.org/10.1109/ICALT55010.2022.00082
8. Mishra, P., Koehler, M.J.: Technological pedagogical content knowledge: a new framework for teacher knowledge. Teach. Coll. Rec. 108(6), 1017–1054 (2006)
9. Engeström, Y.: Learning by Expanding: An Activity-Theoretical Approach to Developmental Research. Cambridge University Press, Cambridge (1987)

Exploring the Effect of Educational Games Console Programming with Task Scaffolding on Students’ Learning Achievement

Po-Han Wu1,2(B), Wei-Ting Wu1, Ming-Chia Wu1, and Tosti H. C. Chiang2

1 Department of Information and Learning Technology, National University of Tainan, Tainan City, Taiwan
[email protected], [email protected]
2 Graduate Institute of Mass Communication, National Taiwan Normal University, Taipei City, Taiwan

Abstract. The purpose of this study was to investigate the effects of different learning tools on the learning achievement of elementary school students in programming. A quasi-experimental research design was adopted, and the subjects were 39 sixth-grade students in two classes at a national elementary school in Kaohsiung City, Taiwan. The two classes were divided into a task scaffolding educational console programming group and a web simulator group. Both groups implemented the same MakeCode Arcade platform programming curriculum and conducted a three-lesson experiment. An ANCOVA was used to analyze learning achievement in programming. The results showed that educational console programming with task scaffolding helped to improve the learning achievement and motivation of the learners in programming.

Keywords: Programming · Learning achievement · Cooperative learning

1 Introduction

The Organization for Economic Co-operation and Development (OECD) (2018) proposed a learning framework for education in 2030 [1], with the goal of promoting individual and societal well-being, problem-solving, and cooperative coexistence; learning to interact with others and working together for mutual benefit will therefore be one of the current trends in education. With the rapid advancement of technology, the problems people face are becoming more and more complex, so people are constantly thinking about how to solve problems in a more efficient way. The most influential concept is computational thinking, proposed by Wing in 2006 [2]: the ability to think logically, using basic concepts of computer science to solve problems, design systems, and understand human behavior, while looking for ways and solutions that computers can carry out [3].



The cognitive tasks involved in the teaching process of programming are one of the indispensable ways to cultivate computational thinking [2]. In recent years, one of the most common courses at elementary school teaching sites is the Scratch project from the Massachusetts Institute of Technology (MIT), which was started in 2007. It is a programming tool developed to help children learn creative thinking, collaboration, and systematic thinking. Scratch is a visual programming language for learners ages 8 to 16 to learn programming through the creation of animated stories and games [4]. When teaching programming courses in schools, researchers have come into contact with different visual programming languages, among which Scratch is the most commonly used in the teaching process. In the learning process, computational thinking can be applied through programming to improve the ability to solve problems. However, it has been found that some students gradually lose interest in learning programming with Scratch at a later stage, and the engagement and extension of the course become insufficient, resulting in a decline in learning motivation during Scratch learning. Seyed et al. raised some learning issues with Scratch, such as the lack of opportunities for collaborative learning within the curriculum [5].

In recent years, WiFiBoy, a local brand, has produced a handheld game console made up of buttons, a screen, and a development board; it is an electronic device that can easily be held with one or two hands [6]. The WiFiBoy game console is designed as a handheld game console to lower the threshold for learners to learn programming. The researchers found that some educators are introducing the WiFiBoy game console into the school curriculum for teaching. Therefore, the researchers wanted to apply it in the elementary school programming curriculum to enhance students’ motivation. With WiFiBoy as a handheld game console, students can not only make games by themselves while learning programming but also play the games with their classmates on the console afterwards, thus providing opportunities for peer learning. Collaborative skills such as peer relationships, initiative, sharing of work roles, excitement and joy, and activity monitoring can also be developed. During the learning process, learners can experience the difficulties of making games, and when peers encounter difficulties, they can show empathy and teach them by sharing their work, increasing their interaction with classmates.

In view of the above background and motivation, this study targets senior students of primary and secondary schools using educational game console programming with task scaffolding. A handheld game console with an interactive programming teaching function is used as the students’ learning tool in the implemented course and is equipped with a visual programming language to improve students’ effectiveness and motivation in learning programming.

2 Literature Review

2.1 Visual Programming

A visual programming language is a language that uses visual elements to write programs. Unlike text-based programming languages, visual programming languages are intuitive and easy to use, which can increase students’ interest in learning [7]. They also suit the characteristics of digital natives, who prefer to process multiple streams of information, visualization, active exploration, and interactive learning [8].


Appropriate programming interfaces and visual or object-based environments are more suitable for beginners to generalize basic concepts of programming to other programming languages or related fields [9].

In cognitivism, the theory of cognitive development proposed by Piaget divides the cognitive development of schoolchildren into four stages. The third stage is the concrete operational stage, meaning that schoolchildren aged 7–11 are able to solve problems according to concrete experience and operate concrete objects to help them think [10]. The fourth stage is the formal operational stage, in which students aged 11–16 begin analogical, logical, and abstract thinking. In the theory of cognitive learning proposed by Bruner [11], the three stages of the development of cognitive representation likewise hold that learning should first be represented by the enactive mode of “learning by doing”, then by the iconic mode of “learning by observing” through the perception of objects, and finally by the symbolic mode of “learning by thinking” through the use of symbols, language, and words. Rousseau, as a representative of naturalism, also advocated learning through contact with real objects in order to obtain useful knowledge after observation and exploration. Most sixth-grade students in Taiwan are 11 or 12 years old. They are in the transition between the concrete operational and formal operational periods proposed by Piaget; logical and abstract thinking is just beginning to develop. Because this thinking is not yet fully developed, relying on concrete experience and hands-on operation to build scaffolding in learning can help students think and learn at this stage.

MakeCode is a children’s programming software developed by Microsoft in 2017, and MakeCode Arcade is one of its most developed environments for hardware circuit board design. In addition to MakeCode Arcade, MakeCode for micro:bit is a common platform in the MakeCode project. Previous studies have supported this theory: when MakeCode for micro:bit teaching moves from a computer simulator to teaching combined with development boards, it has a significant positive impact on the learning attitude and achievement of most sixth-grade students in primary school. One of the principles of STEAM education, an educational trend that has emerged in recent years, emphasizes the importance of integrating hands-on lessons into the curriculum, as this approach enhances learners’ interest [12] and has a positive impact on learners [13]. Not only abroad, but also in Taiwan, the government mentions in the Curriculum Guidelines of 12-Year Basic Education that science and technology should be used to cultivate the abilities of research, analysis, practical implementation, creation, and design (Ministry of Education, 2018). The important content emphasized in the curriculum has many similarities with the STEAM education promoted by the United States.

Recent studies have shown that adding level designs such as those of Code.org into programming courses as learning scaffolding allows beginners to gradually become familiar with program building blocks, syntax, and architecture by passing through levels. Through level designs and prompt functions that progress from simple to difficult, when learners fail to solve problems in the process, task scaffolding for learning programming can be set up to promote learners’ motivation and interest in programming, self-efficacy, and attitude beliefs.


At present, visual programming is the approach most suitable for primary school students and beginners in common programming courses. The researchers chose WiFiBoy educational game console programming together with a self-developed building block mask expansion kit as the task scaffolding in the programming course, hoping not only to let students learn basic logic concepts but also to have them create their own game topics to improve their motivation for learning programming, and to interact and cooperate with classmates in the process of making games. Finally, learners send their games to the WiFiBoy for physical operation to enhance the fun of playing games with classmates, cultivate peer affinity in the class, and build good interpersonal relationships.

2.2 Cooperative Learning

In the process of collaborative learning, peers of similar ages can teach each other and learn curriculum knowledge from one another. Within constructivism, Vygotsky and Cole proposed the Zone of Proximal Development (ZPD) [14]: if children interact with competent others or adults, these others provide scaffolding for them, thus enhancing the potential development level that children can achieve within the zone of proximal development. Recent studies have also found that children who learn cooperatively in groups perform better than those who complete similar learning activities alone [15], because after peer interaction in cooperative learning, both sides can contribute to cognitive development [16]; and if children are paired (two per group), interaction may be of higher quality or more intensive than in larger groups, because it is difficult for any child not to participate in such cooperative learning [17, 18].

3 Research Method

A quasi-experimental design was adopted in this study. The research subjects were two sixth-grade classes with 20 and 19 students, respectively. The two classes were assigned by class to the educational console programming with task scaffolding group and the web simulator group, and each class was further divided into small groups of two to three students (see Fig. 1 and Fig. 2). Before the experimental teaching, the regular scores of the two groups from the previous semester were taken as the pre-test of learning achievement. After the experimental teaching course, learning achievement in programming was tested immediately. The research structure of this study is shown in Fig. 2.

In this study, the MakeCode Arcade programming learning achievement post-test was developed by the researchers. The test questions were designed according to the actual teaching content and cover concepts such as basic understanding of the MakeCode Arcade platform, programming syntax (including correctly executed instruction blocks), and thematic applications. The questions were reviewed by experts in related fields and then revised by the researchers based on the expert opinions to establish validity.


Fig. 1. Photos from the teaching site for educational games console programming with task scaffolding group

Fig. 2. Research Structure (control variables: students' basic computer skills (MakeCode Arcade), teaching content, teaching time, and teacher; independent variable: learning tool, i.e., educational games console programming with task scaffolding group vs. web simulator group; dependent variable: learning achievement in programming design)

4 Results

Descriptive Statistics of Programming Learning Achievement

Both the pre-test and post-test of programming learning achievement have a total score of 100. Table 1 shows the scores of the two groups using different learning tools on the learners’ programming learning achievement before and after the test.


In the pretest, the mean and standard deviation of the task scaffolding educational console programming group were 86.25 and 4.72, and those of the web simulator group were 86.21 and 4.65. In the post-test, the mean and standard deviation of the task scaffolding educational console programming group were 66.60 and 11.39, and those of the web simulator group were 54.16 and 17.41.

Table 1. Mean and standard deviation of pre-test and post-test learning achievement in programming for different learning tools

Learning tools | N | Pre-test Mean | Pre-test SD | Post-test Mean | Post-test SD
Educational games console programming with task scaffolding group | 20 | 86.25 | 4.72 | 66.60 | 11.39
Web simulator group | 19 | 86.21 | 4.65 | 54.16 | 17.41

It can be seen from the above table that the pre-test scores of the task scaffolding educational console programming group are slightly higher than the average scores of the web simulator group, while the post-test scores are clearly higher than those of the web simulator group. A covariate analysis is conducted below to explore whether there is a significant difference in learning achievement between the two groups.

Covariate Analysis of Programming Learning Achievement

To test whether the two groups of subjects were affected by differences in prior knowledge, which could bias the research results, the learners’ scores in the programming course of the previous semester (before the teaching experiment) were taken as the covariate, the two different learning tools were taken as the independent variable, and the scores of the programming learning achievement test were examined through analysis of covariance.

Table 2. Analysis of the homogeneity of regression coefficients of programming learning achievement with different learning tools

Source of variation | SS | df | Mean square | F | p
Group × pre-test score | 11.25 | 1 | 11.25 | .06 | .82
Error | 7066.03 | 35 | 201.89 | |
As can be seen from Table 2, the homogeneity test result of in-group regression coefficient with the score of “last semester programming course” as the covariable shows that F value is.06 and significance is.82 (p > .05), which does not reach the significant

Exploring the Effect of Educational Games

631

level and conforms to the assumption of homogeneity of in-group regression coefficient, indicating that there is no significant difference in programming prior knowledge between the two groups of research objects. The covariate analysis of programming learning achievement of different learning tools was carried out to verify the differences in post-test scores of different learning tools. The analysis results are shown in Table 3 and 4. Table 3. Levene test equation of error variance of programming learning achievement for different learning tools F

df1

df2

P

3.29

1

37

.08

As can be seen from Table 3, the Levene test equation of error variation of programming learning achievement for different learning tools has a significance of .08 (p > .05), indicating that the null hypothesis cannot be rejected, and that there is no significant difference in error variation of the two groups’ post-test scores, showing homogeneity. Table 4. Covariate analysis of programming learning achievement for different learning tools Post-test of learning achievement

Group

N

Mean

SD

Adjusted mean

SE

F

P

Total Score

Educational games console programming with task scaffolding group

20

66.60

11.39

66.58a

3.14

7.62

.01*

Web simulator groups

19

54.16

17.41

54.18a

3.22

a. The covariate in the model was estimated according to the following values: pre-test of learning achievement = 86.23. b. *p < .05.

As can be seen from Table 4, after excluding the effect of the pretest scores (covariate) on the posttest scores (dependent variable), the results show an F value of 7.62 with a significance of .01 (p < .05), indicating a significant difference between the groups: the performance of the two groups of students on the learning achievement test differed depending on the learning tools. The adjusted mean of the posttest scores was 66.58 for the task scaffolding educational game programming group and 54.18 for the web simulator group, indicating that the adjusted posttest mean was higher for the task scaffolding educational game programming group than for the web simulator group. In order to understand the main differences between the two groups of students on the achievement test, the covariate analysis was further divided into two parts: multiple-choice questions and the practice test.


the Task Scaffold Educational Game Programming group than for the Web Simulator group. In order to understand the main differences between the two groups of students on the achievement test, the results of the covariate analysis were further divided into two parts: multiple-choice questions and practice test. Table 5. The abstract of covariate analysis of Learning achievement post-test in Programming of different learning tools Post-test of learning achievement

Group

N

Mean

SD

Adjusted mean

SE

F

P

Multiple Choice question

Educational games console programming with task scaffolding group

20

32.50

11.18

32.50a

2.95

1.18

.28

Web simulator groups

19

27.89

14.75

27.90a

3.03

a. The covariate in the model was estimated according to the following values: pre-test of learning achievement = 86.23.

As can be seen from Table 5, after accounting for the influence of the pre-test results, the comparison of the post-test multiple-choice scores for the different learning tools yields an F value of 1.18 with a significance of .28 (p > .05), which does not reach the significance level, indicating no significant difference between the groups: the two groups of students did not differ in their multiple-choice performance on the learning achievement test due to the different learning tools. As can be seen from Table 6, for the practice test, the comparison of the post-test results for the different learning tools after accounting for the pre-test results yields an F value of 12.82 with a significance of .00 (p < .05), indicating significant differences between the groups: the performance of the two groups of students on the practice test differed because of the different learning tools.


Table 6. Covariate analysis of programming learning achievement on the practice test for different learning tools

Post-test of learning achievement | Group | N | Mean | SD | Adjusted mean | SE | F | P
Practice test | Educational games console programming with task scaffolding group | 20 | 34.10 | 9.00 | 34.08a | 1.52 | 12.82 | .00*
Practice test | Web simulator groups | 19 | 26.26 | 6.23 | 26.28a | 1.56 | |

a. The covariate in the model was estimated according to the following value: pre-test of learning achievement = 86.23.
*p < .05
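
The ANCOVA workflow reported in this section (homogeneity of regression slopes, Levene's test, and the covariate-adjusted group comparison) can be sketched in Python as follows. This is an illustration of the standard procedure rather than the authors' actual analysis script; the data file and column names ('group', 'pre', 'post') are hypothetical.

```python
import pandas as pd
import scipy.stats as stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Assumed layout: one row per student with group label, pretest, and posttest score
df = pd.read_csv("programming_scores.csv")  # hypothetical file

# 1) Homogeneity of regression slopes: the group x pretest interaction should be non-significant
slopes = ols("post ~ C(group) * pre", data=df).fit()
print(sm.stats.anova_lm(slopes, typ=2))

# 2) Levene's test for equality of error variances of the post-test scores
g1 = df.loc[df["group"] == "console", "post"]
g2 = df.loc[df["group"] == "simulator", "post"]
print(stats.levene(g1, g2))

# 3) One-way ANCOVA: compare groups on the post-test with the pretest as covariate
ancova = ols("post ~ C(group) + pre", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=2))
```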

5 Conclusion

The results of this study show that the task scaffolding educational game console group performed positively in programming learning after the teaching experiment, and its post-test results were better than those of the web simulator group. According to the analysis of covariance, the difference between the two groups reached a significant level, indicating that the performance of the two groups of students on the learning achievement test differed because of the different learning tools. Covariate analyses were then carried out separately on the multiple-choice questions and the practice test in the achievement test. The difference between the groups on the multiple-choice questions was not significant, indicating that the two groups of students did not differ in their multiple-choice performance due to the different learning tools. However, there was a significant difference between the groups on the practice test, indicating that the performance of the two groups of students on the practice test differed because of the different learning tools.

In this study, the researchers observed that when learning programming with the WiFiBoy task scaffolding educational game console and the building block mask extension kit, the students were more active in learning programming. Through cooperative learning in groups, more cooperative behavior was promoted among and within the groups, and the learning effectiveness of programming was effectively improved; this was especially evident in the teaching experiment after the implementation.

Acknowledgment. This study was supported by the National Science and Technology Council Special Studies Program (NSTC 111-2410-H-024 -005).


References

1. Organization for Economic Co-operation and Development (OECD): The future of education and skills: Education 2030. OECD Education Working Papers (2018)
2. Grover, S., Pea, R.: Computational thinking in K–12: a review of the state of the field. Educ. Res. 42(1), 38–43 (2013)
3. Wing, J.M.: Computational thinking. Commun. ACM 49(3), 33–35 (2006)
4. Maloney, J., Resnick, M., Rusk, N., Silverman, B., Eastmond, E.: The scratch programming language and environment. ACM Trans. Comput. Educ. (TOCE) 10(4), 1–15 (2010)
5. Seyed, T., et al.: MakerArcade: using gaming and physical computing for playful making, learning, and creativity. In: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–6 (2019)
6. Stevenson, A.: Oxford Dictionary of English. Oxford University Press, Oxford (2010)
7. Zhao, L., Liu, X., Wang, C., Su, Y.S.: Effect of different mind mapping approaches on primary school students’ computational thinking skills during visual programming learning. Comput. Educ. 181, 104445 (2022)
8. Prensky, M.: Digital natives, digital immigrants part 1. On the Horizon 9(5), 1–6 (2001)
9. Funkhouser, C.: The influence of problem-solving software on student attitudes about mathematics. J. Res. Comput. Educ. 25(3), 339–346 (1993)
10. Huitt, W., Hummel, J.: Piaget’s theory of cognitive development. Educ. Psychol. Interact. 3(2), 1–5 (2003)
11. Bruner, J.S.: Toward a Theory of Instruction, vol. 59. Harvard University Press, Cambridge (1966)
12. Rees, P., Olson, C., Schweik, C.M., Brewer, S.D.: Work in progress: exploring the role of makerspaces and flipped learning in a town-gown effort to engage K12 students in STEAM. In: 2015 ASEE Annual Conference & Exposition, vol. 1751, p. 26 (2015)
13. Chien, Y.-H., Chang, Y.-S., Hsiao, H.-S., Lin, K.-Y.: STEAM-oriented robot insect design curriculum for K-12 students. In: 2017 7th World Engineering Education Forum (WEEF), pp. 1–4. IEEE (2017)
14. Vygotsky, L.S., Cole, M.: Mind in Society: Development of Higher Psychological Processes. Harvard University Press, Cambridge (1978)
15. Asterhan, C.S., Schwarz, B.B., Cohen-Eliyahu, N.: Outcome feedback during collaborative learning: contingencies between feedback and dyad composition. Learn. Instr. 34, 1–10 (2014)
16. Topping, K., Buchs, C., Duran, D., Van Keer, H.: Effective Peer Learning: From Principles to Practical Implementation. Routledge, Milton Park (2017)
17. Webb, N.M.: Peer interaction and learning in small groups. Int. J. Educ. Res. 13(1), 21–39 (1989)
18. Tenenbaum, H.R., Winstone, N.E., Leman, P.J., Avery, R.E.: How effective is peer interaction in facilitating learning? A meta-analysis. J. Educ. Psychol. 112(7), 1303 (2020)

Metacognitive-Based Collaborative Programming: A Novel Approach to Enhance Learning Performance in Programming Courses

Wei Li1,2, Judy C. R. Tseng3(B), and Li-Chen Cheng4

1 STEM Education Research Center, Wenzhou University, Wenzhou, Zhejiang Province, China
2 Ph.D. Program in Engineering Science, Chung Hua University, Hsinchu, Taiwan
3 Department of Computer Science and Information Engineering, Chung Hua University, Hsinchu, Taiwan
[email protected]
4 Department of Information and Finance Management, National Taipei University of Technology, Taipei City, Taiwan

Abstract. Students’ computational thinking and programming skills may grow through collaborative programming. However, as researchers have noted, students frequently do not use metacognition to manage their cognitive activities while collaborating, which negatively affects learning. This study created a metacognition-based collaborative programming (MCP) system to improve students’ performance in collaborative programming. A seven-week study examined how the approach affected students’ performance in programming courses. The 88 middle school students were split into two groups: the experimental group received the metacognition-based collaborative programming approach, and the control group received the conventional computer-supported collaborative programming approach. The results indicated that the metacognitive-based collaborative programming approach enhanced students’ academic scores in programming courses and their computational thinking tendencies.

Keywords: Metacognition · Collaborative Programming · Programming Course · Learning Performance · Learning System

1 Introduction

Countries worldwide have focused on cultivating computational thinking skills in response to the new digital era. According to some researchers, learning to program can assist children in acquiring computational thinking [1]. Programming involves more than just creating code; it also involves using computer science concepts like abstraction and decomposition to solve problems [2]. However, the complexity of programming languages is a great challenge for students new to programming [3], and it is difficult for such students to complete a program independently. Therefore, it is recommended that collaborative learning be incorporated into programming learning activities to improve students’ computational thinking [4].


However, merely offering students collaborative learning does not always result in effective learning [5]. When working in groups, students frequently have difficulty managing their cognitive learning processes [6]. Metacognition is a self-regulated and controlled learning process [7] that helps learners avoid social inertia in the learning process by planning, monitoring, and reflecting on their learning [8]. Middle school students who are new to programming languages may face greater challenges, particularly in regulating collaborative learning, as they may lack sufficient metacognitive skills [9]. Therefore, offering students guidance and activating metacognition is crucial for collaborative programming learning. This study developed an MCP system based on a metacognitive framework, and an experiment verified its effectiveness. The main research questions are:

Q1: Can the MCP approach improve students’ academic performance in programming courses?
Q2: Can the MCP approach improve students’ computational thinking tendencies?

2 Literature Review

2.1 Computational Thinking and Collaborative Programming

Computational thinking is a way of thinking about system design and problem-solving using core concepts from computer science [8]. It is a general skill that everyone should learn, not just computer scientists [8]. Computational thinking is considered an essential skill and way of thinking [10]. Therefore, developing and enhancing students’ computational thinking is a vital research issue. Programming is an effective approach to achieving computational results and demonstrates computational thinking skills [2]. Nonetheless, many students find programming difficult and complex [11]. To encourage pupils to learn programming languages, researchers advise taking a collaborative approach [12]. It has been shown that students who learn programming collaboratively learn better than those who learn programming individually [13]. Students’ computational thinking can also be enhanced through collaborative programming [14]. However, some researchers note that collaborative learning processes benefit only some students [15]. A lack of metacognitive guidance may result in ineffective collaborative learning [16] and social slacking [17]. In other words, when programming collaboratively, students need metacognitive guidance to help them achieve better learning outcomes.

2.2 Metacognition and Its Role in Collaborative Learning

In the 1970s, Flavell coined the term metacognition, which means “cognition about cognition” or “thinking about how to think and learning how to learn” [18]. Metacognition was split into knowledge of cognition and regulation of cognition by Baker and Brown [19]. According to research, pupils with high metacognitive abilities are better at solving problems and achieve higher achievement [20, 21]. Metacognition regulates learning by planning, monitoring, and assessing cognitive processes [22]. Metacognition is essential in students’ collaborative learning [23]. Zimmerman argued that in learning activities, metacognition allows learners and group members to regulate their own or the group’s behavior, cognition, beliefs, and emotions in group interactions [24].


Yet, some students’ lack of metacognitive skills [25] prevents them from selecting the best strategies for addressing complicated issues and from keeping track of and reflecting on their learning [6], which leads to poor collaborative learning outcomes. As a result, this study created an MCP system and investigated its effects on students’ learning achievement in programming languages and computational thinking.

3 The Metacognition-Based Collaborative Programming System

In this study, a metacognition-based problem scaffold was developed based on the suggestions made by Kramarski et al. [26] and combined with collaborative programming. Based on their proposed metacognitive questions, this study designed the corresponding question scaffolds, as shown in Table 1.

Table 1. Metacognition problems and problem scaffolding

Metacognition problems | Problem scaffolding
Comprehending problems and planning | What problem does this programming task need to solve? What do you hope to accomplish with this assignment?
Constructing connections | What distinguishes this work or problem from others that you have previously solved?
Formulating strategies | How should this problem be solved? Can you describe the solution to the problem?
Reflection and evaluation | Did your group complete today’s programming assignment? Did you have any trouble figuring out the issue? How is it resolved? Is there any way to do it better?

This research created a collaborative programming system based on the metacognition problem scaffolding. Figure 1 depicts the learning system architecture. The collaborative learning system consists of three main modules: the task assignment module, the collaborative learning module, and the metacognitive problem module. In addition, two databases are included: the learning task database and the learning process database. Students are randomly assigned to study groups of 4–5 people after logging into the system. The assignment guidance mechanism can then be used to view the assignment requirements and details. Students then work together to tackle programming assignments while conversing on the discussion board under the direction of the metacognitive problem module. The metacognitive module engages students’ metacognition using problem reminders during group task solving. Understanding the problem is the first step: students are expected to identify the issue that needs to be resolved during this stage and attempt to articulate the task in their own terms.


Fig. 1. Learning system architecture.

The next step is to build the link between new and prior information and instruct pupils to focus on the similarities and differences between the task they must complete this time and those from earlier in the lesson. The learning system then directs students to develop problem-solving procedures, explain algorithms in natural language or with flowcharts, weigh the benefits and drawbacks of several algorithms, and select an approach for programming, debugging, and running. Finally, the system directs students to evaluate their approach to and outcomes of problem-solving, which also helps them prepare for their subsequent collaboration.
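
As an illustration of how such a metacognitive question module might be organized, the sketch below maps the four scaffolding stages of Table 1 to their prompts and advances a group through them. The class and function names are hypothetical and are not taken from the authors' implementation.

```python
from dataclasses import dataclass

# Stage-to-prompt mapping following Table 1 (wording lightly condensed)
PROMPTS = {
    "comprehend_and_plan": [
        "What problem does this programming task need to solve?",
        "What do you hope to accomplish with this assignment?",
    ],
    "construct_connections": [
        "What distinguishes this task from problems you have solved before?",
    ],
    "formulate_strategies": [
        "How should this problem be solved? Can you describe the solution?",
    ],
    "reflect_and_evaluate": [
        "Did your group complete today's task? What trouble did you meet, and how was it resolved?",
    ],
}
STAGES = list(PROMPTS)

@dataclass
class MetacognitiveGuide:
    """Tracks which scaffolding stage a group is in and returns the matching prompts."""
    stage_index: int = 0

    def current_prompts(self) -> list:
        return PROMPTS[STAGES[self.stage_index]]

    def advance(self) -> None:
        # Move to the next stage until the reflection stage is reached
        if self.stage_index < len(STAGES) - 1:
            self.stage_index += 1

guide = MetacognitiveGuide()
print(guide.current_prompts())  # prompts shown on the discussion board for stage 1
```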

Fig. 2. The interface of the learning system.


Figure 2 depicts the system interface. When utilizing the system to teach, teachers can include the metacognitive problem module.

4 Methodology

4.1 Participants

The pupils in this experiment had an average age of 15 years. Four classes totaling 88 pupils were randomly split into the experimental and control groups. The control group employed the conventional computer-supported collaborative programming approach, while the experimental group used the MCP approach. The experimental group had 41 students, and the control group had 47 students. All students were taught by the same teacher and had the same study materials.

4.2 Instrument

This study’s pre-test and post-test each consisted of 10 multiple-choice questions worth 10 points each, for a total of 100 points. Two experienced IT teachers and the classroom teacher designed the test questions. The pre-test was intended to evaluate students’ prior programming knowledge, and the post-test was designed to assess their learning outcomes. The computational thinking tendency questionnaire was proposed by Hwang et al. [27]. There were six items in total. The Cronbach’s alpha of the computational thinking tendency questionnaire was 0.84, indicating reliability. A 5-point Likert scale was used to score the pre- and post-questionnaires (5 = totally agree; 1 = totally disagree).

4.3 Experimental Procedure

The experiment lasted seven weeks, with a weekly time commitment of 45 min. The experimental procedure is depicted in Fig. 3. The teacher introduced the collaborative programming learning system to the students during the first week and asked them to become acquainted with the operating interface. The students then completed the pre-test and pre-questionnaire. From weeks 2 through 6, the instructor introduced the basics of programming and assigned comprehensive tasks to guide students through collaborative work. “Design A Simple Calculator” and “Design A Complex Calculator” were given as collaborative programming tasks by the teacher. Both groups were given five weeks to complete the two tasks. The experimental group used the MCP approach, while the students in the control group engaged in conventional computer-supported collaborative learning without the metacognitive questions module. All students finished the post-test and post-questionnaire by week seven.


Fig. 3. Experimental design flowchart.

5 Results

Analysis of covariance (ANCOVA) was used to assess the influence of the two teaching approaches on students' learning achievement and computational thinking.

5.1 Learning Achievement
The homogeneity of the regression slopes was confirmed (F = 0.27, p = 0.61 > 0.05), indicating that ANCOVA could proceed. In addition, Levene's test of homogeneity of variance was not violated (F = 0.28, p = 0.60 > 0.05), meaning that the error variances of the two groups did not differ significantly. The results of the ANCOVA are shown in Table 2. The learning achievement of the experimental group was significantly higher than that of the control group (F = 7.76, p = 0.007 < 0.01), with a medium effect size (η2 = 0.08 > 0.059) [28]. These results indicate that the MCP approach can improve students' computer programming knowledge.

Table 2. Results of a one-way ANCOVA on students' learning achievement

Groups              N    Mean   SD     Adjusted mean  SE    F       η2
Experimental group  41   58.05  21.75  59.87          3.21  7.76**  0.08
Control group       47   49.15  23.05  47.56          2.99

** p < 0.01
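Readers who wish to replicate this kind of analysis can run an equivalent one-way ANCOVA (post-test score by group, with the pre-test as covariate) in Python. The sketch below uses scipy and statsmodels with illustrative data and invented column names; it is not the authors' analysis script, and the same call pattern applies to the computational thinking analysis in Sect. 5.2.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical layout: one row per student with group, pre-test and post-test.
df = pd.DataFrame({
    "group": ["exp"] * 4 + ["ctrl"] * 4,
    "pre":   [40, 55, 60, 35, 42, 50, 38, 47],
    "post":  [62, 70, 75, 58, 48, 55, 44, 52],
})

# Levene's test for homogeneity of variance on the dependent variable.
print(stats.levene(df.loc[df.group == "exp", "post"],
                   df.loc[df.group == "ctrl", "post"]))

# One-way ANCOVA: post-test by group, controlling for the pre-test covariate.
model = smf.ols("post ~ pre + C(group)", data=df).fit()
table = anova_lm(model, typ=2)
# Partial eta squared = SS_effect / (SS_effect + SS_error); the value shown for
# the Residual row itself is not meaningful.
table["eta_sq_partial"] = table["sum_sq"] / (table["sum_sq"] + table.loc["Residual", "sum_sq"])
print(table)
```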


5.2 Computational Thinking Tendency
The regression slope homogeneity test (F = 0.18, p = 0.67 > 0.05) did not reject the null hypothesis, and Levene's test (F = 1.85, p = 0.18 > 0.05) confirmed that the variances of the two groups did not differ significantly. The results of the ANCOVA are shown in Table 3. The adjusted means for the experimental and control groups were 3.63 and 3.02, respectively. The experimental group showed a significantly higher computational thinking tendency than the control group (F = 22.08, p < 0.001), with a large effect size (η2 = 0.21 > 0.15) [28]. In other words, the MCP approach can significantly improve students' computational thinking tendencies.

Table 3. Results of a one-way ANCOVA on computational thinking tendency

Groups              N    Mean  SD    Adjusted mean  SE    F         η2
Experimental group  41   3.02  0.58  3.01           0.09  22.08***  0.206
Control group       47   3.63  0.80  3.65           0.10

*** p < 0.001
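For completeness, the effect-size labels used in Sects. 5.1 and 5.2 follow the η2 benchmarks cited from Cohen [28]. A trivial helper that applies the thresholds quoted in the text might look like this (illustrative only; the 0.01 cut-off for a small effect is Cohen's conventional value):

```python
def eta_squared_label(eta_sq: float) -> str:
    # Thresholds as used in the text (cf. Cohen [28]):
    # 0.01 small, 0.059 medium, 0.15 large.
    if eta_sq >= 0.15:
        return "large"
    if eta_sq >= 0.059:
        return "medium"
    if eta_sq >= 0.01:
        return "small"
    return "negligible"

print(eta_squared_label(0.08))   # learning achievement   -> medium
print(eta_squared_label(0.206))  # computational thinking -> large
```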

6 Discussion and Conclusions

This study created an MCP system and investigated how it affected students' learning achievement and computational thinking. The results showed that students who learned with the MCP approach performed significantly better than those who learned with the conventional computer-supported collaborative programming approach. This agrees with earlier research showing that metacognitive strategies can improve learners' achievement [29]. In collaborative learning, learning behavior varies considerably because each student solves problems at a different pace and has their own ideas [30]; without guidance, students may interfere with one another. In contrast, with the guidance of metacognitive problem scaffolding, students can discuss within the same problem framework and focus on the learning content itself, making learning more effective. The experimental results also showed that students' computational thinking tendencies increased significantly after using the MCP system. Sternberg described the metacognitive processes of planning and decomposing a problem, which involve identifying the steps required to solve the problem and performing them in sequence [31]; this is potentially relevant to the abstraction, decomposition, and algorithmic-thinking components of computational thinking. Thus, a metacognitive-based collaborative learning system can help enhance students' computational thinking. Metacognition can assist students in problem-solving by strategically encoding the nature of the problem, creating a mental model or representation of its aspects, selecting the most suitable approaches, and recognizing and removing potential roadblocks [32].


The MCP system developed in this study effectively improves students' academic performance and computational thinking tendencies. Nevertheless, the study has limitations. First, the system's support for student collaboration needs further improvement. Second, the system's influence on other student outcomes, such as metacognitive tendencies and critical thinking, requires further investigation.

Funding. This work was supported by the National Science and Technology Council of the Republic of China [MOST 109-2511-H-216-001-MY3] and the Humanities and Social Science Project of the Ministry of Education of the People's Republic of China [21YJA880027].

References

1. Wong, G.K.-W., Cheung, H.-Y.: Exploring children's perceptions of developing twenty-first century skills through computational thinking and programming. Interact. Learn. Environ. 28(4), 438–450 (2018)
2. Lye, S.Y., Koh, J.H.L.: Review on teaching and learning of computational thinking through programming: what is next for K-12? Comput. Hum. Behav. 41, 51–61 (2014)
3. Qian, Y., Lehman, J.: Students' misconceptions and other difficulties in introductory programming: a literature review. ACM Trans. Comput. Educ. (TOCE) 18(1), 1–24 (2017)
4. Iskrenovic-Momcilovic, O.: Pair programming with scratch. Educ. Inf. Technol. 24(5), 2943–2952 (2019). https://doi.org/10.1007/s10639-019-09905-3
5. Kreijns, K., Kirschner, P.A., Jochems, W.: Identifying the pitfalls for social interaction in computer-supported collaborative learning environments: a review of the research. Comput. Hum. Behav. 19(3), 335–353 (2003)
6. Hadwin, A., Oshige, M.: Self-regulation, coregulation, and socially shared regulation: exploring perspectives of social in self-regulated learning theory. Teach. Coll. Rec. 113(2), 240–264 (2011)
7. Pintrich, P.R., Smith, D.A.F., Garcia, T., McKeachie, W.J.: A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ). National Center for Research to Improve Postsecondary Teaching and Learning, MI (1991)
8. Kwon, K., Hong, R.-Y., Laffey, J.M.: The educational impact of metacognitive group coordination in computer-supported collaborative learning. Comput. Hum. Behav. 29(4), 1271–1281 (2013)
9. Hadwin, A.F., Bakhtiar, A., Miller, M.: Challenges in online collaboration: effects of scripting shared task perceptions. Int. J. Comput.-Support. Collab. Learn. 13(3), 301–329 (2018). https://doi.org/10.1007/s11412-018-9279-9
10. Wing, J.M.: Computational thinking. Commun. ACM 49(3), 33–35 (2006)
11. Shute, V.J., Sun, C., Asbell-Clarke, J.: Demystifying computational thinking. Educ. Res. Rev. 22, 142–158 (2017)
12. Akinola, S.O.: Computer programming skill and gender difference: an empirical study. Am. J. Sci. Ind. Res. 7(1), 1–9 (2015)
13. Goel, S., Kathuria, V.: A novel approach for collaborative pair programming. J. Inf. Technol. Educ. Res. 9, 183–196 (2010)
14. Wei, X., Lin, L., Meng, N., Tan, W., Kong, S.-C., Kinshuk: The effectiveness of partial pair programming on elementary school students' computational thinking skills and self-efficacy. Comput. Educ. 160, 104023 (2021)
15. Webb, N.M., Nemer, K.M., Ing, M.: Small-group reflections: parallels between teacher discourse and student behavior in peer-directed groups. J. Learn. Sci. 15(1), 63–119 (2006)


16. Baker, T., Clark, J.: Cooperative learning – a double-edged sword: a cooperative learning model for use with diverse student groups. Intercult. Educ. 21(3), 257–268 (2010)
17. Zhong, B., Wang, Q., Chen, J.: The impact of social factors on pair programming in a primary school. Comput. Hum. Behav. 64, 423–431 (2016)
18. Flavell, J.H.: Metacognition and cognitive monitoring: a new area of cognitive-developmental inquiry. Am. Psychol. 34(10), 906–911 (1979)
19. Baker, L., Brown, A.L.: Metacognitive skills and reading. In: Pearson, P.D., Kamil, M., Barr, R., Mosenthal, P. (eds.) Handbook of Research in Reading, vol. 1, pp. 353–395. Longman, New York (1984)
20. McCormick, C.B.: Metacognition and learning. In: Weiner, I.B., Freedheim, D.K. (eds.) Handbook of Psychology: Educational Psychology, pp. 79–102. Wiley, New Jersey (2003)
21. Cleary, T.J., Zimmerman, B.J.: Self-regulation differences during athletic practice by experts, non-experts, and novices. J. Appl. Sport Psychol. 13(2), 185–206 (2001)
22. Schraw, G., Moshman, D.: Metacognitive theories. Educ. Psychol. Rev. 7(4), 351–371 (1995)
23. Dindar, M., Järvelä, S., Järvenoja, H.: Interplay of metacognitive experiences and performance in collaborative problem solving. Comput. Educ. 154, 103922 (2020)
24. Jeong, H., Hmelo-Silver, C.E.: Seven affordances of computer-supported collaborative learning: how to support collaborative learning? How can technologies help? Educ. Psychol. 51(2), 247–265 (2016)
25. Cho, K.-L., Jonassen, D.H.: The effects of argumentation scaffolds on argumentation and problem solving. Educ. Tech. Res. Dev. 50(3), 5–22 (2002)
26. Kramarski, B., Mevarech, Z.R., Arami, M.: The effects of metacognitive instruction on solving mathematical authentic tasks. Educ. Stud. Math. 49, 225–250 (2002)
27. Hwang, G.J., Li, K.C., Lai, C.L.: Trends and strategies for conducting effective STEM research and applications: a mobile and ubiquitous learning perspective. Int. J. Mob. Learn. Organ. 14(2), 161–183 (2020)
28. Cohen, J.: Statistical Power Analysis for the Behavioral Sciences. L. Erlbaum Associates, Hillsdale (1988)
29. Öztürk, M.: An embedded mixed method study on teaching algebraic expressions using metacognition-based training. Thinking Skills Creativity 39, 100787 (2021)
30. Hwang, W.Y., Shadiev, R., Wang, C.Y., Huang, Z.H.: A pilot study of cooperative programming learning behavior and its relationship with students' learning performance. Comput. Educ. 58(4), 1267–1281 (2012)
31. Sternberg, R.J.: Sketch of a componential subtheory of human intelligence. Behav. Brain Sci. 3(4), 573–584 (1980)
32. Davidson, J.E., Sternberg, R.J.: Smart problem solving: how metacognition helps. In: Hacker, D.J., Dunlosky, J., Graesser, A.C. (eds.) Metacognition in Educational Theory and Practice, pp. 47–68. Routledge, Abingdon (1998)

Facial AI and Data Mining-Based Testing System in the Post-pandemic Era

Ihao Chen1, Yueh-Hsia Huang2, Hao-Chiang Lin3, and Chun-Yi Lu1(B)

1 Department of Information Management, National Penghu University of Science and Technology, Magong, Taiwan
[email protected]
2 Department of International Trade, Chinese Culture University, Taipei, Taiwan
3 Department of Information and Technology, National University of Tainan, Tainan City, Taiwan

Abstract. Amid the rise of the post-pandemic era, online examinations have gained traction across educational levels, assessing students' learning outcomes similarly to paper-based methods. While offering opportunities for repeated practice, online testing presents the challenge of proxy testing. This study constructs a question bank and examination management system for teachers, providing students with a platform for self-reflection and post-exam analysis. Additionally, an AI algorithm suite records facial expressions as raw data for in-depth exploration and big data analysis. Employing SAS Enterprise Miner and decision tree algorithms for supervised learning, the research conducts data mining with three targets: facial expressions, response outcomes, and unanswered questions, establishing an innovative application environment for online testing systems in the post-pandemic era.

Keywords: Big Data · Emotion Data Mining · Affective Tutoring Testing System (ATTS)

1 Introduction of Affective Tutoring Testing System

The primary purpose of examinations is to evaluate students' learning outcomes. Traditional paper-based tests have several drawbacks: teachers must spend time devising questions and grading exams, and students receive test results without explanations for their mistakes. With technological advancements and the shift to remote learning due to the COVID-19 pandemic, many teachers are adopting online testing for midterms and finals. We therefore aim to create a testing platform on which students can practice for licensure exams, which typically provide extensive question banks for repeated practice and assessment. Compared to traditional paper-based tests, this approach eliminates these shortcomings, allowing students to review questions anytime and anywhere without time or spatial constraints.
Additionally, our team has identified two issues. First, regardless of whether a traditional paper-based or an online testing method is used, multiple-choice questions
can lead to students guessing answers and scoring points by chance, or "exam luck." For students, guessing at questions they do not know, in the absence of a penalty for incorrect answers, has a positive expected value: they lose nothing if they guess incorrectly and gain points if they guess correctly. Second, traditional schools may become breeding grounds for disease transmission in the post-pandemic era, leading to more home-based learning and testing; as a result, teachers may be uncertain whether the test-taker is the actual student or a proxy. Our team has developed the Affective Tutoring Testing System (ATTS) to address these issues. The system uses Google GCP's OAuth API authentication mode (front-end student authentication) in conjunction with Microsoft ASP.Net Identity (back-end administrator login authentication) and is developed using the mature ASP.Net MVC pattern. The development process also incorporates AI-based facial emotion recognition to help teachers assess students' understanding of the questions and their confidence in answering, based on their emotions.
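As a purely illustrative sketch of the kind of front-end login check described above, a server could verify a Google-issued ID token with the google-auth library and then match the e-mail address against the exam roster, as below. The client ID placeholder, function name, and roster lookup are our own assumptions, not the ATTS source code (which is built on ASP.Net).

```python
# pip install google-auth
from google.oauth2 import id_token
from google.auth.transport import requests as google_requests

CLIENT_ID = "YOUR_GCP_OAUTH_CLIENT_ID.apps.googleusercontent.com"  # placeholder

def authenticate_examinee(token: str, registered_emails: set) -> str | None:
    """Verify a Google ID token and check the Gmail address against the roster."""
    try:
        claims = id_token.verify_oauth2_token(token, google_requests.Request(), CLIENT_ID)
    except ValueError:
        return None                      # invalid or expired token
    email = claims.get("email")
    if email and claims.get("email_verified") and email in registered_emails:
        return email                     # eligible to take the exam
    return None                          # logged in, but not on the exam roster
```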

2 Literature Review

In 2001, Kort introduced the Learning Spiral Model, which illustrated the impact of affective states on learning (Kort et al. 2001); in other words, students' emotional states play a significant role in the learning process. The Kort model, however, focuses on identifying, analyzing, and providing feedback on emotions throughout the learning process. Subsequent research therefore proposed an affective computing-based intelligent tutoring system model: one study examined the importance of ten variables in the Affective Tutoring System (ATS) and divided them into three dimensions (learner, agent tutor, and technology) to address the lack of emotional interaction in e-learning (Mao and Li 2010). Affective states have also been applied to understanding the emotional experiences of children with Attention Deficit Hyperactivity Disorder (ADHD) symptoms and to assisting their learning and development (Martinez et al. 2016), and they have been used to define learning content by detecting emotional states during digital learning (Shen et al. 2009). Over the past decade, research on detecting and applying students' learning emotions has been thriving, for example by sensing students' chair and head movements (Woolf et al. 2009). With technological advancements and the development of AI, ATSs have further integrated physiological data such as heart rate, blood pressure, skin moisture, sweating, temperature, and conductivity. By analyzing students' self-assessments and self-esteem levels, an ATS can evaluate students' learning levels, stress, interests, productivity, and learning styles, and then implement personalized learning processes such as selecting customized learning materials and providing personalized recommendations that better suit learners' needs (Petrovica et al. 2017). An ATS can also record students' learning processes in each teaching segment for use in the teacher's instructional environment (Wang and Lin 2017). However, emotion recognition may face data-accuracy challenges when information is collected from different sensors. The 2017 study by Petrovica and colleagues not only introduced several methods for collecting emotional data but also pointed out the accuracy issues faced by emotion-sensing data collection (Petrovica et al. 2017). That paper also analyzed the Self-Assessment Manikin (SAM) and discussed its potential for development.

3 System Architecture and Operation Mode

Consolidating the research above, this study proposes an Affective Tutoring Testing System (ATTS) that incorporates an emotion-aware testing system and a question bank. The question bank is sourced from past International Trade Certification Examination papers; the examination is organized by the Ministry of Economic Affairs in Taiwan and the Taipei Importers and Exporters Association. The ATTS collects facial expression data from students as they respond to each question, up to the end of the test, and uses the data to evaluate their learning outcomes. The study employs a commercially available API component backed by a professional deep-learning engine, which can enhance the accuracy of facial expression detection (https://www.affectiva.com/). Moreover, instead of traditional methods such as ANOVA for statistical validation, this study analyzes the correlation between students and individual questions, or the test as a whole, through big data analysis of the collected emotional response data. By developing this learning assessment system, the study aims to achieve the following objectives:

1. Provide a system for teachers to create and manage question banks, and an information platform to monitor student learning outcomes.
2. Offer students an instant assessment platform that provides analysis reports on their weaknesses in answering questions.
3. Record facial emotion data during the test as raw data for data exploration and AI training, in order to determine whether students truly understand the questions or rely on guessing.
4. Utilize Google GCP's cloud-based OAuth authentication mechanism, allowing the testing mode to be activated at specific times. Test takers only need a Google account to log in and verify their identity, reducing proxy test-taking during remote exams in the post-pandemic era. Since each person's Google account is tied to numerous services, students will weigh the risks and opportunity costs of sharing their credentials with others. In addition, if the Google account is used on an unfamiliar computer, security measures requiring text-message and e-mail verification are triggered. These features increase the difficulty of cheating, prolong the time such behavior requires, and ultimately lead to delays and the expiration of the test period.
5. Let teachers access the back-end system to review student responses and their emotional states. By utilizing big data analysis tools, the testing system maximizes the benefits of assessment.

3.1 System Framework

The system used in this research is divided into five main parts:

1. Online Examination Subsystem (OES): provides a platform for immediate testing and evaluation while simultaneously recording the facial emotion analysis results corresponding to each answered question. These results serve as raw data for subsequent big data analysis, which measures the correlation between students' emotions and the questions they answer and provides a post-exam weakness analysis feature for students.
2. Question Bank Management Subsystem (QBM): a back-end feature designed primarily for multiple-choice questions. Considering the grouped-question format of some certification exams, the system also supports grouped questions (i.e., one question containing N sub-questions, with a customizable order of appearance). The system also allows image storage for questions that require visual explanations; images are stored in SQL Server Image fields in binary format rather than in the website directory, which facilitates automatic database backups via the SQL Agent mechanism.
3. Examination Paper Management Subsystem (EPM): a test paper management system based on shared question banks. For each test, administrators (teachers) can choose questions from the bank or generate papers automatically, and assign scores to the generated papers. Test papers define the start and end dates of the exam as well as the total time (in minutes) allowed; anyone exceeding the time limit is forcibly disconnected.
4. Test Record Management Subsystem (TRM): handles the online processing and storage of test and evaluation results.
5. Student Data Management Subsystem (SDM): test-takers (students) log in using Google OAuth authentication with their Gmail accounts; our team's API hosted on the Google Cloud Platform performs the authentication. Once verified, the system compares the authenticated test taker's Gmail address with the system database; if the address exists, the student can take the exam. Conversely, even if login succeeds, ineligible users are denied access. The SDM thus manages student data (reviewing, modifying, or deleting online student records) and ensures the legitimacy of test-takers' login attempts.

3.2 AI Emotion Recognition Implementation

For facial AI emotion recognition, this project utilizes the package provided by Affectiva (Mikhail and El Kaliouby 2009); for more information on the package, please refer to the Affectiva official website. During implementation, our team spent significant time addressing the following issue: how to determine which question the student's current facial emotion data belong to. During remote testing, problems can arise in recording emotional data if students become distracted, fall asleep, give up, or leave their seats halfway through the test. Figure 1 shows the facial emotion detection screen during the answering process, and Fig. 2 presents the list of file names corresponding to each participant's testing session in the current system; each file holds the emotional data for one test taker's exam number. Approximately 400 such files have been collected to date.
Each text file contains the following fields: Primary Key Value, Timestamp, Joy (Delight), Sadness (Sorrow), Disgust (Revulsion), Contempt (Disdain), Anger (Rage), Fear (Terror), Surprise (Astonishment), Valence (Anticipation), Engagement (Interest), Smirk (Snicker), Eye Widen (Eyes Widening), and Attention (Focus). We also record data on facial expressions, including Inner-Brow Raise (eyebrows move upward on the inside), Brow Raise (eyebrows lift up), Brow Furrow (wrinkling of the forehead), Nose Wrinkle (creasing of the nose), Upper Lip Raise (elevation of the upper lip), Chin Raise (upward movement of the chin), Lip Pucker (puckering of the lips), Lip Press (compression of the lips), Lip Suck (holding the lips inward), Mouth Open (opening the mouth), Eye Closure (closing the eyes), Lid Tighten (narrowing of the eyes), Jaw Drop (lowering of the jaw), Dimple (formation of dimples), Cheek Raise (lifting of the cheeks), and Lip Stretch (widening of the lips).
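To give a concrete picture of how one of these per-examinee files could be processed, the sketch below loads a file into a pandas DataFrame. The delimiter, column names, and filtering step are assumptions made for illustration; the actual ATTS file format is not specified here.

```python
import pandas as pd

# Assumed layout: one comma-separated record per detection, in the field order
# listed above (primary key, timestamp, 12 emotion metrics, 16 expression metrics).
COLUMNS = (
    ["record_id", "timestamp",
     "joy", "sadness", "disgust", "contempt", "anger", "fear", "surprise",
     "valence", "engagement", "smirk", "eye_widen", "attention"]
    + ["inner_brow_raise", "brow_raise", "brow_furrow", "nose_wrinkle",
       "upper_lip_raise", "chin_raise", "lip_pucker", "lip_press", "lip_suck",
       "mouth_open", "eye_closure", "lid_tighten", "jaw_drop", "dimple",
       "cheek_raise", "lip_stretch"]
)

def load_emotion_log(path: str) -> pd.DataFrame:
    df = pd.read_csv(path, header=None, names=COLUMNS)
    # Illustrative filtering: keep only rows where at least one metric is non-zero,
    # i.e. a face was actually detected.
    return df[(df[COLUMNS[2:]] != 0).any(axis=1)]

# Example usage (hypothetical file name):
# log = load_emotion_log("exam_12345.txt")
# print(log[["attention", "joy"]].mean())
```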

Fig. 1. AI-based facial emotion recognition interface


Fig. 2. Emotion Filename List

4 Descriptive Statistics and Facial Emotion Data Mining

4.1 Emotion Expression Descriptive Statistics
In this project, the standard emotion icons are defined and encoded as shown in Table 1. A total of 14,089 data points were collected, representing the number of valid responses and detections from all participants. The preliminary symbol statistics show that students mostly displayed the general expression "E" when answering questions, followed by the expressions "C" and "F". Six sets of test papers were provided for students to practice on repeatedly; the statistics of the responses are shown in Table 2. The paper with ID 1006 was answered most often, a total of 4,550 times. The overall test completion time averaged 661 s (approximately 11 min); the shortest completion time was 97 s and the longest 2,557 s. The large standard deviation suggests that the test discriminated among participants, with each person taking a different amount of time to think about and answer each question.

4.2 Facial Emotion Data Mining
This study used SAS Enterprise Miner for mining the emotional data. A decision tree algorithm was applied for supervised learning, and data mining was conducted with three different targets: A) facial expressions, B) answer results, and C) unanswered questions. Figure 3 depicts the mining process using facial expression symbols as the target. The results are as follows:

Table 1. Emotion Expression Symbol Statistics (the emotion-icon column of the original table is not reproduced here).

Symbol code  Frequency  Percentage
A            258        1.83
B            83         0.59
C            439        3.12
D            96         0.68
E            12,140     86.17
F            338        2.40
G            85         0.60
H            49         0.35
I            237        1.68
J            45         0.32
K            202        1.43
L            117        0.83
Total        14,089     100.00

Table 2. Statistics on the Number of Responses for Each Exam

Exam ID  Frequency  Percentage  Cumulative frequency  Cumulative percentage
1006     4550       32.29       4550                  32.29
1007     3807       27.02       8357                  59.32
1008     3947       28.01       12304                 87.33
2008     757        5.37        13061                 92.70
2009     507        3.60        13568                 96.30
2010     521        3.70        14089                 100.00
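The figures in Tables 1 and 2 and the completion-time summary in Sect. 4.1 are ordinary descriptive statistics. As a hypothetical illustration (the column names and toy records below are ours, and the project itself used SAS), they can be reproduced with a few pandas calls; the decision-tree results follow.

```python
import pandas as pd

# Assumed layout: one row per valid detection, with the coded emotion symbol,
# the exam id, and (per attempt) the total completion time in seconds.
records = pd.DataFrame({
    "symbol":       ["E", "E", "C", "F", "E", "A"],
    "exam_id":      [1006, 1006, 1007, 1008, 1006, 2008],
    "completion_s": [640, 640, 820, 97, 640, 2557],
})

# Frequency and percentage of each emotion symbol (cf. Table 1).
counts = records["symbol"].value_counts()
print(pd.DataFrame({"Frequency": counts,
                    "Percentage": (counts / counts.sum() * 100).round(2)}))

# Number of detections per exam paper (cf. Table 2) and completion-time summary.
print(records["exam_id"].value_counts().sort_index())
print(records.groupby("exam_id")["completion_s"].first().describe())
```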

Fig. 3. SAS Enterprise Miner Tools Decision Tree Mining

A. If emotion is the mining target, the decision path is Joy → Contempt → Lip Pucker → Surprise → Lip Corner Depressor.
B. If the answer result is the target, there are two parts:
B1. If the target is correct answers, the decision path is determined by the question number (2844, 2715, 2717) → emotion symbol (E, K, F, I, C, H, G) → answering time between 168 and 597 s → the Brow Raise expression.
B2. If the target is wrong answers, the first branch point is the emotion symbol (not E, K, F, I, C, H, G). Two branches then follow, split on answering time (594 s). The first group, with answering times between 168 and 594 s, corresponds to question numbers 2542, 2543, and 2544 (all answered incorrectly). The second group, with answering times over 634 s and under 1,012 s, corresponds to the emotion label "attention" but has no corresponding question number in the system; we infer that these participants were thinking deeply and ultimately did not select any answer.
C. If the target is unanswered questions, the decision path is Brow Furrow → Contempt → Valence → Attention → Eye Widen.

Based on the data mining results, when the mining target is facial expressions, the order of emotional changes during answering is Joy → Contempt → Lip Pucker → Surprise → Lip Corner Depressor. The process starts with a happy emotion and then turns negative, except for the positive feeling of surprise; it can be inferred that the participants did not enjoy this type of test. The positive emotion detected at the Surprise stage may reflect relief at finishing the test, while Lip Corner Depressor may be the immediate reaction to poor test results. Further analysis shows that when the target is answer accuracy, a correct answer is associated with the positive Brow Raise expression, while an incorrect answer is associated with a lengthy thinking process. Finally, if a participant did not answer a question, it may be because they had no idea about it and were staring at it wide-eyed.
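The mining itself was performed in SAS Enterprise Miner. As a rough Python analogue, a supervised decision tree over the same kind of targets can be sketched with scikit-learn as below; the feature table, column names, and toy values are assumptions, not the study's data or the SAS model.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Assumed feature table: one row per answered question with a few emotion
# metrics, the answering time, and the outcome label (target B: answer result).
data = pd.DataFrame({
    "joy":           [0.8, 0.1, 0.4, 0.0, 0.6, 0.2],
    "contempt":      [0.0, 0.5, 0.1, 0.7, 0.0, 0.3],
    "brow_raise":    [0.2, 0.0, 0.6, 0.1, 0.5, 0.0],
    "answer_time_s": [150, 640, 300, 1012, 200, 700],
    "correct":       [1, 0, 1, 0, 1, 0],
})

X = data.drop(columns="correct")
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, data["correct"])
# Print a human-readable version of the learned decision path.
print(export_text(tree, feature_names=list(X.columns)))
```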

5 Conclusion and Further Work

5.1 Conclusion
Owing to the impact of the pandemic over the past two years, distance learning can make traditional in-person testing impractical. The value of the system developed here therefore lies in several factors. First, it allows users to practice repeatedly before taking certification exams.


Second, it provides a post-test weakness analysis for students and teachers to reference. Third, using AI algorithms to detect and record users' facial expressions during testing provides raw data for further exploration and big data analysis. Finally, the system uses the Google GCP cloud platform's OAuth 2.0 mechanism to make it harder for remote test-takers to have someone else take the test for them. Moreover, this study recorded 12 emotion and 16 facial-expression measures. Even if the manager (teacher) is unfamiliar with big data analysis techniques or theories, they can easily see whether the answer rate and score correlate positively or negatively with the students' emotional expressions at that moment. Students can likewise reflect after the test on why their facial expressions may have changed from positive to negative when they saw certain questions, and on their testing mentality. In terms of data analysis, this study used SAS Enterprise Miner for data mining and decision tree analysis. Decision trees were built on the emotion symbols and on whether each question was answered correctly or left unanswered, yielding preliminary inferential results: for instance, positive emotions were observed when questions were answered correctly, while longer thinking times accompanied incorrect answers. This analysis model can also provide initial insights for future related research, such as identifying question difficulty or analyzing students' emotional states during testing.

5.2 Further Work
The development timeline of this system was limited to one year, which led to some limitations. For example, the system does not have a facial recognition feature to prevent cheating during exams (which could raise privacy concerns and legal issues); a possible workaround is to conduct exams in a designated computer classroom with a teacher present to monitor. The system could also implement a countdown timer for each question to control answering time and prevent test-takers from flipping through books for answers.

Acknowledgment. This research was supported by the National Science and Technology Council, Taiwan, under project numbers MOST 109-2511-H-024-005-MY3, 111-2410-H-024-001-MY2, and MOST PBM1110317 (2022-08-01~2023-07-31). The authors would like to thank the students from the Department of Information Management at the National Penghu University of Science and Technology, including Wang Shu-Chen, Tsai Jia-Rong, Lin Yu-Hung, Lu Jian-Cheng, Zhuo Xu-Hui, Zhang Zhong-Jun, Peng Jing-Chun, and others, for their assistance in system development and testing.

References

Kort, B., Reilly, R., Picard, R.: An affective model of interplay between emotions and learning: reengineering educational pedagogy-building a learning companion. In: IEEE International Conference on Advanced Learning Technologies 2001, pp. 43–46. IEEE, Madison, WI, USA (2001)


Mao, X., Li, Z.: Agent based affective tutoring systems: a pilot study. Comput. Educ. 55, 202–208 (2010)
Martinez, F., Barraza, C., González, N., González, J.: KAPEAN: understanding affective states of children with ADHD. Educ. Technol. Soc. 19(2), 18–28 (2016)
Mikhail, M., El Kaliouby, R.: Detection of asymmetric eye action units in spontaneous videos. In: 16th IEEE International Conference on Image Processing (ICIP), pp. 3557–3560 (2009)
Petrovica, S., Anohina-Naumeca, A., Ekenel, H.: Emotion recognition in affective tutoring systems: collection of ground-truth data. Procedia Comput. Sci. 104, 437–444 (2017)
Shen, L., Wang, M., Shen, R.: Affective e-learning: using "emotional" data to improve learning in pervasive learning environment. Educ. Technol. Soc. 12(2), 176–189 (2009)
Wang, C.-H., Lin, H.-C.: Constructing an affective tutoring system for designing course learning and evaluation. J. Educ. Comput. Res. 55(8), 1111–1128 (2017)
Woolf, B., Burleson, W., Arroyo, I., Dragon, T., Cooper, D., Picard, R.: Affect-aware tutors: recognizing and responding to student affect. Int. J. Learn. Technol. 4, 129–164 (2009). https://doi.org/10.1504/IJLT.2009.028804

Author Index

A
Abdykhalykova, Akzhan 232
Amrenova, Assel 232, 463
Andersen, Synnøve Thomassen 223
Anjarani, Shelia 542
Atamuratov, Dilshod 463
Aziez, Feisal 542

B
Baigunissova, Gulbarshin 232, 463
Bauters, Merja 403
Beisembayeva, Zhanargul 232

C
Carvalho, Diana 501
Catarino, Paula 375
Chang, Chi-Cheng 50, 163, 268
Chang, Li-Yun 513
Chang, Maiga 260
Chang, Shao-Yu 50
Chao, Yu-Ju 289
Chen, Dyi-Cheng 22
Chen, Gwo-Dong 187
Chen, Hsih-Yueh 421
Chen, Hsin-Chin 197
Chen, Hsueh-Chih 513
Chen, Ihao 644
Chen, Jessica H. F. 109
Chen, Kuan-Yu 57
Chen, Mu-Sheng 99
Chen, Nian-Shing 618
Chen, Pei-Zhen 513
Chen, Xiao-Wei 22
Chen, Yao-En 187
Chen, Ying-Tsung 22
Chen, Yu-Chieh 550
Cheng, Bo-Yuan 494
Cheng, Li-Chen 635
Cheng, Shu-Chen 130
Cheng, Yu-Ping 77, 130
Cheung, William Man-Yin 13
Chiang, I-Chin Nonie 67
Chiang, Ming-Yu 57
Chiang, Tosti H. C. 625
Chung, Fu-Ling 37, 260
Chung, Hsin-Hsuan 37

D
Dang, Chuanwen 463
Della Ventura, Michele 3
Deng, Wei-Lun 22
Ding, Ming Yuan 242

E
Elsa 587
Eybers, Sunet 385

F
Fayziev, Mirzaali 232, 463
Fodor, Szabina 326
Furqon, Miftahul 542

G
Ghenia, George 67
Gouws, Patricia 345
Guan, Zheng-Hong 315

H
Häkkinen, Päivi 87
Hattingh, Marié 356
Hawanti, Santhy 542
Hsieh, Yi-Zeng 197
Hsu, Hsueh-Cheng 597
Hsu, Tai-Ping 99
Hsu, Ting-Chia 99
Hsu, Wei-Chih 120
Huang, Chia-Nan 295
Huang, Shin-Ying 295
Huang, Tien-Chi 57
Huang, Yueh-Hsia 644
Huang, Yueh-Min 77, 130, 295, 441, 484
Hwang, Gwo-Haur 606
Hwang, Jan-Pan 197
Hwang, Wu-Yuin 67

J
Jakobsen, David 430

K
Kao, I-Lin 268
Kawasaki, Yuka 473
Keskitalo, Pigga 153
Kong, Siu Cheung 13
Korte, Satu-Maarit 13, 153
Kritzinger, Elmarie 345
Kuo, Yen-Ching 606

L
Lai, Yen-fang 523
Lai, Yu-Fu 67
Lan, Yu-Ju 37, 260
Lau, Chaak Ming 153
Lee, Hsin-Yu 77, 295
Li, Chien-Kuo 473
Li, Hsuan 618
Li, Jerry N. C. 315
Li, Pin-Hui 77
Li, Wei 635
Li, Wen-Ju 577
Liao, Min-Hsun 250
Lin, Chia-Ching 494
Lin, Chia-Ju 441
Lin, Chih-Huang 304
Lin, Hao-Chiang 644
Lin, Hao-Chiang Koong 567
Lin, Jim-Min 130, 587
Lin, Koong Hao-Chiang 577
Lin, Kuo-Hao 550
Lin, Shu-Min 37
Lin, Sunny S. J. 315
Lin, Yu-Hsuan 567
Liu, Fan-Chi 577
Liu, Wei-Shan 597
Lu, Chun-Yi 644
Lu, Li-Wen 577
Lu, Shang-Wei 22
Lu, Yen-Hsun 409
Lu, Yi-Chen 409

M
Maasilta, Mari 153
Martins, Paulo 501
Mets, Juri 403
Murti, Astrid Tiara 560

N
Nascimento, Maria M. 375
Nurtantyana, Rio 67

O
Øhrstrøm, Peter 430
Opanasenko, Yaroslav 451

P
Pedaste, Margus 87, 441, 451, 484
Peng, Jui-Chi 606
Pillay, Komla 367
Pöysä-Tarhonen, Johanna 87

R
Rampuengchit, Kiattisak 207
Rannastu-Avalos, Meeli 87
Rocha, Tânia 501
Rong, Jie-Yu 567
Rønningsbakk, Lisbet 279
Rossouw, Amore 174

S
Samat, Charuni 207
Sandnes, Frode Eika 143
Sarro-Olah, Bernadett 326
Shadiev, Narzikul 232, 463
Shadiev, Rustam 232, 463
Shih, Ru-Chu 494
Siiman, Leo A. 87, 451
Silitonga, Lusia Maryani 542, 587
Silva, Rui Manuel 501
Smuts, Hanlie 174, 356
Starčič, Andreja Istenič 77
Su, King-Dow 421
Suciati, Sri 587
Sumardiyani, Listyaning 560
Sung, Han-Yu 289
Syu, Chuan-Wei 50

T
Thorvaldsen, Steinar 430
Tsai, Chih-Yu 295
Tsai, Meng-Chang 197
Tsai, Ming-Hsiu Michelle 67
Tsai, Yun-Cheng 531
Tseng, Judy C. R. 635
Tseng, Pin-Hsiang 163
Tukenova, Natalya 463

V
Viriyavejakul, Chantana 336

W
Wang, Jen-Hang 187
Wang, Lixun 153
Wang, Tao-Hua 577
Wang, Wei-Sheng 484
Wang, Wei-Tsong 242
Weilbach, Lizette 356
Wen, Fu-Hsiang 120
Wen, Kuo-Cheng 22
Weng, Ting-Sheng 473
Wu, Ching-Lin 513
Wu, Ming-Chia 625
Wu, Po-Han 625
Wu, Tienhua 120
Wu, Ting-Ting 409, 441, 542, 560, 587, 597

X
Xu, Chen-Yin 163

Y
Yang, Ming 130
Yang, Su-Hang 187
Yeh, Yao-ming 523
Yen, Wan-Hsuan 268
Yi, Suping 463

Z
Zain, Dodi Siraj Muamar 542