Measuring and Developing Professional Competences in COMET: Method Manual (Technical and Vocational Education and Training: Issues, Concerns and Prospects, 33)
ISBN 981160956X, 9789811609565

This book is a detailed manual for the implementation of competence diagnostics in the field of vocational training.


Language: English · Pages: 579 [572] · Year: 2021


Table of contents :
Preface
Series Editors Introduction
Contents
Chapter 1: Introduction
1.1 The Possibilities and Limitations of Large-Scale Competence Diagnostics
1.2 Modelling Professional Competence
1.3 The Format of the Test Tasks
1.4 Modelling and Measuring Professional Identity and Professional Commitment
1.5 Modelling and Measuring the Competence of Teachers in Vocational Subjects
1.6 The Quality Criteria for Professional Competence Diagnostics and the Design of Tests
Chapter 2: Professional Competence as a Subject of Competence Diagnostics
2.1 Design Instead of Adaption
2.2 The Possibilities and Limitations of Large-Scale Competence Diagnostics (LS-CD)
2.2.1 Implicit Professional Knowledge (Tacit Knowledge)
2.2.2 Professional Competence (Employability)
2.2.3 Craftsmanship
2.2.4 Social and Key Competences
2.2.5 Abilities That Are Expressed in the Interactive Progression of the Work
Chapter 3: Categorial Framework for Modelling and Measuring Professional Competence
3.1 The Occupation Type of Societal Work
3.1.1 Employability or Professional Competence
3.1.2 Architecture of Parallel Educational Paths
3.1.3 Professional Validity of Competence Diagnostics
3.2 The Design of Work and Technology: Implications for the Modelling of Professional Competence
3.2.1 Professional Work Tasks and Professional Competence
3.3 Task Analyses: Identification of the Characteristic Professional Work Tasks
3.3.1 Professional Scientific Task Analyses Include
3.3.2 Identifying Professional Work Tasks: Expert Specialist Workshops (EFW)
Professional versus Experience-Based Description of Work Tasks
Participants: Expert Specialists
Researcher and Moderator
3.3.3 Professional Scientific Work Process Studies
Goals and Structure of Work Process Studies
The Steps of Occupational Scientific Work Process Studies (Table 3.3)
Definition and Formulation of Preliminary Research Questions and Hypotheses
Preparation of the Study: Approach to the Research Field
Implementation of the Work Process Study
The Action-Oriented Expert Discussion
Paraphrasing
Enquire, Reflect and Clarify
Qualitative Experimentation
Planned Explorative-Experimental Work Process Studies
Situative ad-hoc Experiments
Documentation of the Research Process: Tape and Memory Records
The Memory Records (Table 3.4)
Situation Film as Supplementary Documentation of the Working Reality
Evaluation of the Study
Qualitative Text and Material Analysis
3.4 Guiding Principles and Objectives of Vocational Education and Training
3.4.1 Professional 'Gestaltungskompetenz' (the Ability to Shape or Design One's Professional Future)
3.4.2 Design-Oriented Vocational Education
3.4.3 Professional Competence
3.5 Theories of Vocational Learning and Professional Development
3.5.1 The Novice-Expert Paradigm: Competence Development in Vocational Education
3.5.2 Work Process Knowledge
Tacit Knowledge (Polanyi, 1966b; Neuweg, 2000)
3.5.3 Practical Knowledge
Practical Terms and Practice Communities
3.5.4 Multiple Competence
Clarity/Presentation (K1)
Functionality (K2)
Sustainability (K3)
Efficiency/Effectiveness (K4)
Orientation on Business and Work Process (K5)
Social Compatibility (K6)
Environmental Compatibility (K7)
Creativity (K8)
Chapter 4: The COMET Competence Model
4.1 Requirements for Competence Modelling
4.2 The Levels of Professional Competence (Requirement Dimension)
4.2.1 Operationalisation of the Competence Criteria: Development of the Measurement Model
4.3 Structure of the Content Dimension
4.4 The Action Dimension
4.4.1 The Importance of Action Types
4.5 A Cross-Professional Structure of Vocational Competence
4.6 Extending the Competence Model: Implementing Planned Content
4.6.1 Operational Projects/Orders
4.6.2 The Expert Discussion
4.6.3 Rater/Examiner Training for Assessing the Practical Exam
4.7 Identity and Commitment: A Dimension of Professional Competence Development
4.7.1 Normative Fields of Reference for Commitment and Work Morale
4.7.2 Professional Ethics
4.7.3 Organisational versus Occupational Commitment
4.7.4 Construction of Scales to Capture Work-Related Identity and Commitment: Occupational Identity
4.7.5 Organisational Identity
4.7.6 Modelling the Connections Between Identity and Commitment
4.7.7 Occupational Commitment
4.7.8 Organisational Commitment
4.7.9 Work Ethics
4.7.10 Example of an Analysis of the Measurement Model (Performed by Johanna Kalvelage and Yingyi Zhou, Sect. 6.4)
4.7.11 International Comparisons
Chapter 5: Developing Open Test Tasks
5.1 Expert Specialist Workshops for Identifying Characteristic Professional Tasks
5.1.1 The Preparation of Expert Specialist Workshops
5.1.2 Implementation of the Workshop
5.1.3 External Validation
5.1.4 Evaluation of the Validation
5.2 An Open Test Format
5.2.1 Representativeness
5.2.2 Authenticity/Reality Reference
5.2.3 Difficulty
5.2.4 The Description of the Situation
5.2.5 Standards and Rules to be Complied with
5.3 Cross-Professional and Subject-Related Test Tasks
5.4 Test Arrangements for Related Vocational Training Courses with Different Levels of Qualification
5.4.1 The S II Test Arrangement
5.4.2 The Post-SII Test Arrangement
5.4.3 The Third Test Arrangement: Graduates of Professionally Qualifying Bachelor Programmes As Primary Test Groups
5.4.4 Validity of the Test Tasks for Different Training Courses and Test Arrangements
5.5 Description of the Solution Scopes
5.6 Evaluation and Choice of Test Tasks: The Pre-Test
5.6.1 Determining the Test Group(s)
5.6.2 Training of Test Task Authors
Identification of Professional Fields of Action
Didactical Evaluation and Revision of the Test Tasks
Rater Training and Rating
The Aims of Rater Training
The Trainers
The Participants in Rater Training
Organisation
5.6.3 Calculating the Finn Coefficient
Calculation of Interrater Reliability for the COMET Test Instruments
Fleiss' and Conger's Kappa
The Finn Coefficient
Intraclass Correlation Coefficient (ICC) for One-Way and Two-Way Models
Prospect
5.6.4 Rating Results
Rating Results (pre-test)
Example: Trainees for Shipping and Logistics Services (SLS)
5.6.5 Interviewing the Test Participants
Comprehensibility
Difficulty of the Test Tasks
Practical Relevance
5.6.6 Selection and Revision of Test Tasks and Solution Scopes
5.7 Test Quality Criteria
5.7.1 Objectivity
Objectivity of Implementation
Objectivity of Evaluation
Reliability (Credibleness)
Validity (Significance)
Validity of Content (Face Validity and Ecological Validity)
Criterion Validity
Construct Validity
5.8 Difficulty Level: A Problematic Quality Criterion for Test Tasks Intended to Measure Professional Competence
5.8.1 Standardised Test Tasks
5.8.2 Criteria-Oriented Test Tasks
Difficulty of Tasks in a Cross-Professional Comparison
Test-Theoretical Problems for Vocational Education
5.8.3 The Variation Coefficient V: A Benchmark for the Homogeneity of the Task Solution
5.8.4 Conclusion
Chapter 6: Psychometric Evaluation of the Competence and Measurement Model for Vocational Education and Training: COMET
6.1 What Makes it So Difficult to Measure Professional Competence?
6.1.1 Procedures Based on the Analysis of the Covariance Matrix
6.1.2 Mixed Distribution Models
6.2 Ensuring the Interrater Reliability of the COMET Test Procedure
6.2.1 Example: Securing Interrater Reliability (COMET Vol. I, Sect. 4.2, Birgitt Erdwien, Bernd Haasler)
6.3 Latent Class Analysis of the COMET Competence and Measurement Model
6.3.1 On the Connection between Test Behaviour and Personal Characteristics
6.3.2 On the Relationship Between the Structure and Modelling of Vocational Competences
6.3.3 Mathematical Properties of a Test Model
6.3.4 Characteristics of a Competence Model for Vocational Education and Training
6.3.5 Dilemma: Dependent Versus Independent Test Items
6.3.6 The Search for a Bridge Between Theory and Measurement
6.3.7 The Example COMET
6.3.8 Empirical Procedure
6.3.9 On the Reliability of the COMET Rating Procedure
6.3.10 On the Content Validity of the COMET Rating Dimensions
6.3.11 Population
6.3.12 Step 1: Determination of Interrater Reliability
6.3.13 Step 2: Sorting of Task Solutions
6.3.14 Step 3: Verification of the Homogeneity of the Competence Criteria
6.3.15 Step 4: Identification of Typical Competence Profiles
6.3.16 Distribution of Tasks Among the Competence Models
6.3.17 Step 5: Longitudinal Analysis of Competence Measurement
6.3.18 Discussion of the Results
6.3.19 Need for Research
6.3.20 Prospect
6.4 Confirmatory Factor Analysis
6.4.1 The I-D Model (Chap. 4, Fig. 4.5)
6.4.2 Original Scales
6.4.3 Confirmatory Factor Analysis for the Original Model
6.4.4 Explanations
6.4.5 Modification
6.4.6 Discussion
6.4.7 Explorative Factor Analysis
6.4.8 Results
6.4.9 Discussion
6.4.10 Discussion
6.4.11 Discussion
6.4.12 Discussion
6.4.13 Overall Discussion EFA
6.4.14 Considerations for Further Action
6.5 Validity and Interrater Reliability in the Intercultural Application of COMET Competence Diagnostics
6.5.1 Method
6.5.2 Preparation and Translation
6.5.3 Cultural Adaption
6.5.4 Rater Training
6.5.5 Analysis of the Reliability and Validity of the Evaluation Item
6.5.6 Results
Effect of Rater Training on Increasing Interrater Reliability
Analysis of the Structural Validity of the Evaluation Items
Analysis of the Reliability of the Evaluation Items
Discussion
Chapter 7: Conducting Tests and Examinations
7.1 How Competence Diagnostics and Testing are Connected
7.1.1 Reviews of Professional Competence: The New Examination Practice
Characteristics of 'Measuring' and 'Testing' Professional Skills
7.1.2 Context Reference: Work and Business Processes
7.1.3 Levelling of Test Results
7.2 The Measurement of Professional Competence
7.2.1 COMET as the Basis for a Competence-Based Examination
Examination Variant of the Examination Structure (Taking into Account the COMET Examination Procedure)
7.2.2 Examination Format for the Extended Final Examination (GAP)
The Operational Order
Assessment of the Examination Result Part A (Operational Order/Practical Task)
Expert Discussion
Practical Task
Holistic Tasks
Evaluation of the Task Solutions by the Examiners (Dual or Team Rating)
7.2.3 Procedure for the 'Operational Order' (OO) Examination
Application Process
Approval of the Application
Procedure for the OO Examination (Fig. 7.11)
7.2.4 The Examination Result
Total Score (TS)
Conclusion
7.3 Interrelationship Analyses Between Examinations and Competence Diagnostics for Automotive Mechatronics Technicians
7.3.1 Comparison of the Examination and (COMET) Test Results
7.3.2 Results of the Statistical Influence Analysis
Interpretation of the Results
Overall Result (Final Examination): Total Score (COMET)
Practical Part of the Examination
Written Examination
7.3.3 Conclusion
7.4 Measuring the Test Motivation
7.4.1 Preliminary Study: The Time Scope of the Test as Influence on Test Motivation
Results of the Survey on Test Motivation (cf. COMET Vol. III, Sect. 7.7)
Comparison of the Results of the First and Second Test Item
Comparison of the Processing Time for the First and Second Test Item
7.4.2 Explorative Factor Analysis of the Relationship Between Test Motivation and Test Performance
Capturing the Test Motivation
Questionnaire for Recording Test Motivation
The Test Motivational Model: Data Structure of Motivational Variables in the COMET Test Procedure
Sample
The Connection between Test Motivation and Test Performance
Question and Hypothesis
Method
Outcomes
Discussion
7.4.3 Influence of Processing Time on Overall Performance
Examples: Test Motivation of Nursing Students (Switzerland)
Representation of Factor Values in the form of a Matrix
Example: Test Motivation of Electronics Engineers
Differences Between (COMET) Test and Exam Motivation
7.4.4 Results of the Comparative Study: Test and Examination Motivation among Motor Vehicle Trainees
7.4.5 The Cultural Dimension of Test Motivation
7.4.6 Conclusion
7.5 Planning and Executing COMET Projects
7.5.1 Research Design and Research Strategies
Hypothesis-Driven versus Discovering Research
Example: Discovering the Phenomenon of Stagnation in Competence Development (Rauner, Piening, and Zhou, 2014)
Looking for an Explanation for the Stagnation Hypothesis: Longitudinal Studies
Industrial Mechanic (Hesse)
7.5.2 Defining the Project Design
Agreement on Project Objectives
The Establishment of a Project Organisation
Determining the Participants (Sample)
Example 1: COMET Electronics Technician (China)
Representativeness
Example 2: Representativeness in the PISA Project (Prenzel et al., 2004, Sect. 2.4).
Example 3: Automotive Mechatronics Technician (NRW)
Example 4: Realisation of Representativeness and Situatedness
7.5.3 Selecting and Developing the Test Items, the Test Documents for the 'Commitment' Survey and Performing the Context Analyses
Survey of Trainees/Students
Characteristics of In-Company Training
Characteristics of Vocational Schools in Dual Vocational Education and Training
Context Analyses of the Scientific Support
Quality Diagrams
The Benefit of the Quality Diagram for Presenting and Interpreting the Results of the Context Analyses
Capturing the Quality of Training at a Glance
Illustration of Heterogeneity
Data Protection and Coding of the Personal Data of the Test Persons
7.5.4 Informing about the Objectives and the Implementation of the Test
Example: COMET Project Electronics Technician (Hesse)
Time Schedule of the Project and the Problem of Test Duration
Test Scope
Online Rating
7.5.5 Research as a Cooperative Project between Science and Practice
Interpretation of Test Results in the Context of Feedback Workshops
Basis of the Identified Examples of Good and Best Practice
7.5.6 Transfer Activities
Documentation and Publication of Project Results
Chapter 8: Evaluating and Presenting the Test Results
8.1 Classification of Individual Performance in Professional Competence Levels
8.1.1 Determination of the Scores for the Three Competence Dimensions
8.1.2 Sub-Competences, Competence Dimensions and Competence Levels
8.1.3 Classification of Individual Performance in Professional Competence Levels
8.2 Graphical Representation of the Test Results
8.2.1 Competence Levels
8.2.2 Differentiation according to Knowledge Levels
8.2.3 Transfer of Competence Levels Differentiated according to Knowledge Levels into a Grading Scale
8.3 Competence Development as a Competence Profile
8.3.1 Homogeneous versus Selective Competence Profiles
8.4 Heterogeneity of Professional Competence Development
8.4.1 Heterogeneous Levels of Competence
8.4.2 Percentile Bands
8.4.3 The Heterogeneity Diagram
8.4.4 The Causes of Heterogeneity
Prior Schooling
Selection and Admission Rules for Vocational Education and Training as Determinants of the Degree of Heterogeneous Performance...
The Teacher as an Influencing Factor
The Heterogeneity Diagram
Learning Venue Cooperation
8.5 Measuring Identity and Commitment
8.5.1 On the Construction of Scales
8.5.2 Calculating the Results
8.5.3 'Commitment Lights'
8.5.4 Commitment Progression
8.5.5 Four-Field Matrices
The Identification Potential of Professions: A Professional Typology
Consistently High Identity: Work Orientation
Professional Identity: Occupational Orientation
Organisational Identity: Occupational Orientation
Weak/No Occupational Identity: Employment and Job Orientation
Professional and Organisational Commitment: A Professional Typology
Consistently Committed
Professionally Committed
Organisationally Committed
The Weakly/Non-Committed
8.5.6 Identity and Commitment Profiles
Two Content-Related Occupations with Different Identification Potentials (Fig. 8.25)
8.6 Identity and Commitment as Determinants of Professional Development
8.6.1 Professional and Organisational Identity as Determinants of the Quality of Vocational Training
8.6.2 Professional Identity
8.6.3 Organisational Identity
8.6.4 Professional Commitment
8.6.5 Organisational Commitment
Chapter 9: The Contribution of COMET Competence Diagnostics to Teaching and Learning Research
9.1 A New Dimension for Teaching-Learning Research in Vocational Education and Training
9.1.1 Introduction
Measuring Professional and Organisational Identity and the Commitment Based Thereon
9.1.2 Teaching and Learning Research Based on COMET Research Data
9.1.3 Competence Diagnostics
9.1.4 Teachers as Determinants of Professional Competence Development
9.1.5 Professional Competence Development and Professional Identity/Professional Commitment
The Development of Professional Identity
9.1.6 Professional Competence Development: Context Data
Teacher Competence: Context Data
Perspectives
Professional Identity, Professional Commitment and Context Data
9.2 Professional Competence and Work Ethic
9.2.1 Introduction: From a Function-Oriented to a Design-Oriented Vocational Training Concept
9.2.2 The Characteristics of Vocational Education and Training (Chap. 3)
Employability
The Contents of Vocational Training: Work Process Knowledge
Shaping Competence
Professional Identity and Work Ethic
The Training Paradox
9.2.3 Competence Profiles for the Representation of Competence Development and Professional Work Ethic
Examples
The Professional Understanding and Problem-Solving Patterns of Teachers as Determinants of the Homogeneity of Their Students'/...
9.2.4 The Relationship Between the Level of Competence and the Homogeneity of Competence Development
Competence Profiles and Professional Work Ethic
9.2.5 Conclusion
9.3 Professional Identity and Competence: An Inseparable Link
9.3.1 Justification of the Hypothesis
9.3.2 Methodical Approach
9.3.3 Test Results
Electronics Technician (Fig. 9.8)
Car Mechatronics
Carpenter (Fig. 9.10)
Medical Specialist Assistant (Fig. 9.11)
9.3.4 Conclusions and Perspectives
The Criteria of Modern Professionalism
Identity, Willingness to Perform, Sense of Quality and Responsibility
9.4 Training Qualities and Competence Development
9.4.1 The Question
9.4.2 Methodical Approach
The Assessment of the Training Quality by the Trainees
9.4.3 Results on the Relationship between Competence and Training Quality
The Quality Criteria Correlate Differently with the Values of the Competence Level
Differentiations According to Professions
Training Quality (Companies) (Fig. 9.14)
Learning Venue Cooperation (Fig. 9.17)
Training Support (Trainers) (Fig. 9.18)
Trainer Assessment
Teaching Quality (Fig. 9.20)
Learning Climate (Fig. 9.23)
9.4.4 Conclusion
9.5 The Training Potential of Vocational Schools
9.5.1 Introduction
9.5.2 Methodical Approach
Teachers: A Determinant of Competence Development Underestimated by Students
9.6 Teachers and Trainers Discover Their Competences: A "Eureka" Effect and Its Consequences
9.6.1 The Development of Test Items
Informing and Preparing (Conceptualising) the Project
Development of Test Items (Drafts), Formulation of Solution Aids
9.6.2 The Changed Understanding of the Subject Shapes the Didactic Actions of Teachers
9.6.3 Context Analyses: The Subjective View of Learners on the Importance of Learning Venues
The Weighting of Learning Venues
Vocational School Learning Environment
Nursing Training at Technical Colleges
9.6.4 Conclusion
Comparability of Test Groups
Example: Pilot Study (Industrial Clerks)
Example: COMET Project Nursing Training, Switzerland
9.6.5 Conclusion
Chapter 10: Measuring Professional Competence of Teachers of Professional Disciplines (TPD)
10.1 Theoretical Framework
10.2 Fields of Action and Occupation for Vocational School Teachers
10.2.1 Proposal for a Measurement Method by Oser, Curcio and Düggeli
10.2.2 Competence Profiles
10.2.3 Validity of the Oser Test Procedure
10.2.4 The Action Fields for TPD
Planning, Implementing and Evaluating Vocational Learning Processes
Development of Educational Programmes
Planning, Developing and Designing the Learning Environment
Participation in School Development
10.3 The "TPD" (Vocational School Teacher) Competence Model
10.3.1 The Requirements Dimension
Functional Competence
Procedural Competence
Shaping Competence
Nominal Competence
10.3.2 The Contextual Dimension
10.3.3 The Behavioural Dimension
10.4 The Measurement Model
10.4.1 Operationalisation of the Requirements Dimension (Fig. 10.5)
10.4.2 The Competence Dimensions
10.4.3 The Competence Levels
10.4.4 Operationalisation of Competence Components for Teachers of Professional Disciplines (TPD) (Rating Scale A)
10.4.5 Vocational Competence
10.4.6 Vocational/Technical Didactics
10.4.7 Technical Methodology (Forms of Teaching and Learning)
10.4.8 Sustainability
10.4.9 Efficiency
10.4.10 Teaching and Training Organisations
10.4.11 Social Compatibility
10.4.12 Social-Cultural Embedment
10.4.13 Creativity
10.5 Test Tasks
10.5.1 Test Tasks for Measuring Cognitive Dispositions (Conceptual-Planning Competence)
10.5.2 Time Scope of the Test Tasks (for Large-Scale Projects)
10.6 State of Research
10.6.1 A Pilot Study with Student Teachers
Test Results
10.6.2 The Research Programme: Competence Development of Teachers and Lecturers in Vocational Education and Training in China
Pretest (China)
Main Test
Test Reliability
10.6.3 Investigating the Link Between Measured Teacher Competence and Quality of Teaching
10.7 Evaluation of Demonstration Lessons in the Second Phase of Training Teachers with Professional Discipline (TPD): A Test M...
10.7.1 The Lesson Plan
10.7.2 Class Observation
10.7.3 The Interview (Following the Demonstration Lesson)
10.7.4 Final Evaluation of the Examination Performance
10.8 Development and Evaluation of the Model "Social-Communicative Competence of Teachers"
10.9 Outlook
10.9.1 Psychometric Evaluation of the Competence Model
10.9.2 Investigating the Link Between Measured Teacher Competence and Quality of Teaching
Chapter 11: The Didactic Quality of the Competence and Measurement Model
11.1 The Learning Field Concept Provides Vocational Education and Training with an Original, Educational-Theoretical Foundation
11.1.1 Professional Action Fields as a Reference Point for the Development of Learning Fields
11.2 Designing Vocational Education Processes in Vocational Schools
11.2.1 Professional Knowledge
11.2.2 The Training Paradox
11.2.3 Designing Learning Tasks
What Distinguishes Learning Tasks from Work Tasks
A "Product"
A Learning Outcome
Step 1: Identifying Competence-Promoting Work Situations/Tasks
Step 2: Developing and Describing Learning Tasks from Work Situations/Tasks
Prospectivity
The Concept of Holistic Task Solution
Action Consolidation and Accentuation
Solution and Design Scopes
Representativity
Competence Development
Publication of Learning Tasks
Step 3: Identify Learning Opportunities and Diagnose Competence Development
Learning Tasks: With Solutions Possible at Different Levels
Possible Differentiation in the Evaluation of Task Solutions
The Teaching Objective: Dealing with Heterogeneity Aims at the Individual Promotion of Professional Competence
Developing Competences
11.2.4 Designing Teaching-Learning Processes
Step 1: Selection of a Customer Order with "Suitable" Learning Potential and Formulation of a Learning Task
Step 2: Analysing and Functionally Specifying the Customer's Situation Description
Manufacture Two Grippers (Material: 1.2842) from 20 x 15 Flat Steel According to the Drawing
Step 3: Development and Definition of Evaluation Criteria
Step 4: Provisional Definition (Rough Plan) and Implementation of the Task-Solving Procedure: Development of Vocational Concep...
Learning Within a Group
11.2.5 Dealing with Heterogeneity
11.2.6 Step 5: Evaluating the Task Solution (Self-Assessment)
11.2.7 Step 6: Reflection on Work and Learning Processes
11.2.8 Step 7: Presenting and Evaluating the Task Solution, Work and Learning Process as Well as Learning Outcomes (External E...
11.2.9 Step 8: Systematising and Generalising Learning Outcomes
11.3 COMET as a Didactic Concept in Nursing Training at Higher Technical Colleges in Switzerland: Examples of Teaching and Exa...
11.3.1 The Higher Vocational Nursing Schools in Switzerland
11.3.2 COMET in the Context of the BZ-GS [Health and Social Education Centre]
Example Lesson: Pain and Temperature Regulation
11.3.3 Example Lesson: Nursing Relatives and Palliative Care
11.3.4 Example Lesson: CPR (Cardiopulmonary Resuscitation)
11.3.5 Examinations
Synthesis Examination as an Example
Diploma Examination as an Example
Examinations: Conclusion
11.3.6 Suggestion for Lesson Preparation
Portfolios and Patient Documentation Tool
11.3.7 Conclusion
Appendix A: The Four Developmental Areas
Appendix B: Rating Scale
Appendix C: Examples for Test Tasks
Note
Example Millwright
Example Electrician
Example Welder
Appendix D: Four-Field Matrix (Tables)
Appendix E: Correlation Values for the Correlation Between Occupational Competences and I-C Averages
List of References
Bibliography
List of COMET Publications Vol. I
COMET Reports (COMET-Berichte)
Index
Subject Index

Technical and Vocational Education and Training: Issues, Concerns and Prospects 33

Felix Rauner

Measuring and Developing Professional Competences in COMET Method Manual

Technical and Vocational Education and Training: Issues, Concerns and Prospects Volume 33

Series Editor
Rupert Maclean, RMIT University, Melbourne, Australia

Associate Editors
Felix Rauner, TVET Research Group, University of Bremen, Bremen, Germany
Karen Evans, Institute of Education, University of London, London, UK
Sharon M. McLennon, Newfoundland and Labrador Workforce Inno, Corner Brook, Canada

Advisory Editors
David Atchoarena, Division for Education Strategies & Capacity Building, UNESCO, Paris, France
András Benedek, Ministry of Employment and Labour, Budapest, Hungary
Paul Benteler, Stahlwerke Bremen, Bremen, Germany
Michel Carton, NORRAG c/o Graduate Institute of International and Development Studies, Geneva, Switzerland
Chris Chinien, Workforce Development Consulting, Montreal, Canada
Claudio De Moura Castro, Faculdade Pitágoras, Belo Horizonte, Brazil
Michael Frearson, SQW Consulting, Cambridge, UK
Lavinia Gasperini, Natural Resources Management and Environment Department, Food and Agriculture Organization, Rome, Italy
Philipp Grollmann, Federal Institute for Vocational Education and Training (BIBB), Bonn, Germany
W. Norton Grubb, University of California, Berkeley, USA
Dennis R. Herschbach, University of Maryland, College Park, USA
Oriol Homs, Centre for European Investigation and Research in the Mediterranean Region, Barcelona, Spain
Moo-Sub Kang, Korea Research Institute for Vocational Education and Training, Seoul, Korea (Republic of)
Bonaventure W. Kerre, Moi University, Eldoret, Kenya
Günter Klein, German Aerospace Center, Bonn, Germany
Wilfried Kruse, Dortmund Technical University, Dortmund, Germany
Jon Lauglo, University of Oslo, Oslo, Norway
Alexander Leibovich, Institute for Vocational Education and Training Development, Moscow, Russia
Robert Lerman, Urban Institute, Washington, USA
Naing Yee Mar, GIZ, Yangon, Myanmar
Munther Wassef Masri, National Centre for Human Resources Development, Amman, Jordan
Phillip McKenzie, Australian Council for Educational Research, Melbourne, Australia
Margarita Pavlova, Education University of Hong Kong, Hong Kong, China
Theo Raubsaet, Centre for Work, Training and Social Policy, Nijmegen, The Netherlands
Barry Sheehan, Melbourne University, Melbourne, Australia
Madhu Singh, UNESCO Institute for Lifelong Learning, Hamburg, Germany
Jandhyala Tilak, National Institute of Educational Planning and Administration, New Delhi, India
Pedro Daniel Weinberg, formerly Inter-American Centre for Knowledge Development in Vocational Training (ILO/CINTERFOR), Montevideo, Uruguay
Adrian Ziderman, Bar-Ilan University, Ramat Gan, Israel

More information about this series at http://www.springer.com/series/6969

Felix Rauner

Measuring and Developing Professional Competences in COMET Method Manual

Felix Rauner
University of Bremen
Bremen, Germany

ISSN 1871-3041    ISSN 2213-221X (electronic)
Technical and Vocational Education and Training: Issues, Concerns and Prospects
ISBN 978-981-16-0956-5    ISBN 978-981-16-0957-2 (eBook)
https://doi.org/10.1007/978-981-16-0957-2

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

In less than a decade, the methods of competence diagnostics in accordance with the COMET test procedure have become an internationally established instrument for quality assurance and quality development in vocational education and training. At their core, the methodological instruments comprise the COMET competence and measurement model as a basis for the development of test and examination tasks and for the evaluation of task solutions: the rating procedure. The insight that solving and working on tasks in the working environment always involves a situational solution space as well as extensive scope for creativity that must be exploited was translated into the format of open, complex test tasks. The insight that professional tasks must always be solved completely, if only for reasons of occupational safety, health protection, environmental and social compatibility, and not least because of the qualitative competition to which companies are exposed, justifies the theory of holistically solving professional tasks.

After the psychometric evaluation of the COMET competence and measurement model by Thomas Martens and Birgitt Erdwien had already proved successful in 2009, the COMET project developed into an international research and development network with projects encompassing numerous industrial-technical, commercial and personalised service occupations. In particular, the cooperation projects with the COMET consortia in China, headed by Professor Zhao Zhiqun, and in South Africa, supported by the "Sector Education and Training Authority merSETA" and a group of doctoral students, have contributed to expanding and profiling internationally comparative vocational education research in intercultural teaching and learning research.

Thanks to Professor Martin Fischer's initiative, an interim report on COMET research was presented at a conference at KIT in October 2013 under the slogan "COMET under the microscope". The documentation of presentations and discussions by COMET experts from practice, educational administration and vocational training research and, above all, the exchange of experience with colleagues who evaluated the COMET project from an overarching external vocational education and training perspective contributed to a conference result that had a lasting effect on the further development of the COMET methodology (Fischer, Rauner, & Zhao, 2015). Above all, the criticism that the COMET competence model only covers conceptual and professional planning competence but not practical skills decisively contributed to the further development of the competence and measurement model. Meanwhile, an extended measurement model has been developed as a foundation for conducting competence-based examinations, including their "practical" part.

This manual responds to a frequently expressed request for a summary of the methods developed and tested in the COMET projects in the form of a method manual. Such extensive work has only been possible with the participation of the large number of colleagues who have contributed to the development of these methods. The spectrum of the documented methods ranges from the development of test tasks to the performance of pre-tests, cross-sectional and longitudinal studies and the development and evaluation of scales for measuring professional and organisational identity and the commitment based thereon, culminating in context analyses and a procedure for measuring test motivation. Particular importance is attached to the presentation and exemplary illustration of the methods of psychometric testing of the competence and measurement model, as well as the scales and models of context analysis.

In hardly any other field of vocational education and training research is the participation of teachers and trainers in the research process as indispensable as in competence diagnostics. This is one of the main findings of COMET research in recent years. I would therefore like to thank the numerous project groups that have so far played a decisive role in the implementation of projects in an increasing range of occupations and specialist areas in initial vocational training, technical colleges and higher technical schools, as well as tertiary vocational training courses. This applies above all to the evaluation of the wide variety of solutions to the test tasks, the didactic evaluation of the rating scales and the interpretation of the test results, the latter requiring intimate knowledge of the respective teaching and learning contexts.

My thanks also go to the Manufacturing, Engineering and Related Services Sector Education and Training Authority (merSETA) in South Africa, which supported the translation of the handbook from German into English, and to the Institute for Post-School Studies at the University of the Western Cape, South Africa, under the leadership of Prof Joy Papier, who, together with Dr. Claudia Beck-Reinhardt, managed the book translation project.

In the first part, the manual introduces the COMET competence and measurement model in three introductory chapters. The fifth chapter describes the methods of developing and evaluating test tasks. The sixth chapter provides a detailed insight into the psychometric evaluation of the test instruments using practical examples. Chapters 7 and 8 document the steps required for planning, conducting and evaluating the tests. Chapter 9 presents the contribution of COMET competence diagnostics to teaching-learning research. Once the participation of teachers and trainers in the "student" tests had led to new findings regarding the transfer of the professional competence profiles of teachers/lecturers of vocational subjects (LbF [TPD]) to their students, COMET competence diagnostics was also developed for LbF (TPD). Chapter 10 shows which methods of competence diagnostics and development for LbF (TPD) are available for the implementation of large-scale projects and for the training and further education of LbF (TPD). The concluding eleventh chapter deals with the application of COMET instruments to the design, organisation and evaluation of VET processes, which is regarded as crucial from the perspective of VET practice.

I hope that this methodological manual will provide a handy toolkit and therefore a powerful boost to quality assurance and development in vocational education and training.

Furthermore, I would like to thank several colleagues for (co-)drafting specific chapters: Joy Backhaus (Sect. 5.6.2), Thomas Martens (6.1 and 6.3), Johanna Kalvelage and Yingyi Zhou (6.4), Rongxia Zhuang and Li Ji (6.5), Jürgen Lehberger (10.7, 10.8 and Chap. 11) as well as Karin Gäumann-Felix and Daniel Hofer (11.3). Additionally, I thank the many contributors who were involved in the realisation of this book in various ways: Martin Ahrens, Nele Bachmann, Birgitt Erdwien, Jenny Franke, Jenny Frenzel, Bernd Haasler, Ursel Hauschildt, Lars Heinemann, Dorothea Piening and Zhiqun Zhao.

Bremen, Germany
June 2021

Felix Rauner

Series Editors Introduction

This ground-breaking volume by Professor Felix Rauner, Measuring and Developing Professional Competencies in COMET: Method Manual, is the latest book to be published in the long-standing Springer book series "Technical and Vocational Education and Training". It is the 33rd volume to be published to date in the TVET book series. This is an important book on an important topic and will no doubt be widely read and respected. Through its eleven chapters, the volume comprehensively and critically examines and evaluates key aspects of measuring and developing professional competencies (COMET). As Professor Rauner points out, in less than a decade the methods of competence diagnostics in accordance with the COMET test procedure have become an internationally established instrument for quality assurance and quality development in vocational education and training.

The book focuses particularly on examining what teachers and trainers can learn from modelling and measuring vocational competence and vocational identity development for the design and organisation of vocational training processes, and on whether test and learning tasks are related to each other and what distinguishes them from each other.

Professor Felix Rauner is very well qualified to write this important and timely book, since he is widely regarded and respected as an outstanding, widely influential researcher, author and opinion leader working in the area of education, with particular reference to technical and vocational education and training (TVET). Professor Rauner is based in Germany, having worked for many years at the Institut Technik und Bildung, University of Bremen. He has published very widely in the field of TVET, including being co-author of the widely used and highly respected comprehensive (1103-page) Handbook of Technical and Vocational Education, published in the Springer International Library of Technical and Vocational Education and Training. That handbook is published in both English and German.

In terms of the Springer book series in which this volume is published, the topics dealt with in the series are wide-ranging and varied in coverage, with an emphasis on cutting-edge developments, best practices and education innovations for development. More information about this book series is available at http://www.springer.com/series/5888

We believe the book series (including this particular volume) makes a useful contribution to knowledge sharing about technical and vocational education and training (TVET). Any readers of this or other volumes in the series who have an idea for writing their own book (or editing a book) on any aspect of TVET are enthusiastically encouraged to approach the series editors, either directly or through Springer, to publish their own volume in the series, since we are always willing to assist prospective authors in shaping their manuscripts in ways that make them suitable for publication in this series.

School of Education, RMIT University, Melbourne, Australia
10 February 2021

A. O. Rupert Maclean


323 323 325 327 328 331 331 331 333 335 336 338

xviii

Contents

9.1.6

9.2

9.3

9.4

9.5

9.6

10

Professional Competence Development: Context Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Professional Competence and Work Ethic . . . . . . . . . . . . . . . . . 9.2.1 Introduction: From a Function-Oriented to a Design-Oriented Vocational Training Concept . . . . . . 9.2.2 The Characteristics of Vocational Education and Training (Chap. 3) . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2.3 Competence Profiles for the Representation of Competence Development and Professional Work Ethic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2.4 The Relationship Between the Level of Competence and the Homogeneity of Competence Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Professional Identity and Competence: An Inseparable Link . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3.1 Justification of the Hypothesis . . . . . . . . . . . . . . . . . . 9.3.2 Methodical Approach . . . . . . . . . . . . . . . . . . . . . . . . 9.3.3 Test Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3.4 Conclusions and Perspectives . . . . . . . . . . . . . . . . . . Training Qualities and Competence Development . . . . . . . . . . . 9.4.1 The Question . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.4.2 Methodical Approach . . . . . . . . . . . . . . . . . . . . . . . . 9.4.3 Results on the Relationship between Competence and Training Quality . . . . . . . . . . . . . . . . . . . . . . . . . 9.4.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . The Training Potential of Vocational Schools . . . . . . . . . . . . . . 9.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.5.2 Methodical Approach . . . . . . . . . . . . . . . . . . . . . . . . Teachers and Trainers Discover Their Competences: A “Eureka” Effect and Its Consequences . . . . . . . . . . . . . . . . . 9.6.1 The Development of Test Items . . . . . . . . . . . . . . . . . 9.6.2 The Changed Understanding of the Subject Shapes the Didactic Actions of Teachers . . . . . . . . . . . . . . . . 9.6.3 Context Analyses: The Subjective View of Learners on the Importance of Learning Venues . . . . . . . . . . . . 9.6.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.6.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

340 342 342 345

347

349 354 354 354 356 357 360 361 361 362 363 370 373 373 373 374 374 377 378 382 386

Measuring Professional Competence of Teachers of Professional Disciplines (TPD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389 10.1 Theoretical Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389 10.2 Fields of Action and Occupation for Vocational School Teachers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391

Contents

xix

10.2.1

10.3

10.4

10.5

10.6

10.7

Proposal for a Measurement Method by Oser, Curcio And Düggeli . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2.2 Competence Profiles . . . . . . . . . . . . . . . . . . . . . . . . 10.2.3 Validity of the Oser Test Procedure . . . . . . . . . . . . . 10.2.4 The Action Fields for TPD . . . . . . . . . . . . . . . . . . . The “TPD” (Vocational School Teacher) Competence Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.3.1 The Requirements Dimension . . . . . . . . . . . . . . . . . 10.3.2 The Contextual Dimension . . . . . . . . . . . . . . . . . . . 10.3.3 The Behavioural Dimension . . . . . . . . . . . . . . . . . . The Measurement Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.4.1 Operationalisation of the Requirements Dimension (Fig. 10.5) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.4.2 The Competence Dimensions . . . . . . . . . . . . . . . . . 10.4.3 The Competence Levels . . . . . . . . . . . . . . . . . . . . . 10.4.4 Operationalisation of Competence Components for Teachers of Professional Disciplines (TPD) (Rating Scale A) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.4.5 Vocational Competence . . . . . . . . . . . . . . . . . . . . . 10.4.6 Vocational/Technical Didactics . . . . . . . . . . . . . . . . 10.4.7 Technical Methodology (Forms of Teaching and Learning) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.4.8 Sustainability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.4.9 Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.4.10 Teaching and Training Organisations . . . . . . . . . . . . 10.4.11 Social Compatibility . . . . . . . . . . . . . . . . . . . . . . . . 10.4.12 Social-Cultural Embedment . . . . . . . . . . . . . . . . . . . 10.4.13 Creativity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Test Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.5.1 Test Tasks for Measuring Cognitive Dispositions (Conceptual-Planning Competence) . . . . . . . . . . . . . 10.5.2 Time Scope of the Test Tasks (for Large-Scale Projects) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . State of Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.6.1 A Pilot Study with Student Teachers . . . . . . . . . . . . 10.6.2 The Research Programme: Competence Development of Teachers and Lecturers in Vocational Education and Training in China . . . . . . 10.6.3 Investigating the Link Between Measured Teacher Competence and Quality of Teaching . . . . . . . . . . . Evaluation of Demonstration Lessons in the Second Phase of Training Teachers with Professional Discipline (TPD): A Test Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.1 The Lesson Plan . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . .

393 394 394 394

. . . . .

399 400 402 402 403

. 403 . 404 . 405

. 405 . 405 . 406 . . . . . . . .

406 407 407 408 408 409 409 410

. 410 . 411 . 411 . 411

. 412 . 418

. 419 . 419

xx

Contents

10.7.2 10.7.3

10.8 10.9

11

Class Observation . . . . . . . . . . . . . . . . . . . . . . . . . . The Interview (Following the Demonstration Lesson) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.4 Final Evaluation of the Examination Performance . . . Development and Evaluation of the Model “SocialCommunicative Competence of Teachers” . . . . . . . . . . . . . . . Outlook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.9.1 Psychometric Evaluation of the Competence Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.9.2 Investigating the Link Between Measured Teacher Competence and Quality of Teaching . . . . . . . . . . .

The Didactic Quality of the Competence and Measurement Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.1 The Learning Field Concept Provides Vocational Education and Training with an Original, Educational-Theoretical Foundation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.1.1 Professional Action Fields as a Reference Point for the Development of Learning Fields . . . . . . . . . . 11.2 Designing Vocational Education Processes in Vocational Schools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.2.1 Professional Knowledge . . . . . . . . . . . . . . . . . . . . . 11.2.2 The Training Paradox . . . . . . . . . . . . . . . . . . . . . . . 11.2.3 Designing Learning Tasks . . . . . . . . . . . . . . . . . . . . 11.2.4 Designing Teaching-Learning Processes . . . . . . . . . . 11.2.5 Dealing with Heterogeneity . . . . . . . . . . . . . . . . . . . 11.2.6 Step 5: Evaluating the Task Solution (SelfAssessment) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.2.7 Step 6: Reflection on Work and Learning Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.2.8 Step 7: Presenting and Evaluating the Task Solution, Work and Learning Process as Well as Learning Outcomes (External Evaluation) . . . . . . . . . . . . . . . 11.2.9 Step 8: Systematising and Generalising Learning Outcomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.3 COMET as a Didactic Concept in Nursing Training at Higher Technical Colleges in Switzerland: Examples of Teaching and Examinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.3.1 The Higher Vocational Nursing Schools in Switzerland . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.3.2 COMET in the Context of the BZ-GS [Health and Social Education Centre] . . . . . . . . . . . . . . . . . . . . . 11.3.3 Example Lesson: Nursing Relatives and Palliative Care . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. 419 . 420 . 420 . 421 . 421 . 422 . 422 . 423

. 423 . 429 . . . . . .

430 430 431 432 447 459

. 461 . 462

. 466 . 468

. 470 . 470 . 472 . 474

Contents

xxi

11.3.4 11.3.5 11.3.6 11.3.7

Example Lesson: CPR—Cardiopulmonary Resuscitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Examinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Suggestion for Lesson Preparation . . . . . . . . . . . . . . . Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

477 479 483 485

Appendix A: The Four Developmental Areas . . . . . . . . . . . . . . . . . . . . . 487 Appendix B: Rating Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489 Appendix C: Examples for Test Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . 497 Appendix D: Four-Field Matrix (Tables) . . . . . . . . . . . . . . . . . . . . . . . . 509 Appendix E: Correlation Values for the Correlation Between Occupational Competences and I-C Averages . . . . . . . . . . . . . . . . . . . . . 511 List of References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521 Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543 Subject Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549

Chapter 1

Introduction

Methods for developing and measuring professional competence, professional identity and professional commitment have been developed and widely introduced in COMET¹ projects since 2006 (COMET vols. 1–5; Fischer, Rauner, & Zhao, 2015a, 2015b). Numerous publications have also reported on the methodological aspects of this research and the VET practice based on it. In discussions both with the test community and with the VET administrations responsible for quality assurance, and not least with the many teachers and trainers involved in the COMET projects, interest in a summary of the methods of the COMET project has meanwhile been expressed. The methodical and methodological interest is directed not only at the test-theoretical aspects of vocational competence diagnostics, but also at the didactic significance of the competence model on which the competence diagnostics process is based: What can teachers and trainers learn from modelling and measuring vocational competence and vocational identity development for the design and organisation of vocational training processes? How are test and learning tasks related to each other, and what distinguishes them from each other?

¹ Note on the spelling of KOMET/COMET: The spelling COMET has been applied since the international COMET conference organised by the European Training Foundation (ETF) in 2010.

In terms of test theory, these questions seem to have been underestimated in the previous discussion on the methodology of competence diagnostics in vocational education and training. As the paradigm of ‘right/wrong’ test tasks is only of very limited significance for the search for innovative, practical and creative solutions to professional tasks in the working world, the standards-oriented test format loses its significance—especially in the widespread form of multiple-choice testing. It is replaced by real-life tasks, whose variety of possible solutions is sometimes difficult to grasp, even for experts. A heating engineer’s routine task of advising a customer on the modernisation of his heating system, taking into account the variety of technical possibilities, their environmental compatibility, their practical value and their investment and follow-up costs, and of integrating this project into the operational work plan, already shows that professional competence is characterised by exploiting the scope for solutions and design in line with each specific situation, taking into account competing criteria (and values). In general, this requires a form of competence diagnostics guided by the realisation that professional specialists are involved, in the execution of large and small tasks, in design and responsibility processes that always involve the search for intelligent compromises. A bicycle mechanic, for example, is distinguished by the ability to find out, in conversation with a customer, which bicycle configuration might be the most suitable for that customer. Thomas Martens and Jürgen Rost therefore appropriately classify the measurement of professional competence in the context of the competence-diagnostic discussion when they state: ‘In the COMET project, an ability model is examined. The aim is to model how testees, whose solutions have different degrees of development, can cope with open professional tasks’ (Martens & Rost, 2009, 98).

Whether and how it is possible to establish solid international comparative competence diagnostics, both in terms of content and in accordance with psychometric criteria, in a world with very different vocational training systems and, in addition, with open test tasks, requires convincing answers. This is especially true because the abundance of problems to be solved seems to be an insurmountable hurdle (cf. Baethge, Achtenhagen, Babie, Baethge-Kinsky, & Weber, 2006). The successful empirical review of the COMET test procedure (2007–2012) resulted in a competence and measurement model which opens up the complex world of occupations, occupational fields and the almost conspicuous diversity of vocational training courses and systems for competence diagnostics. However, this methodological manual deals not only with the fundamental questions of modelling vocational competence and the psychometric evaluation of the COMET test procedure, but also with the following topics:

1.1 The Possibilities and Limitations of Large-Scale Competence Diagnostics

A mere glance at the job descriptions of occupations shows that numerous professional competences can be measured with an acceptable amount of effort using methods of competence diagnostics. Those professional skills that can be easily recorded empirically and those that can only be recorded empirically with greater effort are described in the first chapter. The third and sixth chapters describe how methods of competence diagnostics can be used to improve the quality of tests.

1.2 Modelling Professional Competence

A methodological manual cannot avoid the simple question of what constitutes professional competence. The answer to this seemingly simple question is made more difficult by the fact that vocational educational literature offers very different answers. The need to take up this question from the perspective of modelling professional competence in addition to competence diagnostics and the design of examinations calls for a convincing, internationally compatible answer. The COMET competency model is presented in the third chapter and the underlying categorical framework in the second chapter.

1.3 The Format of the Test Tasks

The concept of the test tasks and the procedure for their development is one of the acid tests that show in practice whether the test procedure can be applied beyond a national framework. Prior experience with the COMET project shows that the participation of countries in international comparison projects primarily depends on whether the subject-didactics experts and the subject teachers and trainers assess the test tasks as representative and valid for the respective occupation or training course, even if they have not participated in the development of the test tasks. Chapter 5 is devoted in detail to the complex questions of the COMET test arrangement.
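To make the contrast with ‘right/wrong’ scoring concrete, the following minimal sketch shows how one solution to an open test task, such as the heating-modernisation task described above, might be evaluated against several competing criteria at once. The criterion names, weights and the 0–3 scale are hypothetical illustrations for this sketch, not the COMET rating instrument itself:

```python
# Illustrative only: the criteria, weights and 0-3 scale are assumed for
# this sketch, not taken from the COMET rating instrument.
CRITERIA_WEIGHTS = {
    "functionality": 0.30,          # does the proposed solution work?
    "environmental_compat": 0.25,   # e.g. emissions of the heating system
    "practical_value": 0.25,        # usefulness for the customer
    "cost": 0.20,                   # investment and follow-up costs
}

def weighted_score(ratings):
    """Combine 0-3 ratings on competing criteria into one weighted score."""
    for name, value in ratings.items():
        if name not in CRITERIA_WEIGHTS:
            raise KeyError(f"unknown criterion: {name}")
        if not 0 <= value <= 3:
            raise ValueError(f"rating for {name} out of range")
    return sum(CRITERIA_WEIGHTS[n] * v for n, v in ratings.items())

# Two defensible solutions to the same heating-modernisation task:
gas_condensing = {"functionality": 3, "environmental_compat": 1,
                  "practical_value": 3, "cost": 3}
heat_pump = {"functionality": 2, "environmental_compat": 3,
             "practical_value": 2, "cost": 1}

print(round(weighted_score(gas_condensing), 2))  # 2.5
print(round(weighted_score(heat_pump), 2))       # 2.05
```

The point of such a weighted sum is not the number itself but that two quite different solutions, here a cheap conventional system and an environmentally superior one, can both occupy defensible positions in the solution space.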

1.4 Modelling and Measuring Professional Identity and Professional Commitment

A special feature of the COMET test procedure is the modelling and ‘measuring’ of professional identity and professional commitment. In addition to measuring motivation as a variable that is used to interpret the measured competence, this is a central concern (objective) of vocational education and training. In vocational education, the development of professional competence and professional identity is regarded as an interdependent, indissoluble relationship (Blankertz, 1983). The expansion of the competency and measurement model by this aspect of professional development is dealt with in Sect. 4.6.
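Identity and commitment are surveyed with rating scales and reported, for example, as ‘commitment lights’ (→ 8.5). The following is a minimal sketch of how such scale values are typically computed, assuming hypothetical five-point Likert items, one reverse-coded item and illustrative traffic-light cut-offs; none of these specifics are taken from the published COMET scales:

```python
def scale_mean(item_responses, reverse_coded=frozenset(), scale_max=5):
    """Mean of Likert items (1..scale_max); reverse-coded items are flipped."""
    adjusted = [
        (scale_max + 1 - r) if i in reverse_coded else r
        for i, r in enumerate(item_responses)
    ]
    return sum(adjusted) / len(adjusted)

def commitment_light(mean_score):
    """Map a scale mean to a traffic light; the cut-offs are illustrative."""
    if mean_score >= 3.5:
        return "green"
    if mean_score >= 2.5:
        return "yellow"
    return "red"

responses = [4, 5, 2, 4]                         # answers to four commitment items
mean = scale_mean(responses, reverse_coded={2})  # item 2 is negatively worded
print(mean, commitment_light(mean))              # 4.25 green
```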

1.5 Modelling and Measuring the Competence of Teachers in Vocational Subjects

After the participation of teachers and trainers in the ‘student’ tests had led to new insights into the transfer of teachers’ professional competence profiles to their students, the COMET competence diagnostics method was also developed and tested for teachers of vocational subjects (LbF [TPD]). The eighth chapter describes and explains the COMET competence and measurement model. The current state of research shows that a toolkit is now available for large-scale projects as well as for the training and further education of LbF [TPD].

1.6 The Quality Criteria for Professional Competence Diagnostics and the Design of Tests

As the methods of competence diagnostics in vocational education and training quite obviously differ in essential points from those of general education, it must be clarified how the quality criteria for competence diagnostics in vocational education and training must be interpreted. This book is designed in such a way that the hurried reader can skip in-depth discussions of test-statistical procedures and methodological questions. The COMET method manual is intended for:

• Teachers, trainers and personnel developers who want to familiarise themselves with the state of development and research in competence diagnostics in vocational education and training.
• Students and scientists of vocational education, vocational fields and their didactics as well as vocational education research, who want to focus on this research area.
• Vocational training planners and members of vocational training management, who are faced with the task of utilising the methodological competence measurement and development instruments for quality assurance and development in vocational education and training at the level of designing and organising vocational learning processes.

Chapter 2

Professional Competence as a Subject of Competence Diagnostics

2.1 Design Instead of Adaption

The determination of the objectives of vocational education and training has always been characterised by the tension between educational objectives aimed at the development of the personality on the one hand and the qualification requirements of the world of work, with the (training) objectives derived from them, on the other. In the vocational education discussion, a large number of attempts can be found to resolve this tension in the form of a holistic vocational training concept (Heid, 1999; Ott, 1998). The tradition of mastery (in the broader sense) is often referred to as an example of holistic vocational training. Richard Sennett has examined the social-historical and philosophical roots of mastery in his book Handwerk and has attributed to it a significance that goes far beyond institutionalised craftsmanship by opposing mastery to the world of fragmented skills (Sennett, 2008). The emphatic formula of ‘education in the medium of occupation’ probably best represents the ever-new attempts to reconcile education and qualification (Blankertz, 1972). With his deskilling thesis, Harry Braverman classifies such attempts as idealistic misconceptions of the reality of a work environment that is based on the principle of deskilling, at least in industrial work with its processes of progressive mechanisation of human labour, and that is subject to the conditions of capitalist value realisation (Braverman, 1974). In the sociological studies initiated by the Federal Institute for Vocational Education and Training Research (BBF) in 1969 on changes in qualification requirements in industrial technical work, the authors confirm this thesis or modify it to form the so-called polarisation thesis, according to which the larger number of those deskilled is contrasted with the smaller number of those more highly qualified: the winners of rationalisation (Baethge et al., 1976; Kern & Schumann, 1970). This stance can occasionally be found in more recent contributions to the discussion. Nico Hirtt sees the Anglo-Saxon tradition of ‘competency-based education’ as an expression of the ‘marketisation of education’ and as a response to a technologically and economically induced ‘low skilled work force’ (Hirtt, 2011, 172).

Two strategies were set against the trend of the supposedly progressive Taylorisation of vocational work:

1. The withdrawal of vocational education and training to the safe terrain of cross-vocational education, which, under the protective umbrella of state education policy, provides access to general and therefore also to ‘academic’ education (Kruse, 1976, 262; Grünewald, Degen, & Krick, 1979, 15).
2. The occupational research initiative around the humanisation of work, supported by research and development programmes of the Federal Government and the federal states (cf. in summary Brödner & Oehlke, 2008). These opposed the progressive division of labour with a holistic concept of complete work processes and justified it in terms of action theory, according to which a human action comprises a number of defined, successive steps, so that the work process can once again be viewed and designed as a unit of planning, execution and evaluation. This provides a theoretical (scientific) base for the central idea of the complete work process (cf. Hacker, 1973; Volpert, 2003).

On closer inspection, the concepts of humane and socially acceptable work design are based on establishing refuges for humanisation goals in a world of work that seems to be determined by technical and economic change. It was only in the 1980s, in the course of a critical examination of economic and technological determinism, that the foundations for the paradigm of the design, and the need for design, of ‘work and technology’ were laid and the guiding principle of the empowerment to participate in shaping the world of work was formulated (Rauner, 1988). Technology is then no longer seen as a factor that determines professional action (and thus also professional qualification requirements). Rather, it is assumed that there is an interaction between technological development and the design and organisation of work and education. The qualification of employees is no longer defined by qualification requirements, but rather understood as a relatively independent potential for innovations in the work process. Dieter Ganguin justifies this change of perspective from an economic point of view: ‘If flat organisational structures, cooperative management, teamwork and autonomous decisions are essential characteristics of future work organisation, this must be both taught and trained. Vocational training must therefore take a completely new approach [. . .]. The basic pattern of a mature, responsible and socially active citizen must become the guiding principle of all education’ (Ganguin, 1992, 33).¹

¹ Member of an IBM working group on the development of an open architecture for integrated information systems in the manufacturing industry (1984/85).

The paradigm shift in educational programmes and educational theory, away from vocational training aimed at adapting to the world of work and towards vocational training aimed at (co-)shaping it, marks a consistent step towards modern vocational education. It is hard to understand today that numerous sciences, research traditions and policy fields adhered to a technico-deterministic understanding of the world until the end of the 1980s (cf. Lutz, 1988, 16 ff.). The path towards implementing a non-deterministic vocational education and training that is consistently oriented towards the guiding principle of shaping competence was marked by a variety of diversions and aberrations, and it is still not so well paved that vocational education and training practice can follow it effortlessly. A milestone can be seen in the agreement reached by the Conference of Ministers of Education on vocational schools in 1991 and in the claim, formulated with regard to the general educational mandate of vocational schools, to enable trainees to take part in shaping the working world and society in a socially and ecologically responsible manner (KMK, 1991). The resulting discussion of the Vocational Education and Training Subcommittee of the Conference of Ministers of Education and Cultural Affairs soon led to the realisation that this change in perspective from adaptation-oriented to design-oriented vocational education and training requires a fundamental reform of the curriculum (Gravert & Hüster, 2001, 89). With the far-reaching reform project of introducing framework curricula based on learning fields, which is still underestimated in the vocational education debate today and which aims vocational education and training at shaping competence, this paradigm shift to a non-deterministic understanding of the world and the resulting guiding principle of shaping competence was implemented in education planning.

Now that the difficulties of implementing such a fundamental reform in educational practice—a process that continues to this day—have become evident, there is a certain risk that the reform project, which can be classified as historic, will nonetheless fail. There is a great temptation to turn to new pedagogical concepts that offer a convenient way out of the laborious reform project of learning-field-oriented curriculum development. With the European Qualifications Framework (EQF), the European Union offers an apparently handy toolkit for committed teachers and educational planners to try out something new—this time even something international. The development of a National Qualifications Framework (NQF) opens up the possibility of joining an international trend that has its starting point in the development of a modular certification system (National Vocational Qualifications, NVQ) in Great Britain. This development is considered highly problematic from an educational perspective (Granville, 2003; Grollmann, Spöttl, & Rauner, 2006; Hirtt, 2011; Young, 2007). Another attractive invitation to turn away from pedagogically demanding guiding principles and projects seems to be the empirical turn in educational research and education policy. The success of the international PISA project clearly fuels the regularly recurring educational-policy-inspired wish to finally put the pedagogical art of good education on a calculable basis, so that verifiable outputs can be offset against state inputs in the form of educational resources.
With the PISA project, empirical educational science suggests that economic input and output calculations can now also be applied to measuring pedagogical returns, promising an empirically founded pedagogy that allows educational processes and systems to be organised according to defined standards and with effective management instruments (Klieme et al., 2003). The authors of the PISA 2000 study point out this risk themselves and also address the scope of their project: ‘It cannot be overemphasised that PISA has no intention of measuring the horizon of modern general education. It is the strength of PISA in itself to refuse such fantasies of omnipotence. . .’ (Baumert et al., 2001, 21).

A comparable attempt at a technological renewal of the education system was already pursued by the federal and state governments with the educational technology reform project of the 1970s. The development and testing of computer-supported forms of teaching and learning determined the fantasies and the attempts to substitute and program teaching work for more than a decade (cf. BLK 1973, 75). For a while, it seemed possible to objectify educational processes and make them technologically available. The Society for Programmed Instruction (GPI) and cybernetic pedagogy promised the liberation of educational systems and processes from pedagogy as an art which educational policy had so far been unable to standardise, which many educators somehow possess to varying degrees, and which had so far eluded all attempts at rationalisation (Frank, 1969). However, the attempts, associated with every new information technology, to bring about an educational-technological transformation of pedagogy have lost their power, since the ever faster succession of failures of IT-supported educational reforms—most recently the interest was directed towards the Internet—has contributed to the insight that the educational-technological control of educational processes was possibly a fixed, and also an expensive, idea from the very beginning (Heinze, 1972).

It is foreseeable that future attempts to control education systems through measurable outputs and inputs will also fail, since the more important educational goals and contents will evade an ‘input/output didactics’ shaped by economic calculation (Young, 2009). The hastily concluded considerations of education experts to control educational processes via standards, the success of which can be measured in the form of a large-scale assessment, reduce education to the measurable. This is where the affinity to the educational technology reform project lies. The excessive expectations of large-scale competence assessment as a comprehensive pedagogical reform idea are therefore problematic, whereas a realistic assessment of the pedagogical-didactical and educational-political potentials of competence diagnostics can significantly enrich the actors’ toolbox.

2.2 The Possibilities and Limitations of Large-Scale Competence Diagnostics (LS-CD)

The differentiation of vocational skills according to qualifications and competences is of some importance for the examination of vocational aptitude and the recording of vocational competences. An examination provides information about

• whether the skills defined in a job description and in the corresponding training regulations are mastered in terms of qualification requirements,
• whether the required competence is achieved.

This requires differentiation according to

• abilities/qualifications that must be mastered fully and safely, for example because they are relevant for safety reasons,
• skills/qualifications that have to be mastered to a certain degree, and finally
• skills/qualifications that are not core qualifications and are therefore classified as more or less desirable (Table 2.1).

Table 2.1 ‘Qualification’ versus ‘Competence’ (COMET Vol. I, 33)

Object-subject-relations:
• Qualifications: Qualifications are objectively given by the work tasks and processes and the resulting qualification requirements.
• Competences: Competences are sector-specific abilities and strategies in line with psychological performance dispositions; they are application-oriented.

Learning:
• Qualifications: In the process of acquiring qualifications, the human being is a carrier medium for qualifications, a (human) resource that enables people to perform specific activities through training.
• Competences: The acquisition of competences is part of personality development and also includes the skills resulting from the educational goals.

Objectifiability:
• Qualifications: Qualifications describe the not yet objectified/mechanised skills and abilities and define people as carriers of qualifications that are derived from the work processes.
• Competences: Professional competences are primarily aimed at the skills of professional specialists that are difficult or impossible to objectify and that go beyond current professional tasks towards the solution and processing of future tasks.

An examination must include all qualifications and requirements relevant to employability. Practical skills must necessarily be tested in real professional situations (situational testing). In contrast, cognitive dispositions in the form of action-guiding, action-explanatory and action-reflecting knowledge of work processes can be tested with standardised examination methods.

The COMET method of competence diagnostics, which serves to identify competence levels and competence profiles and to carry out comparative competence surveys with the aim of comparing educational programmes and education systems, goes far beyond examination in the context of regulated vocational training programmes and the verification of ‘learning success’ in relation to the learning objectives defined in a specific curriculum. In particular, international comparative LS-CD projects do not primarily define the contextual validity of the competence survey in curricular terms, since it is an essential goal of competence research to gain insights into the strengths and weaknesses of national educational structures (including curricula). In line with the International World Skills (IWS), the contextual validity of the test tasks in LS-CD projects in vocational training is based on professional validity (Hoey, 2009).

Various aspects of professional skills—in individual cases also significant ones—are beyond the methods of measurement. Not infrequently, for example, ‘tacit knowledge’, or implicit knowledge (cf. Polanyi, 1966a; Neuweg, 1999; Fischer, 2000a, 2000b), constitutes important professional skills that can only be proven in a practical examination. This requires an extended competence and measurement model with a corresponding rating procedure (→ 4.6, 7.1).
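Chapter 8 explains how individual performance is classified into competence levels on the basis of scores for the three competence dimensions (→ 8.1). The following is a rough, illustrative sketch of the arithmetic behind such a rating procedure; the criterion names, the 0–3 rating scale, the dimension grouping and the cut-off values are assumptions for demonstration, not the calibrated COMET measurement model:

```python
from statistics import mean

# Hypothetical grouping and thresholds for illustration only; the calibrated
# COMET measurement model defines its own criteria, weights and cut-offs.
DIMENSIONS = {
    "functional": ["clarity", "functionality"],
    "processual": ["use_value", "business_process"],
    "holistic":   ["social_compat", "environmental_compat"],
}
LEVEL_CUTOFFS = [(0.0, "nominal"), (4.0, "functional"),
                 (9.0, "processual"), (14.0, "holistic shaping")]

def dimension_scores(ratings):
    """ratings: criterion -> list of rater scores (0-3 each).

    Raters are averaged per criterion; criterion means are then summed
    per dimension."""
    crit_mean = {c: mean(v) for c, v in ratings.items()}
    return {d: sum(crit_mean[c] for c in cs) for d, cs in DIMENSIONS.items()}

def competence_level(scores):
    """Return the highest level whose cut-off the total score reaches."""
    total = sum(scores.values())
    label = LEVEL_CUTOFFS[0][1]
    for cutoff, name in LEVEL_CUTOFFS:
        if total >= cutoff:
            label = name
    return label

ratings = {"clarity": [2, 3], "functionality": [3, 3],
           "use_value": [2, 2], "business_process": [1, 2],
           "social_compat": [2, 1], "environmental_compat": [2, 2]}
scores = dimension_scores(ratings)
print(scores, competence_level(scores))  # total 12.5 -> "processual"
```

Averaging over several trained raters per criterion is what makes the scoring of open tasks stable enough to be reported as levels and profiles.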

2.2.1 Implicit Professional Knowledge (Tacit Knowledge)

Implicit skills can be observed, and their quality can be assessed in the performance of professional activities and, above all, on the basis of work results. Although they are largely beyond an explicit technical description and explanation, they are often of central importance for professional ability and therefore also subject to examinations. The rating procedures developed in the COMET project also allow the determination of tacit skills.

2.2.2 Professional Competence (Employability)

As a rule, professional competence is determined using the more or less traditional forms of examination. In addition to the examination of professional knowledge, the most important thing in an examination is to test the qualification requirements defined in the job descriptions as practical skills in real professional work situations. Examinations therefore include proof of sufficient practical experience during training. The qualifications defined for employability in the defined occupational profiles are also examined. This is necessary for a professional examination practice to facilitate the certification of professional competence, which is usually also connected with the granting of authorisations. With its methods of standardised assessment of professional competences, competence diagnostics provides a set of instruments with which the requirements for the test quality criteria can be met (Table 2.2).

Table 2.2 Possibilities and limitations of measuring professional competencies

Measuring is possible for:
• Cognitive domain-specific performance dispositions
• Competence levels (vocational and cross-occupational, independent of the forms and structures of educational programmes) for test groups on the basis of individual test results
• Competence dimensions in the form of competence profiles
• The heterogeneity of the competence dimensions
In combination with the data from the context surveys, this provides insights into a large number of control- and design-relevant interrelationships concerning, among other things:
• Education systems and programmes
• Contents and forms of professional learning
• Cooperation between learning locations and educational plans
• Work organisation
• School organisation
• International comparisons

Measuring can only be achieved with the appropriate effort for:
• Situated professional qualifications
• Implicit professional knowledge (tacit knowledge)
• Individual situated professional ability (professional competency)
• Craftsmanship
• Social competences (with limitations)
• Abilities that are expressed in the interactive form of the work (with limitations)
• Competences expressed in creative action (e.g. in the arts and crafts)

2.2.3 Craftsmanship

Craftsmanship is an essential criterion of professional qualification for a large number of professions—not only in the arts and crafts (Sennett, 2008). Craftsmanship requires a high degree of practice based on a minimum of kinaesthetic intelligence (cf. Gardner, 2002). Not only dental technicians and goldsmiths but also toolmakers and other industrial-technical professions belong to a class of professions in which craftsmanship is an essential part of professional ability. Skill measurement is also possible. Rating methods such as those commonly used in the field of gymnastics, for example, are used here.
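One concrete instance of such a gymnastics-style rating rule is the trimmed mean: several raters score independently, the extreme judgements are dropped, and the rest are averaged. The panel size and the 0–10 scale in this sketch are assumptions for illustration:

```python
def trimmed_mean(scores):
    """Average a rating panel after dropping the highest and lowest score,
    as is common in gymnastics-style judging."""
    if len(scores) < 3:
        raise ValueError("need at least three raters to trim both extremes")
    ordered = sorted(scores)
    return sum(ordered[1:-1]) / (len(ordered) - 2)

print(trimmed_mean([7.5, 8.0, 8.5, 9.0, 9.5]))  # 8.5 (7.5 and 9.5 dropped)
```

Dropping the extremes makes the result robust against a single overly strict or overly lenient rater, which is exactly why the rule is popular for judging performances that cannot be scored as right or wrong.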

2.2.4 Social and Key Competences

Social skills play a very important role in vocational work and thus also in vocational education and training. It is controversial whether social skills can be measured as ‘key’ skills across all occupations. According to Jochen Gerstenmaier, research into learning and expertise disproves the thesis of a devaluation of content knowledge in favour of general skills such as ‘problem-solving’; rather, it can be shown that the competence to solve problems is based on domain-specific knowledge (Gerstenmaier, 1999, 66, 2004, 154 ff.). Grob and Maag Merki have made an interesting attempt to approach the empirical survey of interdisciplinary competences. They measured interdisciplinary competences on the basis of a large number of scales (Grob & Maag Merki, 2001), not as ‘key competences’, but rather as competences that promote the professional execution of work tasks at a general level. It is indisputable in this context that professional work, for example, necessarily involves cooperation with other specialists from the same community of practice and with experts from related fields, and that such cooperation thus represents a central dimension of professional competence. Within the framework of the COMET project, the context survey provides information on the respondents’ concepts of professional cooperation.

2.2.5 Abilities That Are Expressed in the Interactive Progression of the Work

These abilities are based on the type of creative action—in contrast to the type of purposeful action (cf. Brater, 1984). According to Brater, artistic action is the prototype of this form of action. The results of this type of action can only be anticipated to a limited extent in terms of planning and concept. Especially in the area of secondary technical work (maintenance, troubleshooting, etc.), ‘. . . the situation must be turned into opportunities, ideas must be born, solutions must be found. Here it is not adherence to plans but originality that is required’ (Brater, 1984, 67). The established forms of measuring professional competence reach their limits here, which are given by the open form of working processes. This applies in particular to occupations with a highly intersubjective share, e.g. in the education and healthcare sector. The interactive aspect of professional work can, to a certain extent, be covered by the open structure of the LS-CD test tasks or by a rating based on observations.

Chapter 3

Categorial Framework for Modelling and Measuring Professional Competence

When it comes to identifying the requirements for competency modelling in vocational education and training, pedagogical discussion and vocational training research are confronted with a wide variety of competency definitions and terms whose significance and manageability for the design of vocational education and training processes and for the empirical recording of vocational competencies differ widely. The proposals made in the vocational pedagogical discussion to develop competence models based on learning goal taxonomies or on the concept of vocational competence introduced by Heinrich Roth are critically evaluated in competence research. Dieter Euler critically assesses vocational educational attempts to model professional competence:

1. ‘The works apply different competency models whose connectivity to existing models remains open for in-company and school-based vocational training.
2. Developments remain partial with regard to a comprehensive concept of competence [. . .], i.e. only individual dimensions and facets of competence are taken up and covered. There is a close focus on expertise, particularly in the developments for the commercial sector. The alleged references to social competences (Winther & Achtenhagen, 2008, 531) lack a sound theoretical foundation.
3. [...] However, it remains questionable whether these developments can be transferred into the standard practice of final examinations [...]’ (Euler, 2011, 60).

The Klieme Commission shares this critical assessment of abstract concepts of competence as the basis of competence modelling (Klieme et al., 2003). In this context, Tenorth emphasises that ‘cross-disciplinary key competences’ such as social, personal and methodological competences, which are often equated with the concept of competence, ‘are not suitable either for the establishment of educational standards or for competence modelling’ (Tenorth, 2009, 14).


Fig. 3.1 On the relationship between guiding principles of vocational education and the measurement of vocational competence

The criticism of the guiding principle of vocational competence for vocational training as a connection between technical, personal and social competence, with reference to Heinrich Roth (1971), must not be misunderstood as a criticism of this ground-breaking guiding principle of vocational training. Instead, the criticism is aimed at the attempts to use this guiding principle of vocational education and training as a competence model in vocational education and training research. The Klieme Commission’s expertise and the subsequent justification of the DFG Priority Programme convincingly reasoned that the function of a competence model consists in mediating between the guiding principles and objectives of a subject or learning area and the development of test tasks (and learning tasks) (Fig. 3.1). In this respect, competence models also have a didactic function. However, this does not mean that didactic models are also suitable as reference systems for the development of test and evaluation tasks. Conversely, according to the Klieme Commission, competence models should also be characterised by a fundamental didactic function (cf. Katzenmeyer et al., 2009). Educational goals and their classification, e.g. according to Bloom’s learning goal taxonomy (cf. Anderson & Krathwohl, 2001; Brand, Hofmeister, & Tramm, 2005; Lehmann & Seeber, 2007, 26), can be used for the explanatory framework if they prove to be internationally connectable; competency models, on the other hand, cannot. This results in further requirements for competency models and the measurement models derived from them. These have to

• include general educational goals and guiding principles which can be justified in educational theory and which are internationally connectable, as well as domain-specific educational goals of training systems and training courses,
• be in line with the basic findings of teaching, learning, development and evaluation research,
• be tailored to learning areas that allow the content dimensions of the competency models to be sufficiently concrete,
• represent sufficiently concrete instructions for the development of test tasks (Fig. 3.2).

These requirements imply the need for simple models: models with no more than three dimensions and no more than four to five competence levels.
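This ‘simple model’ requirement can be stated directly as a small data structure. In the sketch below, the dimension and level names are generic placeholders; only the constraints (at most three dimensions, no more than five competence levels) come from the text:

```python
from dataclasses import dataclass, field

@dataclass
class CompetenceModel:
    """A 'simple' competence model in the sense described above: no more
    than three dimensions and no more than four to five competence levels."""
    name: str
    dimensions: list = field(default_factory=list)  # e.g. requirement, content, action
    levels: list = field(default_factory=list)      # ordered from lowest to highest

    def __post_init__(self):
        if len(self.dimensions) > 3:
            raise ValueError("a simple model has no more than three dimensions")
        if not 1 <= len(self.levels) <= 5:
            raise ValueError("expected no more than four to five competence levels")

# Placeholder names for illustration only:
model = CompetenceModel(
    name="illustrative model",
    dimensions=["requirement", "content", "action"],
    levels=["nominal", "functional", "processual", "holistic shaping"],
)
```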


Fig. 3.2 Justification framework for the COMET competence model

In addition, Weinert proposes not to include personality traits that lie outside the definition of competence as a domain-specific cognitive performance disposition in competency models, or to model them separately. Additional requirements for competency models are listed as follows:

• The suitability of the competence model for developing test procedures and for recording and evaluating learning outcomes in relation to educational monitoring, school evaluation and feedback to teachers/trainers and students/trainees on competence levels and competence development,
• Its suitability for international comparative competence surveys.

For vocational training, the following requirements are also indispensable:

• The application of open, complex (holistic) test tasks, since only these can serve to map real professional tasks;
• An acknowledgement that the solution or processing of professional tasks usually requires weighing up alternative solutions, which can be assigned to a solution space staked out by defined requirement characteristics—‘staked out’, since the solution spaces must in principle remain open for unforeseeable solutions (COMET Vol. I);
• The recording of professional commitment and professional identity as a dimension of the professional development process that is equivalent to the development of professional competence;
• A competence assessment that also allows comparisons across educational programmes; this results in the demand for a competency model that regards vocational education and training as one area of learning and at the same time enables vocational and occupational field-specific implementation;
• A distinction from the tradition of assessment as a form of outcomes verification, as used to determine credit points for the skills defined in national certification systems; this is a form of verification of the qualifications defined in module descriptions which has little to do with professional competence.

The standardisation of examinations and test tasks has a long tradition in vocational education and training. With the help of ‘multiple-choice’ tasks, an attempt was made to rationalise the examination procedure while simultaneously introducing quality criteria. The determination of the degree of difficulty and the selectivity of test tasks has since been regarded as proof of good test quality (see e.g. Schelten, 1997). In a 1975 report commissioned by the Federal Institute for Vocational Education and Training Research, however, Hermann Rademacker pointed out that this format of examination tasks is fundamentally unsuitable for the examination of vocational skills (Rademacker, 1975).

The justification framework for the modelling of vocational competence comprises the description and justification of the special features of vocational education and training:

Their guiding principles and goals, The basic theories of vocational learning and development, The concept of professional identity and commitment The vocational concept and the guiding principle of work design that is conducive to learning.

3.1

The Occupation Type of Societal Work

In the discussion on the professional organisation of societal work in the 1990s, the assessments of social historians and sociologists in particular consolidated into the thesis of the erosion of professionalism. As early as 1972, Giddens predicted the reasonable expectation that new forms of democratic participation would gradually emerge and that civil society would replace the working society. Kern and Sabel publish a committed plea against professionally organised skilled work, justified with the fact that the occupational form of work, especially in industrial production, leads to company demarcations, which hinder the necessary flexibilisation in modern companies. The adherence to the tradition of professional skilled work at best enables companies to reproduce what already exists, but not to innovate. As a way out, they recommend the Japanese model of company organisation development, which does without the occupational form of work and thus also without a vocational education system (Kern & Sabel, 1994). Finally, biographical research has coined the term ‘patchwork biography’ and wants to draw attention to an erosion of professionally organised wage labour (cf. Beck, 1993). Ulrich Beck and others argue in this context for a reflective modernisation with which the division into instrumental economic action and communicative political action can be overcome (Beck, Giddens, & Lash, 1996). No later than with the publication of Richard Sennett’s book The Corrosion of Character (Sennett, 1998), this discussion takes a turn. SENNETT deals with the

3.1 The Occupation Type of Societal Work

17

development of flexible working structures under conditions of specialisation, permanent re-engineering embedded in a new neoliberal global economy. If flexibility leads to the dissolution of professional work, then this is accompanied by the erosion of professional ethics, with fears for the future, with the devaluation of experience and therefore the loss of personal identity: ‘Our experience can no longer be cited in dignity. Such beliefs endanger the self-image, they are a greater risk than that of the gambler’ (Sennett, 1998, 129). He criticises the postmodern model of patchwork biography and picks up on Max Weber’s concept of the profession and work ethic in his criticism: If professional careers are sacrificed to this new flexibility, ‘there are no longer any paths that people can follow in their professional lives. They must move as if on foreign territory’ (ibid., 203). Sennett’s analysis can be interpreted as a reason for a modern career. The withdrawal of function-oriented operative organisational concepts and their overlapping by business process-oriented operational structures in modern companies are a decisive contribution to increasing operational flexibility. This is not contradicted but rather met by the concept of a modern professional life. In the reform dialogue on vocational education and training at the turn of the century, the concept of a modern profession emerged (KMK, 1996; Heidegger & Rauner, 1997). In 2007, Wolfgang LEMPERT initiated a vocational education discussion on the question ‘Profession without a future? Vocational education without professions?’ with reference to the work ‘Die Berufsform der Gesellschaft’ (Kurtz, 2005) presented by Thomas Kurtz. The results of the discussion, as summarised by Wolfgang Lempert, are in clear contradiction to the sociologically based forecasts of the 1970s to 1990s. That’s why there’s little point in pursuing abstract and general guesswork about the future of the profession in our society. To question its future formally and generally is wrong. Instead, we should ask about a future-oriented professional concept and the conditions for its implementation. This is how I understand Meyer and Rauner: both assume that it will still make sense in the future to structure the production and use of human working capacity professionally (Lempert, 2007a, 462). Open, dynamic bundling of employment-related potentials for action, which would have to replace many conventional training occupations, [fulfil] fully the abstract, formal criteria that Kurtz—in connection above all with Max Weber—emphasises as primary characteristics of occupations (ibid., 463).

Under the heading ‘Perspectives of the rescue, regeneration and future consolidation of an (also) professionally accentuated organisation of societal work’, Lempert concludes: ‘By including the academic professions, the nightmare vision of a total “disposal” of the professional principle would become absurd and be banished from the outset’ (ibid., 463).

The concept of open, dynamic careers and core occupations (Heidegger & Rauner, 1997) has meanwhile also emerged in the European Vocational Education and Training Dialogue as a guiding principle with an impact on vocational research and development. One prominent example is the development of the ‘European’ profession of ‘motor vehicle mechatronics technician’ (Rauner & Spöttl, 2002).

3.1.1 Employability or Professional Competence

All vocational training is aimed at the employability of its trainees. An apprentice becomes employable once he/she has acquired the knowledge and skills defined in the respective job description and is therefore in a position to practise the respective profession in a qualified manner. When professional competence is examined by testing an apprentice’s professional knowledge (theoretical examination) and professional ability (practical examination), the skills that must be mastered are of particular importance: they must all be mastered safely and without exception. This applies above all to safety- and health-related tasks and, to a certain extent, also to environment-related tasks. In principle, each occupation must ultimately be learnt in practice, during the work process, in order to achieve employability (Harold Garfinkel). Therefore, the dual organisation of vocational education and training, that is, learning a profession, is an indispensable basic form of vocational learning. There are three forms of duality:

1. Single-phase (integrated) duality. This form of dual organisation of vocational education and training has its roots in the master craftsman apprenticeship. In Germany, it is regulated primarily in the Vocational Training Act and is therefore the basic form of vocational training in all sectors of the employment system.
2. Two-phase (alternating) duality. This form of dual vocational training is widespread in academic education. A course of study (first phase) is often followed by a regulated second phase of academic professional training (e.g. for doctors, teachers and lawyers).
3. Informal duality. All forms of vocational (university) education and training which are not followed by a regulated second phase of vocational education and training have in practice developed informal forms of familiarisation with a profession. For example, university-trained engineers in Great Britain are certified as ‘chartered engineers’ after a certain period of time and in compliance with defined regulations.

3.1.2 Architecture of Parallel Educational Paths

However, the increased vocationalisation of higher education as a result of the Bologna reform entails risks for the quality of both academic and vocational education and training. This is due to the one-dimensional systems of classification of successive educational levels, which describe the structures of national education systems (ISCED, ISCO, EQF). Common to all one-dimensional classification systems is that the lower (vocational) and upper (academic) levels are demarcated by the definition of higher education qualifications. Higher education is academic education, which is subject to the constitutionally defined freedom of scientific teaching and research. The entitlement to award the degrees of bachelor, master and PhD lies, internationally, with the universities. Below this level, the classifications include, in a more or less differentiated way, qualifications and training courses in vocational education and training. The barrier between vocational and academic education is high; it almost hermetically separates the two worlds of education: academic-scientific education and execution-oriented vocational education. All attempts to make this educational architecture more permeable have led to the vocationalisation of academic and vocational education and training, and therefore to a development that impairs the quality of both educational traditions. In contrast, an architecture of parallel educational pathways holds the potential for a new quality of vertical permeability and for realising the equivalence of vocational and academic education. At the same time, the establishment of a continuous path of dual education creates a new dynamic in the interaction between the education and employment systems. The underlying idea is a concept of modern professionalism, a necessary basis for the implementation of an architecture of parallel educational paths. Even if the constitutional freedom of teaching and research protects universities from aligning their teaching with the qualification requirements of the employment system, it can be expected that the occupational profiles developed in the processes of vocational training planning will trigger a new discussion on professionalisation in academic education. This could also contribute to a significant reduction in the proliferation of specialised degree programmes and to the participation of organisations from the world of employment in the design and organisation of (dual) vocational training courses at universities (Rauner, 2015a), modelled on the Vocational Training Act.

3.1.3 Professional Validity of Competence Diagnostics

If the internationalisation processes cover not only the academic professions but also the professional organisation of work in the intermediary employment sector, then there is every reason to identify professional work as the reference point for substantiating the validity of competence diagnostics in the field of vocational education and training (→ 4.7). The merely curricular validity of tests would limit their function in the investigation of different forms of vocational education and training, including the quality of vocational curricula. Test tasks whose contextual validity is based on reference to vocational work (vocational validity), on the other hand, make it possible to identify strengths and weaknesses of various vocational training systems and arrangements. In particular, it is possible to check whether trainees/students have a vocational work concept (Bremer, 2006) as well as a vocational qualification upon completion of their vocational training, and not just the technical and functional knowledge and skills taught in typical (university) forms of vocational training or in traditional basic vocational training. The professional fields of action therefore also provide the reference point for internationally comparative occupational competence diagnostics. The International WorldSkills (IWS) can serve as a reference system (cf. Hoey, 2009).

3.2 The Design of Work and Technology: Implications for the Modelling of Professional Competence

The identification of ‘important work situations’ for the development of vocational competences as a pivotal point for vocational training plans oriented towards learning fields (KMK, 1996) can be based on fundamental theories of vocational competence development and expertise research (Lave & Wenger, 1991; Röben, 2004). It therefore seems natural to draw on the labour and occupational science tradition of work (process) analysis and design. Both research traditions inevitably transcend the postulate of purpose-free science whenever they interpret themselves as formative sciences (Corbett, Rasmussen, & Rauner, 1991). In summary, this is shown by Ulich’s list of the characteristics of work design (Table 3.1). Since the vocational-pedagogical and, above all, the vocational scientific discussion often draws on theories, methods and research results from occupational science in order to clarify the connections between working and learning and to shape them from a pedagogical perspective, some conceptual clarifications are made below.

3.2.1 Professional Work Tasks and Professional Competence

A professional work task describes a specific task to be performed by an employee in relation to its results. It must relate to work contexts that allow employees to understand and evaluate their function and significance for a higher-level operational business process. The structuring and organisation of professional work according to work tasks forms the basis of a concept of work that conveys an understanding of context (Laur-Ernst, 1990). Professional work tasks are always normative in two respects. First, professional work tasks are embedded in a profession; professions, however, are developed in negotiation and research processes guided by interests (cf. Schmidt, 1995). For this reason alone, the expression ‘objective’ qualification requirements, from which job descriptions and training regulations could be derived, is misleading. Furthermore, the design of work tasks results from competing concepts of the organisation of societal work. Here, the tradition of developing and testing humane work design and work organisation can be continued. Emery and Emery, Hackman, Oldham and Ulich in particular have dealt with the justification of characteristics for a humane work design. Since it has been shown that humane work design and ‘Human Centred Systems’ (Cooley, 1988) are competitive in the implementation of computer-aided work systems, or even create competitive advantages, these concepts have found their way into operational organisational development (Ganguin, 1992).

Table 3.1 Characteristics of task design based on Emery and Emery (1974), Hackman and Oldham (1976) and Ulich (1994, 61)

Holistic character
– Assumed effects: Employees recognise the importance and value of their work; employees receive feedback on their own work progress from the activity itself.
– Realisation by: tasks with planning, executing and controlling elements and the possibility of checking the results of one’s own activities for compliance with requirements.

Variety of requirements
– Assumed effects: Different skills, knowledge and abilities can be applied; one-sided demands can be avoided.
– Realisation by: tasks with different demands on body functions and sensory organs.

Possibilities for social interaction
– Assumed effects: Difficulties can be overcome together; mutual support helps to cope better with demands.
– Realisation by: tasks whose accomplishment suggests or presupposes cooperation.

Autonomy
– Assumed effects: Strengthens self-esteem and willingness to take responsibility; provides the experience of not being without influence and meaning.
– Realisation by: tasks with disposition and decision possibilities.

Opportunities for learning and development
– Assumed effects: General mental flexibility is maintained; vocational qualifications are maintained and further developed.
– Realisation by: problematic tasks for which existing qualifications must be used and extended or new qualifications acquired.

Time elasticity and stress-free adjustability
– Assumed effects: Counteracts inappropriate work intensification; creates leeway for stress-free thinking and self-chosen interactions.
– Realisation by: creating time buffers when setting target times.

Sense of purpose
– Assumed effects: Makes employees feel involved in the creation of socially useful products; provides certainty that individual and social interests are in harmony.
– Realisation by: products whose social benefits are not questioned; products and production processes whose ecological harmlessness can be checked and guaranteed.

The identification of professional work tasks must therefore consider the normative aspects of professional development and of work organisation, as well as both in their context. Hacker comes to a similar conclusion in his analysis of diagnostic approaches to expert knowledge: ‘As a preliminary consequence for the diagnosis of knowledge, it seems advisable to consider a paradigm shift from [...] a reproducing to a (re-)constructing process of the task-related performance prerequisites with individual and cooperative problem-solving and learning offers for the experts’ (Hacker, 1986, 19).

Fig. 3.3 Professional work in the field of tension between work contexts and work practices (Rauner, 2002a, 31)

Professional work tasks can be divided into subtasks. Subtasks are characterised by the fact that their sense for the employee is not derived from the subtasks themselves but only from the context of the higher-level work task. If the subtasks of a superordinate task are delegated to different persons who do not work together in a working group, the employees lose sight of the working context. Under this organisational model, the division into subtasks dissolves the work context not only organisationally but also in the subjective perception (as an understanding of context) and in the subjective experience of the employees. In this connection, occupational science primarily deals with questions of task and condition analysis, with the division of functions between humans and machines and, above all, with questions of stress and strain, and less with the aspect of professionally organised work as a point of reference for educational processes. A detailed subdivision of work tasks into subtasks, work actions and, occasionally, operations can therefore be quite appropriate when carrying out empirical work analyses. In VET research, on the other hand, if tasks and work actions become context-free reference points for the design of VET plans and processes, detached from the work context (Fig. 3.3), this induces decontextualised learning that stands in the way of teaching the vocational competence needed to understand and shape the world of employment (see Connell, Sheridan, & Gardner, 2003).


Scientific interest in the work process is also directed towards the structure of the complete work activity. The vocational-pedagogical and occupational scientific interest in this occupational science concept is based on its normative interpretation by design-oriented occupational science: employees should learn to plan, carry out and evaluate their work (cf. Table 3.1). Accordingly, a professional activity that consists of execution alone is an incomplete work activity. As a pedagogical category, however, the term ‘complete work activity’ is only suitable if the meaning- or content-related aspect of the work activity is not excluded. In this context, Frieling refers to the limited range of standardised analytical methods as developed by McCormick (1979), Frei and Ulich (1981), Volpert, Oesterreich, Gablenz-Kolakovic, Krogoll and Resch (1983) and other occupational scientists. Although these instruments can be used as a structuring aid for recording essential aspects of work activity (Frieling, 1995, 288), the abstract formulation of the items is unsuitable for the analysis and evaluation of concrete work contents in their significance for the working persons (Lamnek, 1988). This critical assessment is of central importance for the design of vocational curricula and vocational training processes. A further source for the educational-theoretical development of a vocational competence concept is the work of the VDI on technology assessment and the corresponding philosophical discussion on the ethics of technology. An essential aspect of technology assessment is the technology impact assessment, which is oriented towards policy advice (Ulrich, 1987). The concept of technology assessment already has the potential to be expanded by technology genesis research and the concept of technology design (Sachverständigenkommission Arbeit und Technik, 1986). In its guideline on technology assessment, the VDI committee ‘Fundamentals of Technology Assessment’ states: ‘Technology assessment here means the planned, systematic, organised procedure that [...] derives and elaborates options for action and design [from the assessment of technical, economic, health, ecological, human, social and other consequences of technology and possible alternatives]’ (VDI, 1991). In this guideline, technology is understood as an objectification of values and related interests. The quality of ‘responsible’ technical development is assessed with reference to the overriding criteria of personality development and the quality of social development. Six ‘values in technical action’ can be assigned to these superordinate values (Fig. 3.4). These are:

• Functionality (usability, effectiveness, technical efficiency),
• Economic efficiency (in line with individual economic profitability),
• Prosperity (in line with macroeconomic benefit),
• Security (for individuals and humanity),
• Health (well-being, health protection),
• Environmental quality (natural and cultural components) (ibid., 7 ff.).

Fig. 3.4 Relationship between goals and values for petrol engines (VDI, 1991, 3–5)

The second root of a ‘technical education’ that establishes the connection between the technically possible and the socially desirable (Rauner, 1986) is the discussion on the philosophy of technology, which gained momentum in parallel with ‘work and technology’ research (Hastedt, 1991; Lenk & Ropohl, 1987; Meyer-Abich, 1988). Heiner Hastedt in particular deals with the possibilities of technology design in his research on ‘basic problems regarding the ethics of technology’ (Hastedt, 1991, 138), defining evaluation and design categories very similar to those of the VDI. Technology design implies not only interdisciplinarity but also new forms of participation, according to the motto formulated by Walter Bungard and Hans Lenk: ‘Technology is too important, now and in the future, to be left to the technicians alone’ (Bungard & Lenk, 1988, 17).

3.3 Task Analyses: Identification of the Characteristic Professional Work Tasks

Vocational education and training is a form of education and qualification in the world of employment as well as an intentional process of learning for the world of employment; it depends on knowledge of the expertise and skills required in the work process. Three questions need to be answered:

• What are the skills that enable ‘skilled’ workers to carry out their work adequately?
• What other skills must they have in order to participate in the process of operational organisational development, both within and beyond their own area of responsibility?
• Which skills are developed in the work process itself, and how should ‘learning’ and ‘qualifying’ work processes and work systems be designed?

In vocational education and training practice, some of these questions are rarely asked, because the teaching and learning content and the related educational and qualification objectives are specified in regulatory instruments: the training regulations with their framework training plans for in-company training and the (framework) curricula for school-based vocational education and training. These represent the occupational profiles in an operationalised form. However, since occupations are mostly traditional, fixed attributions of tasks for the organisation of societal work, embedded in the industrial-cultural development of regions and countries, they represent general socio-economic requirements rather than qualification requirements aimed at company organisational development. The increasingly rapid pace of technological and operational innovation in industry, commerce and the crafts makes it necessary to examine occupational profiles and occupational regulations with regard to their topicality and prospectivity and to relate their attributions of tasks to the reality of work. Here, the working reality is understood not only as empirically given, but also as something to be developed. In the study of a professional field (occupational field science)¹, vocational scientific work studies therefore play a central role. The subject matter of professional scientific work studies is described in more detail below, and information is provided on their methodological implementation. The vocational sciences deal with the contents and forms of skilled work in established and developing occupations and occupational fields, with vocational learning processes for the world of employment and with implicit and explicit learning in the work process. In the analysis, design and evaluation of vocational training processes and of work processes that promote learning, the link must be established between the work processes of vocational working reality, the learning and educational processes and the systems of vocational organisation (Fig. 3.5). Figure 3.5 shows three correlations between the world of employment and vocational education and training. The widespread idea that, in a first step, the means of vocational classification can be derived from the analysis of the reality of work, and that the contents and forms of vocational training processes result from this in a linear connection, is called qualification determinism. This deterministic misunderstanding is as widespread in the everyday actions of vocational educators as it is in vocational training planning and research. On closer inspection, this linear relationship evaporates and gives way to a differentiated, non-deterministic concept of correlations between the three poles of the outlined relationship. This is where the studies and development tasks for design-oriented vocational education and training can be found (→ 2.4).

¹ The more common term ‘professional science’ is used below.


Fig. 3.5 The relationship between professional work and education processes

Figure 3.5 illustrates two widespread reductions and deficits in the vocational education activities of teachers and trainers and in vocational education research.

3.3.1 Professional Scientific Task Analyses Include

1. The identification of the work contexts and qualification requirements characteristic of a job description: the occupational profile.
2. The identification of work tasks covering the job description. A distinction must be made between the core tasks and the industry- and application-specific professional areas of responsibility.
3. The logical systematisation of developmental tasks according to criteria as suggested by the novice-expert paradigm.
4. The differentiation of each work task according to the categories:
   – subject of skilled work,
   – methods, tools and organisation of skilled work,
   – requirements for the subjects and the forms of skilled work.

The professional work tasks do not arise from a process of aggregation of elementary, abstract basic skills and knowledge of the kind assumed to be the smallest units in the microanalysis of work activities. Conversely, the higher-level, meaningful work context and the profession, with its potential for creating identity, become the starting point for identifying the work tasks that constitute the profession. For a profession that requires about three years of training, experience has shown that between 15 and 20 professional tasks can be specified that meet the criteria for professional working contexts. A differentiation of each of these work tasks into its subtasks would make sense when specifying and operationalising the work tasks, but it is not necessary as a starting point for the task analysis. Professional work tasks are only those that can be formulated in an action-oriented way and as part of corporate value-creating processes. Professional work tasks have very different qualities in terms of the required professional experience, the degree to which they can be routinised, and the extent and level of theoretical knowledge required to master them. While some professional tasks can only be mastered safely and effectively after many years of professional experience, other tasks can be performed from the very beginning of a career, without this necessarily meaning that these tasks have a lower priority with regard to their qualification requirements. Since all vocational work tasks are acquired in the course of learning a profession, from beginner (novice) to mastery level, it is obvious to arrange the vocational work tasks, as a starting point for the development of vocational training plans, in such a way that they support the process of vocational training and qualification on the way to mastery or specialist level.
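How such a task catalogue might be represented can be illustrated with a minimal sketch. The following Python fragment is purely illustrative (all names, fields and the four-stage scale are assumptions made for this sketch, not part of the COMET instruments); it captures the differentiation categories listed above and the novice-expert ordering of the catalogue:

from dataclasses import dataclass
from typing import List

@dataclass
class ProfessionalWorkTask:
    # One of the roughly 15-20 tasks that constitute an occupational profile.
    title: str                             # action-oriented formulation of the task
    subject_of_work: str                   # subject of skilled work
    methods_tools_organisation: List[str]  # methods, tools, organisation of skilled work
    requirements: List[str]                # requirements for the subjects and forms of skilled work
    core_task: bool = True                 # core task vs. industry-/application-specific task
    development_stage: int = 1             # 1 = novice ... 4 = expert (novice-expert paradigm)

def order_for_training_plan(tasks: List[ProfessionalWorkTask]) -> List[ProfessionalWorkTask]:
    # Arrange the catalogue so that it supports development from beginner to mastery.
    return sorted(tasks, key=lambda t: t.development_stage)

Sorting by the assumed development stage mirrors the arrangement of tasks along the path from novice to mastery described above.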

3.3.2 Identifying Professional Work Tasks: Expert Specialist Workshops (EFW)

Expert specialist workshops are suitable for the identification of professional work tasks. The procedure is based on the ‘Design A Curriculum’ (DACUM) concept developed by Bob Norton at the NCRVE² (Ohio State University) in the 1980s and on a task analysis procedure tested in the Leonardo project ‘Car-Mechatronic’. A two-day workshop for experts is the core component of this process (→ 4.1).

² National Center for Research in Vocational Education (at Ohio State University until 1988 and then at the University of California, Berkeley).

Professional versus Experience-Based Description of Work Tasks

In this form of vocational scientific task analysis, the methodological challenge is to transform the context-related experiences of ‘expert specialists’ into a context-free description of occupational tasks. Only if this succeeds will the work tasks identified with EFW form a basis for the development of test tasks. At the same time, professional work is always tied to subjectivity and situativity and is therefore unique. This means that, as a first step, the quality of professional work processes and tasks can only be assessed in a context-related manner. This has far-reaching consequences for the EFW method. The workshop participants are experts in their own work. The source of the expertise they can contribute to the workshops is their reflected work experience. Extensive research practice shows that attempts to question the expert specialists as professional experts about the characteristic professional work tasks, rather than about their actual competence (their subjective work experience), lead to failed analyses. The expert specialists would be placed in the role of qualification researchers, expected to provide information on findings that require a process of scientific reduction of empirical data and the associated content analyses. Their real competence as experts on their own work experience would fall by the wayside. When planning, implementing and evaluating the EFW, it must be considered that teleological elements cannot be avoided in the description of professional tasks and of developmental processes from beginner to expert. Professional action always includes dealing with and weighing up the environmental and social compatibility of professional task solutions (→ 3.2). It is critical to note in this context that the logical approach to educational research has so far been developed mainly with reference to developmental psychology, or even merges into it. For vocational education and training and for all forms of technical education, in which the technical competence to be imparted is expressed in educational goals, this approach largely misses its central subject: the educational contents. Therefore, in the further logical approach to the research and design of vocational work and educational processes, it is important to work out clearly the specifics of these development processes, for example in comparison with general education (Table 3.2).

Participants: Expert Specialists

In order to identify expert specialists, the following characteristics should be fulfilled:

• With their professional biography, their professional competence and their current work tasks, expert specialists represent a background of experience and knowledge that can be used to determine future-oriented working contexts. In recent years, the experts should have passed through several stages of their careers, know various departments of the company and have been involved in innovative projects.
• Expert specialists are not representatives shaped by the given professional structure, but rather embody innovative and forward-looking professional practice (prospectivity) for a particular professional field. Depending on the occupational field, the industry structures must also be considered when selecting experts, so that a profession can be covered in its core and marginal tasks.
• In practical EFW, 10–12 participants have emerged as a favourable group size (see the sketch below). Two-thirds of the participants should be at the skilled-worker level and one-third at the superior level. The representatives of the advanced skilled workers (foremen, master craftsmen and workshop managers) represent the work-oriented management perspective in this process. Above all, they are the ones who can assess the professional work tasks in relation to the company’s task organisation.
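The recommended group composition can be expressed as a simple arithmetic check. A minimal sketch, assuming the 10–12 participant range and the two-thirds/one-third split described above (the function name is a hypothetical illustration):

def efw_composition_ok(n_skilled: int, n_advanced: int) -> bool:
    # Favourable EFW composition: 10-12 participants in total, about
    # two-thirds at skilled-worker level and one-third at the superior
    # level (foremen, master craftsmen, workshop managers).
    total = n_skilled + n_advanced
    return 10 <= total <= 12 and abs(n_skilled - 2 * n_advanced) <= 1

# Example: 8 skilled workers and 4 advanced skilled workers form a valid group.
assert efw_composition_ok(8, 4)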

Table 3.2 Differentiation in developmental analysis and design of educational processes

General (formal) education
– Personality: children and adolescents
– Subject of development: development of cognitive, social and moral competence
– Development structures: development steps of cognitive, social and moral competence
– Teaching/learning contents: largely interchangeable
– Sciences to identify the development structure of teaching/learning contents: general pedagogy, developmental psychology

Natural science education
– Personality: pupils of natural science education
– Subject of development: acquirement (development) of scientific theory
– Development structures: stages of increasing specialist competences
– Teaching/learning contents: scientific facts/learning content
– Sciences to identify the development structure of teaching/learning contents: natural sciences and their didactics

Vocational education
– Personality: trainees (adolescents, adults)
– Subject of development: acquirement/development of professional skills in specific professions
– Development structures: stages of increasing professional competence
– Teaching/learning contents: training regulations according to BBiG, professional curricula
– Sciences to identify the development structure of teaching/learning contents: vocational (field) science and its didactics, vocational pedagogy


Researcher and Moderator

The workshop is usually conducted by two researchers, at least one of whom has relevant professional training and, if possible, also relevant work experience. The ‘second’ researcher acts as moderator and pays special attention to the methodological approach and to establishing a trusting and creative workshop atmosphere that enables all participating experts to contribute their full experience and competence to the analysis process. The ‘first’ researcher leads the expert discussion, clarifies technical contradictions and deepens the discussion through technical suggestions and interventions.

3.3.3 Professional Scientific Work Process Studies

Goals and Structure of Work Process Studies

Work process studies can be used to gain insights into the skills incorporated into practical professional work. In the tradition of didactics in vocational education and training, this question has been rather undervalued. The widespread method is to derive specialist knowledge from objective scientific knowledge in a process of simplification (didactic reduction or transformation) in order to teach it to students or trainees in specialist instruction (Schein, 1973). It is assumed that this ‘knowledge’ must have a connection to professional action. In this tradition, knowledge contents in the form of ‘subject theory’ are regarded as objectively given facts whose objectivity is based on the specialist sciences. However, the real importance of this knowledge for practical professional action remains unclear. What we do know is that this context-free knowledge can only become a basis for professional competence when it is incorporated into concrete professional activities. Parts of this context-free theory are certainly transformed into work process knowledge in the process of professional work. Grounding vocational education and training in in-depth knowledge of work process knowledge marks a fundamental change of perspective in vocational education and training practice: ‘If it is possible to find access to what constitutes the practical skills, the incorporated knowledge of vocational work, its findings will be invaluable and exert a lasting, if not revolutionary influence in many areas—for example in curriculum and evaluation research’ (Bergmann, 1995, 271). In this light, vocational training and work appear anew. There are interesting references to historical developments in which, for example, the art of building was based not on engineering science but on the great master builders’ knowledge of the working process, which had developed over centuries in a process of work experience. Work process studies are therefore an important instrument of qualification research for the identification of professional knowledge and skills as a basis for competence diagnostics.


The Steps of Occupational Scientific Work Process Studies (Table 3.3)

If a work process study is carried out with the aim of developing professionally valid test tasks for competence diagnostics, then the identification of ‘important work situations’ for professional competence development (KMK, 1999) is the focus of research interest. In addition to the criterion of the qualitative representativeness or exemplariness of the work process in terms of the purpose of the study, another selection criterion is the clarity of the work process. This is given if the researcher qualified in vocational science is able to record the work situation in all essential objective and subjective moments under the given operational framework conditions and within the time available for the examination. Finally, the work process should be directly accessible to the researcher so that he can be present in the work situation. The main criteria for the selection of the object of investigation are therefore (see also the sketch following Table 3.3):

• validity of content through qualitative representativeness and exemplariness,
• manageability of the work process (limitation of the field of investigation while maintaining its complexity of content),
• accessibility of the work process (for an emphatically action- and process-oriented research).

The analysis of professional work processes requires professional competence on the part of the researchers, enabling them to conduct expert talks and discussions at the level of domain-specific technical language. This includes knowledge of the work process and context to be analysed, i.e.

• the technical issues: work object, work equipment and tools, and work processes,
• the specifications for the professional or operational work tasks in which the work process to be examined is integrated,
• the instructions and documentation available for the execution of the corresponding work tasks,
• the subject-systematic (theoretical) connections, as far as these are of importance for competent work action.

This preparatory step can also include the practical handling of work objects and tools, to such an extent that the work process to be examined is technically clear for the researcher and he can fully concentrate on researching the concrete work actions of the actors and the specific and general competences expressed therein.

Table 3.3 Steps of occupational scientific work process studies
• Selection of the work process.
• Analysis of the objective conditions shaping the work process.
• Definition and formulation of preliminary research questions and hypotheses.
• Preparation of the study: approach to the research field.
• Implementation of the work process study.
• Evaluation of the study.
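Read as a whole, the three selection criteria act as a filter over candidate work processes. A minimal sketch, assuming that the criteria have already been judged through vocational-scientific analysis and recorded as boolean values (all names are illustrative assumptions):

from dataclasses import dataclass
from typing import List

@dataclass
class CandidateWorkProcess:
    name: str
    representative: bool  # validity of content: qualitatively representative/exemplary
    manageable: bool      # can be recorded in all essential moments in the available time
    accessible: bool      # the researcher can be present in the work situation

def select_for_study(candidates: List[CandidateWorkProcess]) -> List[CandidateWorkProcess]:
    # Keep only the work processes that meet all three selection criteria.
    return [c for c in candidates if c.representative and c.manageable and c.accessible]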

The analysis and, if necessary, the appropriation of the objective side of the work process is therefore of particular importance, because it helps to avoid the frequently encountered mystification of implicit abilities and intuitive competence in action (tacit knowledge, tacit skills) and the ‘arationality’ of competent action, in the terminology of Dreyfus and Dreyfus (1987). Since the researcher has the theoretical and, if necessary, also a certain practical professional competence for the work process to be analysed, an essential prerequisite is given for comparing the process-related work actions and the actors’ interpretations of these actions with the researcher’s own interpretations in the research situation and, if necessary, feeding the differences back to the actors. This is an essential prerequisite for the validity of the test results.

Definition and Formulation of Preliminary Research Questions and Hypotheses

In identifying and formulating the preliminary research questions and hypotheses, general questions such as the following are taken up:

• How do beginners, advanced workers and experts act in the work context under investigation?
• Which professional, social and methodological competences enable the actors to act professionally and competently?
• How do theoretical and experience-based action intermesh, and how is subjective theory formed among the actors (work process knowledge)?
• To what extent and in what quality is competent work action obligatory?
• To what extent and how do skilled workers use the tools available for their work?
• Which (de)qualifying effects can be identified in the work process, and how are they triggered and favoured?

In contrast to the social science research context, the focus here is not on developing generalisable theories about occupational work structures, vocational learning (learning theories) or company organisational development. The aim of the occupational scientific analysis of work contexts is to determine the relationship between work content and professional competence for a specific occupation or for specific professional work contexts and processes. Despite the professional competence of the researcher, given by his professional scientific qualification and his preparation for the study, he must assume that the actors’ knowledge of the work process can only be made accessible through the research process. It is therefore essential to avoid prematurely deducing subjective working behaviour and professional knowledge and competence from the objective conditions of the work process. The preliminary hypotheses are accordingly open, so that they can be clarified, modified or even completely rejected and replaced by others as the investigation progresses. This process-oriented research aims at a consensus in dialogue between researcher and actor(s) and includes the questions and hypotheses of the investigation (cf. Scheele, 1995).


Preparation of the Study: Approach to the Research Field

After the work process to be examined has been selected, justified in terms of professional science and analysed from a work-theoretical perspective, and a suitable examination situation (company, skilled worker, supervisor, etc.) has been chosen, the actors are informed about the project and their interest in it is aroused. It should normally be in the interest of both professionals and management to enable and actively participate in vocational work studies, as these aim to improve the design and organisation of work and vocational qualification. When presenting the project, the researcher also points out his professional qualifications in the area of responsibility to be examined. This not only favours the relationship of trust that such an investigation always requires, but also defines the examination situation as one ‘among experts’. The intention of the investigation, its form and the methodological procedure are presented to the participants. The situations to be examined are commented on from a technical perspective on the basis of the previous analysis of the objective side of the work, and the objective and interest of the investigation are justified. By emphasising the technical content of the investigation, the persons being examined become, to some extent, participants in the research process. The aim is to guarantee or promote:

• the acceptance of the researcher by the study participants;
• the greatest possible scope for action and design in the investigation;
• extensive identification of the parties involved with the investigation project;
• a definition of verbal and non-verbal communication at the level and in the quality of professional and work-related professionalism: those to be examined know what they can expect of the researchers in terms of content and that they can communicate without distorting their accustomed forms of expression;
• an emotional opening of the persons to be examined;
• a climate of trusting cooperation based on specialist professional collegiality.

Implementation of the Work Process Study

The empirical basis of the occupational scientific work study is provided by:

• the structure of the operational work process,
• the operational organisation of work processes and tasks,
• the persons involved in the study and their professional, technical and socio-occupational competences,
• the concrete work contexts to be analysed, in their technical and company-specific contents as well as their forms, represented by the concrete work action and the objective circumstances.

The researcher must be able to understand the (technical and methodological) content of the work context to such an extent that, during an examination, he can grasp the significance of the concrete work steps for the processing of a work task, interpret them with regard to strategic, spontaneous, creative, ‘programmed’ and systematic/technical procedures, and evaluate incorrect and inappropriate procedures. This professional competence enables the researcher to sound out the work situation to the necessary depth. The researcher should therefore, if necessary, acquire any missing specialist skills before starting the study (Frieling, 1995, 285; Bergmann, 2006).

The Action-Oriented Expert Discussion

The key questions for the technical discussion are formulated in advance in a discussion guideline. Interviewing different groups of people naturally also requires different key questions. It is important to assign the key questions to the higher-level research questions: which questions and combinations of questions should be used to cover which aspects of the study? The key questions have more the function of a ‘checklist’, which allows the researcher to become deeply involved in the work process under investigation, since with their help he can always return to the reflection level of the more detached analyser (a schematic sketch of such a guideline follows below). The researcher takes part in the work situation and encourages the skilled worker, by means of appropriate impulses and questions, to voice his thoughts about what he is doing at that moment. The interview is conducted according to the situation, very close to the work process.
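How a discussion guideline can serve as such a checklist may be illustrated schematically. In the following sketch, the mapping and the example key questions are hypothetical illustrations derived from the research questions listed earlier, not a prescribed instrument:

# A discussion guideline assigns key questions to the higher-level research
# questions, so the researcher can always return to the reflection level
# of the more detached analyser.
GUIDELINE = {
    "How do beginners, advanced workers and experts act in this work context?": [
        "What are you paying particular attention to in this step?",
        "How would a newcomer approach this task?",
    ],
    "How do theoretical and experience-based action intermesh?": [
        "Where did you learn to do it this way?",
        "Is there a rule behind this procedure, or is it experience?",
    ],
}

def remaining_key_questions(research_question: str, asked: set) -> list:
    # Return the key questions not yet used for a given research question.
    return [q for q in GUIDELINE.get(research_question, []) if q not in asked]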

Paraphrasing

If, during the ‘technical discussion’, the researcher has the impression that an utterance remains on the surface of the work situation and is misleading, or that it is even incomprehensible to him, it makes sense to repeat the utterance interpretively, so that the actor has the possibility of clarifying, deepening or revising the previous utterance.

Example:
Researcher: ‘I have now understood . . .’
Dialogue partner: ‘Not quite, e.g. if I . . .’

Enquire, Reflect and Clarify

Enquiries usually lead to a certain interruption of the work situation. During work situations, the researcher may also encounter technical situations requiring clarification, which he may understand in their content but not in relation to the work action of the skilled worker. If the researcher assumes that a specific work action is of particular importance for one of his research questions, this requires a more in-depth enquiry and, if necessary, a special expert discussion about the specific ‘case’ and its processing possibilities. Such conversational situations usually mean an interruption of the work action. An explicit interruption of the work situation is achieved through an intervention that the researcher initiates with remarks such as ‘Just a moment, isn’t what you are doing risky?’ or ‘Couldn’t we solve the problem this way?’ A technical intervention is appropriate:

• if the researcher cannot follow a work action even though he understands its technical side;
• if only an intervention can clarify why the skilled worker prefers one of several possible work steps;
• if it seems expedient to encourage the skilled worker to apply an alternative procedure or to play it through mentally.

Qualitative Experimentation

The experiment has a dual function in work studies. First of all, experimental testing is part of the repertoire of the theory- and experience-guided work of skilled workers. One very typical example is fault isolation in technical systems, where an experimental approach promises success: for instance, when the incremental elimination of possible causes from a large number of candidates leads to the identification of the actual cause of a fault. This work action, in the form of the systematic and experimental search for the cause of a fault, is underestimated in the relevant investigations. Skilled workers very often refer to their ‘empirical values’, yet they often acquire these precisely by experimenting in their work. This also includes thought experiments. Another important aspect in the context of occupational scientific work studies is the qualitative experiment. In contrast to the laboratory experiment, the researcher creates a quasi-experimental situation in the form of explorative, heuristic experimentation through his intervention in the process of action-oriented observation. Detached observation and active experimentation are often understood as two opposite forms of researcher behaviour towards the ‘subject’. In reality, these two knowledge-generating methods are mutually (dialectically) intertwined. The experiment becomes significant only through precise observation and, conversely, observation becomes significant through the systematic and systematising activities of the researcher. There is a considerable need for development in advancing forms of explorative and qualitative experiments for research into complex work situations. Two forms of qualitative and explorative experiments are available.

Planned Explorative-Experimental Work Process Studies

In this case, the researcher influences the variation of two or more factors assumed to be relevant for a typical, real work context in such a way that their effects on the management of the work context under examination, or on certain aspects of it, can be examined.

Situative ad-hoc Experiments

The participant investigation of work processes, based as it is on action-oriented research discussions and action-oriented exploration, suggests using the explorative character of participant observation to redesign work situations in a quasi-experimental way. Formulations such as the following can be used to create such quasi-experimental situations ad hoc:

• ‘What would you do if you didn’t have computer self-diagnostics?’
• ‘Could you demonstrate how to work on the problem without the manual?’

Such interventions can also prompt ‘thought experiments’.

Documentation of the Research Process: Tape and Memory Records The methods recommended in the relevant methodological manuals—especially in empirical social research—can be used to document the study. The tape and memory records are particularly important in this context. The shorter the time interval between recording and the event for which a record is created, the better the quality of the memory records. A two-column procedure is recommended for the memory record.

The Memory Records (Table 3.4)

Table 3.4 The memory record

Column 1: Documentation of the work process
This column describes the work process in as much detail as possible, in all its dimensions and aspects. Written and other documents are documented in the appendix or noted as sources. If a tape recording is available, it is important to document in the memory record those facts which cannot be taken directly from the tape or video recording.

Column 2: Documentation of the researcher’s actions and their justification/spontaneous impressions
In an action- and process-oriented work process study, the actions of the researcher are of particular importance. The actions of the researcher, synchronous with the actions of the actor, are documented in the memory record and briefly explained if this does not follow from the context of the action. In this column, the spontaneous impressions are also noted, according to the motto: ‘What went through my mind’.
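The two-column procedure can be mirrored in a simple record structure, for example when memory records are kept electronically. A minimal sketch (the field names are illustrative assumptions):

from dataclasses import dataclass

@dataclass
class MemoryRecordEntry:
    # One synchronous row of the two-column memory record.
    time_mark: str           # position in the work process (or tape time code)
    work_process: str        # column 1: the work process in all its dimensions and aspects
    researcher_actions: str  # column 2: researcher's actions, their justification
                             # and spontaneous impressions ('what went through my mind')

entry = MemoryRecordEntry(
    time_mark="10:42",
    work_process="Skilled worker isolates a fault in the control cabinet.",
    researcher_actions="Asked why the self-diagnostics were skipped; impression: routine.",
)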


Situation Film as Supplementary Documentation of the Working Reality

In contrast to the method of thick description, the aim in creating cinematic documentation and representations of the work contexts to be examined is not to document a work situation as completely as possible; the result of such an attempt would probably be a hardly manageable mismatch between a few ‘key scenes’ and an abundance of unusable recordings. The concept of the situation film assumes that the researcher has a deep understanding of the work situation and is therefore in a position to document ‘key situations’ on film. This means that ‘recognisable’ situations are already selected during the documentation process. This is feasible within the framework of professional scientific research, because the work context to be investigated is a field familiar to the researcher. Nevertheless, in practice, a ‘yield’ of only about 20% usable film material is to be expected. In a series of research projects, the resulting cinematic documentation was didactically processed into ‘situation films’ for vocational training. Unlike with conventional educational films, the viewer is challenged to interpret the documented situation himself or to develop one (or competing) interpretation(s) in a discussion process between the participants and researchers. Such situation films are therefore particularly suitable for work process studies with actors outside the concrete work situation. The situation film represents a quasi-work situation for the actors: the actor sees his own work situation through the film and projects the film onto it. This makes it possible to hold a technical discussion between researcher and actor that is very close to the reality of the actor’s work. The possibility of referring to film scenes that have an affinity to their own work situations significantly increases the actors’ willingness and ability to express themselves. The more directly the working reality documented in the situation film corresponds to that of the actors, the better this succeeds (cf. Müller, 1995; Petermann, 1995).

Evaluation of the Study

Determining the direction of analysis. The direction in which the analysable material is to be evaluated is determined first. Here one needs to ask whether the object of the analysis is:

• the subjects of the study, their actions and competences,
• the object of work (e.g. a new tool) and the associated work method/procedure,
• the object of work and the organisation of work, or
• the qualification requirements (and how they are reflected in the work process).


Qualitative Text and Material Analysis

Only then does the content analysis begin. The procedures proposed here go back to Mayring (1988) and can be divided into summarising, explicating and structuring analyses. The purpose of the summarising content analysis is to reduce the material in such a way that its meaningful content is retained while a compressed short text is created; the reduction proceeds in several steps (Fig. 3.6). In explicative content analysis, it is important, roughly the reverse of a summarising content analysis, to clarify the meaning of initially unclear text passages with the aid of memory protocols, other materials and theoretical assumptions.

Reformulation (paraphrasing): The text passages not relevant to the investigation, as well as redundant and repetitive text parts, are deleted; the content-relevant text passages are reformulated at the desired output level, without changing the content of the statements.
↓
First reduction (generalising text passages with the same content): The cleaned text is structured into corresponding text groups according to categories, which result from the research objectives and the material. Within the groups, redundant texts are omitted, as are texts that do not share the same content.
↓
Second reduction (summary of the texts): In the text groups divided into categories, identical and similar statements are now bundled and summarised. Theoretical assumptions about the objective conditions of the work activity can be used as an aid, provided the meaning of the statements remains the same.
↓
Checking the categories and the scientific relevance of the compiled text: The text thus constructed and integrated is compared with the categories formed at the beginning, and it is examined whether and, if necessary, how the categories need to be changed or further developed. The short text can now be examined with regard to its relevance for vocational science.

Fig. 3.6 Processing steps of the interview material
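The reduction steps of Fig. 3.6 can also be read as a processing pipeline. The following sketch makes the sequence of steps explicit; the keyword matching is only a placeholder, since the actual reformulation, grouping and bundling are interpretive decisions of the researcher, not automatable string operations:

from typing import Dict, List

def summarising_content_analysis(
    passages: List[str], category_keywords: Dict[str, List[str]]
) -> Dict[str, str]:
    # Step 1 - Reformulation (paraphrasing): keep only content-relevant passages.
    relevant = [p.strip() for p in passages
                if any(kw in p.lower()
                       for kws in category_keywords.values() for kw in kws)]
    # Step 2 - First reduction: group passages of the same content by category,
    # omitting redundant texts within each group.
    grouped: Dict[str, List[str]] = {c: [] for c in category_keywords}
    for p in relevant:
        for cat, kws in category_keywords.items():
            if any(kw in p.lower() for kw in kws) and p not in grouped[cat]:
                grouped[cat].append(p)
    # Step 3 - Second reduction: bundle identical and similar statements
    # into a compressed short text per category.
    condensed = {c: " ".join(ps) for c, ps in grouped.items() if ps}
    # Step 4 - Check the categories: empty categories signal that the
    # category system may need revision or further development.
    return condensed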


Qualitative content analyses in occupational science can be distinguished between:

• explications along the skilled workers’ statements, to determine the specific quality of the skills expressed in the work actions; the observed work actions themselves are used for the explication;
• an extended context analysis including all documents relevant to the work situation (company data, socio-cultural background, geographical data, data on technological innovations, etc.).

Structuring content analyses aim at examining cross-sectional aspects and filtering the corresponding texts (core statements) out of the statement material. Structuring analyses usually require hypothesis-driven and therefore theoretically grounded work process analyses; this yields the categorial framework for the structuring content analysis. All forms of content analysis already lead to an interpretation of the statements thus obtained, either during the analysis or after presentation of the evaluated material, taking into account the corresponding professional and subject-theoretical contexts. In the final step of the work process studies, this ultimately leads to the review, reformulation, clarification and further differentiation of hypotheses and of test and development tasks for the design and evaluation of vocational training processes.

3.4 Guiding Principles and Objectives of Vocational Education and Training

3.4.1 Professional ‘Gestaltungskompetenz’ (the Ability to Shape or Design One’s Professional Future)

In the world of employment, skilled workers are confronted with more or less pronounced scope for creativity and solutions when solving professional tasks. When weighing up alternative solutions and solution paths, a ‘good’ compromise must always be found, related to the specific situation, between the criteria of functionality, environmental and social compatibility, efficiency and sustainability, as well as the design of work and business processes. The ability to exploit the specific scope for solutions and design in everyday professional work is based on professional ‘shaping competence’ (cf. KMK, 1991). In 1996/1999, the KMK formulated this guiding principle as an educational mandate for dual vocational training and based the introduction of the learning field concept on it: ‘Vocational schools and training companies fulfil a joint educational mandate in dual vocational training. The vocational school is an independent place of learning... [It] aims at basic and specialised vocational training and expands previously acquired general education. In this way, it aims to enable people to fulfil their professional tasks and to play a part in shaping the world of employment and society with social and ecological responsibility’ (KMK 1999, 3; 8). The Alliance for Jobs, Vocational Training and Competitiveness (1999, 54) adopts this educational mandate for the ‘structural further development of dual vocational training’.

3.4.2 Design-Oriented Vocational Education

The concepts of the design of work tasks and technology developed in technology evaluation practice and occupational science research found their way into the establishment of design-oriented work and technology research in the mid-1980s, in which education and qualification were considered from the very beginning as a factor inseparably associated with this research (Sachverständigenkommission Arbeit und Technik, 1986, 1988). The Enquete Commission of the German Bundestag, ‘Future Education Policy—Education 2000’, included the concept of design-oriented vocational training in the documentation of its recommendations: ‘If the humanity of our future society depends decisively on whether it is possible to stop divisions and fragmentation [...] then education must first and foremost help to develop the will to design [...] and must strive for designability [...]’ (Deutscher Bundestag, 1990).

Professional Gestaltungskompetenz refers to the contents and scope of design in the solution of professional tasks. The central pedagogical idea of the ‘ability to help shape the world of employment’ presupposes that learning in the process of vocational work is organised in such a way that the work contexts to be mastered—the work tasks—challenge this creative competence. This was already reflected in 1991 in an agreement of the KMK on the vocational school (KMK, 1991). Here, vocational education and training with its educational mandate goes far beyond ‘pure’ academic education. In a world that has become historical, and especially in the world of employment, whose conditions basically arise from the objectification of purposes and of the interests and needs contained therein, skilled workers are always challenged to weigh up various technical, ecological and social criteria when solving professional tasks. Every technology, for example, can therefore only be understood in the context of what is socially desirable and technically possible. Table 3.5 compares central categories of a vocational education and training oriented towards adaptation to the prevailing technology with those of one aimed at the design of work and technology.

3.4.3 Professional Competence

The guiding idea of professional competence, as defined, for example, in the Vocational Training Act, goes back to Heinrich Roth, who founded professional competence as a connection between material, personal and social competence (Roth, 1971). Article 1 (3) of the Vocational Training Act reads: ‘Vocational training must impart the professional skills, knowledge and abilities (professional competence) necessary for the exercise of a qualified professional activity in a changing working environment in an organised training course’. The basic commentary on the Vocational Training Act by Nehls and Lakies states:


Table 3.5 Comparison of characteristics of adaption-oriented and design-oriented vocational education and training with regard to the analysis of vocational work tasks for curriculum development (Rauner, 2002a, 42 f)

Basics

Guiding educational principle
– Adaption-oriented vocational education: The personality is qualified as a human resource for specific tasks; the qualification requirements are derived from the organisational and technological innovations. Technology and work are predefined, and the quality requirements appear as dependent variables.
– Design-oriented vocational education: Ability to participate in shaping the working world; education as a prerequisite for an autonomous, self-confident and self-responsible personality; educational content and educational goals are regarded as simultaneously dependent and independent factors in relation to work and technology.

Qualification research

Goals of qualification research
– Adaption-oriented vocational education: Identification of activity requirements for partial tasks defined in business management and technology and the resulting execution of actions and professional skills. Activities and activity structures are seen as given and thus as an independent variable in relation to qualification requirements.
– Design-oriented vocational education: Identification and description of professional work tasks for work contexts in the context of open dynamic professionalism as a basis for task analysis. Deciphering the work- and work-process-related content of working and learning, with consideration to professional competence development.

Analysis strategies
– Adaption-oriented vocational education: Complementary analysis: definition and identification of (residual) activities in human–machine interaction; ‘operation’ as key competence.
– Design-oriented vocational education: Optimisation of the tool character in human–machine interaction; improvement of the tutorial quality of computer-aided tools; logical task analysis; ‘shaping’ as key competence.

Analysis area
– Adaption-oriented vocational education: (Remaining) activities in the context of operational functional areas; professional competence to act.
– Design-oriented vocational education: Professional work tasks as paradigmatic and developmental tasks, as a basis for logical developmental learning from beginner to reflected mastery.

Analysis dimensions
– Adaption-oriented vocational education: Operational functions, processes and resulting work functions; work operations (activities); activity requirements; requirements for quality control; work and work processes in order processing in the context of operational functions.
– Design-oriented vocational education: Work contexts and their breakdown into (1) the object of work, (2) the methods, tools and organisation of work and (3) the demands on work from a social, subjective, company and customer perspective, as dimensions of working and learning in the context of company business processes.

Theoretical framework
– Adaption-oriented vocational education: S-R (behavioural) theory; human resources development (HRD); deterministic planning and control concepts; function-oriented company organisation.
– Design-oriented vocational education: Humanistic personality theory; dialectics of education and qualification (vocational training theory); (partially) autonomous working groups and decentralised control concepts; business process-oriented company organisation.

Analysis methods
– Adaption-oriented vocational education: Experimental analyses; quantitative methods of empirical social research; expertise research.
– Design-oriented vocational education: Situational experimentation; action-oriented expert discussions; vocational qualification research; methods of qualitative empirical social research.

The wider concept of competence is used in the debate on vocational education and training policy. The general term competence initially refers to abilities, knowledge, attitudes and values, the acquisition, development and use of which relate to a person’s entire lifetime. Competence development is seen from the perspective of the subject, his or her abilities and interests as well as his or her social responsibility. (...) Competence development should create professional competence and a skill that enables working actions to be carried out with extensive co-determination and participation in work and undertakings. Reflective professional competence means the conscious, critical and responsible assessment and evaluation of actions on the basis of experience and knowledge (Nehls & Lakies, 2006, 52).

The manifold attempts to enrich the category of professional competence developed by Heinrich Roth, which has found its way into the Vocational Training Act in one aspect or another, do not go beyond Roth’s explanations. In their totality, they instead contribute to the confusion in the discussion around professional competence. Herwig Blankertz succeeded in further developing the guiding idea of professional competence. His concept of ‘education in the medium of the profession’ has found its way into the discussion on vocational education. Professional competence includes the connection between knowledge and skills. This results in its importance for the examination of professional competence in the sense of examinations in accordance with the Vocational Training Act.

3.5 Theories of Vocational Learning and Professional Development

3.5.1 The Novice-Expert Paradigm: Competence Development in Vocational Education

The KMK agreement (1999) on the development of vocational curricula, the contents of which are to be oriented towards ‘significant occupational work situations’ and company business processes, aims to replace the previous subject-systematic structuring of vocational training plans with learning fields: ‘Didactic reference points (for the design of vocational training processes) are situations that are important for vocational training’ (ibid., 10). What is remarkable about this agreement is the associated fundamental change of perspective in curriculum development practice: the tradition of curricula structured according to subject systems is to be replaced by one that emphasises the work and business processes characteristic of a profession as the reference point for curriculum development. At the same time, however, the processes formulated as objective requirements constitute a subject-related quality of the curriculum. This is what the change of perspective mentioned above depends on.

The learning field concept is not based on a factually systematic sequence of material, but on the idea of a meaningful connection between important professional situations of action, which trainees should learn to cope with better and better. If the professional requirements concretised in situations, rather than the factually systematic sequence of material, form the starting point for curriculum development, then the subject of learning in the form of the professionally competent actor also comes into focus. The principle ‘Actions should promote a holistic understanding of professional reality, e.g. include technical, safety, economic, legal, ecological and social aspects’ (ibid., 10) emphasises the concept of holistic solutions for professional tasks. With the emphasis on learning as a subjective construction process, the more recent didactics discussion and teaching and learning research have emphasised more clearly than ever the fundamental difference between instruction aimed at knowledge and knowledge-acquiring learning.

The concepts of educational science implicitly taken up by the KMK together with the learning field concept correspond to further theories of pedagogical importance that start with the development of competences. Vocational training courses can be systematised not only technically but also as a development process from beginner level (novices) to reflected mastery (experts) (cf. Benner, 1997; Dreyfus & Dreyfus, 1987; Lave & Wenger, 1991; Rauner, 1999). In development theory, the objective side—that is, the side that presents the subject with the requirements of learning—always remains in place. This reflects the idea of development tasks (Bremer, 2001; Gruschka, 1985; Havighurst, 1972) that face someone who has not yet solved them: what someone cannot do at first—due to a lack of developed competences—he or she learns in confrontation with the task that triggers the appropriate competence development. Due to this basic developmental-methodological pattern, the concept of development tasks is particularly suitable for structuring vocational learning processes. Work tasks characteristic of a profession are referred to as ‘paradigmatic’ for professional work (Benner, 1997) when the work contexts characteristic of the profession are simultaneously given a quality that promotes professional competence development. Their identification first requires an analysis of the objective conditions constituting a defined profession: the subject of professional work, the tools and methods and the (competing) requirements for professional work.


The reconstruction of work tasks that are important for professional competence development (KMK, 1996) is most successful on the basis of ‘expert specialist workshops’ (→ 5.1).3 For the application of the methodological instruments of the expert specialist workshops, this means above all that the respective work context in which the work tasks are embedded must be consistently taken into account in the survey situation. Both difficulties can be countered by professional scientific studies that address the analysis of professional work processes and tasks in their situation (Lave & Wenger, 1991, 33; Becker, 2003; Kleiner, 2005). The five levels of competence development identified by Hubert L. Dreyfus and Stuart E. Dreyfus and the corresponding four learning areas arranged in development theory (Fig. 3.7) have a hypothetical function for identifying thresholds and levels in the development of vocational competence and identity as well as a didactic function in the development of work- and design-oriented vocational training courses.

Development tasks and their functional equivalents are also of central importance for competence development in expertise research. Patricia Benner, for example, highlights the paradigmatic importance of development tasks for the gradual development of the professional competence of nurses.4 For Benner, these development tasks refer to ‘paradigmatic work situations’, i.e. cases that challenge the skills of the nursing staff.5 It took almost two decades in Germany before the impetus given by the attempt to justify competence development in vocational education and training in terms of development theory was translated into didactic concepts. Over the last fifteen years, extensive projects have been carried out to this end, both in educational theory and in empirical research. For the profession of car mechatronics, for example, a developmentally structured curriculum was developed in a Europe-wide pilot project (Rauner & Spöttl, 2002). In the pilot project ‘Business and work process-oriented VET’, training courses were also developed and tested for five core industrial occupations that are based on development-theory assumptions (see in detail Bremer & Jagla, 2000; Rauner, Schön, Gerlach, & Reinhold, 2001).

Fig. 3.7 Professional competence development ‘From beginner to expert’ (Rauner, 2002b, 325)

3 In the practice of domain-specific qualification research, the expert specialist workshops are supplemented by management workshops and evaluating expert surveys, above all to increase the prospective quality of the results.
4 Benner bases her domain-specific qualification research in the nursing field and its curriculum development on the novice-expert paradigm developed by Dreyfus and Dreyfus (Benner, 1997; Dreyfus & Dreyfus, 1987).
5 Theoretically and practically, there is a difference between Benner’s concept of ‘paradigmatic work situations’, which she identifies with methods of expertise research in reference to the novice-expert concept formulated by Dreyfus and Dreyfus, and Gruschka’s hypothesis-led studies of beginners (cf. Rauner & Bremer, 2004).

3.5.2 Work Process Knowledge

Work process knowledge is regarded as a central knowledge category in the context of the reorientation of the didactics of vocational education and training towards work and work processes; it arises from reflected work experience and is the knowledge incorporated into practical work. Work process knowledge is a form of knowledge that guides practical work; as context-related knowledge, it goes far beyond context-free theoretical knowledge.

The pilot projects ‘Decentralised Learning’ and ‘Learning at the Workplace’ (cf. Dehnbostel, 1994) already incorporated this development by shifting training back into the work process. Since then, however, the vocational educational discussion on ‘Learning at the Workplace’ has been characterised by the fact that terms such as workplace, work process, professional action, professional activity and work situation have not been used very precisely. The phrase ‘Learning at the Workplace’ has now been largely displaced by the phrase ‘Learning in the Work Process’. Despite all the vagueness of the terms that characterise the relevant discussion, the shift to the concept of the work process takes into account the structural change in the organisation of operational work and business processes: the principle of function-oriented organisation is increasingly overlaid by that of orientation towards operational business processes. This has raised awareness of the process character of work and organisation, which can only be developed in the process of operational implementation and organisational development.

Following the discussion on work process knowledge initiated by Wilfried Kruse (Kruse, 1986), this central category for vocational learning was identified and developed in numerous research projects as a fundamental form of knowledge for vocational learning (cf. Fischer, 2000a, 2000b). In a first approximation, work process knowledge can be characterised as the connection between practical and theoretical knowledge (Fig. 3.8).

Fig. 3.8 Work process knowledge as the connection between practical and theoretical knowledge as well as subjective and objective knowledge (Rauner, 2002b, 34)

The development of a scientific and pedagogical knowledge framework used to model vocational competence suggests the introduction of distinctions which, following Hacker, enable the differentiation between three successive levels of knowledge within work process knowledge: action-guiding, action-explaining and action-reflecting knowledge (Fig. 3.9).

Fig. 3.9 The three successive levels of work process knowledge

Action-guiding knowledge comprises the rules and regulations relevant for professional action. It can therefore also be characterised as rule-based knowledge. The traditional form of in-company instruction aims at rule-based knowledge, i.e. ‘know that’.

The level of action-explaining knowledge is aimed at understanding the rules to be observed in a profession. Professionals who not only have knowledge that guides their actions but who also comprehend their professional tasks understand what they are doing and are able to act on their own responsibility based on their insights into their professional tasks. This level of knowledge has a certain affinity to the concept of ‘know-how’.

Action-reflecting knowledge facilitates the exploitation of a smaller or greater scope for shaping professional work projects, allows the most varied situation-based approaches and solution possibilities to be explored, also in dialogue with the client, and enables all relevant criteria to be balanced in the process. At this (highest) level of work process knowledge, professionals are able to answer the question: why like this and not in any other way? (know why).

Erpenbeck’s differentiation of knowledge into ‘explicacy’ and ‘value’ also stems from a differentiation of the knowledge category owed to the research object. This subdivision largely corresponds to the insight, formulated in the discussion on technical theory and didactics, into the indissoluble connection between what is technically possible and what is socially desirable (cf. Rauner, 1995). In this context, ‘value’ refers to technology as the process and the result of the objectification of social purposes and the interests and needs incorporated therein.

Erpenbeck uses this distinction between ‘pure’ knowledge and the knowledge representing the expediency of social facts for a four-field matrix (Erpenbeck, 2001, 113), with which he illustrates that explicit, pure knowledge, as it exists in the form of scientific fact and legal knowledge, contains only very little knowledge relevant for competence development.

The differentiation of the category of practical knowledge as a dimension of the work process enables domain-specific knowledge research, which allows more detailed information about work process knowledge and therefore also promises results about the mediation of work process knowledge in or for professional work processes. However, this only partly answers the overriding question of whether the disintegration of validity resulting from the accelerating change in the working world fundamentally devalues this knowledge as a point of reference for professional competence development. According to a popular thesis, technical competences are devalued by the disintegrating validity of professional knowledge. The professional dimension is thereby virtually shifted to a meta-level at which it is only important to have appropriate access to the expertise documented in convenient media, knowledge stores and knowledge management systems. The situational development of the ‘knowledge’ required for the specific work tasks—knowledge management—is therefore essential.6 Studies on the exponential increase in ‘objective knowledge’ seem to confirm this assumption. Professional competence would then evaporate into a form of domain-specific methodological competence. However, this thesis was refuted in the extensive studies on the change in skilled work and qualification requirements, especially in the field of diagnostic work. On the contrary, relevant vocational-educational studies have confirmed the thesis that professional work process knowledge, which provides the basis for professional expertise, has tended to increase in importance.7

To the extent that domain-specific qualification research succeeds in putting empirical curriculum research back on solid ground, the diffuse formula of key qualifications loses its placeholder function. At the same time, expertise and qualification research supports the concept of vocational learning in the context of important work situations and thus the guiding principle of a curriculum structured according to learning fields. The orientation of vocational learning towards (vocational) work and business processes—in a design-oriented perspective—assumes an autonomy of working action beyond the one-dimensionality of scientific rationality as it is characteristic of the subject-systematic curriculum (cf. Fischer & Röben, 2004). According to this, the category of ‘subject’ knowledge is problematic in that it refers to subject-systematic knowledge, the sources of which are based on the specialist sciences. Professional action and design competence are not based on (scientific) specialist knowledge, but on the knowledge of action and work processes incorporated into practical professional work.

6 The thesis of the de-specialisation of vocational education and training has been supported at the latest since the flexibility debate in the 1970s. According to the central argument, vocational education and training that takes account of accelerated technological change must urgently strive to promote and maintain the necessary basic scientific and social understanding and impart activity-specific knowledge and skills only on a secondary level (Kern/Schumann, quoted by Grünewald, Degen, & Krick, 1979, 115). Wilfried Kruse comes to very similar conclusions in his assessment of qualification research in the 1970s: ‘The expansion of qualification in the state school system and the extensive separation of vocational training from direct production are expressions of the increase in general, more theoretical elements in the change in the production of the working capacity of young workers’ (Kruse, quoted from Grünewald et al., 1979, 121).
7 Cf. Drescher (1996), Becker (2003), Rauner and Spöttl (2002).

Tacit Knowledge (Polanyi, 1966b; Neuweg, 2000)

With the theory of implicit knowledge (tacit knowledge), Polanyi has drawn attention to a dimension of knowledge to which Neuweg attributes a paradigmatic meaning for professional ability. Since then, the concept of tacit knowledge has been regarded as a key category for the development of the concept of professional competence. This special weighting of implicit knowledge as the basis for competent professional action can also be attributed to the fact that social science-based attempts to approach the specificity of professional knowledge and skills had to fail simply because the theoretical and empirical access to the knowledge incorporated in practical professional work is largely blocked (cf. Bergmann, 1995; Garfinkel, 1986).

Once the concept of tacit knowledge had been formulated, it triggered approval far beyond the discussion in the psychology of knowledge, especially in educational practice, illustrated by numerous examples. It relieved vocational training practice and, to a certain extent, vocational training research of the requirement to decipher and name the knowledge incorporated into practical vocational work. The withdrawal of the surveyed experts to the position that ‘these are empirical values’ was and is often accepted as the last answer to the many unanswered questions about qualification requirements.

Georg Hans Neuweg has presented a differentiated development of this knowledge concept and examined its didactic implications for academic vocational education in German-speaking countries. With his comprehensive theory of implicit knowledge, Neuweg characterises the didactic concept of subject-systematic knowledge as a reference point for professional competence development as an ‘intellectualistic legend’. The widespread assumption in vocational education that subject-systematically structured knowledge represents a kind of shadow image of professional action, which—in procedural terms—leads to professional ability, is based on a fundamental category mistake (cf. Fischer, 2002; Neuweg, 2000). Using his own experience in dealing with Ohm’s law as an example, Matthew Crawford illustrates the difference between theoretical and practical knowledge and the limited relevance of theoretical knowledge for action (Crawford, 2010, 215 f.).

Theo Wehner in particular has pointed out the danger of mystifying professional skills with the category of tacit knowledge. A large proportion of implicit knowledge could be explicated if qualification and knowledge research were to improve its research methods. Similar to Garfinkel, Theo Wehner and Dick (2001) see the challenge of qualification and knowledge research in identifying work process knowledge and not hastily qualifying this knowledge as ‘tacit’. Professional competence is accordingly developed in a process of reflected practical experience (reflection-in-action). For Schoen, professional competence development is based on the expansion of a repertoire of unique cases. In this context, one can at best speak of systematic learning; competence development, on the other hand, cannot be founded on subject systematics.

3.5.3 Practical Knowledge

In the following, the category of practical knowledge will be examined in more detail, as it has so far hardly found its way into curriculum research. This is particularly serious for vocational education and training, as practical knowledge is directly related to work experience and to professional knowledge and skills. Here, we should refer to the current discussion on a theory of social practices, such as that initiated by Andreas Reckwitz from a sociological perspective. His reference to the implicit logic of practice, as expressed, for example, in the artefacts of the working world and the knowledge, interests and purposes objectified in them, is of interest to vocational science and vocational education. Central to the practical understanding of action is that, although action also contains elements of intentionality [...], the status of intentionality, normativity and schemata is fundamentally modified if one assumes that action within the framework of practices can first and foremost be understood as knowledge-based activity, as an activity in which practical knowledge, ability in the sense of ‘know-how’ and practical understanding are used (Reckwitz, 2003, 291 f.).

Practical knowledge according to Reckwitz includes, in practice theory:

1. ‘Knowledge in line with an interpretative understanding, i.e. a routine assignment of meanings to objects, persons, etc.,
2. A methodical knowledge of script-like procedures, how to produce a series of actions competently,
3. A motivational-emotional knowledge, an implicit sense of what one actually wants, what it is about and what would be unthinkable’ (ibid., 292).

With this definition, Reckwitz omits a dimension of practical knowledge relevant to vocational science and education. The materiality of practice, as emphasised by Reckwitz, reduces technical artefacts to the dimension of the technical as a social process, just as in established sociological research on technology. In curriculum theory, an expanded concept of technology is required that includes the dimension of knowledge about the technical itself.

In researching the paradigmatic work situations and tasks of nurses, Patricia Benner attaches a constitutive importance to practical knowledge for professional competence and takes up the epistemological positions Schoen founded in his ‘Epistemology of Practice’ (Schoen, 1983). She distinguishes six dimensions of practical knowledge (Benner, 1997). With reference to results of qualification research in industrial-technical domains, these dimensions of practical knowledge are outlined below in order to further differentiate the category of work process knowledge (Table 3.6).

Table 3.6 The six dimensions of practical knowledge (based on Benner, 1997; Rauner, 2004)

Sensitivity: With increasing work experience, the ability develops to perceive and evaluate increasingly subtle and even the subtlest differences in typical work situations.

Contextuality: The increasing work experience of the members of professional practice communities leads to the development of comparable patterns of action and evaluation as well as to intuitive communication possibilities that go far beyond linguistic communication. Work situations can only be adequately understood subjectively if they are also understood in their genesis.

Situativity: Assumptions, attitudes and expectations guided by experience lead to comprehensive awareness and situational action and constitute an extraordinarily fine differentiation of the action plans.

Paradigmaticity: Professional work tasks have a paradigmatic quality in the sense of ‘development tasks’ if they raise new content-related problems in the development process, which force existing action concepts and well-rehearsed behaviours to be questioned and newly established.

Communicativity: The subjective significance of the communicated facts is largely congruent within a practice community. The degree of professional understanding is far higher than that of external communication; the context-related language and communication can only be fully understood by members of the practice community.

Perspectivity: The management of unforeseeable work tasks on the basis of fundamentally incomplete knowledge (knowledge gap) is characteristic of practical work process knowledge. This gives rise to a metacompetence that enables professionals to deal with non-deterministic work situations.

From the point of view of ethnomethodology (Garfinkel), practical knowledge has its own quality, which results from the mode of its origin. Harold Garfinkel defined ethnomethodology generally as ‘the exploration of the rational characteristics of indexical expressions and other practical activities as a contingent, evolving appropriation of organised and artistic practices of daily life’ (Garfinkel, 1967, 11). This suggests an expanded or modified concept of competence able to do justice to the complex dynamics of circulation between the two basic ethnomethodological concepts of ‘producing’ and ‘acquiring’ (in each case from practice). The ‘methods’ that ethnomethodology uses to research what creates social reality and what allows it to be understood cannot be presented without correspondingly complex competence.8

The proximity to the theory of multiple intelligence founded by Gardner is obvious. Both the debate on knowledge and competence and the departure from the concept of universal intelligence refer to the diversity of human abilities. In the preface to his work ‘Frames of Mind: The Theory of Multiple Intelligences’, Gardner formulates his central thesis: ‘If we want to grasp the entire complex of human cognitions, I think we have to consider a much larger and more comprehensive arsenal of competencies than we are used to. And we must not deny the possibility that many and even most of these competences cannot be measured with those standard verbal methods that are predominantly tailored to a mixture of logical and linguistic skills’ (Gardner, 1991, 9).

Almost a decade before Gardner, Donald Schoen’s analysis of the problem-solving behaviour of different professions provided comparable insights into professional skills and cognitive requirements. Gardner’s analyses are concerned with the psychological (cognitive) performance requirements for competent action (professional knowledge systems). Schoen’s merit is to have demonstrated, corresponding to the category of practical intelligence, the fundamental importance of practical competence and professional artistry as an independent competence not guided by theoretical (declarative) knowledge. At the same time, this leads him to a critical evaluation of academic (disciplinary) knowledge as a cognitive prerequisite for competent action. Schoen summarises his findings on practical competence in the following insight:

I have become convinced that universities are not devoted to the production and distribution of fundamental knowledge in general. There are institutions committed, for the most part, to a particular epistemology, a view of knowledge that fosters selective inattention to practical competence and professional artistry (Schoen, 1983, VII).

In this context, he quotes from a study of medical practice: ‘85% of the problems a doctor sees in his office are not in the book’. Schoen sees the deeper cause for the inability of the education system to impart the knowledge that forms the basis of professional competence in disciplinary, subject-systematic knowledge: ‘The systematic knowledge base of a profession is thought to have four essential properties. It is specialized, firmly bounded, scientific and standardized. This last point is particularly important, because it bears on the paradigmatic relationship which holds, according to Technical Rationality, between a profession’s knowledge base and its practice’ (ibid., 23).

He takes a critical look at the concept of didactic reduction that was developed in the USA in connection with the term ‘Applied Academics’. Thus, for example, the concept of ‘contextual learning’ in high schools is not interpreted as imparting practical knowledge and problem-solving competence, but rather as a form of learning for acquiring ‘academic knowledge’ (cf. also Grollmann, 2003). Theoretical knowledge (academic knowledge) is then taught in an application-oriented manner. Schoen notes critically: ‘This concept of “application” leads to a view of professional knowledge as a hierarchy in which “general principles” occupy the highest level and “concrete problem solving” the lowest’ (Schoen, 1983, 24). This training and curriculum practice is in stark contradiction to the results of his analyses of the thoughts and actions of ‘professionals’ (Schoen, 1983, 138 ff.).

8 With the ethnomethodological research concept of ‘Studies of Work’, Harold Garfinkel has established a research strand that can be made fruitful in many ways for vocational education and training research. The theories of ‘tacit knowledge’ and ‘Studies of Work’ assume a multiple concept of competence without already unfolding it in its dimensions.

Practical Terms and Practice Communities

The concepts of practical knowledge and of reflection on and in action correspond to the concept of practical terms by Klaus Holzkamp (1985, 226 f.), according to which the terms that people subjectively hold are basically practical, in so far as their aspects of meaning, their scope of meaning and their fields of meaning (as the sum of the aspects of meaning and their linkage) are shaped by the respective development processes. Following Schoen, it is therefore not important in training to teach and learn scientifically defined terms. These represent only a fraction of the meaning of practical terms and thus justify only a very limited (professional) competence to act. The relationship between theoretical-scientific and practical terminology will be examined in more detail using the example of the category ‘electrical voltage’. Electrophysically, electrical voltage is defined as follows:

A small body carrying the constant amount of electricity Q travels a distance S from a starting point to an end point in an electric field. The field forces on the body perform a work A₁₂, which is proportional to the amount of electricity Q. The quotient A₁₂/Q is therefore a quantity independent of Q and assigned to the path S from point 1 to point 2. It is called the electrical voltage U between 1 and 2, in short U₁,₂ (see in detail Adolph, 1984, 107 ff.).
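Written out as a formula, the definition just quoted amounts to the following standard relation (the symbols follow the quotation above):

    U_{1,2} = \frac{A_{12}}{Q}

where A₁₂ is the work performed by the field forces on the amount of electricity Q along the path S from point 1 to point 2. The defining point is that this quotient is independent of Q.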

According to this, electrical voltage is a field quantity that cannot be understood without insight into field theory. True to the pedagogical-didactical rules of the systematic and scientific consolidation of professional experience or work-related learning, it would be important to convey this definition skilfully. Didactic dexterity is characterised by the use of forms of inductive learning such as experimental or action-oriented learning (cf. e.g. Pätzold, 1995). The scientific definition of electrical voltage serves to define the physical phenomenon of electrical voltage as it can be experimentally reproduced.

The real technical and economic facts of electrical voltage, on the other hand, are something completely different. The technical realisation of electrical voltage follows the specifications (serviceability properties) that are defined for the unlimited variety of different voltage sources and forms, from the single cell to the 400-kV high-voltage system. The immense variety of voltage forms and sources—and with them the available forms of electrical voltage—has in principle an infinite number of practical value properties and object meanings (in the sense of meaningful knowledge). The technical facts of ‘electrical voltage’ make very different demands on consumers, development engineers, specialists, teachers, nurses or economists. The action-relevant aspects of meaning and fields of meaning of the respective practical terms of electrical voltage are manifold and at the same time highly relevant for competent action (cf. in detail Rauner, 2004).

Didactical and professional research is faced with the task of determining the preliminary understanding and subjective fields of meaning of beginners’ technical terms and of opening up the professionally related fields of meaning of the central technical terms of experts. Only then can teaching and learning strategies be developed that facilitate the gradual transformation of the fields of meaning and structures of everyday terms and theories into professionally related fields of meaning. The decisive factor here, however, is that the action-guiding technical terms are not categorically restricted but are retained in their scope as practical terms and are constantly developed further.

3.5.4 Multiple Competence

According to Klieme and Hartig (2007, 17), the reference to ‘real life’ is regarded as a key feature of the concept of competence. In this context, Andreas Gruschka considers a concept of competence necessary that is not limited to individual actions: ‘Competences are not bound to a specific task content and a correspondingly narrowly managed application, but allow for a variety of decisions. They certainly have this in common with education, since in the acceptance and solution of such open situations and tasks it is preferably updated as a progressive movement of the subject’ (Gruschka, 2005, 16). In this sense, Connell, Sheridan and Gardner (2003) make a fundamental contribution to the categorical differentiation between abilities, competencies and expertise, an important step towards establishing a theory of multiple competence.

The concept of multiple competence, based on Howard Gardner’s concept of multiple intelligence, takes account of the state of competence and knowledge research, according to which several relatively autonomous competences can be distinguished in humans, which can vary greatly among individuals depending on their professional socialisation and qualification. The concept of multiple competence can draw on the results of expertise research and vocational qualification research, which have shown that vocational competences are domain-specific and, above all, that occupation-specific practical knowledge has its own quality (Haasler, 2004; Rauner, 2004). According to this, practical knowledge does not arise from theoretical knowledge as it exists in the objectified form of subject-systematic knowledge in the system of sciences. It has its own quality, which is based on its mode of origin.

In this context, Gardner points out that theories and concepts which assume cross-vocational (key) competences cannot be supported on the basis of his theory. He exemplifies this with the term ‘critical thinking’: ‘I doubt whether this critical thinking should be seen as a process of thinking in its own right. As I have explained with reference to memory and other presumed horizontally operating abilities, their existence becomes questionable upon detailed analysis. The various functional areas are probably assigned their own forms of thinking and criticism. Critical thinking is important for musicians, historians, systems biologists, choreographers, programmers and literary critics. To analyse a fugue, however, a fundamentally different way of thinking is required than to observe and classify different biological species, to publish poems, to debug a computer program or to choreograph and work on a new ballet. There is little reason to believe that the practice of critical thinking in one domain could be identical to the corresponding training in other fields [...] because each has its own objects, procedures and modes of connection’ (Gardner, 2002, 130). This determines a second essential feature of multiple competence. Modern intelligence research’s criticism of the one-dimensional concept of intelligence is comparable to the criticism of a reductionist concept of vocational competence limited to the subject-functional dimension and to the associated definition of a cross-professional area of general vocational (key) competences.

The attempts of Howard Gardner’s research group to transfer the concept of multiple intelligence to the empirical analysis of vocational competences make it possible to define the concept of multiple competence more precisely than via the abstract definition as a domain-specific performance disposition (cf. Connell et al., 2003). With multiple competence, two different aspects of professional competence can be highlighted:

• Abilities can be conceptualised as functionally integrated intelligence profiles. The development of specific abilities provides a space for competence development (ibid., 140 f.). The concept of multiple intelligence and a model of multiple competence based on it allow a realistic emphasis of the potentials for competence development given by professional work on the one hand and the intelligences belonging to the individuals on the other. These differ greatly not only from individual to individual but also from occupation to occupation.

• The designation of the eight components of professional competence, which in their interaction constitute the ability to solve tasks holistically, as multiple competence emphasises the second aspect of a theory of professional competence that differentiates according to competence profiles—and not only according to competence levels.

Vocational competence (development) is then a process of developing vocational skills, which is determined on the one hand by the individual intelligence potential and on the other hand by the requirement structure of the holistic solution of vocational tasks. Vocational action always takes place in working contexts, which can also subjectively be seen and understood as such in their manifold meaning. Therefore, it is necessary to supplement the concept of the complete working action with requirement criteria that result from the objective circumstances as well as from the subjective demands on the content and organisation of societal work.


Fig. 3.10 The criteria of the complete (holistic) solution of professional tasks (COMET Vol III, 22)

The criteria for the complete (holistic) solution of professional tasks represent the partial competencies of professional competence. Their expression according to the levels of work process knowledge can be included in the modelling of the requirement dimension (Fig. 3.10). Eight overriding demands are placed on the processing or solution of professional work tasks, which can be differentiated according to the three levels of work process knowledge. In each specific case, the professionals must determine whether all or only a subset of these requirements are relevant to the specific task. These criteria can be described in more detail as follows (COMET Volume III, 56)9:

9 For the operationalisation of the criteria in the form of a rating scale, see Appendix B.

Clarity/Presentation (K1)

The result of professional tasks is anticipated in the planning and preparation process and documented and presented in such a way that the client (superior, customer) can communicate about and evaluate the proposed solutions. In this respect, the illustration and presentation, or the form of a task solution, is a basic form of vocational work and vocational learning. A central facet of professional communication is the ability to communicate by means of clearly structured descriptions, drawings and sketches. The appropriateness of the presentation in relation to the respective facts is an expression of professional action.

Functionality (K2)

The functionality of a proposed solution for professional tasks is an obvious core criterion for their evaluation. Functionality refers to the instrumental expertise, i.e. the context-free professional knowledge and the technical skills. Proof of the functionality of a task solution is fundamental and decisive for all other requirements placed on task solutions.

Sustainability (K3)

Professional actions, procedures, work processes and work orders always refer to a customer whose interest lies in the sustainability of the work result. In production and service processes with a high degree of division of labour, and in vocational training reduced to the action aspect, the sustainability and utility-value aspects of executing subtasks often evaporate. In addition to direct use by the user, the avoidance of susceptibility to faults and the consideration of aspects of easy maintenance and repair are important for the sustainable solution of professional tasks in industrial and technical occupations. The extent to which a problem solution will remain in use in the long term and which expansion options it will offer in future are also central evaluation aspects for the criterion of sustainability and practical value orientation.

Efficiency/Effectiveness (K4)

In principle, professional work is subject to the aspect of economic efficiency. The context-related consideration of economic aspects in the solution of professional tasks distinguishes the competent action of experts. In doing so, it is important to continuously assess the economic efficiency of the work and to take into account the various costs and impact factors. Costs incurred in the future (follow-up costs) must also be included in decisions on the economic design of vocational work. For decision-making purposes, the ratio of expenses to operating benefits is taken into account. In addition, economically responsible action is also distinguished at the level of societal assessment: not all strategies that are coherent at the business management level are also economically and socially acceptable.


Orientation on Business and Work Process (K5)

This criterion comprises solution aspects that refer to the upstream and downstream work areas in the company hierarchy (the hierarchical aspect of the business process) and to the work areas in the process chain (the horizontal aspect of the business process). Especially under the conditions of working with and on program-controlled work systems in networked operational and inter-company work processes, this aspect is of particular importance. The conscious and reflected perception and execution of professional work tasks as part of—and embedded in—operational business processes are based on and promote contextual knowledge and understanding as well as the awareness of quality and responsibility based on them.

Social Compatibility (K6)

This criterion concerns above all the aspects of humane work design and organisation and of health protection and, where applicable, also the social aspects of professional work that go beyond professional work contexts (e.g. the frequently differing interests of clients, customers and society). Aspects of occupational safety and accident prevention are taken into account, as well as possible consequences that a solution of professional tasks has for the social environment.

Environmental Compatibility (K7)

Environmental compatibility has become a relevant criterion for almost all work processes. At stake is more than general environmental awareness, namely the professional and technical requirements for professional work processes and their results that can be assigned to the criterion of environmental compatibility. The extent to which environmentally compatible materials are used in solutions must be taken into account, as well as the environmentally compatible design of the work in coping with the task at hand. Furthermore, energy-saving strategies and aspects of recycling and reuse must be taken into account for the environmental compatibility of a solution.

Creativity (K8)

The creativity of a solution variant is an indicator that plays a major role in solving professional tasks. This results from the scope for solutions, which varies greatly depending on the situation. The ‘creative solution’ criterion must be interpreted and operationalised in a specific way for each profession. Creativity is a central aspect of professional competence in the design trades. In other professions, the ‘creative solution’ criterion is relatively independent as a concept of professional work and learning. The creativity of a solution variant also shows sensitivity to the problem situation. Competent experts look for creative and unusual solutions in their professional work that also serve the purpose of achieving goals.

It is implicitly assumed with professional competence that the professionally competent person is not only able to carry out professional actions completely, but is also able to classify and evaluate the professional actions in their professional and social significance; hence the relevance of the corresponding criteria. For example, the legal regulation that came into force in 2009 prohibiting the use of incandescent lamps—for reasons of efficient use of electrical energy—has a direct impact on the design and operation of electrical lighting systems. In the implementation of heating systems, for example, the objective conditions include not only a wide variety of heating technologies, but also the equally diverse controls for their efficient use and the design of heating systems in the specific application situations in accordance with environmental, safety and health requirements. The objective circumstances, together with the customers’ subjective requirements for practical value, sustainability and aesthetic quality as well as the subjective interests of the employees in a humane and socially acceptable work design and organisation, form the solution space in which the specific solutions of professional work tasks can be located.

On the basis of the eight criteria shown, the requirement dimension can be determined in terms of content in the sense of a holistic action and design concept. Completeness is required in the sense that, in all sectors of societal work, the solution of professional tasks always depends on not overlooking any of these solution aspects. For example, if the aspect of the technological solution level is over-estimated in a work order and the aspect of financial feasibility or user-friendliness is underestimated or forgotten, this can mean the loss of a work order. If safety and environmental aspects are overlooked in order processing and work design, this may even have legal consequences. If one relates the steps of the complete work action to the criteria of the holistic solution of vocational tasks, then the concept of the complete work action follows from the basic concept of the complete (holistic) task solution for the organisation of vocational education processes and the modelling of vocational competence.

The objective of domain-specific qualification research is to determine which qualification requirements and which content-related characteristics enter, with which weight, into the processing and solution of professional tasks, and how the respective requirement profile can be described as a domain-specific qualification and competence profile. This can also form the basis for describing the scope for solving and shaping professional tasks.
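To make the handling of the eight criteria (K1–K8) tangible for readers who work with rating data, the following minimal sketch shows how rater scores might be aggregated into a simple competence profile. It is illustrative only: the 0–3 scale, the example scores and the plain averaging rule are assumptions made for this sketch, not the official COMET scoring procedure, which is operationalised via the rating scale in Appendix B.

# Illustrative sketch: aggregate rater scores per criterion (K1-K8)
# into a simple competence profile. Scale and aggregation rule are
# assumptions for illustration, not the COMET specification.
from statistics import mean

CRITERIA = ["K1", "K2", "K3", "K4", "K5", "K6", "K7", "K8"]

def competence_profile(ratings):
    # Average the raters' scores per criterion (assumed 0-3 rating scale).
    return {k: round(mean(ratings[k]), 2) for k in CRITERIA}

# Example: three hypothetical raters scored one test person's task solution.
ratings = {
    "K1": [2, 3, 2], "K2": [3, 3, 3], "K3": [1, 2, 1], "K4": [2, 2, 3],
    "K5": [1, 1, 2], "K6": [2, 2, 2], "K7": [1, 0, 1], "K8": [2, 1, 2],
}
profile = competence_profile(ratings)
print(profile)                           # per-criterion competence profile
print(round(mean(profile.values()), 2))  # crude overall mean (illustrative)

Such a profile makes visible, for example, whether a solution is functionally strong (K2) but weak on environmental compatibility (K7), which corresponds to the profile-oriented view of multiple competence emphasised in Sect. 3.5.4.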

Chapter 4

The COMET Competence Model

4.1 Requirements for Competence Modelling

The outline of the explanatory framework suggests developing a competence structure model for vocational education and training that is open to the specific content of the profession. Competence models must convey the connection between their theoretical and normative justification as well as their empirical validation by subject-didactic and learning psychology research. Competence development as an object of competence diagnostics and research refers to questions, methods and results that cannot escape the inextricable connection between intentionality (normativity) and empirical-analytical factuality. With the expertise of the Klieme Commission (Klieme et al., 2003) for the development of educational standards and the establishment of the DFG priority programme ‘Competence models for recording individual learning outcomes and for balancing educational processes’ (Klieme & Leutner, 2006), vocational education and training research was challenged to develop a competence model based on the superimposed objectives and guiding principles of vocational education and training which not only takes into account the characteristic features of vocational education and training, but is also oriented to the criteria for the development of competence models predicated by established educational research. The report convincingly explained that the function of a competence model is to mediate between the guiding principles and goals of a subject or learning area and the development of test items (see Fig. 3.1). The definition of competence by established competence research (cf. Klieme & Leutner, 2006) can be guided by the interest in describing competence as narrowly and precisely as possible with regard to the context of empirical questions, as this is the only way to expect meaningful empirical research results. Klieme and Leutner therefore define context-specific service arrangements for the DFG priority programme, which functionally refer to situations and requirements in specific © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 F. Rauner, Measuring and Developing Professional Competences in COMET, Technical and Vocational Education and Training: Issues, Concerns and Prospects 33, https://doi.org/10.1007/978-981-16-0957-2_4


Competences are therefore learnable and conveyable performance dispositions. This concept of competence, now established in competence diagnostics, is suitable for:

1. differentiation from general cognitive abilities (such as intelligence),
2. differentiation from motivation1,
3. the assessment of the adequacy for the objectives of study courses,
4. the modelling of competence and therefore the development of test tasks.

With the restriction to the measurement of domain-specific cognitive performance dispositions, a far-reaching demarcation is made from the verification of vocational qualification requirements as they are defined, for example, in the occupational profiles of regulatory instruments in Germany. If it is possible to develop a competence model which, on the one hand, maps the specific requirements of the ‘VET learning area’—across all occupations—in the form of competence levels and simultaneously represents a guide for selecting the occupation-specific content for the construction of test tasks, then this would be a major step for the development of competence research. At the same time, this requires clarification of the connections between testing and competence diagnostics. The latter addresses the following questions in particular:

1. Which competence levels do the test persons reach in their profession or in the phases of their competence development from beginner to expert?
2. Related to this is the question of the characteristic competence forms and competence profiles—in line with the competence model—for individuals, educational pathways, and forms and systems of education.
3. What is the degree of heterogeneity (between individuals, groups, etc.)?
4. Under what conditions does professional competence develop?
5. How do vocational identity and commitment develop under the various conditions of vocational education and training?

It must therefore first be clarified whether the criterion of developing competence models for a specific subject or learning area can be applied to vocational education and training. Can vocational education and training be justified as one learning area? Or should one define the individual professions as ‘subjects’ instead? As this criterion is rather vaguely defined by the terms ‘subjects’ and ‘learning areas’, there is much to suggest that a strict delimitation of the spectrum of learning areas was deliberately avoided. A brief glance at vocational education and training shows that the large number of training occupations – in Germany this is already 350 within the scope of the Vocational Training Act – would result in the development of occupation-specific competence models. The training contents of such different professions as stonemason and insurance broker make it seem hopeless at first to develop a cross-occupational competence model.

1 The COMET project follows the path proposed by Weinert (2001) to separately capture motivation in the form of professional commitment (→ 4.7).


At the same time, it is immediately obvious that the development of hundreds of different competence models as the basis for a large-scale competence survey covering a significant proportion of established occupations would be doomed to failure for practical reasons alone. The knowledge gained by competence research on the basis of occupation-specific competence models would also be largely worthless from an educational planning and education policy perspective, as the basis for cross-occupational and system-comparative competence research would not be given. In this regard, VET research is obviously facing a dilemma, to which the COMET project offers a solution. The following section will examine whether the COMET competence model, in the form of a framework model, can also be applied to vocational education and training as a characteristic learning area. If one has a competence model that is based on educational theory and empirically validated, then it can be used to mediate between educational goals and the construction and evaluation of concrete test tasks. Competence models exhibit a structure of competence dimensions (cf. Klieme & Hartig, 2007). This structure can be used to describe the cognitive requirements that a learner should meet in order to – in the case of vocational education and training – (conceptually) solve occupation-specific tasks and problems. Whether and to what extent the competence dimensions and their components are interrelated can then be examined empirically, for example by means of dimensionality analyses. For vocational education and training, the guiding principles and objectives of vocational education and training as well as the basic theories of vocational learning, as presented in the explanatory context, can be translated into a three-dimensional competence model (Fig. 4.1). The COMET competence model distinguishes between:

• the requirement dimension (competence levels),
• the content dimension and
• the action dimension.

4.2 The Levels of Professional Competence (Requirement Dimension)2

The requirement dimension reflects the successive levels of professional competence, which are defined on the basis of the skills resulting from working on and solving professional tasks (see Fig. 3.3, → 2.3). The objective and subjective requirements for processing and solving professional tasks refer directly to professional skills. The requirement dimension of the COMET model takes up the criteria of holistic task solving (→ 3.5.4) and therefore enables the concrete description of empirically ascertainable competences at different competence levels (Table 4.1): How, for example, does a more or less highly competent skilled worker solve a professional task?

2 Cf. Rauner (2006), Rauner, Grollmann, and Martens (2007).


Fig. 4.1 The COMET competence model of vocational education and training

Of interest here are the qualitative and quantitative competence differences that exist between the competence levels, as well as the competence profiles of the test groups that result from the recording of the eight competence components (→ 8.3). The evaluation of the test results allows a criterion-oriented interpretation of the quantitative test results (performance values). The eight criteria (partial competences) of the competence level model with its four competence levels serve as an interpretation framework (see Fig. 3.10). The criterion-oriented interpretation of quantitative values includes a pragmatic justification of rules, since quantitative limit values must be defined for the transitions between two competence levels, together with rules according to which a test participant is assigned to a competence level (→ 8.1, 8.2). This distinguishes the COMET diagnostic procedure from standards-oriented test procedures, which justify the gradations between competence levels by the complexity or degree of difficulty of the test tasks. A level model implies that the competence levels represent a hierarchy of increasingly high-quality competences. In the COMET concept, the first level (functional competence) is the lowest and the third level (holistic shaping competence) is the highest level of competence to be achieved. The competence level that a trainee achieves or can achieve applies irrespective of the point in time during his or her training. This established competence model facilitates the qualitative and quantitative determination, on the basis of open test tasks, of the competence level to which a test performance can be assigned, irrespective of the level of competence development in the course of several years of training.


Table 4.1 Competence levels in scientific and industrial-technical vocational education and training

Nominal
• Bybee (1997): I nominal literacy: Some technical terms are known. However, the understanding of a situation is essentially limited to the level of naive theories. Slim and superficial knowledge.
• COMET 2008: I nominal competence/literacy: Superficial conceptual knowledge that does not guide action; the scope of the professional terms remains at the level of their colloquial meaning.
• PISA (basic scientific literacy): I nominal competence: Simple factual knowledge and the ability to draw conclusions do not go beyond everyday knowledge.

Functional
• Bybee (1997): II functional literacy: In a narrow range of situations and activities, scientific vocabulary is used appropriately. The terms are not very well understood and connections remain incomprehensible.
• COMET 2008: II functional competence/literacy: Elementary specialist knowledge is the basis for technical-instrumental skills. ‘Professionalism’ is expressed as context-free expert knowledge and corresponding skills (know that).
• PISA: II functional competence I: Everyday scientific knowledge justifies the ability to assess simple contexts on the basis of facts and simple rules. III functional competence II (scientific knowledge): Scientific concepts can be used to make predictions or give explanations.

Conceptual-procedural
• Bybee (1997): III conceptual and procedural literacy: Concepts, principles and their connections are understood, as are basic scientific ways of thinking and working.
• COMET 2008: III procedural competence/literacy: Professional tasks are interpreted and processed in relation to company work processes and situations. Work process knowledge establishes the professional ability to act (know-how).
• PISA: IV conceptual-procedural competence I: Elaborated scientific concepts can be used to make predictions and give explanations. V conceptual-procedural competence (models): Analysing scientific studies with regard to their design and the tested assumptions; developing or applying simple conceptual models.

Multidimensional, holistic
• Bybee (1997): IV multidimensional literacy: At this level, an understanding of the nature of science, its history and its role in culture and society is achieved.
• COMET 2008: IV holistic shaping competence/literacy: Professional work tasks are completed in their respective complexity and, taking into account the diverging requirements, solved in the form of wise compromises.

The cross-over test arrangement described below also enables the measurement of professional competence development over time.


In this context, we speak of competence development stages pursuant to the novice-expert paradigm. When dealing with concepts of competence measurement in empirical educational research, we encounter the term ‘literacy’. In the context of the PISA study, for example, basic scientific education was interpreted as ‘literacy’3. In line with the staged concept of literacy levels presented by Bybee (1997) and taken up in a variety of ways, four corresponding competence levels can also be distinguished for vocational education and training with reference to the developed explanatory framework (Table 4.1). The differentiation of Bybee’s concept of scientific education (literacy) was conducted by the Science Expert Group (2001) on the basis of an analysis of the test items: the functional and conceptual-procedural competence levels were each divided into two subgroups. It remains to be seen whether this will contribute to an understanding of basic scientific education. The didactic explanatory value of the Bybee concept certainly lies in the fact that a clear distinction is made between functional and process-related competence or literacy. From a didactic perspective, there is also an interesting parallel between the competence level of multidimensional literacy (Bybee) and holistic shaping competence in COMET. The emphasis on the processual aspect in the form of procedural competence in the PISA texts for the 2003 survey (‘Prozess und Prozeduren’, ‘Wissen wie’ – process and procedures, knowing how; Prenzel et al., 2004, 19) establishes a further affinity between the models of professional and scientific competence. At the latest since the introduction of the vocational education concepts of work process knowledge and business process orientation, process orientation has been considered a key category in the reorganisation and further development of occupational regulatory instruments and in the design of vocational training processes. In contrast, the theoretical distinction between declarative and procedural knowledge is considered to be of rather limited value in curriculum research (Gruber & Renkl, 2000; Minnameier, 2001; Neuweg, 2000). The operationalisation of the competence levels by means of the eight requirement criteria for the solution of professional tasks, or the corresponding competence components, is based on the following reasoning: the functionality of a task solution and its clear presentation must first be given before the other solution criteria can unfold their significance. If economic efficiency, practical value and sustainability as well as business and work process orientation are taken into account when solving the test tasks, then the test subjects have a professional work concept and thus reach the level of ‘procedural competence’ – in contrast to a merely technical-scholastic, functional understanding of the task (Fig. 4.2).

3 In contrast to the didactics of general education, ‘literacy’ has not yet found its way into vocational education.


Fig. 4.2 Professional competence: levels, partial competences (criteria) and dimensions

The solutions to tasks that can be assigned to this level of competence show that the competences that have priority from a professional and company perspective are present. The third level of competence, holistic shaping competence, is defined by skills that go beyond the perspective of company work and business processes and refer to solution aspects that are also of social relevance. This results in a hierarchisation of the competence components or solution aspects, in which the professional competence scope of the test persons extends with their problem-solving horizon. Operational and company-related solution competences build on a purely functional competence. Nominal competence is not part of vocational competence if, as here, the development of vocational competence is introduced into the modelling as a characteristic criterion for the success of vocational education and training. Trainees who only reach the level of nominal competence are assigned to the risk group. Considering the definition of the first level of competence (functional competence), it is highly likely that trainees who do not achieve this level will fail to achieve the training objective: namely, after completing their training, to be able to independently carry out specialist professional tasks in accordance with the rules typical of the profession. They are only competent at the level of unskilled and semi-skilled workers. This does not preclude them from developing into skilled workers in professional practice on the basis of reflected work experience.


Table 4.2 Two examples: Rating scales for the sub-competences ‘functionality’ and ‘environmental compatibility’ (Appendix B). Each item is rated on a four-point scale: the requirement is not at all fulfilled / rather not fulfilled / rather fulfilled / completely fulfilled.

Functionality/professionalism
• Is the solution working?
• Is the ‘state of the art’ taken into account?
• Is practical feasibility taken into account?
• Are the professional connections adequately represented and justified?
• Are the illustrations and explanations correct?

Environmental compatibility
• Are the relevant provisions of environmental protection taken into account and justified?
• Does the solution use materials that meet the criteria of environmental compatibility?
• To what extent does the solution take into account an environmentally sound work design?
• Does the proposed solution take into account and justify the aspects of recycling, reuse and sustainability?
• Are energy-saving aspects taken into account?

4.2.1 Operationalisation of the Competence Criteria: Development of the Measurement Model

The measurement of professional competence requires the operationalisation of the competence criteria (partial competences) into evaluation criteria (Table 4.2). The raters evaluate the solutions to the open test tasks (→ 5.2) on the basis of five rating items for each sub-competence (Appendix B), whereby they can differentiate between four possible ratings (→ 6.1, 6.2, 6.3). The authors of the test tasks and (finally) the raters in the pre-test procedure determine which of the 40 rating items are not valid for a given test task and therefore do not apply (→ 5.6).
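To make this measurement model concrete, the following minimal sketch (Python) shows how the 40 item ratings could be aggregated into the eight partial-competence scores and how a solution could then be assigned to a competence level in the sense of the level model described in Sect. 4.2. The 0-3 encoding of the four-point scale, the averaging rule, the grouping of the criteria and the cut-off value of 1.5 are illustrative assumptions for demonstration purposes only; they are not the limit values and assignment rules defined in the COMET procedure (→ 8.1, 8.2).

from statistics import mean

# Assumed numerical encoding of the four-point rating scale (cf. Table 4.2).
SCALE = {"not at all fulfilled": 0, "rather not fulfilled": 1,
         "rather fulfilled": 2, "completely fulfilled": 3}

# Eight partial competences (K1-K8), five rating items each (40 items).
# The grouping into levels follows the reasoning in Sect. 4.2.
FUNCTIONAL = ["K1 clarity/presentation", "K2 functionality"]
PROCEDURAL = ["K3 utility value/sustainability", "K4 efficiency/effectiveness",
              "K5 business and work process orientation"]
HOLISTIC = ["K6 social compatibility", "K7 environmental compatibility",
            "K8 creativity"]

def criterion_scores(item_ratings):
    """Average the (up to) five item ratings per partial competence.
    Items declared not valid for a test task (-> 5.6) are passed as None
    and excluded; each criterion needs at least one valid item."""
    return {criterion: mean(r for r in ratings if r is not None)
            for criterion, ratings in item_ratings.items()}

def competence_level(scores, cut=1.5):
    """Assign a competence level; 'cut' is a purely illustrative limit value."""
    def fulfilled(group):
        return all(scores[c] >= cut for c in group)
    if not fulfilled(FUNCTIONAL):
        return "nominal competence (risk group)"
    if not fulfilled(PROCEDURAL):
        return "functional competence"
    if not fulfilled(HOLISTIC):
        return "procedural competence"
    return "holistic shaping competence"

# Example: five item ratings for 'functionality', one declared invalid.
print(criterion_scores({"K2 functionality": [3, 2, None, 2, 3]}))  # 2.5

# Example: a solution that fulfils the functional and procedural criteria
# but falls short on the socially relevant solution aspects.
scores = {c: 2.2 for c in FUNCTIONAL + PROCEDURAL}
scores.update({c: 1.0 for c in HOLISTIC})
print(competence_level(scores))  # -> procedural competence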

4.3 Structure of the Content Dimension

The content dimension of a VET competence model refers to the vocational fields of action and learning as a basis for the construction of test tasks. In international comparative competence diagnostics projects, it is important to identify content that is considered characteristic of a subject or learning area in line with a ‘world curriculum’ (PISA).


This necessarily abstracts from the specific national or local curricula. Deriving the test content from vocational training plans is therefore ruled out for vocational education and training for several reasons.

1. One of the reasons for comparative large-scale competence diagnostics in VET is that the test results can also be used to compare the weaknesses and strengths of established VET programmes and systems with their specific curricula (Hauschildt, Brown, Heinemann, & Wedekind, 2015, 363 f.). For the COMET project, professional validity was therefore established as a criterion for determining the contents of test tasks. The test tasks for the respective professional fields must prove to be valid. For example, professional groups manage to agree on job descriptions (job profiles) for the respective professions and, above all, on the project tasks for ‘vocational competitions’ with surprising matter-of-factness. For the representatives of the respective ‘community of practice’, it is almost self-evident what true mastery in their profession looks like.

2. Vocational curricula are geared to specific forms and systems of vocational education and training. A comparative competence survey cannot therefore be geared to a specific form of training, e.g. dual vocational training. The vocational curricula in countries with developed dual vocational training, such as Switzerland, Denmark and Norway, would already be too different. Above all, the relationship between the definition of higher-level (national) standards and their local implementation in the form of concrete education plans is regulated very differently. In both Switzerland and Denmark, responsibility for the implementation of lean national vocational regulations in concrete vocational training plans lies with the actors ‘on site’. The structure of vocational training courses in terms of content and time is based on very different systemisation concepts. In addition to the systematic structuring of vocational training courses, the timing of the training content is largely pragmatic. Scientifically based vocational training concepts are the exception. In Germany, for example, the introduction of the learning field concept was a move away from framework curricula with a subject-systematic structure. However, an alternative systematisation structure for the arrangement of learning fields or training content was not explicitly specified. The reference to the ‘factually logical’ structure of the learning fields leaves open what distinguishes them from a subject-systematic content structure.

For vocational education and training, the establishment of a validity criterion for the content of vocational education and training, or the corresponding test tasks, is therefore of particular importance, as training for the same field of employment is organised very differently: scholastic, vocational-scholastic, in-company and dual forms of training compete with each other, nationally and internationally. It is indisputable that vocational training aims at employability. This includes the qualifications that enable students to pursue a profession. The terms ‘qualification’ and ‘competence’ are often used synonymously in colloquial language and in vocational education and training policy discussions. It was explained why it is necessary to distinguish between the two categories in the scientific justification of testing and diagnostic procedures (→ 2.2).


The degree to which different forms of training are capable of imparting cognitive performance dispositions at the level of employability within the framework of the corresponding vocational training courses is the subject of competence diagnostics. This also applies to measuring competence development at defined points in time during vocational training. The decisive quality criterion for the content dimension of the competence model is therefore professional validity. If the content dimension is described in the form of a model for systematising the training content which can claim general validity for vocational education and training, then this has two advantages. Firstly, it meets the criterion of providing a procedure for identifying training content that allows vocational education and training to be defined as a learning area. It was explained that the novice-expert model makes it possible to structure the occupation-specific training contents according to a learning area model (COMET Vol. I, Sect. 3.3). Franz Weinert describes the novice-expert paradigm ‘as the most important empirical-analytical approach to expertise research’ (Weinert, 1996, 148). The paradigmatic meaning of the model is based, on the one hand, on development and learning theories such as

• The theory of situational learning and the ‘community of practice’ (Lave & Wenger, 1991),
• The theory of ‘cognitive apprenticeship’ (Collins, Brown, & Newman, 1989),
• The development theory of Havighurst and its application in (vocational) education research (→ 3.5),

and, on the other hand, on expertise research, which is consistently based on the novice-expert paradigm with its models of graduated competence development. Developmental educational research has been considered a cornerstone of curriculum development and research since the 1970s (Aebli & Cramer, 1963; Bruner, 1977; Fischer, Jungeblut, & Römmermann, 1995; Lenzen & Blankertz, 1973). The paradigm of developmental logic only emerged gradually during the scientific support of model experiments (Blankertz, 1986; Bremer & Haasler, 2004; Girmes-Stein & Steffen, 1982) and in the extensive empirical studies on the development of competence in educational and nursing professions. If one assumes that vocational education and training primarily bases its legitimacy on the fact that it challenges and promotes ‘growing’ into a profession—the development from beginner to expert—by giving learners the opportunity to develop their professional competence by solving professional tasks, then a development-theory-based model for structuring the content dimension of a vocational competence model suggests itself. The systematisation of the work and learning tasks characteristic of a profession for beginners, advanced beginners, advanced learners and experts in the field provides a cross-professional basis for the systematic identification and selection of content for the construction of occupation-specific test tasks. The COMET competence model therefore features a content dimension based on learning and development theory, whose didactic implementation for the development of test tasks related to professions and occupational fields makes it possible to implement a cross-occupational test concept in a job-specific manner. This permits a comparison between the development and levels of competence of learners in different occupations and different vocational training systems.


Fig. 4.3 Assignment of test tasks to the VET learning areas as a basis for cross-over design (cf. COMET Vol. II, 27)

At the same time, this concept of structuring training content offers the possibility of systematically measuring vocational competence at different stages of vocational training (Fig. 4.3). When applying and designing the content dimension of the competence model, a distinction must be made between the stages of competence development (from beginner to skilled worker) and professional competence at the end of vocational training. If, on the basis of domain-specific qualification research, one organises the acquisition of vocational qualifications along qualifications that build on one another in terms of developmental logic, together with the corresponding fields of vocational action and learning, then one has a task structure as a basis for the development of test tasks. Whenever one wants to examine competence development over the entire training period, it is necessary to identify the characteristic professional work tasks and to arrange them in line with development tasks. The simpler application exists if the vocational competence level is to be measured towards the end of a vocational training course. In this case, the reference point for the content of the test development is professional competence, which is available in the form of job profiles or job descriptions. In international comparative studies, it is advisable not to harmonise the formalised job descriptions in the form of standards or training regulations: formal regulations would become disproportionately important and would stand in the way of test development. According to the experience of the internationally comparative COMET project (German and Chinese apprentices/students of electrical engineering and electronics), the content-related selection of appropriate (characteristic) test tasks is largely trouble-free. In addition to their common professional subject, the implicit validity criterion that the involved content specialists apply is specialist work in electrical engineering and electronics.


Agreement on the contents of competence development takes place, on the one hand, at the level of professional fields of action and, on the other hand, directly through the selection and development of test tasks. The operationalisation of the content dimension includes the definition of the concept of open, complex test tasks. In the COMET project, two test tasks were processed by each test person, with a maximum processing time of 120 min per test task. The number of open test tasks to be completed in a test time of around 240 min must be decided by the content specialists on a job-specific basis. Two criteria must be taken into account: the types of action to be distinguished in a profession and the representativeness of the test tasks for the professional fields of action.

4.4 The Action Dimension

Parallel to the vocational-educational differentiation of the categories of vocational education and vocational competence, the guiding principle of ‘complete work action’ prevailed in the discussion on labour science and in labour-science research aimed at the humanisation of working life. The many attempts to justify the concept of the complete working process scientifically obscure the insight that this category also has normative roots. The concept of the complete working process arises from the critical examination of Tayloristic work structures and the interest in opposing the deskilling effect of fragmented work processes with a labour-scientific design concept. Empirically, the concept of complete work action is based on a large number of ‘HdA’ (Humanisation of Working Life) and ‘Work and Technology’ projects, in which it could be demonstrated that non-Tayloristic forms of organisation of societal work offer a competitive advantage under the conditions of international quality competition (Ganguin, 1992). With reference to Hellpach (1922, 27), Tomaszewski (1981), Hacker (1986) and Volpert (1987), Ulich highlights five characteristics of ‘complete tasks’:

1. The independent setting of goals that can be embedded in overriding goals,
2. Independent preparation for action in line with the perception of planning functions,
3. Selection of means, including the necessary interactions for the adequate achievement of objectives,
4. Implementation functions with process feedback for possible corrective action,
5. Control and feedback of results and the possibility to check the results of one’s own actions for conformity with the set goals (Ulich, 1994, 168).

It is remarkable that Ulich emphasises the category of ‘complete tasks’ and thereby establishes a connection to work design as a central subject of labour-science research. The inclusion of the action dimension in the COMET competence model stands in the tradition of this labour-scientific task design, which always also considers the design of work tasks from the aspect of personal development.


The programmatic significance that the concept of complete action (task design) has acquired in vocational education has one of its roots here. Another lies in its manageable degree of operationalisation in the form of the differentiation of the complete work and learning action into successive action steps. For the didactic actions of teachers and trainers, this scheme offers a certain degree of certainty. In the meantime, this action structure model has also been used internationally in connection with the introduction of the learning field concept in the development of vocational curricula. The inclusion of the action dimension in the COMET competence model and its differentiation into six action steps was undertaken with the intention of establishing the concept of the complete task and problem solution, which is formed by the criteria of the requirement and action dimensions. This further differentiates the competence model as a basis for the development of test and learning tasks and the evaluation of task solutions.

4.4.1 The Importance of Action Types

The description of the action dimension must be qualified, as the steps of the complete working action amount to a structure of rational didactic action, which does justice above all to the action situations of beginners and less to those at advanced and expert levels (cf. above all Dreyfus & Dreyfus, 1987). In this context, a distinction is made in the vocational-educational discussion between the rational and the creative-dialogical type of action (Brater, 1984). Both types of action are fundamentally significant in all occupations, each with a different weight. Professional tasks with a clearly defined goal, e.g. in the form of a specification for the solution of a technical task, are characterised by the fact that the precisely specified goal suggests a well-structured procedure: the purpose determines the procedure for solving the task. The concept of complete working action has a clear affinity to this rational type of action. This type of action is particularly pronounced in specified work projects and processes in which the scope for action and design is limited. If there is room for manoeuvre in the phase of order formulation, it is already restricted or eliminated in work preparation by precisely specified work steps. An open objective and a course of action that can only be planned to a limited extent are characteristic of the creative-dialogical type of action: the sequence of the action steps only emerges in the work process itself. Educational processes, for example, are largely open. Teachers and educators absorb the impulses, suggestions, questions and answers of the children/students. As subjects of the learning process, the learners participate in determining the course of the educational process. To a certain extent, a teacher anticipates the possible reactions of his students when planning the lessons – he mentally acts out the lesson with its different possible situations. However, the actual course of lessons can only be anticipated to a very limited extent. Actions in diagnostic work processes are very similar, for example in personal services as well as in industrial-technical professions, in which fault diagnostics play a special role.


The creative-dialogical type of action is particularly pronounced in artistic professions. A painter is guided by an idea of content when painting a picture; however, the way in which a painting takes on its final form arises from a constant dialogue between the artist and the emerging painting. In professional work, both types of action overlap. If the design-dialogical form predominates in a professional activity, then it is expedient to formulate open test tasks in the form of situation descriptions in such a way that the possible courses and branches of action remain manageable and describable for the test persons within the given time frame. These considerations must be taken into account when designing test tasks. For the planning and conceptual analysis and processing of a technical task, two test tasks are justifiable if the rational type of action prevails. The anticipation of an educational situation in the context of a test with processing times of approx. 120 min seems rather unrealistic, since the pedagogical-didactic action of educators and teachers can usually only be anticipated for shorter time cycles. A comparison with a chess player suggests itself: the anticipation of possible moves is based on the anticipation of the other player’s behaviour. More than four to five moves ahead cannot be calculated, as the number of possible moves increases exponentially (which is why the actual play of chess players is mainly based on knowledge of typical positional patterns). We therefore propose increasing the number of open, complex test tasks for occupations with pronounced design-related and dialogical forms of activity to about four test tasks. This reduces the processing time to (max.) 60 min per test task. The form of thinking through branched action processes should be retained, since it is a characteristic of professional competence or professionalism in occupations with a high proportion of design-related and dialogical forms of activity. At the same time, this test form reaches its limits if the action situations to be anticipated extend too far in time. The rule to be observed in this context is therefore: test tasks must remain manageable in their alternative courses of action. Only on the basis of empirical studies will it be possible to define this rule more precisely. This includes studies that determine the limits that the contents of professional work place on standardised competence diagnostics.
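The test-format arithmetic discussed here can be summarised in a small sketch: the total test time of around 240 min per test person stays constant, while the number of open test tasks, and thus the maximum processing time per task, depends on the prevailing type of action in the occupation. The function and the mapping are illustrative only; the actual decision rests with the content specialists on a job-specific basis.

TOTAL_TEST_MINUTES = 240  # approximate total test time per test person

def test_task_plan(action_type):
    """Return (number of tasks, max. minutes per task) for an action type."""
    tasks = {
        "rational": 2,             # COMET practice: 2 tasks x 120 min
        "creative-dialogical": 4,  # proposal above: 4 tasks x max. 60 min
    }[action_type]
    return tasks, TOTAL_TEST_MINUTES // tasks

print(test_task_plan("rational"))             # (2, 120)
print(test_task_plan("creative-dialogical"))  # (4, 60)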

4.5 A Cross-Professional Structure of Vocational Competence

In order to take all occupational fields into account, the definitions of competence levels must be sufficiently general or supplemented by differentiating references to the different employment sectors. Thus, in a cross-professional description of procedural competence, terms referring to ‘company work’ are avoided, since activities in educational institutions, in the health sector or in administration are rarely associated with the category of ‘company’.


Fig. 4.4 Adjustment effort for the implementation of the competence model in different occupational fields (examined on the basis of 48 expert assessments)

Comparable editorial corrections are appropriate for the description of the competence level ‘holistic shaping competence’ (→ 4.2). At the level of the eight competence components assigned to the competence levels, the challenge is to define these components in such a way that they trigger a sufficiently concrete idea, among users in the different occupational fields, of the competences to be imparted. A content analysis of the explanatory descriptions, as part of an empirical review of the criteria for occupations in the education and health sector and for commercial occupations, then reveals the need for adaptation. If one differentiates the adjustment effort according to the sectors of industrial-technical, commercial-service and personal service occupations, then the adjustment effort increases steadily in this order. If the professional effort for the adaptation of the criteria and items of the competence and measurement model is plotted on the vertical axis of a two-dimensional diagram, then 100% corresponds to a completely new version of the criteria and items. On the horizontal axis, the assumed distance in content between the training objectives and contents of a given occupational field and those of the electrical professions involved in the COMET project can be plotted. The greatest assumed distance in terms of content is to the professions in the education and health sector. Content specialists estimate the effort required to adapt the formulation of competence criteria and their evaluation at a maximum of 20% for personal service occupations (Fig. 4.4). The ‘subject’ of training is a technical one for industrial-technical occupations and an economic one for commercial occupations. For pedagogical professions, on the other hand, it is the development of personality.


Table 4.3 Adaptation of the evaluation criterion ‘Orientation towards utility value/sustainability’ to different occupational fields (deviating criteria are highlighted in grey in the original table) (Appendix B)

Industrial-technical professions
• Is the solution highly practical for the customer?
• How user-friendly is the solution for the immediate user/operator?
• Is the aspect of avoiding susceptibility to malfunctions/unpredictability taken into account and justified in the solution?
• Are aspects of long-term usability and expansion possibilities considered and justified in the solution (for example, creating a reusable template)?
• Is the proposed solution easy to maintain and repair?

Commercial professions
• Is the solution highly practical for the customer?
• How user-friendly is the solution for the immediate user/operator?
• Is the aspect of avoiding susceptibility to malfunctions/unpredictability taken into account and justified in the solution?
• Are aspects of long-term usability and expansion possibilities considered and justified in the solution (for example, creating a reusable template)?
• Is the solution adaptable/flexible (e.g. quick reactions to disturbance factors)?

Personal service professions
• What are the subjective benefits of the solution for patients, qualified medical employees and doctors?
• What are the objective benefits of the solution for patients, qualified medical employees and doctors?
• Is the aspect of avoiding susceptibility to malfunctions/unpredictability taken into account and justified in the solution?
• Are aspects of long-term usability and expansion possibilities considered and justified in the solution (for example, creating a reusable template)?
• Is the task solution aimed at long-term success (avoiding the ‘revolving door’ effect)?

This mainly explains the differences in the description of the competence criteria and items with which the competence levels are defined (Table 4.3)4.

4.6 Extending the Competence Model: Implementing Planned Content

The test-methodological argument for restricting competence diagnostics to the measurement of domain-specific cognitive dispositions no longer applies if the COMET test procedure is further developed into an examination procedure. The resources available for conducting examinations in accordance with the Vocational Training Act allow practical skills to be included in competence diagnostics as a subject of the practical examination (→ 7.1).

4 For processing the criteria and items assigned to the competence levels when assessing solutions to tasks in personal services, see appendix.


Table 4.4 Steps of the practical exam

4.6.1 Operational Projects/Orders

Established forms of practical examination are company projects (for IT occupations), journeyman’s pieces (in some trades) or operational orders (e.g. for electronics occupations). The practical examination usually includes a planning phase: in a first step, the project or operational order to be implemented is conceptually planned before the plan is practically implemented in a second step. This can result in corrections to the planned procedure and the expected result (product). The result of the project/order must be checked (quality control) and documented before it is handed over to the customer/client (Table 4.4). This is followed by an expert discussion in which the candidate has the opportunity to justify his ‘project’ (procedure and result). A competence-based integrated examination includes the evaluation of the practical implementation of the project or the operational order as well as the documentation and justification of the work result and the planned procedure. The documentation of the project comprises three points:

1. The task definition (e.g. an operational order) of the client/customer. The order/project is described from the customer’s perspective or the utility-value perspective. Specifications in the sense of a specification sheet are avoided where these already concern solution aspects. A central aspect of the examination is that the examinee must translate the customer order (the client’s description of the situation) into a specification. It may also turn out that individual customer wishes are not feasible or that individual requirements contradict one another: wish A excludes the consideration of wish B.
2. Description and justification of the planned solution of the order.
3. Documentation of the project/order result (implementation of the plan), quality control and quality evaluation.

A competence-oriented practical examination includes the assessment of professional competence to act. Therefore, the solution and/or the result and the procedure for implementing the plan are not only documented, but also explained in detail (‘Why this way and not otherwise?’). This documentation (points 2 and 3) is evaluated using the COMET evaluation procedure (extended measurement model, see → 7.1).


In all cases in which the practical implementation of an order yields products with a quality of their own that cannot be determined from the documentation, these products must be included in the evaluation. In all professions in which practical competence consists of communication and interaction skills – i.e. counselling, teaching and educating, supporting learners, clients, customers, etc. – a rating procedure based on observation is required. Professions with such competence profiles require the development of specific competence models. The competence and measurement model for measuring the competence of vocational teachers is an example for this sector of professional activity. The first results of the psychometric evaluation of this competence and measurement model are available (Zhao, 2015; Zhao, Zhang, & Rauner, 2016).

4.6.2 The Expert Discussion

A high value is attached to the expert discussion as a central component of the practical examination (BMBF, 2006b, → 7.2). In some examination regulations, the assessment of the practical examination is based exclusively on a (maximum) 30-min expert discussion. This practice overburdens the expert discussion, as the entire evaluation of the practical examination depends on its course. The reliability and validity of such expert discussions are not very high, because expert discussions are dialogues whose course inevitably results from the situation and is decisively influenced by the experiences and expectations of the examiners. An expert discussion conducted after the rating of the project documentation, in the context of an extended rating procedure, can build on the documentation, including the justification of the project result and the course of the project, which have been assessed with a standardised rating procedure (assessment of professional competence to act). The rating result shows which aspects of the solution were not, or only insufficiently, taken into account by the candidates. The expert discussion is then given the function of checking whether the examinee knows more than is shown in his documentation and justification. The expert discussion can serve to examine, for example, whether an examinee can justify specific solution aspects on a higher level of knowledge.

Example: In the documentation, an examinee (electronics technician) justified the decision for an overcurrent protection device with reference to a VDE 0100 regulation. The expert discussion then served to clarify whether the examinee could explain the point of this regulation for the specific case and justify it, where appropriate, in consideration of other possible solutions.


The expert discussion, which takes place after the rating of the project documentation, can build on the rating results and clarify whether the examinee knows more than he has described and justified in his documentation. The result of the expert discussion then either confirms the rating result (the candidate has already documented and justified his project in accordance with his competence) or leads to corrections for individual rating criteria. For the application of the COMET rating procedure, a double rating is recommended, as is customary in examination practice: either two examiners independently evaluate the project result and then agree on a joint rating (for all items), or a team rating is carried out right from the start. In both cases, this contributes to a higher degree of consistency in the assessment of examination results. A change in the composition of the examiner/rating teams is one form of implicit rater training.
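A minimal sketch of this double-rating workflow: both examiners rate all items independently, the share of identical ratings serves as a rough indicator of consistency, and items rated differently are flagged for the subsequent agreement on a joint rating. The agreement measure and the flagging rule are illustrative choices, not the statistics prescribed by the COMET procedure.

def double_rating(rater_a, rater_b):
    """Compare two independent item ratings (scale 0-3 per item)."""
    assert len(rater_a) == len(rater_b)
    # Items on which the examiners disagree and must agree on a joint rating.
    to_reconcile = [i for i, (a, b) in enumerate(zip(rater_a, rater_b)) if a != b]
    exact_agreement = 1 - len(to_reconcile) / len(rater_a)
    return exact_agreement, to_reconcile

# Example: ten of the (up to) 40 rating items of one task solution.
a = [3, 2, 2, 1, 3, 0, 2, 2, 1, 3]
b = [3, 2, 1, 1, 3, 0, 2, 3, 1, 3]
agreement, items = double_rating(a, b)
print(f"exact agreement: {agreement:.0%}, items to reconcile: {items}")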

4.6.3 Rater/Examiner Training for Assessing the Practical Exam

Rater/examiner training follows the procedure of the COMET rater training (→ 4.5.5) and is based on selected documentation of company projects. In order to achieve a high degree of interrater reliability, it is advisable to make the practical examples as well as the reference values of training courses already conducted available nationwide.

4.7 Identity and Commitment: A Dimension of Professional Competence Development

The novice-expert paradigm describes how beginners become experts from the perspective of developing professional competence. Herwig Blankertz and Andreas Gruschka can be credited with having introduced an extended understanding of development in their work on the development-logical structuring of vocational curricula. Vocational training is always about a coherent process of competence and identity development. Herwig Blankertz explained that, without the development of professional identity, no competence development would be conceivable (Blankertz, 1983, 139). In this context, Walter Heinz points to another aspect of professional identity development, that of shaping one’s own biography: ‘In the industrialised service society, the gravitational point of professional socialisation processes shifts (...) from socialisation (in line with learning conventional social roles) to individualisation. For professional socialisation, this means that the internalisation of labour standards is gradually giving way to the formulation of subjective demands on work content and the active shaping of professional biographies’ (Heinz, 1995, 105).


In this situation, educationalists are not the only ones who point to the importance of vocational identity as a self-concept, and of vocational training as a form of education that protects trainees and employees from disappointed trust in companies’ care for their employees (see also Brown, Kirpal, & Rauner, 2007). If the four levels of increasing work experience and the corresponding learning areas (Fig. 3.7) are applied as described above, the successive learning areas can be assigned levels of identity development. In the transition from vocational choice to vocational training, the learning area of occupation-oriented work tasks corresponds more or less to a hypothetical job description which, depending on the quality of occupation-oriented training, corresponds more or less to the reality of the profession. Notwithstanding the above, there may already be strong identification with the training occupation at the start of vocational training. This is particularly true for trainees who have very strong career aspirations at an early stage. At the start of vocational training, professional identity is shaped by a job description that the novice has acquired through narratives, literature for children and young adults, the public media and, increasingly rarely, direct experience, e.g. of parents’ professional work. In the best case, trainees have practical experience gained through work placements in the run-up to vocational training. In any case, the subjective occupational profiles—the pre-professional identity—are confronted with professional reality. In vocational education and training with a developmental structure, the vocational work tasks at the start of training give an idea of ‘what the chosen occupation is mainly about’. Initially, the outlines of an experience-based occupational profile emerge, which develops into a mature subjective occupational profile as vocational training progresses—and above all with increasing breadth and depth of reflected work experience. Trainees gradually develop a reflected professional identity that allows them to situate their professional role in the company’s business processes and the company’s organisational development processes. The development of an experience-based professional identity goes hand in hand with the developing ability to view one’s own work from a superordinate, context-related standpoint: from the perspective of cooperation with specialists from other occupations, with managers from different management levels and with the customers of the work orders. The dialectic between taking on the professional role and simultaneously being able to reflect on it from a distance (role distance) unfolds its effect. Professional identity development is based on four sources:

• Formal professional role identity is defined by training regulations and regulated job descriptions. These are reflected in examination and training regulations.
• Informal professional roles represent the expectations of society: occupations have a social image that trainees and professional specialists are aware of. To what extent this shapes their role identity is a subject of vocational research.
• Both the formal and informal professional role identities are decisively influenced by the requirements and expectations of the community of practice and of company managers and trainers. This is associated with the development of professional identity as a passive or active role identity. For example, early participation in the processes of company organisational development within an emphatically business process-oriented training concept will promote the development of an active role identity.
• The interest in the content of professional tasks is a fundamental determinant of the development of professional identity. If this interest is very pronounced, then the other determinants of professional identity development lose importance.

4.7.1 Normative Fields of Reference for Commitment and Work Morale

In his article ‘The Cultural Embedding of the European Market’, Carlo Jäger (1989) explains the need to distinguish between work morale and professional ethics, as the two categories refer to different normative fields. In modern culture, work has lost the odium of a curse. With the emergence of wage labour, a normative field emerged on a global scale that has been experienced and accepted as one of the driving forces behind the success story of industrial society. Since then, the central value of work has been supported in industrial culture by a cluster of different work virtues, which were later (in the twentieth century) critically described as secondary virtues (diligence, discipline, punctuality, etc.). Industrialisation was accompanied by a large exodus of workers from agriculture. Migration movements reinforced the emergence of a labour market for ‘everyman’s work’ (mass work). The development and rapid expansion of mass production required the mass training of workers. Kliebard suggests that not only the consistent hierarchical and vertical division of labour but also performance-related wages became a characteristic of industrial mass work. Job satisfaction was to be ensured by rising wages, while the basic source of motivation was a performance-related work morale. Jäger, Bieri and Dürrenberger (1987, 75) understand work morale as ‘a constitution of conscience that demands that the work – no matter whether laborious or misunderstood in essence – be carried out in accordance with the contract, obediently, promptly, precisely, punctually, etc.’. This confirms that scientific management, as formulated by Taylor, had also found its way into European industry. For example, a manual from the Central Association of the German Electrical Industry (ZVEI) explains, for the industrial electrical occupations regulated in 1972: ‘The task of the communication device mechanic is to assemble modules and components, to assemble simple device parts and devices, and to wire and connect these according to samples and detailed instructions. He carries out simple tests of electrical components, assemblies and device parts with the corresponding measurements according to precise testing and measuring instructions. His area of responsibility also includes simple maintenance and repair tasks’ (ZVEI, 1973, 13).


Until the 1970s, vocational training planning in Germany was clearly influenced by Taylorism and the normative field of work morale.

4.7.2 Professional Ethics

With reference to a series of industrial-sociological studies, Carlo Jäger shows how work morale deteriorated in the second half of the twentieth century. He explains this process by the wage explosion in combination with the fact that unskilled migrant workers are not available in unlimited numbers, from which he derives the thesis that a European labour market oriented solely to the normative field of work morale would inevitably result in mass unemployment and sluggish productivity development (Jäger, 1989, 566). Based on his theoretical and empirical studies, he concludes: ‘Regardless of work morale, there seems to be a normative field that emphasises the qualities of cooperation and communication rather than the character of deprivative duty in professional life. We call this normative field “professional ethics”’ (ibid., 567). In summary, Carlo Jäger comes to a result that is interesting for vocational education and vocational training research and that challenges them in their design tasks: ‘European culture, understood as a comprehensive normative field, developed a new form of social differentiation and personal identity formation with professional ethics at the end of the Middle Ages. The social system of the European labour market, which has been crystallising for several decades, has so far hardly taken this into account and has instead referred to normative fields with their work morales, which have become significantly less important in the same period’ (ibid., 570).

4.7.3 Organisational versus Occupational Commitment

The erosion of work morale is directly associated with the rise and fall of commitment to organisations, as investigated by commitment research. Since the 1950s, various forms of commitment have been empirically researched in management and behavioural research (especially in the USA). Despite all the differences in the theoretical positioning of the research approaches in different disciplines, there is one striking commonality: the categorical distinction between work morale and professional ethics corresponds to the distinction in commitment research between organisational and occupational commitment (Baruch, 1998; Cohen, 2007). This differentiation can be interpreted as one between company-related and professional commitment. In commitment research, meta-studies have shown that organisational commitment has been declining steadily since the 1970s. As this commitment is based on the employees’ emotional attachment to the company, this means that these ties are gradually becoming weaker. The volatilisation of stable relations between companies and employees confronts commitment research with the erosion of its basic category and opens up a field of research for vocational training research: to elucidate the interactions between the development of vocational and organisational identity as well as between professional ethics and work morale (Heinz, 1995; Rauner, 2007a).

4.7 Identity and Commitment: A Dimension of Professional Competence Development

83

a field of research for vocational training research to elucidate the interactions between the development of vocational and organisational identity as well as professional ethics and work ethics (Heinz, 1995, Rauner, 2007a). Here, one can speak of a paradox, ‘as the flexibilisation of labour markets is not accompanied by a flexibilisation of professional work: with a departure from the profession of social work, but—on the contrary—with an upgrade of the professional form of social work’(Kurtz, 2001). The stronger commitment of employees to their profession also justifies their willingness to perform and to take on responsibility in the sense of intrinsic motivation and at the same time emancipates them from a deceptive emotional attachment to a company that may not be able or willing to reciprocate this commitment and loyalty.

4.7.4 Construction of Scales to Capture Work-Related Identity and Commitment

Occupational Identity

Since occupational identity always relates to a specific profession, a cross-professional concept cannot base a scale for capturing it on assumptions about the specific content of such an identity. It is therefore not possible to determine to what extent a specific professional role has been assumed. This distinguishes the term occupational identity used here from frequently used concepts which aim more strongly at the acquisition of implicit or explicit knowledge in addition to professional action competence, or at being a member of a certain profession, i.e. sharing a certain universe of thought and action. This also excludes the more or less successful adoption of a profession-specific habitus that socialisation in a community of practice brings with it and that is often equated with professional identity. This meta-level carries the risk of excluding essential aspects of growing into a specific professional role. The scale of occupational identity to be measured therefore does not refer directly to processes of professional socialisation, but to the subjective disposition to assume the professional role successfully. Martin Fischer and Andreas Witzel rightly point out in this context that the term professional identity should not be idealised, for example by deducing a lack of professional competence from an undeveloped professional identity, for instance after a change of occupation (Fischer & Witzel, 2008, 25). It also makes sense to capture subjective dispositions regarding the assumption of the professional role and general professional values in so far as these can be ascertained independently of the respective qualification path. Whether the type of training organisation favours or hinders the development of such an occupational identity thus becomes an empirical question.

4.7.5 Organisational Identity

Organisational identity is defined as the emotional attachment of employees to a company. The maintenance and strengthening of this attachment is a central concern of management research, which sees it as the central cause of organisational commitment. In a series of meta-studies in the last decades of the twentieth century (e.g. Randall, 1990; Cohen, 1991), the declining organisational commitment of employees was identified and interpreted as a crisis of commitment research. Baruch (1998), for example, aptly expressed this development in his essay ‘Rise and Fall of Organizational Commitment’. Contrary to this trend, Womack, Jones and Roos (1990) identified a high degree of organisational commitment in the Japanese automotive industry in the MIT study on lean production. As a central feature of Japanese industrial culture, the industrial sociology literature highlighted the firm and lifelong ties of employees to ‘their’ company, as far as they belong to the core workforce, as the reason for their proverbially high performance. The professional form of industrial work and the willingness to perform based on it are alien to this work culture. The qualification of specialists is embedded in the company’s organisational development and the processes of continuous improvement (Georg & Sattel, 1992). The high motivation of the core workforce is also a result of the structure of the Japanese employment system: the division into core and peripheral workforces. The members of the peripheral workforce have a low wage level and socially insecure working conditions. This structure of the labour market is regarded as a decisive determinant of the extraordinarily high motivation of the core workforce in Japanese companies. The discussion about transferring the Japanese production concept to other industrial cultures (cf. Kern & Sabel, 1994) soon dried up, however, since the European culture of ‘humanising working life’ and the introduction of leaner corporate structures geared to business processes soon proved to be just as competitive and innovative as the Japanese approach. The introduction of broadband core occupations was seen in this context as a way for the European working world to support a professional motivation of employees based on occupational identity (Grollmann, Kruse, & Rauner, 2005). Commitment research can also be drawn on to develop a scale to capture organisational identity, but it does not usually distinguish between organisational identity and organisational commitment.

4.7.6 Modelling the Connections Between Identity and Commitment

Three questions are of professional and economic interest in this context:

1. How strong is the willingness to perform professionally?
2. Is professional motivation based on factors of intrinsic or extrinsic motivation?
3. To what extent do professional identity, emotional loyalty to the company and the willingness to carry out predefined work tasks (obediently) without questioning them contribute to professional motivation?

Comprehensive approaches from commitment research are available for the empirical recording of organisational and occupational commitment; they describe a bonding, conceptualised mainly affectively, from which commitment in the work activity is expected. There are further attempts to conceptualise different forms of employee bonding empirically. However, approaches such as the Job Involvement Scale (Kanungo, 1982) mix precisely those reference fields of commitment which are to be kept as distinct as possible here. Preliminary work in organisational psychology was used to determine organisational commitment. Among the existing scales for measuring organisational commitment, the generally accepted scale of Meyer and Allen (1991) was used, among others.

4.7.7 Occupational Commitment

Occupational commitment is based on identification with the profession. Professional self-confidence and identity vary depending on the profession and the vocational training and therefore influence the degree of occupational commitment.

4.7.8 Organisational Commitment

Organisational commitment is based on identification with the company and the underlying emotional attachment to it: ‘I am committed to the company’.

4.7.9 Work Ethics

Following Jäger, it makes sense to conceive the reference field of work ethics as an extrinsic work motivation that accepts external guidelines without question. A scale designed in this way should be limited to abstract working virtues; this is shown by the factor analyses carried out so far. The term ‘work ethics’ is therefore used to describe a willingness to perform based on a more or less ‘blind’ execution of instructions. Following Carlo Jäger, it is an identification with work ‘in itself’, without consideration of concrete contents.

The scales used to record occupational identity, organisational identity, occupational commitment, organisational commitment and work ethics have been psychometrically evaluated several times. They have already been used in an international context across all professions (COMET vol. IV, 230). Qualitatively comparable scales exist for persons who have already completed their vocational training, but not for trainees. This is one of the particular strengths of the COMET competence model: the psychometrically evaluated recording of personality traits of trainees, which are of central importance in the context of competence diagnostics. The original model contained only one form of identity, occupational identity. In connection with the internationally comparative COMET projects, account had to be taken of the situation that in countries with an underdeveloped occupational form of social work, emotional ties to companies had to be given greater weight. This tradition is particularly pronounced in the core workforce of Japanese companies. The lifelong commitment to a company and a labour market divided into core and peripheral workforces are regarded as prerequisites for the very high motivation of the core workforces. The psychometric evaluation of the extended identity-commitment model confirmed a differentiation between both forms of identity among trainees and technical college students in German-speaking countries (Kalvelage, Heinemann, Rauner, & Zhou, 2015). A differentiation according to occupational and organisational commitment proved useful in the development of a typology of occupations (→ 8.5.4). These scales already contain the corrections proposed by the model verification based on two extensive confirmatory and exploratory factor analyses (→ 8.5.3). The recommendation to combine the scales for occupational and organisational commitment, on the grounds that both scales measure the same construct, was not adopted. Instead, the items were revised to achieve the required selectivity. The argument in favour of this approach was that, in an analysis of the individual occupations, the two scales had already proved useful in their first version (cf. the four-field matrix on occupational and organisational commitment: Figs. 126 and 127).
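The separation of the five scales can be examined with an exploratory factor analysis of the item responses. The following is a minimal, illustrative sketch in Python using scikit-learn; the data are randomly generated placeholders, and the published COMET analyses were not necessarily computed with this library.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Placeholder response matrix: 500 respondents x 30 Likert items
# (six items for each of the five scales in Tables 4.5-4.9).
rng = np.random.default_rng(0)
responses = rng.integers(1, 5, size=(500, 30)).astype(float)

# Extract five latent factors, mirroring the five-scale model.
fa = FactorAnalysis(n_components=5, random_state=0)
fa.fit(responses)

# Loadings matrix (30 items x 5 factors): with real data, each item
# should load highly on 'its' factor if the scales are selective.
loadings = fa.components_.T
print(loadings.shape)
```

With real item responses, low cross-loadings between the occupational and organisational commitment items would indicate the selectivity that the revision of the items was intended to achieve.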

4.7.10 Example of an Analysis of the Measurement Model (Performed by Johanna Kalvelage and Yingy Zhou, → 6.4)

In two extensive studies (A: n = 1121; B: n = 3030), the model extended by the component ‘organisational commitment’ was evaluated using both a confirmatory and an exploratory factor analysis. For the psychometric evaluation of the identity-engagement model, this means defining the possible fields of identity and engagement (I-E) research as precisely as possible so that they can be taken into account in the development and evaluation of the scales. How are occupational identity and commitment as well as organisational commitment and work ethics connected? There are many interactions between occupational and organisational identity as well as between occupational commitment, organisational commitment and work ethics. In the psychometric application, occupational commitment, organisational commitment and work ethics are operationalised as latent constructs by means of indicators. In this respect, COMET’s instruments enable interdisciplinary and internationally comparative research. The relationships between the latent constructs can be modelled empirically (→ 6.4). In this way, quantitative surveys can be used to identify special features of trainees in different occupations and, if necessary, to develop pedagogically sound interventions. In this example, there is a pronounced correlation between occupational identity and occupational commitment (r = 0.65). All correlation values shown are highly significant; the probability of error is only 1%. Working with trainees and their teachers and trainers has increasingly shown that another dimension rooted in personality dispositions could be relevant for competence diagnostics: organisational identity. The differentiation between organisational and occupational identity should make it possible to plan pedagogical diagnostics more precisely and to intervene in different training occupations in a more targeted manner.
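At the level of manifest scale scores, the reported relationship can be illustrated with a simple Pearson correlation. The sketch below uses hypothetical scale means per trainee; note that the value of r = 0.65 reported above refers to latent constructs in the factor-analytic model, which a raw score correlation will only approximate.

```python
import numpy as np
from scipy import stats

# Hypothetical scale scores (mean of six items each) for eight trainees.
occ_identity = np.array([3.2, 2.8, 3.6, 2.1, 3.9, 3.0, 2.5, 3.4])
occ_commitment = np.array([3.0, 2.6, 3.8, 2.4, 3.7, 3.1, 2.2, 3.5])

r, p = stats.pearsonr(occ_identity, occ_commitment)
print(f"r = {r:.2f}, p = {p:.3f}")
```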

4.7.11 International Comparisons

For international comparative surveys, it is necessary to evaluate the five scales (Tables 4.5, 4.6, 4.7, 4.8 and 4.9) and, if necessary, to change them so that comparisons are possible. For example, the categories ‘organisational identity’ and ‘organisational commitment’ are omitted when scholastic vocational training systems are included. In such a situation, the other scales also lose some of their significance.

Table 4.5 Occupational identity scale (Cronbach’s α = 0.87)
• I like to tell others what profession I have/learn.
• I ‘fit’ into my profession.
• I would like to continue working in my profession.
• I’m proud of what I do.
• For me, the job is like a piece of ‘home’.
• I’m not particularly interested in my profession. (recoded)

Table 4.6 Organisational identity scale (Cronbach’s α = 0.90)
• For me, the company is like a piece of ‘home’.
• I would like to stay with my company in the future, even if I have the opportunity to move elsewhere.
• I like to tell others about my company.
• I ‘fit’ into my company.
• The future of my company is close to my heart.
• I feel little connected to my company. (recoded)

Table 4.7 Occupational commitment scale (Cronbach’s α = 0.82)
• I am interested in how my work contributes to the company as a whole.
• For me, my job means delivering quality.
• I am absorbed in my work.
• I know what the work I do has to do with my job.
• Sometimes I think about how my work can be changed so that it can be done better or of a higher quality.
• I would like to have a say in the contents of my work.

Table 4.8 Organisational commitment scale (Cronbach’s α = 0.71)
• I try to deliver quality for my company.
• I want my work to contribute to operational success.
• I like to take responsibility in the company.
• Belonging to the company is more important to me than working in my profession.
• I am interested in the company suggestion scheme.
• The work in my company is so interesting that I often forget time.

Table 4.9 Work ethics scale (Cronbach’s α = 0.69)
• I am motivated, no matter what activities I get assigned.
• I am reliable, no matter what activities I get assigned.
• I am always on time, even when work does not require it.
• I carry out work orders according to instructions, even if I do not understand them.
• Instructions that I consider to be wrong I will still carry out without contradiction.
• For me, work means carrying out professional activities according to precise instructions.
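The reliability values attached to Tables 4.5 to 4.9 follow the standard Cronbach’s alpha formula. Below is a minimal Python sketch of that computation, including the reversal of negatively keyed items such as ‘I feel little connected to my company. (recoded)’. The data and the assumed 4-point response format are placeholders; the manual does not specify the response format here.

```python
import numpy as np

def recode(item: np.ndarray, low: int = 1, high: int = 4) -> np.ndarray:
    """Reverse a negatively keyed Likert item before scale scoring."""
    return (low + high) - item

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_persons, k_items) response matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Placeholder responses to the six organisational identity items.
rng = np.random.default_rng(1)
latent = rng.normal(loc=2.5, scale=0.8, size=(200, 1))
raw = np.clip(np.rint(latent + rng.normal(scale=0.5, size=(200, 6))), 1, 4)
raw[:, 5] = 5 - raw[:, 5]      # the sixth item is negatively keyed
raw[:, 5] = recode(raw[:, 5])  # ...so it is recoded before scoring

print(f"alpha = {cronbach_alpha(raw):.2f}")  # Table 4.6 reports 0.90
```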


Fig. 4.5 Extended theoretical model on the relationship between commitment, identity and work ethics

In contrast to the measurement of vocational competence, international comparisons of identity and commitment with the present scales are only possible between countries with a dual vocational training system (Fig. 4.5).

Chapter 5: Developing Open Test Tasks

5.1 Expert Specialist Workshops for Identifying Characteristic Professional Tasks

An internationally established method for identifying the characteristic professional work tasks is the expert specialist workshop (→ 3.3.2).

5.1.1 The Preparation of Expert Specialist Workshops

On the basis of the state of the art in occupational scientific research, relevant specialist publications on operational and technological innovations and other sources, the researchers acquire the most precise insights and knowledge possible about the objective prerequisites and conditions constituting the field of activity to be analysed. In industrial-technical specialist work, this includes the work-process-relevant expertise on technical systems, tools and working processes as well as the corresponding documentation and working documents. Similar requirements apply to commercial occupations and occupations in the health sector. Additional work experience or relevant professional studies form the basis for a checklist of questions that can be used if the researchers feel that additional questions are necessary. The study of the objective side of the professional field of activity to be analysed should not lead to the formulation of differentiated hypotheses on the job description and the fields of activity, in order not to limit the dialogue with the experts from the outset or to confine attention to the framework set by such hypotheses. This would unacceptably limit the chances of a high internal validity of the investigation in terms of consensus validity.



→ Please name the most important stages (no more than five) of your professional development as an ‘expert in skilled work’.
→ For each professional position, please provide three to four typical examples of the tasks you have carried out in your professional practice.
→ Please note the professional stations and the examples of tasks on the prepared overhead slide for the presentation of the results.
→ After 15–20 minutes, we will ask you to present your professional career in plenary.

Fig. 5.1 Work assignment: individual professional career

5.1.2 Implementation of the Workshop

The workshop roughly follows the temporal and organisational scheme below.

Assignment 1: Individual Professional Career
The work assignment ‘individual professional career’ asks for the most important stages of professional development, from training to expert level in skilled work. To avoid too fine a breakdown of the career, the number of stations to be described is limited to a maximum of five. Participants whose professional development comprises more than five stations must combine several stations or select the most important ones. For each of these stations of professional development, participants should give three to four examples of professional tasks they performed there (Fig. 5.1).

Assignment 2: ‘Challenging and Qualifying Professional Tasks’
After the participants have documented their individual professional careers, they are asked to mark the examples of professional tasks which they found particularly challenging in their professional practice and in the course of which they further qualified themselves:

Which of the professional examples of tasks you mentioned have particularly challenged you and qualified you for your current professional practice? Please mark these examples of professional tasks.

This additional assignment can also be set during the presentation, so that the participants specify the particularly challenging and qualifying professional tasks at the moderators’ request.

Assignment 3: Presentation of Individual Professional Careers
Participants are given the opportunity to present their professional careers on the basis of the documents they have prepared.


The presentations can be probed with questions such as:

What was the challenge in the professional examples of tasks you mentioned? Did these tasks challenge your professional expertise, and were you yourself not yet sufficiently prepared for them?

Difficult task: What was the difficult thing about the tasks? At what point did you realise it was difficult? How did you overcome the difficulty? How would you deal with such a difficulty today?

Insufficiently prepared: How did it come about that you had to take on a task for which you were not yet sufficiently prepared? What did you find difficult about the task? When did you realise that you were insufficiently prepared for it? How did you overcome these difficulties?

Assignment 4: Creation of Task Lists in Working Groups
The professional work tasks performed by all team members are compiled, discussed and documented. The professional work tasks completed by only a few team members are then identified and briefly introduced by the team members concerned. Afterwards, the team discusses and decides whether these professional tasks should be included in the common list. Finally, each team examines whether there are professional tasks that no one in the team has worked on but which are nevertheless typical for the respective profession and may shape it in the near future. Such work tasks can also be included in the documentation. After these first four work steps, the jointly identified characteristic professional tasks are assigned to the four learning areas, alternating between small groups and the entire team (Fig. 5.2). This is followed by an internal and an external validation of the work tasks.

Assignment 5: Internal Validation
Internal validation is carried out by the participants of the expert workshops. After the results of the workshop have been interpreted and evaluated and the work tasks assigned to the learning areas, the work result is validated internally. The questionnaire for internal validation (Fig. 5.3) contains the following items:

• Frequency: How often is the professional task performed?
• Significance: What is the significance of the professional task for the profession?
• Difficulty: What level of difficulty does the task have?
• Significance for one’s own professional development: What significance does the professional task have for one’s own professional development?

For the items ‘significance’ and ‘frequency’, the future development is assessed first, i.e. whether the significance or frequency of the professional work task is likely to increase (↑) or decrease (↓) in the future.


Fig. 5.2 Systematisation of professional work tasks (Rauner, 1999, 438)

The validation questionnaire (Fig. 5.3) lists the professional work tasks row by row (1. Professional work task, 2. Professional work task, 3. …) and records for each task:

• Frequency: evaluation (1–10) and development trend (↑ O ↓)
• Significance: evaluation (1–10) and development trend (↑ O ↓)
• Difficulty: evaluation (1–4)
• Significance for one’s own professional development: evaluation (1–10)

Fig. 5.3 Questionnaire for the validation of professional tasks

5.1.3 External Validation

The subject matter of external validation is the result of the expert specialist workshops: the characteristic professional tasks identified for a profession and their assignment to the four learning areas. As a rule, occupational scientists, employer and employee representatives as well as trainers and teachers of the vocational fields participate in the external validation. Its aim is to check the professional work tasks outside the operational context in which the expert specialist workshops were held. The possible influence of company-specific or industry-specific peculiarities on the description of the skilled work can thus be uncovered and, if necessary, corrected. The participants in the external validation should therefore have sound knowledge of the profession to be examined in companies of different sizes and in different industries and regions.

5.1.4 Evaluation of the Validation

In a first step, the averages for the categories ‘significance’ and ‘frequency’, as well as for the assigned development trends, are calculated from the assessments given by the participants in the validation for the individual occupational tasks. The mean values are entered in a diagram whose axes correspond to the two criteria (Fig. 5.4). The averaged values of the development trends can additionally be represented as vector arrows, which indicate the future development of professional tasks in terms of their significance and frequency. This diagram can be used to determine the core area of a profession. In this example, the core area is bounded by a minimum frequency and significance of 40%. Professional work tasks outside this core area can be assigned to a specific company or sector. The future development of the job description can be estimated using the trend arrows. For example, professional work tasks that are not yet part of the core area of the profession at the time of the analysis but will become more significant and frequent in the future can already be taken into account in the job description. The evaluation of the item ‘difficulty’ can be used to check the assignment of the professional work tasks to the four learning areas. Approximately the same number of tasks is planned for each of the four learning areas. Particular attention must be paid to ensuring that the tasks fit into the logic of the ordering scheme (Fig. 5.2) (cf. in detail Kleiner, Rauner, Reinhold, & Röben, 2002).
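The averaging and the 40% core-area rule described above can be expressed in a few lines of code. The following Python sketch uses invented ratings and treats the 1–10 evaluations as percentages of the maximum value; the task names, the rescaling convention and the trend coding (+1 for ↑, 0 for O, -1 for ↓) are illustrative assumptions.

```python
from statistics import mean

CORE_THRESHOLD = 40.0  # per cent, as in the example above

# Invented validation ratings for two work tasks (Fig. 5.3 format):
# evaluations on a 1-10 scale, trends coded +1 (up), 0 (stable), -1 (down).
tasks = {
    "service/maintenance": {"significance": [8, 7, 9, 8],
                            "frequency": [6, 7, 5, 6],
                            "trend": [1, 1, 0, 1]},
    "retrofitting":        {"significance": [4, 3, 5, 3],
                            "frequency": [2, 3, 2, 3],
                            "trend": [0, -1, 0, -1]},
}

for name, r in tasks.items():
    sig = mean(r["significance"]) / 10 * 100   # rescale to per cent
    freq = mean(r["frequency"]) / 10 * 100
    in_core = sig >= CORE_THRESHOLD and freq >= CORE_THRESHOLD
    trend = mean(r["trend"])                   # direction of the vector arrow
    print(f"{name}: significance {sig:.0f}%, frequency {freq:.0f}%, "
          f"core area: {in_core}, trend {trend:+.2f}")
```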


Fig. 5.4 Evaluation of the items ‘Significance’ and ‘Frequency’

5.2 An Open Test Format

The obvious reference points for the development of test tasks are established occupations and professions. Another obvious choice is a pragmatic procedure as established in the International World Skills (IWS) competitions. A relatively large number of occupations are internationally established. This applies not only to craft and health occupations such as carpenter, chef or nurse, but also to modern occupations such as electronics technician and computer scientist, and to numerous commercial occupations. The internationalisation of economic development and the emergence of a European labour market have led to a greater harmonisation of professional activities and occupations. Occupations with similar titles therefore also comprise comparable fields of professional activity. In international comparisons, it is therefore advisable to compare occupations on the basis of their fields of action. The competence levels and the characteristic competence profiles of test groups are measured with the test format of the open complex test tasks (Fig. 5.5) (→ 8).


Fig. 5.5 Open test tasks for assessing process and shaping competence

These test groups can represent local, regional, national and international courses and systems at the same or formally different levels of qualification. Test tasks are therefore developed according to the following criteria: they are open to different task solutions, and they are authentic to professional reality. This is the prerequisite for recording the different competence levels and profiles. The format of the open test tasks and the associated requirement to justify the task solution in detail (experts should be able to understand, explain and take responsibility for their task solutions) increase the scope for solutions and make it possible for relevant technical training courses at various qualification levels to participate in a test. The prerequisite for the participation of a training course in a competence diagnostics project is that the content validity of the test tasks is assessed as given. As these are open test tasks that can be solved at different levels of knowledge or justification, a wide range of courses at different levels of qualification and with different training organisations (dual, school-based) can participate in comparative competence surveys, insofar as these courses pursue the goal of qualifying for the exercise of relevant professional activities.

5.2.1 Representativeness

The criterion of the representativeness of the test tasks determines whether and to what extent the test tasks cover a profession’s fields of action. Professional competences are field-specific cognitive performance dispositions and are thus open in their application. Qualifications examined in examination procedures, on the other hand, are objectively given by the work tasks and processes and the qualification requirements resulting from them. In an examination of professional qualifications, the qualification requirements must be reviewed in full, if only for safety reasons. Nevertheless, both forms of professional ability overlap. The decision on the representativeness and validity of test tasks for related programmes concerns both the vertical and horizontal structure of the education system and the degree to which the curricula are structured scholastically (academically) or dually (in an occupationally qualifying manner). In contrast to an examination, competence diagnostics aims to record the competence levels and competence profiles of test groups (→ 8). A complete review of the professional qualification requirements defined in the job descriptions is not necessary. In practice, the teachers and trainers decide which and how many complex test tasks are required to cover the fields of action characteristic of a profession or to record the competence levels and profiles of the test participants.

5.2.2 Authenticity/Reality Reference

The test tasks represent authentic work situations. Care is taken that the partial competences corresponding to the requirement criteria are all challenged in a complete solution (→ 4). This ensures that not only partial competences, such as the environmental compatibility or the functionality of a task solution, are measured. Restricting the complexity that professional tasks have in reality would limit or call into question the content validity of the test tasks.

5.2.3 Difficulty

When assessing the difficulty of test and examination tasks, the degree of training must always be taken into account: beginner tasks are easier for advanced learners and experts to solve than for beginners. In principle, professional tasks are not solved correctly or incorrectly, but always more or less expediently. The criterion of correctness applies only to partial aspects of professional tasks, for example when the relevant VDI [Association of German Engineers] safety regulations and electrophysical laws are to be observed when planning office lighting, and above all when installing it.

5.2.4 The Description of the Situation

Test tasks are always based on authentic, situation-specific descriptions; they represent the reality of the professional working world. The situation description is formulated from a customer perspective in such a way that it directly or indirectly includes all relevant requirements for the task solution. The situation description is not a specification: specifications are derived from the situation description by the test persons and are therefore already part of the solution. The COMET test tasks are thus based on a criteria-oriented test format and not on a standards-oriented test concept.

5.2.5 Standards and Rules to be Complied with

The standards and regulations to be observed when solving an occupational test task, e.g. accident prevention, health protection and occupational safety as well as the relevant VDI or DIN regulations, are not specified in the situation description, since the test task is used to check whether and to what extent the test persons are familiar with the subject-related standards and rules and how they apply them in relation to the situation. The test authors base the development of the test tasks on examples from related professions and on the general criteria for test task development (Table 5.1).

Table 5.1 Guidelines for the development of test tasks (Appendix C: Examples of test tasks)
The test tasks
• Entail an authentic problem of professional and company work practice,
• Define a profession-specific, rather large, scope for design and thus enable a multitude of different solution variants of varying depth and width,
• Are open to design; i.e. there is no right or wrong solution, but requirement-related solution variants,
• Require the consideration of aspects such as economic efficiency, practical value orientation and environmental compatibility (see the concept of the holistic task solution) in addition to technical-instrumental competences,
• Require a typical professional approach to their solution; the solution of the tasks concentrates on the planning-conceptual aspect and is documented using relevant forms of presentation,
• Can also include the practical solution if the test tasks are to be used to test concrete professional skills,
• Challenge the test persons to solve, document and justify the tasks in a professional manner (at the respective development level) without excluding reduced solutions.

5.3 Cross-Professional and Subject-Related Test Tasks

There is often an interest in competence surveys on cross-occupational, subject-related fields of action. Welding, for example, is a component of a large number of metalworking occupations. Since it is a central field of action in the metalworking professions, there may be good reasons for making this or comparable professional fields of action the subject of professional competence diagnostics. The limit to the dissolution of professional working contexts, and thus to the understanding of the context as an object of competence diagnostics, is exceeded if the professional fields of action are selected according to subject-systematic aspects or if abstract subtasks become the content of test tasks within professional fields of action (→ 3.2). This risk is always present in educational practice when (university) vocational training courses are involved in competence diagnostics studies in addition to (dual) vocational training courses. Figure 5.6 shows typical competence profiles of higher education programmes in which subject-systematic teaching predominates. An essential criterion for the participation of training courses in comparative studies is a vocational or occupationally qualifying training concept. If a higher education programme claims to be professionally qualifying and there is an interest in checking whether and to what extent students achieve professional competence, then an essential prerequisite for a professional competence survey according to the COMET competence model is given. For the practical implementation of comparative COMET projects involving (higher) academic vocational training programmes, it makes sense to develop the test tasks first for SII and post-SII training programmes that clearly qualify for an occupation, and then, in a second step, to check whether the test tasks are assessed as representative and valid in content by the subject teachers/lecturers of related (higher) academic training programmes.

Fig. 5.6 Competence profiles of college students (China) (Zhou, Rauner, & Zhao, 2015, 400; for calculating and presenting competence profiles → 8). In an earlier version of the competence profiles, the three dimensions of functional, process-related and holistic shaping competence were still indicated as competences; the terms KF, KP and KG thus correspond to the dimensions DF, DP and DG.

5.4 Test Arrangements for Related Vocational Training Courses with Different Levels of Qualification

The decision on the representativeness and validity of test tasks for related programmes concerns both the vertical and horizontal structure of the education system and the degree to which the work tasks of the training regulations are oriented towards ‘subjects’ or lead to vocational qualifications. Initial experience and research results are now available for the inclusion of vertically consecutive courses of education, from upper-secondary level to the level of higher education vocational training courses. These test arrangements distinguish between primary and secondary (associated) test groups (Table 5.2). Primary test groups represent the training courses for and with which the test tasks are developed. Typical examples are the COMET projects for the training occupations of electronics technician, automotive mechatronics technician, industrial mechanic and other training occupations regulated by the BBiG, as well as related vocational school and vocational training courses that are regulated according to the model of alternating duality at SII level. Once the set of test tasks has been developed and tested in a pre-test (see below), it makes sense to check whether these test tasks can also be used to measure the vocational competences taught in courses that build on initial vocational training (associated test groups). These include, for example, technical school programmes, further training to become a master craftsman and relevant technical university programmes. Whether such a test arrangement is possible depends solely on how the representativeness and content validity of the test tasks are evaluated by the teachers in these courses. If the test tasks represent the main fields of action of the occupations for which the training courses qualify, and if the content validity of the test tasks is assessed as appropriately high, then nothing stands in the way of the participation of this test group. The degree of representativeness and content validity of the test tasks determines the possibility and design of the test arrangement.

Table 5.2 Test arrangements for primary and associated test groups

Formal qualification level | Arrangement 1 | Arrangement 2 | Arrangement 3
Tertiary programmes at bachelor level | Associated test group 2 | Associated test group 1 | Primary test group
Post-SII: technical schools/master craftsman qualification | Associated test group 1 | Primary test group | Associated test group 1
Sec II: dual vocational training, vocational schools | Primary test group | Associated test group 1 | Associated test group 2

5.4.1 The S II Test Arrangement

In addition to the primary test group, the S II test arrangement includes two associated test groups that are formally assigned to higher qualification levels: in accordance with international and national qualification frameworks, technical schools are one qualification level higher and bachelor programmes two qualification levels higher. A frequently asked question about this test arrangement is: are the technical college students (and master craftsman trainees) systematically underchallenged by the test tasks of the primary test group (here trainees in the second and third year of training) and therefore unable to prove their real competence? With closed test tasks (multiple-choice tasks), such a test arrangement would not be possible, or only to a very limited extent, since norm-based test tasks are always assigned to school levels, school years or a defined professional qualification level; a decisive allocation criterion is then the degree of difficulty of the test tasks. The COMET test format, by contrast, is based on the concept of open and complex test tasks, which are criteria-oriented throughout. This gives each test task a solution space (scope for design) that offers room for solutions of varying quality and scope, from simple to very professional. Even if a trainee in the second or third year of training presents a task solution of comparable quality to that of a technical college or university student, students have the opportunity to justify their solutions with greater depth and range of subject knowledge. The ‘range’ and ‘depth’ of the justification are indicators of the level of work process knowledge that the test persons incorporate into their task solutions. In test practice, this leads to the solution spaces of the test tasks developed for SII training courses being exhausted to a higher degree, on average, by the test participants in higher education courses. Since the solution spaces also include the knowledge that guides and reflects action, it is only rarely the case that they are ‘exploited to the full’.

5.4.2 The Post-SII Test Arrangement

Formally, the post-SII test arrangement differs from the first and third test arrangements in that the formal qualification differences to the subordinate and superordinate courses of study each amount to only one level. The professional fields of action of the post-SII graduates are the reference point for the development of the test tasks. A certain difficulty in international comparative studies is that the same vocational fields of action are trained in vocational training courses that are formally assigned to different qualification levels. For example, the vertical range in the training of nursing staff (childcare, nursing care, care for the elderly) extends from the ‘unskilled worker’ level through SII training courses to the bachelor’s level. Training at all three levels of qualification usually goes hand in hand with the development of level-related professional fields of action. Whether and to what extent these differ in their content and qualification requirements must be examined empirically in each case. The typical vocational fields of action for technical college graduates are identified first during the development of test tasks; domain-specific qualification research offers relevant research methods for this (Rauner, 2006; Röben, 2006). The educational plans of the post-SII educational programmes are of secondary importance, as the ability to work is usually only achieved in a phase of familiarisation with the profession following the studies at a technical college. The reference point for the content of COMET competence diagnostics is therefore the professional competence at which the curricula of the technical colleges aim, but which can often only be achieved in the practical phase following the studies. The situation is different with higher technical schools such as those established in Switzerland: they are organised in a dual manner, and their content and objectives are therefore based on the training content and objectives identified with the participation of organisations from the world of employment. If no results of the relevant qualification research are available, it makes sense to identify the characteristic fields of professional tasks and activities by means of expert specialist workshops (Spöttl, 2006).

5.4.3 The Third Test Arrangement: Graduates of Professionally Qualifying Bachelor Programmes as Primary Test Groups

In this arrangement, the test tasks are developed by the lecturers of the bachelor’s degree programmes at universities. Here, too, the rule applies that the authors of the test tasks take as their basis the professional fields of action considered representative for the graduates of the degree programmes. One difficulty with this test arrangement lies in the tension between two programme concepts: on the one hand, very broadly designed courses of study whose contents are based on traditional concepts of basic academic studies; on the other hand, programmes based on a high degree of content specialisation and a correspondingly ‘tailor-made’ university-based vocational training. For numerous professionally qualifying bachelor’s degree programmes (subjects), there is a more or less pronounced correspondence with the content of vocational training programmes at SII and technical college level. COMET projects based on this test arrangement have not yet been conducted. The COMET project Nursing (Switzerland) comes close to this test arrangement, since the dual course of study at a higher technical college ends with an examination that is equivalent to a bachelor’s degree (Gäumann-Felix & Hofer, 2015).

5.4.4 Validity of the Test Tasks for Different Training Courses and Test Arrangements

In competence diagnostics projects, it is more the rule than the exception that different vocational training programmes, such as dual vocational training, vocational schools and technical colleges as well as professionally qualifying bachelor’s programmes, take part in a test. The concept of open test tasks facilitates this form of comparative competence survey. In projects spanning different educational programmes, the validity of the test tasks is determined with reference to the higher-level occupational fields of action of the primary test population (occupational validity). The test tasks developed in pre-test procedures are then evaluated by the project groups of the educational programmes involved in the test according to their validity for ‘their’ educational programmes (Fig. 5.7). During the evaluation of the individual test tasks, the project groups (of the participating training courses) rate, on scales from 1 to 10, the
• Professional authenticity,
• Representativeness for competence,
• Curricular validity.

5.5 Description of the Solution Scopes

The solution scope of a test task defines the possibilities of a task solution under the basic conditions specified in the situation description. The wishes and requirements of the client (customer) limit the (theoretical) scope for design. In the context of a school or higher education learning situation, it is therefore more appropriate to speak of a scope for design; in a test format related to the context of company work orders, of a solution space. Solution and design scopes can only illustrate the structures of possible solutions in an exemplary manner; in this respect, they are also open to unforeseeable solutions. The authors of the test tasks have an idea of the spectrum of possible solutions to the test tasks. The theoretically possible solutions form an almost unlimited design scope. It is therefore important to describe and illustrate the dimensions of possible solutions when describing the solution space.

Fig. 5.7 The professional fields of action as reference point for determining the professional validity of the test tasks (action-learning fields of the primary test groups; test tasks (professional validity); action-learning fields of the secondary test groups)

The solution spaces for the test tasks are a necessary prerequisite for rater training and for familiarisation with the rating of task solutions. The solution space facilitates the task-specific interpretation of the rating items, which are necessarily formulated at a level of abstraction that allows their application in the broadest possible spectrum of professions (Table 5.3). The criteria for holistic task solving serve as a structuring scheme for the description of solution scopes. The solution scopes sensitise the raters to the spectrum of possible solutions. The solution space can indicate the potential of competences that the respective test task contains in the form of its possible solutions. Solution spaces are always incomplete. However, they are an essential basis for rater training when it comes to developing common standards for evaluating the various solutions to test tasks and achieving a high level of inter-rater reliability. When working with the solution spaces within the framework of rating and rater training, it must be avoided that solution spaces are misunderstood as ideal-typical solutions. The use of the solution space in evaluating the task solutions is practised within the framework of rater training. Practice shows that, after rater training, the raters only occasionally use the solution spaces to evaluate the solutions: they are able to apply the rating items in a task-specific manner and call the solution space to mind virtually automatically. This phenomenon finds its expression in a correspondingly pronounced inter-rater reliability.

5.6 Evaluation and Choice of Test Tasks: The Pre-Test

The development of test tasks for a profession or a specialist area is carried out according to a defined procedure (Fig. 5.8).


Table 5.3 Example of a solution scope for a test task
Solution space: form-glued desktop (carpenter)

Criterion 1: Clarity/presentation
• Structuring of the planning documents for the production process: Topan material, form gluing from veneer plywood, choice of materials, type of surface, specification of the edge, dimensions of the desk. The selection should be justified, presented in sketches, and its advantages and disadvantages discussed.
• The notes should be appropriate for the addressee.
• The workflow should be clear and understandable for the workshop listed in the appendix.

Criterion 2: Functionality
• The dimensions of the desk must be selected so that the user suffers no ergonomic damage, there is sufficient space under the tabletop for the office chair and the container, and the rounding does not interfere with daily work processes.
• As these days no desk can do without a computer, the trainee could recommend an invisible connection with cables as possible.
• The surface must offer protection against scratches due to the daily use of the tabletop.

Criterion 3: Sustainability
• The dimensions of the worktop are not specified by the customer. Here, the trainee has to determine the dimensions of the desk.
• The surface of the desk must be chosen so that it is resistant and can be used for many years (easy to maintain and repair).

Criterion 4: Efficiency
• Optimal workflow planning can improve desktop production and thus bring greater economic benefits to operations.
• Planning the routes is also crucial, as the workshop can also be used for other production processes if production is optimised.
• By applying the correctly selected surface, it is possible to ensure that the desk remains intact for a long time (laminate vs. HPL).
• A lot of material and working time can be saved by planning the optimum connection between the rounding and the carrier plate.

Criterion 5: Orientation on business and work process
• The workflow must be planned in detail. As a basis, the trainee must accurately plan the implementation of the rounding and decide how it is to be produced.

Criterion 6: Social compatibility
• The production of round elements is not necessarily everyday carpentry work. Here it is important that the trainee uses the protective measures of the workshop correctly during processing.

Criterion 7: Environmental compatibility
• By optimising the work process, the trainee is to save routes and material supplies.
• The best possible use of the materials saves money and resources.
• Water-based paints do away with solvents to a large extent. The residues of the paints should be disposed of professionally after use.
• Panel materials are more sustainable than solid wood, as only a manageable service life can be assumed.

Criterion 8: Creativity
• Since the form of the desk is largely predetermined, the trainee cannot become very creative; he can apply his creativity in the optimisation of the planning processes.
• Apart from the occupational science-related aspects, there are no limits to the trainee’s choice of dimensions.
• The planning of cable outlets and their supply with electricity could also be considered.

5.6.1 Determining the Test Group(s)

The first step is to determine which test groups are to be involved in a COMET project. As COMET tests are generally designed as international comparative tests, or as it must be assumed that national projects will expand into international projects, the educational and study programmes to be included in the tests are defined at this stage. Three test arrangements are differentiated (→ 5.3); they result from the definition of the primary test group. These can be (1) vocational training at upper-secondary level (initial vocational training), (2) continuing vocational training at the level of technical school programmes and (3) higher education programmes that qualify for a profession. In an extended test arrangement, the primary test groups can be supplemented by courses with a lower or higher formal qualification level (secondary test groups). The associated prerequisite is that the subject lecturers (teachers) classify the test tasks as valid in content for the test groups to be involved.

5.6.2 Training of Test Task Authors

The authors of the test tasks are usually subject teachers/lecturers and trainers (content specialists) who are qualified for the vocational training of the trainees (students) to be tested. As a rule, a one-day training course is sufficient to qualify these teachers/lecturers for the development of test tasks. The training provides an introduction to the COMET competence and measurement model as well as to the test procedure. The criteria for developing test tasks are explained using examples of tasks from related COMET projects. The development of test tasks includes the development of solution spaces, which are used for the task-specific interpretation of the rating items by the raters of the task solutions.

Fig. 5.8 Procedure of the pre-test phase:
1. Definition of test cohort(s)
2. Introduction seminar for authors of test tasks
3. Identification of occupational fields of action
4. Development of 2–3 test tasks per occupational field of action, including solution spaces
5. Didactical evaluation of draft test tasks and solution spaces by the project steering group
6. Revision of test tasks and solution spaces
7. Pre-test with revised test tasks in a representative test cohort
8. Rater training and rating
9. Analysis of results
10. Choice of test tasks to be used in the main test


Identification of Professional Fields of Action

The development of the test tasks requires the identification of the professional fields of action for the respective profession. For each professional field of action (Table 5.4), two to three test tasks (drafts), including the solution spaces, are developed by the teams of authors (groups of two or three). It must be taken into account whether the same fields of action apply to all the test groups to be involved or whether specific technical characteristics of individual training courses have to be considered (Fig. 5.9). For such test arrangements, the common competences are covered by a set of test tasks and the specific competences by supplementary test tasks.

Table 5.4 Example of two professional fields of action
Logistics managers: (1) import/export orders, (2) procurement, (3) marketing/proposal preparation, (4) forwarding and logistic services business processes/controlling
Car mechatronics: (1) service/maintenance, (2) repair, (3) conversion and retrofitting, (4) diagnostics

Fig. 5.9 Common and sector-specific fields of action in nursing training in Switzerland for the areas of childcare, nursing care and care for the elderly

In previous COMET test practice, especially against the background of the requirements of the international comparative projects, test tasks are developed which aim at the end of the educational programmes. This serves to record the competences defined in the job descriptions (job profiles), on the basis of which the employability or the training objective is described. Vocational (university) education and training courses can also be included: although teaching vocational skills is not their explicit objective, it is possible to measure the degree to which these programmes succeed in imparting vocational skills to their pupils/students. The ‘degree of difficulty’ intended by the authors results from the qualification requirements placed on the primary test group. The aim is that the (primary) test group assesses the difficulty of the test tasks with values between 6.5 and 7.5 on a scale of 0 to 10; these values are determined in the pre-test. The ‘difficulty’ of open test tasks according to the COMET test task format should not be confused with the degree of difficulty of normative test tasks (→ 5.8).

Didactical Evaluation and Revision of the Test Tasks

An essential step in the development of test tasks is the evaluation of the task drafts and solution spaces by the coordinating project group and the test experts involved in the project. As a rule, this results in initial revision instructions and a corresponding revision of the task drafts. A detailed didactic evaluation of the test tasks and of the rating scale (if it has been modified for a new professional field) is part of the rater training (testing of the test tasks).

Rater Training and Rating

The test tasks (drafts) are tested on a sample of the primary test group. Each test task should be completed and evaluated by at least ten to fifteen test persons. If the test group is relatively homogeneous, the lower number of participants is sufficient; in the case of more heterogeneous courses of education, the upper limit should be chosen. The pre-test includes rater training immediately after the test. The project group or the group of authors of the test tasks selects task solutions for each professional field of activity, at least four sample solutions of medium difficulty, which form the basis for the rater training.

The Aims of Rater Training Rater training has three objectives:
1. Above all, the raters should learn to apply the rating scale for the evaluation of task solutions reliably and to develop professional and task-specific evaluation standards with the aid of the solution spaces of the test tasks. This goal is achieved when a sufficient degree of agreement between the raters' rating values is reached (Finn(just) > 0.7).
2. When applying the rating scale, the raters should also check the content validity (coherence) of the rating items in the event that the rating is also part of the task development for a new profession or a new occupational field.
3. This includes the review of the authors' proposals for the list of rating items to be considered.
The method of rater training is described below. Exact adherence to the methodical procedure ensures that good to very good reliability values are achieved after approximately one day of rater training.

The Trainers The trainers conducting the rater training should have assisted in at least one rater training. They must be familiar with the COMET test procedure and have an exact knowledge of the test tasks and their solution spaces for the respective project. It has proven useful for the training to be carried out by teams of two, and for one of the trainers to have relevant professional/technical and didactical competence.

The Participants in Rater Training The participants in rater training must be teachers or trainers with a teacher/trainer qualification for the respective profession or vocational training course and should have several years of professional experience. The authors of the test tasks as well as the authors of textbooks and experienced examiners bring good previous experience into rater training. Specific knowledge of the COMET test procedure is not required. The number of participants should not exceed 30. The number of raters to be qualified for a COMET project is determined by the following parameters:
• The average rating time for a task solution (after training) is approx. 15 min.
• A double rating (two raters per task solution) is advisable in order to achieve a sufficiently high reliability. Rarely do more than 150 participants take part in a pre-test; the rating can then be handled by a group of 8–10 raters.
• Example: for 600 test participants, each of whom solves one (complex) test task (maximum processing time: 120 min), a double rating requires 300 h of rating time. With a rating time of 10 h per rater, 30 raters are required; with a rating time of 15 h per rater, 20 raters.
• After approx. 4 h of rating (empirical value), each rater should take a half-hour break, as rating requires a high degree of concentration.
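The workload planning behind these parameters can be condensed into a short calculation. The following is a minimal sketch in Python under the assumptions stated above (15 minutes per rating, double rating); the function names are illustrative and not part of the COMET instruments:

```python
import math

# Rater workload planning, following the rules of thumb above:
# every task solution is rated twice (double rating), approx. 15 minutes per rating.

def rating_hours(participants: int, minutes_per_rating: float = 15,
                 ratings_per_solution: int = 2) -> float:
    """Total rating time in hours, one task solution per participant."""
    return participants * ratings_per_solution * minutes_per_rating / 60

def raters_needed(total_hours: float, hours_per_rater: float) -> int:
    """Raters required if each rater contributes a fixed number of rating hours."""
    return math.ceil(total_hours / hours_per_rater)

hours = rating_hours(600)          # 600 participants -> 300.0 hours
print(raters_needed(hours, 10))    # 30 raters at 10 h of rating each
print(raters_needed(hours, 15))    # 20 raters at 15 h of rating each
```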


Organisation Rating is done online. A joint one- or two-day rating schedule has proven to be the best. Upon completion of the online rating, the rating results are available for feedback to the test participants (via the responsible teachers/trainers). Each participant is provided with a rater manual to prepare for rater training.

Contents of the Rater Manual
1. The COMET test procedure.
2. The test tasks with solution spaces.
3. The rating scale.
4. The selected solutions for the trial rating.
5. References.

Structure and procedure of rater training (with approximate times per step)

1. Introduction to the rating procedure:
• The COMET competence and measuring model (approx. 40 min),
• The evaluation criteria and the evaluation procedure (rating) (approx. 40 min).

2. First trial rating in plenary session (approx. 60–70 min). The participants evaluate (rate) the first solution example without consulting other participants; individual evaluations are required. If rater training takes place within the framework of the pre-test, it also serves the didactic evaluation of the test tasks (drafts), the solution spaces and the evaluation criteria selected by the teams of authors, some of which may not be applicable to individual test tasks for reasons of content. For rating scales that have been adapted to a new professional field, participants are asked to check the content validity of the rating items (formulations, etc.) and, if necessary, to make suggestions for corrections. The solution space is used as a working surface.

3. Group rating (approx. 60 min). Following individual rating, groups of five participants are formed, each of which carries out a group rating on the basis of the individual ratings. The following rules apply: (a) The rating items are called up in sequence by a member of the group who moderates the group rating. If the ratings (of digits 0–3) do not differ by more than one value (e.g. 2, 2, 2, 3, 2), this can be considered a consensus; the mean value, rounded up or down, is recorded as the group result. Deviating evaluations can be explained briefly with the use of the solution space. (b) If evaluations differ by more than one value (e.g. 3, 3, 2, 1, 3), the participants with deviating evaluations justify their evaluation standards, and the group agrees on a group value. The aim is to agree on the content of the evaluation criteria; here, too, it is important to use the solution space. (c) Group ratings are not primarily about calculating averages and overruling group members with deviating ratings, but about developing common evaluation standards.

Input of individual ratings and group ratings for presentation and discussion in plenary (approx. 15 min, depending on the number of participants). Once all individual and group ratings are available (cf. rating table: Fig. 5.10), they are followed by the

Evaluation of the first trial rating in plenary (approx. 30 min). The groups present their group results: difficulties in finding group values for individual items; proposals for correcting formulations of items, if necessary; if necessary, proposals for the inclusion of individual items (in deviation from the suggestion of the test authors). The reports of the groups are followed by a discussion of noticeable evaluations; the projected tableau of all individual and group evaluations serves this purpose. Noticeable evaluations are:
• Strong deviations from the mean value (across all items),
• Strong deviations in individual items of individual raters,
• The results of the discussion on the evaluation standards in the groups,
• Rating items for which the ratings of the raters differ significantly (by more than one point value),
• Items that some of the participants do not consider relevant in terms of content.

4. Additional trial ratings. The first trial rating is followed by (usually) three to four additional trial ratings: second trial rating approx. 90 min, third trial rating approx. 60 min, fourth trial rating approx. 60 min. Experience has shown that the duration of the trial rating decreases considerably with each further example.

Closing plenary session (approx. 60 min).

Specifics If rater training serves to qualify raters for the implementation of COMET projects on the basis of test tasks that have already been tested, the reference values of the rating from the project in which the test tasks were developed are also available for the plenary discussions of the rating results. In these cases, the selected test tasks, their solution spaces and the rating items to be applied are already defined.

5.6.3 Calculating the Finn Coefficient

Calculation of Interrater Reliability for the COMET Test Instruments Different coefficients are available for calculating interrater reliability. Apart from the research question, the choice of a suitable coefficient depends primarily on two factors:
1. the number of rating persons,
2. the scale level.

Fig. 5.10 Example of a rating table from rater training for the profession 'industrial mechanic' (first trial rating)

This rating table shows the rating results of the first trial rating of twelve raters and three rating groups. At this stage, the degree of agreement is still very low. In the course of rater training, the degree of agreement increases steadily and converges to values of Finn > 0.75 (Fig. 5.11).


Fig. 5.11 Progress of the rater consensus: forwarding and logistics merchants

In the case of the COMET test instruments, we usually deal with more than two raters during the pilot phase of the projects—especially during rater training, in which up to 40 raters participate and evaluate the same task solutions. For more than two raters, the following three coefficients are suitable:

Fleiss' and Conger's Kappa Kappa is (also) suitable if more than two raters evaluate a person or task and an ordinal scale structure is assumed; both are the case with the COMET instruments. A distinction is made between the exact kappa, also called 'Conger's Kappa', and Fleiss' Kappa. In most cases, Conger's Kappa is slightly higher, so a comparison between Fleiss' Kappa and Conger's Kappa is recommended.
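As an illustration of how such a comparison can be set up, the following minimal sketch computes Fleiss' Kappa with the statsmodels package; the rating data are hypothetical (one row per task solution, one column per rater, scores on the 0–3 item scale). Conger's exact Kappa is not included in statsmodels; it is available, for example, in the R package irr:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical ratings: rows = task solutions, columns = raters,
# values = scores on the four-point COMET item scale (0-3).
ratings = np.array([
    [2, 2, 3, 2],
    [1, 1, 1, 2],
    [3, 3, 2, 3],
    [0, 1, 0, 0],
    [2, 3, 3, 3],
])

# aggregate_raters turns the raters-as-columns layout into the
# subjects x categories count table that fleiss_kappa expects.
table, _categories = aggregate_raters(ratings)
print(fleiss_kappa(table, method="fleiss"))
```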

The Finn Coefficient The Finn coefficient (Fu) is defined as

Fu = 1 − MSW / ((N² − 1) / 12)

where MSW is the average squared deviation of the observed values per item and N is the number of measured values of the rating scale. The Spearman–Brown formula for the rater group is

Fug = (n · Fu) / (1 + (n − 1) · Fu)

with n = number of raters.

Justification for the choice of the measure: Asendorpf and Wallbott (1979) propose the Finn coefficient if the variance of the mean values of the observation units is too small (as in our case). To calculate the Finn coefficient correctly, a distinction must be made between a 'two-way' model and a 'one-way' model. The 'one-way' model assumes that only the persons/tasks to be evaluated are selected at random; the 'two-way' model additionally assumes that the raters are randomly selected. Since the latter is usually not the case, the 'one-way' Finn coefficient is calculated for the COMET model. The advantage of the Finn coefficient is that it remains informative even if there is a high degree of correspondence between the raters; in other words, it is sensitive to small differences between persons.
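Both formulas are easy to implement. The following is a minimal sketch in Python; it reads N as the number of values of the rating scale (four values, 0–3, for the COMET items), which is our interpretation of the definition above, and it computes MSW as the pooled within-item variance of a one-way model:

```python
import numpy as np

def finn_coefficient(ratings: np.ndarray, n_scale_values: int = 4) -> float:
    """'One-way' Finn coefficient. Rows = items, columns = raters.

    MSW: pooled mean squared deviation of the observed values per item;
    (N^2 - 1) / 12 is the variance of a uniform distribution over the
    N values of the rating scale (N = 4 for the 0-3 COMET item scale).
    """
    msw = np.mean(np.var(ratings, axis=1, ddof=1))
    expected_variance = (n_scale_values ** 2 - 1) / 12
    return 1 - msw / expected_variance

def finn_group(fu: float, n_raters: int) -> float:
    """Spearman-Brown extrapolation of the Finn coefficient to the rater group."""
    return n_raters * fu / (1 + (n_raters - 1) * fu)

# Perfect agreement on every item yields Fu = 1.0:
print(finn_coefficient(np.array([[2, 2, 2], [3, 3, 3], [1, 1, 1]])))  # 1.0
```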

Intraclass Correlation Coefficient (ICC) for One-Way and Two-Way Models The ICC is particularly popular due to its implementation in the SPSS statistics software. As with the Finn coefficient, a distinction must be made between a 'one-way' model and a 'two-way' model; the one-way model again assumes that only the persons/tasks to be evaluated are randomly selected. The reasoning is the same as for the selection of the Finn coefficient, so that for the COMET instruments the ICC is calculated for 'one-way' models. Another aspect to consider when calculating the ICC correctly is whether the absolute or the relative agreement of the rating persons is of interest. This also depends on how high the agreement between the rating persons is, so it is advisable to calculate both the 'absolute' ('agreement') and the 'relative' ('consistency') version of the ICC. This aspect is of interest because there could be a high degree of agreement between the raters across all (averaged) items even though the raters differ in some important respects; if only the relative ICC is calculated, there is a risk that these differences cannot be worked out. Accordingly, both the 'conformity' and the 'consistency' version of the ICC are considered below, and the following interrater coefficients are calculated for the COMET instruments:
1. Fleiss' Kappa
2. Conger's Kappa
3. The Finn coefficient ('one-way')
4. The ICC ('one-way') to check the consistency
5. The ICC ('one-way') to check the relative conformity.
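For the 'one-way' case, the ICC can be computed directly from the one-way ANOVA mean squares; since no rater effect is estimated in this model, the consistency/agreement distinction does not arise (which is consistent with the identical 'one-way' ICC columns in Table 5.5). A minimal sketch, assuming the Shrout–Fleiss case 1 formula:

```python
import numpy as np

def icc_one_way(ratings: np.ndarray) -> float:
    """ICC(1,1) for the 'one-way' model. Rows = task solutions, columns = raters.

    ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW),
    with MSB/MSW the between/within mean squares of a one-way ANOVA
    and k the number of raters per task solution.
    """
    n, k = ratings.shape
    row_means = ratings.mean(axis=1)
    ms_between = k * np.sum((row_means - ratings.mean()) ** 2) / (n - 1)
    ms_within = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```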


The following shows how these interrater coefficients differ from one another, using the example of the electricians' test tasks used in the international comparative COMET projects. The correct reading of the data was checked several times using descriptive parameters. The proximity of the unadjusted Finn coefficient (2009) to the 'two-way' Finn coefficient calculated with the statistical software 'R' is striking. However, the 'two-way' Finn coefficient assumes a random selection of raters; this would mean that the 14 (out of 20) raters were randomly selected. The random allocation to the tasks is already given by the 'one-way' calculation. The 'two-way' calculation increases the degrees of freedom and thus leads to a higher Finn coefficient. The table shows that the calculation of the Finn coefficient for the reference values (2009) corresponds exactly to the values of the comparative rating for the skylight control and the drying space. Result: the unadjusted Finn coefficient is the 'two-way' Finn coefficient. It is not suitable for the COMET test procedure, as the raters are not selected randomly (Table 5.5).

Prospect The evaluation for selecting a suitable interrater coefficient is based on more than two raters evaluating a task, as is the case in rater training during the pilot phase of COMET projects. In the actual test phase, however, the solutions of the pupils, trainees and students are always evaluated by two independent raters, so that further coefficients are available for the calculation of interrater agreement. These coefficients and their benefits would still have to be demonstrated for the COMET instruments.

5.6.4

Rating Results

All test tasks (drafts) are tested in the pre-test. The test results are used to measure the competence (competence level and competence profile) of the test participants, distinguished by test task (Fig. 5.12). The profile of a test task and the variation coefficient can be used to estimate the potential requirements of the test task. If the task profiles for a profession vary in their degree of homogeneity, it makes sense to strengthen the sub-competences under-represented in the situation descriptions of the test tasks with corresponding requirement-related information—without including specifications, as these would already be part of the solution. The total point values (ATS) of the test tasks (drafts) for the carpenters show consistently high values. Since four of the test tasks (A1, A2, A3 and A7) have a homogeneous to very homogeneous (A2, A3) task profile, two conclusions are obvious.

Table 5.5 Example for rater training

Rater training South Africa, Electrotechnology (14 raters; 39 items; 4 tasks):

Task | Finn unjust/just (Dr. Erdwien) | Finn 'one-way' ('two-way') | ICC unjust/just (Dr. Erdwien) | ICC consensus ('one-way') | ICC consistency ('one-way') | Fleiss' Kappa | Conger's Kappa
Skylight control | 0.72/0.84 | 0.634 (0.746) | 0.70/0.80 | 0.117 | 0.117 | 0.081 | 0.092
Signals | 0.55/0.70 | 0.400 (0.541) | 0.55/0.65 | 0.054 | 0.054 | 0.107 | 0.119
Drying space | 0.74/0.84 | 0.668 (0.766) | 0.70/0.79 | 0.118 | 0.118 | 0.020 | 0.035
Pebble treatment | 0.80/0.89 | 0.775 (0.859) | 0.58/0.70 | 0.0771 | 0.0771 | 0.093 | 0.105

Comparative rating 2009, Hessen (18 raters; 39 items; Finn_170310):

Task | Finn unjust/just | Finn 'one-way' ('two-way') | ICC unjust/just | ICC consensus ('one-way') | ICC consistency ('one-way')
Skylight control | 0.76/0.82 | 0.758 (0.815) | 0.38/0.45 | 0.376 | 0.376
Drying space | 0.67/0.73 | 0.668 (0.728) | 0.32/0.36 | 0.311 | 0.311
Pebble treatment | 0.74/0.80 | 0.543 (0.802) | 0.50/0.57 | 0.105 | 0.105


Fig. 5.12 Profiles of test task drafts from the carpenters’ pre-test (Figs. 5.17 and 5.18)


1. The team of authors was able to formulate situation descriptions suitable for eliciting homogeneous competence profiles. The task profiles show which sub-competences are not challenged by the situation descriptions; for test task A8, for example, this concerns the sub-competences K3, K5, K6, K7 and K8. 2. With values of 40 and above, the TS represents a rather easy level of difficulty, with the exception of task A4. The test results tend to have an objective level of difficulty which corresponds to the subjective assessment of the difficulty of the tasks by the test group and to the values of its self-assessment. For example, the values for 'difficulty' and 'self-assessment' are 6, which reflects the objective level of difficulty of this task with its TS = 32.5.

Rating Results (Pre-Test) In a first approximation, the total TS (of the pre-test participants) represents the objective difficulty of a test task for the test population represented by the pre-test group. Likewise, in a first approximation, the competence profiles of the test tasks represent the competence profiles of the pre-test participants and, at the same time, the quality of the test tasks. The variation coefficient V indicates whether a test task has the potential to comprehensively test professional competence. The authors of the test tasks and the participating subject teachers decide—taking into account all pre-test results—whether an inhomogeneous competence profile of a test task is due to the competence of the test groups or to weaknesses in the situation description of the test task.
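How V can be read as a profile statistic is shown in the following minimal sketch; it assumes that V is the usual coefficient of variation (standard deviation divided by mean) over the sub-competence scores of a test task, which is our reading of its use in this manual:

```python
import numpy as np

def variation_coefficient(profile) -> float:
    """Coefficient of variation of a competence profile.

    `profile` holds the mean scores of the sub-competences (K1-K8) for one
    test task; a small V indicates a homogeneous task profile.
    """
    profile = np.asarray(profile, dtype=float)
    return profile.std(ddof=0) / profile.mean()

# Hypothetical profile: strong on K1/K2, weak on K5-K8 -> inhomogeneous, large V.
print(round(variation_coefficient([8.1, 7.5, 4.2, 5.0, 2.1, 1.8, 2.5, 2.0]), 2))
```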

Example: Trainees for Shipping and Logistics Services (SLS) The pre-test results of the SLS trainees indicate a special feature. Although the criteria or sub-competences environmental and social compatibility were applied in all test tasks, the competence profiles show a pronounced gap in these sub-competences, which points to a reduced 'technical' understanding on the part of the trainees (Fig. 5.13). Both competence criteria (sub-competences) are, however, of fundamental importance for the SLS; this was confirmed by the project group with reference to the job description and the relevant training regulations. It is remarkable that the teachers, with their extended understanding of professional competence, were able to identify very precisely the reduced professional understanding of their students when assessing the test results (rating): 'The test result is probably due to our own professional understanding.' According to the consistent assessment of the raters' pre-test experiences, however, this professional understanding changed fundamentally with the rater training.

Reliability analyses (Erdwien & Martens, 2009, 70 f.)


Fig. 5.13 Competence profiles of trainees for shipping and logistics services (SLS) (n = 6 left, n = 8 right)

Table 5.6 Reliability analyses for the eight criteria of the evaluation form

Criterion | Rating items | Alpha value
Clarity/presentation | 1–5 | 0.88
Functionality | 6–10 | 0.86
Sustainability | 11–15 | 0.84
Efficiency/effectiveness | 16–20 | 0.82
Orientation on business and work process | 21–25 | 0.87
Social compatibility | 26–30 | 0.84
Environmental compatibility | 31–35 | 0.85
Creativity | 36–40 | 0.90

As the aim is to retain the eight criteria in the further analyses or to combine them into the competence levels 'functional competence', 'procedural competence' and 'shaping competence', a reliability analysis was carried out—in addition to the factor analysis—for the five evaluation items belonging to each criterion, in order to check whether their joint further processing is appropriate. The reliability analyses show the alpha values documented in Table 5.6. If item 20, which does not meet the requirement of sufficient cell occupation, is excluded from the scale, the alpha value of the criterion 'efficiency' deteriorates slightly to 0.80. By contrast, the exclusion of item 35 from the 'environmental compatibility' scale would lead to a slight improvement of the alpha value to 0.86.

In a further step, it was examined which reliability values were achieved by the theoretically assumed competence levels 'functional competence', 'procedural competence' and 'shaping competence', and whether all 40 assessment items add up to the overall construct 'vocational competence'. The relevant results are shown in Table 5.7.

Table 5.7 Reliability analyses for the three assumed competence levels

Competence level | Competence criteria (sub-competences) | Alpha value
Functional competence (DF) | Clarity/presentation, Functionality | 0.93
Procedural competence (DP) | Sustainability, Efficiency/effectiveness, Orientation on business and work process | 0.92
Shaping competence (DG) | Social compatibility, Environmental compatibility | 0.93
Professional competence | All 40 rating items | 0.97

Conclusion: overall, the results of the reliability analyses show a very satisfactory scale stability for each of the eight criteria used for the closer determination of the competence model's competence levels. The reliabilities for the competence levels based on education theory and for the overall construct of vocational competence prove to be very high.
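The scale analysis reported in Tables 5.6 and 5.7 rests on Cronbach's alpha. As a minimal sketch, alpha can be computed as follows (the rating data are hypothetical; in COMET, each criterion scale consists of the five rating items listed in Table 5.6):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha. Rows = rated task solutions, columns = items of one scale.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of the sum score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Five hypothetical items (columns) of one criterion, six task solutions (rows):
scale = np.array([
    [2, 2, 3, 2, 2],
    [1, 1, 1, 0, 1],
    [3, 3, 2, 3, 3],
    [0, 1, 0, 0, 1],
    [2, 3, 3, 2, 2],
    [1, 2, 1, 1, 1],
])
print(round(cronbach_alpha(scale), 2))
```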

5.6.5 Interviewing the Test Participants

Four questions are presented to the pre-test participants for the evaluation of the test tasks; there is also the opportunity for additional comments. How do you assess . . .
1. the comprehensibility of the test tasks? (scale from 0 to 10)
2. the difficulty of the task? (scale from 0 to 10)
3. the practical relevance of the task? (scale from 0 to 10)
4. How well have you solved the task? (scale from 0 to 10)

Comprehensibility When assessing the comprehensibility of test tasks, it must be noted that the comprehensibility of a professional text also depends on the professional understanding and competence of the test participants. When evaluating the pre-test, the project group must therefore assess whether the linguistic formulation or the competence of the participants determines the degree of comprehensibility.


Difficulty of the Test Tasks The level of difficulty of the test tasks is a criterion of secondary importance in an open test format, as open test tasks allow the entire range from weak to very elaborate task solutions. If the open test tasks are based on authentic descriptions of situations characteristic of the test population or the respective occupational field of action, it can be assumed that the test tasks also have an appropriate level of difficulty. The level of difficulty can be changed via the degree of complexity of the situation descriptions.

Example: assessment of comprehensibility and of one's own competence ('How well have I solved the task?') (Fig. 5.14).

Practical Relevance When the practical relevance of the test tasks is assessed by the test participants, it must be noted that the participants should have relevant practical experience. If, for example, both trainees with relevant practical experience and pupils/students of vocational school programmes participate in a comparison project, a test group with practical experience should be selected for the pre-test.

Fig. 5.14 Example: Student assessment of pre-test task 2: Training, guidance and counselling of patients and relatives (COMET project Care professions/Switzerland)

Assessment of the degree of difficulty and the practical relevance of the test drafts (Fig. 5.15).


Fig. 5.15 Example: Assessment of students’ pre-test tasks, test 2: Training, guidance and counselling of patients and relatives (COMET project Care professions/Switzerland)

5.6.6 Selection and Revision of Test Tasks and Solution Spaces

Only the combination of the test participants' subjective evaluation of the test tasks and assessment of their own competence with the objective test results provides a sufficient basis for the selection of suitable test tasks and, if necessary, their revision. The following shows how appropriate test tasks are selected on the basis of the pre-test results and according to which criteria they are finally corrected, if necessary. When evaluating the pre-test results, particular attention must be paid to any contradictions between the subjective assessment of the trainees (e.g. with regard to their own competence) and the objective test results (Figs. 5.16 and 5.17). For example, the results for the shipping clerks show that they consistently assess the degree of difficulty of the tasks as very low (Fig. 5.16), while their objective test results give a clearly different picture: the competence profile is highly one-sided, and the overall score is rather low. A completely different picture results from the pre-test of the carpenters (Fig. 5.17). They also rate the level of difficulty of their test tasks as low, but this corresponds to the high overall score that the pre-test participants achieve, as well as to a considerably more homogeneous competence profile. In this case, it is necessary to increase the complexity of the situation descriptions; this also significantly increases the level of testing requirements for the carpenters. For a summary of the results and the proposal for the revision of the carpenters' test tasks, see Fig. 5.18.


Fig. 5.16 Evaluation results pre-test 2013, profession: Shipping and logistics services

Fig. 5.17 Evaluation results pre-test 2013, profession: carpenter

Comprehensibility: completely given except for Task 2 and Task 8, i.e. no revisions are necessary at this point.
Difficulty: all tasks tend to be too easy (an assessed difficulty of 5.0 points is too low).
Practical relevance: given for all tasks, except for Task 4 to a limited extent.
V coefficient good: A1, A2, A3, A7.
→ Potential test tasks after revision: A1, A3, A6, A7, A8.

Fig. 5.18 Summary of the results and proposal of the scientific support for the selection and revision of the test tasks, profession: carpenter

The difficulty of a test task can be increased by including further and higher requirements in the situation description. It should always be borne in mind that this must be an authentic requirements situation for the test group (test population); this is the only way to ensure the content validity of the test tasks. Such revisions can most likely be achieved by teachers and trainers who already have rating experience (e.g. from participating in the pre-test). It is absolutely necessary to use the competence model as a basis.

5.7 Test Quality Criteria

In recent decades, especially since the establishment of the PISA project, empirical educational research has developed and internationally established methods of competence diagnostics, especially in mathematics and the natural sciences, which now meet high quality standards (measured by the test quality criteria). The methods of competence diagnostics for vocational education and training must be measured against these standards. At the same time, the special features of vocational education and training explained in Chaps. 2 and 3 require an application and interpretation of the established quality criteria that takes these features into account. If this differentiation is excluded and the established test methods used in the PISA and TIMSS projects are simply transferred to vocational training, there is a risk that precise measurement results can be presented which, however, do not match the object of measurement—professional competence. Robert STERNBERG and Elena GRIGORENKO therefore also warn against misunderstandings in the conceptualisation and implementation of 'studies of expert performance': 'Expertise theorists have argued about what it is that makes someone an expert (…). How expertise is acquired, for example, through deliberate practice or skilled apprenticeship. They have failed to consider fully the role of expertise in the development and maintenance of expertise, and indeed, few expertise theorists have used any tests of abilities in their research' (Sternberg & Grigorenko, 2003, VII). This is a sobering balance which shows how high the hurdle is that the development of competence diagnostics for vocational education and training must overcome. In his 'epistemology of practice', Donald SCHOEN unfolds the characteristics of professional competence as a pole of social knowledge opposite to theoretical and scientific knowledge. In contrast to abstract, context-free and purpose-free scientific knowledge, professional competence means 'a way of functioning in situations of indeterminacy and value conflict, but the multiplicity of conflicting views poses a predicament for the practitioner, who must choose among multiple approaches to practice or devise his own way of combining them' (Schoen, 1983, 17). The peculiarities of vocational work and vocational learning developed in Chaps. 2 and 3, summarised here once again from a different perspective, make special demands on the quality criteria of competence diagnostics in vocational education and training. In the relevant methodological manuals of empirical social research, the quality criteria for test procedures—in the tradition of experimental scientific research—are generally listed in the following order: objectivity, reliability and validity.

5.7.1 Objectivity

Objectivity specifies the degree to which a test result is independent of the person conducting the test. A distinction is made between the objectivity of a test's implementation and the objectivity of its evaluation.

Objectivity of Implementation A high degree of implementation objectivity is achieved through standardised information given to the test participants about the goals, the procedure and the evaluation of the tests. A regulation on the feedback of test results to test participants has proven to be particularly important: since the COMET tests are often carried out close to the final examinations, which the test participants regard as highly important for their professional future, the participants' interest in a competence test depends on receiving the test results as soon as possible.

Objectivity of Evaluation The evaluation objectivity of the COMET test procedure is ensured by the rating procedure. However, this only applies if it can be ensured that the raters do not evaluate the tests of 'their own' pupils/students. Evaluation objectivity therefore requires not only the anonymisation of the test documents, but also a sufficiently large sample of test participants.

Reliability (Credibility)

The reliability of a test indicates the degree of accuracy with which professional competence is measured. This represents a special challenge for the COMET test procedure, as professional competence can only be measured with open, complex test tasks. The consequence is that as many different task solutions have to be evaluated as there are persons taking part in the test. A further complication is that the great heterogeneity of the competence characteristics—especially in international comparative projects—places additional demands on the rating. With the help of the tried and tested rater training, sufficiently high values of interrater reliability are regularly achieved.


Validity (Significance) Validity specifies the degree to which a test measures what it is intended to measure. A distinction is made between a large number of validity types, including above all content validity, construct validity, criterion validity and ecological validity. Whether a test is highly objective and reliable is irrelevant if it does not fulfil the criterion of content validity. Why this criterion is often problematic from a psychometric perspective, despite its importance, is immediately obvious: the contents of the test tasks, and above all the quality of the task solutions, can only be assessed by experts qualified for the respective domain. What constitutes the professional competence of a nurse or an industrial mechanic can only be assessed by experts in this profession. BORTZ and DÖRING therefore avoid answering the question of what is important in a professional situation in their method manual: 'Strictly speaking, content validity is therefore not a test quality criterion, but a goal that should be considered in test design' (Bortz & Döring, 2003, 199). Why the authors use the subjunctive 'should' is not clear from their argumentation. Of course, there is no way around compliance with the criterion of content validity; it 'must' (not 'should') be ensured, as otherwise a test would lose its significance. Descriptions of content validity with designations such as 'face validity' refer to the tension between this central and at the same time unwieldy quality criterion and psychometric procedure. Ultimately, however, only the domain experts can decide to what extent a certain characteristic can be assessed as obvious, empirically founded or logically comprehensible.

Validity of Content (Face Validity and Ecological Validity)

If the contents of the test tasks comprehensively capture the construct to be measured (professional competence) in its essential aspects, then content validity is given. This is best achieved when a test task directly represents the characteristic to be measured. In vocational education and training, this is always the case when a test task corresponds to a real vocational task; it is therefore always about a high degree of authenticity. In the COMET test tasks, the reference points in terms of content are the professional fields of action. In this respect, professional work tasks and professional fields of action are regarded as external criteria for the development of complex open test tasks. Content validity can also be recorded numerically, e.g. by a group of professional experts (teachers/trainers) assessing, after rater training, the degree of content validity of a test task for the population to be tested. The question to the experts is: to what degree does the test task (the description of the situation) represent one of the central professional fields of action of a professional skilled worker?


A special feature of determining the validity of open, complex test tasks is that these tasks can always be based on a different level of knowledge. If test subjects of different formal qualification levels (e.g. skilled worker and technician levels) participate in a test, it is necessary to name the primary test group for which the test tasks were developed. A special case applies if a comparative study involves both test participants in dual, vocational and technical school programmes. While, in dual vocational training courses, vocational competence is to be attained at the end of training, school-based vocational training courses are always followed by a phase of familiarisation with the profession. If school-based vocational training providers have an interest in knowing to what extent pupils/students attain employability, participation in comparative projects is justified. As the validity of a test task’s content is always assessed in relation to authentic situations in the respective profession (professional validity) and not in relation to a curriculum, the various forms of vocational training courses can participate in the COMET projects if the representatives of the training courses want to find out to what degree it is possible to convey vocational competence to the pupils/students in a school-based training course. For example, the results of a test for apprentices in the profession of industrial mechanic, in which a test group of students in a dual Master’s programme for mechanical engineers also participated, show that the students rated these test tasks as valid in content. They justified this with the fact that, as future managers, they were also responsible for the quality control of such tasks. The head of the study pointed out that the students all had the professional qualifications of a master craftsman, technician or a comparable profession. The aim of this course of study would be to convey a holistic professional competence profile at the level of senior executives. A continuing education programme with the goal of developing management and junior executives must ultimately enable them to consider the respective overall system in every solution development (Heeg, 2015, 123; Fig. 5.19).

In order to conduct a COMET test in which this course would be the primary test group, it would be important to adapt the competence and measurement model to the qualification profile of managers (ibid., 123 f.). Professional work tasks are not ‘given’ values from which test tasks can be derived, but they are regarded as the reference points for the development of test tasks. However, this plausible assumption proves to be a challenge for the test developers. Professional work tasks are the result of different company organisation and organisational development traditions. If one wants to grasp the specific quality of a professional task and the competence incorporated in it, then this presupposes regarding professional work processes as an expression of work structuring and work organisation. Vocational qualifications and competences therefore result not (only) from objectively given qualification requirements, but from work structuring processes. This also includes the design of human–machine interaction, for example in network- and computer-controlled work systems.


Fig. 5.19 Average competence profile of all test participants in the MScPT 'Industrial Management' course (full time); n = 18, Ø TS = 55.97, Ø V = 0.23

With a professional work task, specific work to be performed by an employee is described in relation to results. It should refer to work contexts that allow employees to understand, execute and evaluate their function and significance for a higher-level operational business process. This determines the degree of competence development. For vocational education and training, it is therefore a question of developing and psychometrically evaluating a skills model (not a difficulty model) ‘which can be used to model how test persons, whose solutions have different levels of responsibility, can cope with open occupational tasks’ (Martens & Rost, 2009, 98).

Criterion Validity

Criterion validity is measured by the degree to which the result of a test measuring a latent characteristic or construct such as professional competence corresponds to the results of a corresponding test or examination procedure. For vocational education and training, the determination of criterion validity is indeed appropriate with reference to the established examination procedures under the Vocational Training Act. However, it should not be forgotten that this examination practice has been critically evaluated for decades—especially with regard to its inadequate validity (cf. Rademacker, 1975). It therefore—conversely—makes more sense to evaluate the criterion validity of the established examination practice, as far as the nationally standardised examination parts are concerned, with reference to the large-scale competence diagnostics of the COMET test procedure. The latter is based on a competence and measurement model which, in turn, is based on educational theory and vocational education. The coordinators of the COMET model test for electronics technicians therefore come to the following conclusion: 'The further development of examinations in industrial occupations will receive new impetus [on the basis of the COMET research results]'¹. The reference point of competence diagnostics is ultimately the professional ability in the professional fields of action. Therefore, tests can only represent a possible external validity criterion for further research if they also refer to the competences incorporated in the professional fields of action, for example in the form of company commissions.

Construct Validity

Construct validity is given if the result of a test procedure precisely and comprehensibly reflects the scope of the construct (e.g. professional competence) to be measured. The degree of construct validity can either be derived theoretically or empirically. Construct validity is particularly important in the psychometric evaluation of test procedures, since no objectifiable values can be given for content validity and no suitable external criterion can be given for measuring competence in vocational education and training. Construct validity can be determined from the examination of hypotheses derived from the target construct: in this case the COMET competence model. The psychometric evaluation of the COMET competence and measurement model aims at construct validity (Martens et al., 2011, 109 ff.; Erdwien & Martens, 2009, 62 ff.; Martens & Rost, 2009). The central object of the first COMET pilot project was the psychometric examination of the competence model with the aim of developing it into a measurement model (Martens & Rost, 2009, 95). In COMET Vol. III, Martens and others present the method of an evaluation procedure with which the construct validity of the COMET test procedure was checked (Martens et al., 2011, 109–126).

¹ From the protocol of the project coordinators of 2.12.2010 (COMET Vol. III, 233). The project coordinators have long-term experience as examiners in carrying out examinations according to the BBiG (skilled worker, journeyman and master craftsman examinations).

5.8 Difficulty Level: A Problematic Quality Criterion for Test Tasks Intended to Measure Professional Competence

5.8.1 Standardised Test Tasks

The introduction of intermediate examinations in dual vocational training in the early 1970s led to a considerable increase in the time and effort required for examinations. The introduction of so-called 'programmed tests' (multiple-choice tasks) was intended to provide a remedy, as this test format allows a rational test procedure and a high degree of fulfilment of the test quality criteria. Two points of criticism are highlighted in particular:

(1) In individual cases, the proportion of correctly guessed answers could be far too high. This could allow candidates who have not in fact attained the competency to pass the exams. The advocates of multiple-choice tasks rightly point out that the randomly corrected score X′ is easy to determine:

X′ = XR − (X − XR) / (m − 1)

i.e. the achieved number of points XR (raw value) is reduced by the difference between the total number of points X and XR, divided by the number of selection answers m reduced by 1.

(2) The proportion of professional skills that can be verified by means of selection responses is very limited. This second point of criticism is more serious.

The design criteria (difficulty index P, selectivity index T and the equal distribution of the solutions among the distractors, i.e. the incorrect response options) are intended to ensure that vocational skills can be validly tested with multiple-choice tasks. In the following, we will examine whether this goal can be achieved. The difficulty index P of an examination task is determined from the relationship

P = 100 · NR / N

where NR stands for the number of participants who solved the task correctly and N for the total number of participants. The selectivity index T results from the relationship

T = (Ro − Ru) · 100 / N

where Ro stands for the number of examination participants from the upper half who completed the task correctly, Ru for the number from the lower half who also solved it correctly, and N for the total number of examination participants. The upper and lower groups are formed by sorting the participants according to their overall examination results and dividing them into an upper and a lower half of equal size.
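These three quantities are simple to compute; the following minimal sketch uses the notation just introduced (the numerical example is hypothetical and merely illustrates the limiting case discussed below, in which an item solved by all participants has zero selectivity):

```python
def corrected_score(x_r: float, x_total: float, m: int) -> float:
    """Guessing-corrected score X' = XR - (X - XR) / (m - 1)."""
    return x_r - (x_total - x_r) / (m - 1)

def difficulty_index(n_correct: int, n_total: int) -> float:
    """Difficulty index P = 100 * NR / N."""
    return 100 * n_correct / n_total

def selectivity_index(r_upper: int, r_lower: int, n_total: int) -> float:
    """Selectivity index T = (Ro - Ru) * 100 / N; the upper and lower halves
    are formed by sorting participants by their overall examination score."""
    return (r_upper - r_lower) * 100 / n_total

# An item solved by every participant: P = 100 and T = 0, hence 'unsuitable'
# by the selectivity criterion, regardless of its professional validity.
print(difficulty_index(200, 200))        # 100.0
print(selectivity_index(100, 100, 200))  # 0.0
```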


The level of difficulty and the selectivity index of the multiple-choice examination tasks are directly related, as shown in Fig. 5.20. Maximum selectivity is achieved at a difficulty index of 50 (medium difficulty). On the other hand, the selectivity index is T = 0 when the difficulty index is P = 0 or 100, i.e. when the task is solved either by all examination participants or by none. Since such tasks are not suitable for distinguishing between 'good', 'less good' and 'bad' examination participants, i.e. they are not valid in the sense of discriminatory validity, they are considered unsuitable examination tasks according to this examination concept. In addition to the ideal line '1', curves '2' and '3' show a course that can be achieved empirically. This is due to the fact that, in the practical application of multiple-choice tasks, there are always exam candidates in the lower group who are

Fig. 5.20 Dependence of selectivity on difficulty index (Lüdtke, 1974)


also able to solve individual difficult tasks and, conversely, members of the upper group who are occasionally unable to solve even easy tasks. The standard work on 'Forschungsmethoden und Evaluation in den Sozial- und Humanwissenschaften' [Research methods and evaluation in the social and human sciences] by Bortz and Döring (2003) states: 'For very easy and very difficult items one will have to accept [...] losses in selectivity. Items with medium difficulties now have the highest selectivity' (ibid., 219). Their conclusion is: 'In principle, the greatest possible selectivity is desirable' (ibid., 219). These high selectivity values are achieved if the test tasks are constructed in such a way that they 'ideally' lie at a medium degree of difficulty (P = 50) or have a degree of difficulty between 30 and 70, or also between 20 and 80 (ibid., 218). For examination tasks that are more difficult or easier, the selectivity index would be too low to distinguish between 'good' and 'weak' examination participants. SCHELTEN therefore comes to the conclusion that test tasks falling outside the framework thus defined 'must be completely revised or replaced by new ones' (Schelten, 1997, 135). In this form of standardised testing, then, what matters is not to check whether a participant has a specific professional ability—that would depend on the validity of the test task—but to construct test tasks in such a way that the given bandwidth of the degree of difficulty and a correspondingly high selectivity value are achieved. These values can be achieved, for example, by adjusting the distractors (the wrong response options) of the multiple-choice tasks. This principle of test construction applies to classical test theory as well as to probabilistic test methods.

In 1975, Hermann Rademacker, commissioned by the Federal Institute for Vocational Education and Training Research (BBF), today's BIBB, drew attention to the fact that professional skills cannot be tested with standardised test tasks. He illustrated this with an example from pilot training. At the end of the training at a 'pilot school', it was regularly checked whether the prospective pilots were able to interpret the displays of the artificial horizon correctly. The test task was:

'For the following display of the artificial horizon, please indicate the flight status of your aircraft!' Correct answer: 'Descending in a left turn' (Rademacker, 1975). All participants in pilot training regularly solved this task correctly. This is not surprising, as reading the artificial horizon is trained on a large number of training flights as well as in the flight simulator. The instructors (experienced pilots) were always very satisfied with this test result: all student pilots had demonstrated an essential aspect of their professional competence (as pilots). The psychometric evaluation of the established test procedure, however, came to the conclusion that this task should be removed from the test or reformulated, as in its present form it would not meet the quality criteria of the relevant test theory: the degree of difficulty and the selectivity index would lie outside the prescribed limits. The task was changed in such a way that a higher degree of difficulty and therefore also a sufficiently high selectivity value were achieved. The reworded task was: 'Please draw the position of the artificial horizon in the illustration (an empty circle symbolising the artificial horizon) which indicates that you are flying a left turn with your plane while ascending.' A sufficiently large proportion of the prospective pilots now solved the task incorrectly, although they had all demonstrated the safe and error-free handling of the artificial horizon during their training flights and in the flight simulator. This example shows that standardised tests are unsuitable for testing professional competence, as the level of difficulty of the test tasks does not result from the complexity of the ability to be tested, but from the manipulation (e.g. by skilful formulations) of the wrong answer options. When testing occupational skills, especially those that are safety-relevant, the use of standardised test tasks is not only unsuitable but also impermissible, since the content validity of the test or examination tasks is not given. For example, it is essential that qualified electricians have a reliable command of the VDE safety regulations for the installation of electrical systems. An examination practice that does not verify this involves incalculable risks, as a successful examination also entails the authorisation to install electrical systems. The examination of professional competence therefore necessarily requires forms of testing and examination that are valid in terms of content (see Rademacker, 1975). If it is not a matter of individual items but of tests with a large item pool, for example in multiple-choice tests, probabilistic modelling with the help of item response theory allows precise statements to be made about the selectivity of entire tests. At the Chamber of Industry and Commerce's final examinations, for example, there was a lack of reliability particularly in the lower part of the results scale, which is, however, the decisive range for passing or failing the final examination (Klotz & Winther, 2012) (Fig. 5.21). The lack of reliability in the lower range of personal competence is due to the use of only a few very difficult or very easy items. A higher selectivity for the respective degrees of difficulty could be achieved by using a correspondingly higher number of items in these areas.
It is also possible to select items individually for each test person from a sufficiently large item pool, with the level of difficulty of each item adapted to the personal competence estimated from the results of the previous items ('adaptive testing').
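To illustrate the selection logic behind such adaptive testing, here is a deliberately simplified sketch; a real implementation would estimate ability with an IRT model such as the Rasch model, whereas this toy version merely nudges the ability estimate up or down after each response:

```python
def next_item(difficulties: dict, ability: float, used: set) -> str:
    """Pick the unused item whose difficulty is closest to the ability estimate."""
    candidates = {item: d for item, d in difficulties.items() if item not in used}
    return min(candidates, key=lambda item: abs(candidates[item] - ability))

def adaptive_test(difficulties: dict, answer_fn, n_items: int = 5) -> float:
    """Toy adaptive test: step the ability estimate after each response."""
    ability, step, used = 0.0, 1.0, set()
    for _ in range(n_items):
        item = next_item(difficulties, ability, used)
        used.add(item)
        ability += step if answer_fn(item) else -step
        step *= 0.7  # shrink the step so the estimate settles down
    return ability

# Hypothetical item pool (difficulty on an arbitrary logit-like scale):
pool = {"i1": -2.0, "i2": -1.0, "i3": 0.0, "i4": 1.0, "i5": 2.0, "i6": 3.0}
print(adaptive_test(pool, lambda item: pool[item] < 1.5))
```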


Fig. 5.21 Capability-specific reliability sum for all test items (Klotz & Winther, 2012, 9)

While this requires the use of a large number of individual items, such items cannot depict work process knowledge and contextual understanding: the solution of complex work tasks is split into knowledge of individual activity steps, so that at best specialist knowledge, but not professional competence, can be validly tested in terms of content. The low confidence of employers in the professional relevance of this type of examination (Weiß, 2011), as well as the trend of modern examination practice to rely on open exemplary tasks in the form of company assignments, are clear arguments against the use of multiple-choice examinations in vocational education and training.

Conclusion The use of standardised test tasks leads to problems with selectivity. If the degree of difficulty of individual tasks is adjusted accordingly, the task thus optimised fails to capture essential contents of professional competence. The optimisation of entire test batteries allows good selectivities, even if individualised or differentiated according to difficulty ranges. However, this requires the division of tasks into a large number of individual items, which in turn leads to a survey of specialist knowledge, but not to the measurement of professional competence. Here again, a fundamentally problematic test procedure is not improved by its optimisation.

5.8.2 Criteria-Oriented Test Tasks

Criteria-oriented test tasks must first of all be valid in terms of content. The criterion validity of a test is measured by whether the test result correctly predicts subsequent behaviour (see above). If, for example, the ability to solve a professional task in a conceptual-planning manner is tested with open, valid test tasks, it is assumed that the test person can also solve this task practically—in the working world or within the framework of a practical examination. Test tasks that meet this criterion are referred to as criteria-oriented test tasks. In this respect, criterion validity concretises the overriding criterion of content validity. The test-theoretical literature points out that content validity cannot be determined numerically and that one is therefore ultimately dependent on the technical and didactic competence of the developers of the test tasks for assessing a test task's content validity. In the COMET projects carried out and planned so far, vocational competence is measured towards the end of vocational training—with reference to the occupational competence to be imparted. The content dimension of the competence model differentiates between professional fields of action for beginners, advanced beginners, advanced learners and experts. This is the basis for the development of test tasks with which professional development can be recorded (→ 5.1, 5.2).

Difficulty of Tasks in a Cross-Professional Comparison As the COMET projects for different occupations are based on the same competence model and the same principles for the design of test tasks (see this chapter), it seems reasonable to assume that this should also enable a comparison of the test results across occupations. If one compares the test results of the model tests (COMET) for the occupations electronics technician (industrial engineering and energy and building technology), industrial engineer and car mechatronic (Fig. 5.22), clear differences in the competence of the occupation-related test groups become apparent.

Fig. 5.22 Competence level of trainees in electronics, industrial engineering and car mechatronics (COMET projects in Hesse)


Table 5.8 Evaluation of the difficulty of the test tasks for electronics technicians, industrial mechanics and car mechatronics by vocational teachers in the fields of electrical engineering, metal technology and automotive engineering, on a scale from 1 to 10

Test tasks for | Electrical engineering | Metal technology | Automotive engineering | Σ | Variance
Electronics technicians | 7.4 | 6.1 | 5.0 | 7.1 | 2.4
Industrial mechanics | 6.0 | 6.0 | 5.8 | 6.1 | 1.3
Car mechatronics | 5.0 | 5.9 | 5.9 | 5.4 | 1.7

Since all teachers evaluated all tasks according to their degree of difficulty, a clear weighting of the 'degree of difficulty' of the test tasks emerges. The test tasks for electronics technicians are rated as the most difficult (7.1), the test tasks for industrial mechanics receive a lower value (6.1), and the test tasks for car mechatronics are classified as significantly less difficult (5.4). The relatively high variance is striking, as the ratings vary considerably. This applies both to the evaluation of the individual test tasks and to the evaluation of the test tasks outside the teachers' own occupational field. A comparison of the teachers' assessments of the difficulty of the individual test tasks with the objective test results—(a) on the basis of all test participants and (b) on the basis of comparable test groups—reveals further discrepancies: the test results on the one hand and the teachers' assessments of the difficulty of the test tasks on the other are relatively far apart (Table 5.9).

Industrial engineers achieve significantly higher test values than, for example, the electronics technicians. This also applies if the test groups of all three occupations are compared with each other on the basis of comparison groups (with comparable previous training). This finding prompted the vocational school teachers involved in these COMET projects to evaluate the difficulty of all test tasks across all three occupations (Table 5.8). While the total point values (TS) for the individual test tasks within the respective groups vary by a maximum of 5.5 points, i.e. are relatively close together, the range of the teachers' assessments of the level of difficulty is considerably wider. At the same time, the teachers' assessments of the difficulty and the actual number of points achieved by the trainees (e.g. for T6) diverge. The teachers involved attributed this, among other things, to the fact that the 'difficulty' was primarily assessed from the limited perspective of subject-systematic criteria, whereas the concept of holistic task solving allows the scope for design to be exploited and thus a higher number of points to be achieved, especially for complex tasks. The inclusion of other professions such as industrial tradesmen, medical assistants and carpenters in the comparisons of the difficulty of the test tasks results in further differentiations (Fig. 5.23).

Table 5.9 Total point values (TS) for all test tasks, together with the difficulty ratings by teachers of the subjects electrical engineering, metal technology and automotive engineering

Profession                Test task   TS     Electrical    Metal        Automotive
                                             engineering   technology   engineering
Electronics technicians   T1          23.1   6.5           4.8          6.0
                          T2          22.1   7.1           6.0          7.6
                          T3          26.6   7.6           7.0          7.4
                          T4          27.1   8.5           6.8          6.6
Industrial mechanics      T5          35.5   5.8           5.8          6.2
                          T6          40.3   6.4           6.2          7.4
                          T7          36.3   6.8           6.8          7.2
                          T8          41.8   5.4           5.3          5.4
Car mechatronics          T9          34.5   5.4           5.8          8.2
                          T10         35.8   6.8           6.4          5.7
                          T11         38.7   4.4           6.4          5.1
                          T12         32.8   6.2           7.0          6.6
                          T13         33.2   4.3           5.6          5.1
                          T14         37.7   3.9           4.4          3.9


Fig. 5.23 Competence level of apprentices in industrial mechanics, industrial clerks, medical assistants and carpenters, COMET projects in NRW

For example, at the end of their training, the qualification level of industrial clerks is rated as equally high as or even higher than that of graduates of relevant bachelor's degree programmes, which are formally rated higher according to European standards. Training practice shows that occupations predominantly chosen by high-school graduates have a higher qualification level than occupations predominantly chosen by secondary school students (Fig. 5.24). The preparatory schooling of trainees in an occupation therefore also reflects its level of requirements (its qualification level); this is echoed in the saying that industrial clerk is a typical occupation for high-school graduates. From this perspective, the training occupations can be distinguished according to their 'difficulty'. The wording 'degree of difficulty' is avoided here, as the calculation of a degree of difficulty for occupations would not do justice to the complexity of the occupational concept. Howard Gardner has pointed out that each profession has its own quality: 'Take a journey through the world in spirit and look at all the roles and professions that have been respected in different times and cultures. Think of hunters, fishermen, farmers, shamans (...) sportsmen, artists (...) parents and scientists (...). If we want to grasp the whole complex of human cognition, I think we must consider a far more comprehensive arsenal of competencies than usual' (Gardner, 1991, 9). With the concept of multiple intelligences, Gardner tries to do justice to the variety of different abilities, which also find expression in the competence profiles of different professions.


Fig. 5.24 Competence level of apprentices for industrial clerks and electronics technicians for energy and building technology

The various profiles of intelligence and skills expressed in the occupations are superimposed, in the practice of vocational education and training, by the development of occupations with different qualification levels. For example, the two-year assistant occupations (e.g. nursing assistants) are regarded as occupations with lower qualification requirements and thus also as less difficult to learn. Subjectively, the level of difficulty in learning the occupation of industrial clerk (IC) will probably be experienced as just as appropriate by a high-school graduate as by a trainee with a weak secondary school leaving certificate learning a two-year assistant occupation. According to the Vocational Training Act, vocational training follows full-time compulsory schooling; a differentiation of occupations according to previous schooling is not provided for. The 'two-year occupations' are an exception: they are considered 'theory-reduced'. In Switzerland, the few trainees in such occupations receive a certificate that enables them to continue their training in a related 'fully fledged' occupation; the German BBiG does not provide for this differentiation. Informally, these occupations are considered 'training occupations'.

Test-Theoretical Problems for Vocational Education

1. Competence diagnostics (and testing) generally relates to established training occupations. In countries with a dual vocational training system, the job profiles are 'classified' on the basis of statutory regulations. Different levels of requirements can at best be read off the length of training: four years of training (as in Switzerland and Austria, for example) is considered an indication of a 'more difficult' occupation than


a profession with a three-year training period. The reference point for the development of test tasks with which employability is examined is the set of competences defined in the job description. In international comparisons such as the International World Skills (IWS), the professional experts of the participating countries agree on the fields of action relevant to professional practice and on the criteria of professional competence for the respective occupation; these form the basis for the formulation of complex competition tasks (Hoey, 2009, 283 ff.). A similar procedure was developed for the international comparative COMET projects. On this basis, professional capacity can be checked at the end of vocational training. Measuring competence during the course of vocational training (for beginners, advanced beginners, advanced learners and experts) is also possible in principle. However, difficulties always arise when vocational training courses are included which differ in the content structure of their curricula or training regulations. This is the case, for example, if the development of competences during a training course structured according to learning fields is to be compared with one structured according to a subject system. This difference is irrelevant for the verification of employability at the end of training. Competence diagnostics makes it possible to compare programmes with different curricular structures if they pursue the goal of promoting the trainees/students on their way to professional competence.

2. The 'level of difficulty' of the test tasks is of secondary importance for open test tasks. The decisive criterion for the quality of open test tasks is their professional validity and therefore their authenticity and their representativeness for a profession's fields of action. The competence level of a test participant consequently does not depend on the level of difficulty of the test tasks but, firstly, on the ability to solve the open (complex) test tasks completely and, secondly, on the professional justification of the solutions to a test task. A distinction is made between the levels of action-guiding, action-declarative and action-reflecting work process knowledge (→ 3.5.4). This difficulty component is likewise not a characteristic of the test task but an ability characteristic. The test task therefore always contains the request to provide detailed and comprehensive reasons for the task's solution. This concept of difficulty is realistic because the test tasks can be solved not only at different competence levels but also at different levels of knowledge. At the first level of knowledge, it is only important that the future specialists (completely) solve or process the tasks assigned to them on the basis of the rules and regulations to be observed. In companies with a flat organisational structure, in which a high degree of responsibility and quality assurance is shifted to the directly value-adding work processes, it can be assumed that the specialists can understand and explain what they are doing. Equally typical are situations in which, for example, journeymen from an SUC company advise their customers on the modernisation of a bathroom or heating system (at the level of action-reflecting knowledge) in such a way that the customers have the opportunity to make a well-founded choice between alternative modernisation variants.


Test situations are also conceivable in which technical college or university students from relevant fields document specialist knowledge going beyond initial vocational training when explaining their proposed solution. In an ability-based test concept such as COMET, the level of the test result therefore does not depend on the 'degree of difficulty' and the selectivity of test items, but on the test subjects' ability to solve the complex, open task more or less completely and to justify the solution comprehensively. The 'degree of difficulty' of a test task for a test population results solely from the validity of the situation description and from the guiding idea for vocational training: the ability to participate in shaping the working world and society in a socially and ecologically responsible manner (KMK, 1991, 1999). The guiding principle of design competence places high demands on vocational education and training, and the COMET test procedure allows the degree to which this guiding principle is implemented in vocational training practice to be measured.

3. Inter-occupational comparisons of the 'level of difficulty' of test tasks. Assuming that the procedures described and justified in this chapter are used as the basis for developing the COMET test tasks, the test results represent (in the case of a representative test) the competence level and competence profile of the test population as well as the heterogeneity of the competence level of the test participants within and between the test groups. This shows whether and at what level the prospective skilled workers have attained professional competence. This result also says something about how difficult the profession in question is to learn. For example, the result of the second main test of a longitudinal study of nursing school students (Switzerland) shows a very high proportion of students who achieve the highest competence level compared to other occupations (Fig. 5.25).

Fig. 5.25 Competence distribution of nursing occupations, Switzerland, second main test 2014 (on differentiating competence levels according to knowledge level → 8.2.2)


Students and lecturers had the opportunity to reflect on the weaknesses of their training identified in the first main test, one year earlier, and to introduce more forms of learning in line with the learning field concept. The context analysis, the project results, the longitudinal study and the results of the feedback discussions with this project group (the coordinators of the VET centres involved in the test) were the basis for the interpretation of the test result:

1. The test tasks are based on an authentic, valid and representative description of the situation (result of the pre-test).
2. The test tasks are classified as adequate and demanding by the lecturers/subject teachers and the students.
3. The high level of competence achieved on the three-year dual technical college course is an expression of the high quality of this course (Gäumann-Felix & Hofer, 2015; Rauner, Piening, & Bachmann, 2015d).

This also means that the very high proportion of test participants (representative of the test population) achieving a high or very high level of competence cannot be interpreted as an indicator of a low level of difficulty of the test tasks.

In an ability-based test concept, the test authors must agree on the formulation of the solution space and the raters on the rating criteria for the rating items. When formulating the solution spaces, it is important to define the space of possible task solutions in relation to all relevant solution criteria. The authors of the test tasks and the solution spaces orient themselves towards their picture of the primary test population. If, for example, the authors (teachers) teach both trainees and technical college students and the primary test population is not explicitly defined, a requirement level can subjectively arise that corresponds to the technical college rather than to the dual courses of education, or vice versa. It is therefore necessary to define the primary test population accurately.

Difficulties arise in international comparative projects if, for the same professional activities (e.g. for nursing professionals), the participating countries provide training in upper-secondary (dual and full-time vocational school), post-secondary (technical college) or tertiary education programmes. In these cases, inaccuracies can only be avoided by carrying out the rater training on the basis of the solution examples and the reference values of the rater training for the primary test group. Only then is it certain that the raters apply the same standards when interpreting the rating items in relation to the individual test tasks. If this prerequisite is met, different courses of education at different qualification levels which qualify (or are intended to qualify) for the same or a comparable professional activity can be compared with each other. Cross-professional comparisons of competence levels as a yardstick for the difficulty of the test tasks, on the other hand, are possible only to a very limited extent.


The empirical data available on this subject show that assessment standards shaped by different vocational training traditions often lead to widely divergent rater assessments at the beginning of rater training. Rater training makes it possible to arrive at a final evaluation of the task solutions with a high level of agreement.

5.8.3 The Variation Coefficient V: A Benchmark for the Homogeneity of the Task Solution

The quality of a COMET test task is proven by its potential to measure the degree of completeness and homogeneity of professional competence. To quantify more or less complete task solutions, the variation coefficient V is determined:

V = SD(C1, ..., C8) / Mean(C1, ..., C8)

V is a benchmark for the degree of homogeneity of the task solutions. It is calculated by dividing the standard deviation of the eight competence values by their mean, based on the sub-competences valid for a test task. If a situation description (of a test task) contains the potential for a homogeneous task solution, then it is suitable for measuring competence profiles, i.e. the ability to produce a complete task solution. This prerequisite was met for the test tasks used to measure the competence profiles in Fig. 5.26.
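For illustration, here is a minimal sketch of this calculation in Python. It is not part of the COMET toolchain; the profile values are invented, and the sample standard deviation is assumed, since the text does not specify which variant is used.

    import statistics

    def variation_coefficient(profile):
        """V = SD(C1..C8) / mean(C1..C8) for one task solution.

        `profile` holds one aggregate score per competence criterion;
        criteria that are not valid for the task are simply omitted.
        """
        mean = statistics.mean(profile)
        if mean == 0:
            raise ValueError("V is undefined for an all-zero profile")
        return statistics.stdev(profile) / mean

    # A flat (homogeneous) profile yields a small V ...
    print(variation_coefficient([2.1, 2.0, 2.2, 1.9, 2.0, 2.1, 2.0, 2.2]))
    # ... an unbalanced profile yields a considerably larger one.
    print(variation_coefficient([2.8, 2.9, 0.4, 0.5, 1.0, 0.6, 0.7, 0.9]))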

Fig. 5.26 Example of a homogeneous and inhomogeneous competence profile of two commercial professions


5.8.4 Conclusion

The state of the scientific discussion on the 'degree of difficulty' of test tasks in COMET projects can be summarised in the following findings.

1. The COMET procedure of competence diagnostics is not a difficulty-level test procedure but an ability-based test procedure.
2. The competence levels of test groups of different professions are an expression of
   • job-specific requirement levels,
   • different (scholastic) backgrounds,
   • different training periods (2-, 3- and 4-year courses),
   • the more or less successful implementation of the guiding principle of 'shaping competence' (the learning field concept).

Chapter 6: Psychometric Evaluation of the Competence and Measurement Model for Vocational Education and Training: COMET

6.1 What Makes it So Difficult to Measure Professional Competence?

The complexity of professional tasks usually requires a holistic solution. Specifically, this means that different competences must interact successfully in order to constitute professional competence together. One possible description of these competences is the eight criteria of the complete task solution (clarity/presentation, functionality, sustainability, economic efficiency, work and business process orientation, social compatibility, environmental compatibility and creativity), which have frequently been described above (→ 4). The most important question for the measurement of professional competences is how the interaction of these eight criteria can be described mathematically and transferred into a suitable measurement model (Suppes & Zinnes, 1963).

Mathematically, the interplay of several criteria (→ 6.3) must be described as a possible interaction of a higher order. If, for example, four criteria are involved, then all possible interactions of these four criteria must also be considered (cf. Martens & Rost, 2009). The theoretical requirement of a holistic task solution therefore determines the search direction for the description of a suitable measurement model. The first step is to check which higher-order interactions can be identified. Such complex models can be systematically simplified if this is theoretically and mathematically feasible; the simplification then facilitates the representation and interpretation of such models. Thus, if an empirically tested measurement model can represent higher-order interactions which subsequently cannot be empirically identified, then the simpler measurement model, which excludes exactly these interactions, can also be used. Unfortunately, this search direction cannot be reversed: it is not possible to conclude from the fit of a model that such higher-order interactions can be excluded


a priori. This statement can be supported by two main arguments. First, there is always a (possibly very small) chance that a better measurement model can be found. Second, it is not possible to define clear criteria for the fit of measurement models; these criteria are also subject to normative assumptions which cannot be discussed here (cf. Bozdogan, 1987). Therefore, if the theoretical assumptions require it, an active search should at least be made for a model that can mathematically represent the assumed higher-order interactions. If higher-order interactions must actually be assumed, then simpler models suggest a comparability, for example of the persons investigated, that is not actually given. Higher-order interactions result in qualitatively different ability profiles that are no longer directly comparable (Erdwien & Martens, 2009).

In summary, the interaction of the eight criteria of the complete task solution determines the search direction for the measurement model to be identified: from simple to complex. A further argument specifying this search direction results from the rough allocation of the eight criteria to three levels of professional competence (Rauner, Grollmann, & Martens, 2007). This allocation gives rise to two possible development paths for professional expertise: a gradual development, in which the three levels develop in succession, and a simultaneous development, with a more or less continuous increase in all three areas. If both development paths are to be allowed, this also has implications for the selection of a suitable measurement model. In particular, the idea of the successive formation of competence levels gives rise to different qualitative competence profiles, which in turn can only be described by allowing higher-order interactions.

It must be noted at this point that there can be no unambiguous identification of a measurement model. Ultimately, one has to decide on a suitable model, with a transparent presentation of the corresponding selection criteria; it is often necessary to balance contradictory selection criteria against each other. In the following, we therefore examine which statistical approaches and measurement models are statistically suitable to meet the requirements discussed here.
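To make the combinatorics behind this search direction concrete, the following small Python sketch (illustrative only) enumerates every interaction of order two or higher that a saturated model over four criteria would have to consider in addition to the four main effects:

    from itertools import combinations

    criteria = ["K1", "K2", "K3", "K4"]  # e.g. four of the eight COMET criteria

    # All interactions of order >= 2; a saturated model would contain
    # these terms in addition to the four main effects.
    interactions = [terms
                    for order in range(2, len(criteria) + 1)
                    for terms in combinations(criteria, order)]

    print(len(interactions))          # 11 = 2**4 - 4 - 1
    for terms in interactions:
        print(" x ".join(terms))

With all eight criteria, the count rises to 2**8 - 8 - 1 = 247 interaction terms, which illustrates why such saturated models are simplified whenever this is theoretically justifiable.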

6.1.1 Procedures Based on the Analysis of the Covariance Matrix

Factor analyses and many related procedures are usually based on an analysis of the covariance matrix, i.e. on pairwise correlations between two variables at a time. Historically, the continuing popularity of these methods is mainly due to the fact that they could be calculated easily, even without the help of computers. However, this mathematical simplicity comes at the price of a severely limited informative value of the resulting models. The data structure based on the covariance matrix, a bivariate network of relationships, excludes higher-order interactions between the variables a priori. The corresponding models are therefore only conditionally suitable for the


analysis of professional competences. Factor analyses are, in relation to the covariance matrix, an attempt to bundle variance components and thus present them more simply. The structural statements from these simplification processes also depend strongly on the characteristics of the respective sample. Even minor changes in the composition of the sample can change the factor structure. This follows directly from the mathematics: changes in the covariance matrix must lead directly to a change in the higher-level factors of the factor analysis. Such direct dependence requires careful sampling. An unstable factor structure can already be expected if, for example, part of the drawn sample is not taken into account. If certain differences in the sample are systematically reduced, this can lead to an apparent increase in the resulting number of factors, because a general factor can then combine less variance. This is an important difference to the family of mixed distribution methods, which are largely structure-invariant to changes in the sample. The empirical results of procedures based on a covariance matrix must therefore be interpreted very carefully with regard to the measurement of vocational competences; this particularly concerns Sects. 6.4, 6.5 and 6.6. The disregard of higher-order interactions and the sample dependence of the results limit the informative value of the results.

6.1.2 Mixed Distribution Models

Mixed distribution models separate the population to be analysed into homogeneous subpopulations. Among them, a distinction can be made between methods in which the persons within a subpopulation can be further differentiated (the mixed Rasch model) and methods in which it is simply assumed that persons assigned to a common subpopulation also have the same personal characteristics (latent class analysis). These methods generally meet the theoretical measurement requirements resulting from a holistic task solution: the separation into subpopulations corresponds mathematically to a simultaneous consideration of all higher-order interactions. Some disadvantages of these procedures must of course be mentioned:

1. Persons assigned to different subpopulations may no longer be directly comparable. This would restrict some practical applications, such as selection decisions, since the profiles can no longer be reduced to a single criterion. Persons can only be directly compared if the competence profiles of the assigned subpopulations run in parallel and without overlaps.
2. In addition, the statistical determination of the most suitable parameters is challenging. While the selection of most model parameters can follow practical considerations, increasing the number of subpopulations in particular leads to an improved fit between model and data. This relationship naturally applies to all measurement models: if the number of model parameters is increased, the data fit must improve. In this case, the use of so-called fit indices such as the AIC (see Bozdogan, 1987) only helps to a limited extent, because the penalty for additionally

150

6 Psychometric Evaluation of the Competence and Measurement Model for. . .

invested parameters is, of course, a normative setting. A possible way out is the use of bootstrapping (Efron & Tibshirani, 1994; von Davier, 1997; von Davier & Rost, 1996), in which a test distribution is generated from artificial data sets under the assumption that the model is valid. The fit of the empirical data set can thus be roughly estimated in comparison with the artificial data sets generated under model validity. Ultimately, however, one should not rely on a single criterion for the quality of the model fit; careful consideration of several criteria with reference to the theoretical foundations certainly makes sense. For example, the mean class-membership probability is a good measure of the stability of a solution, even if this measure can be used only to a limited extent to compare different solutions with each other.
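The logic of such a bootstrap test can be sketched in a few lines of Python. The sketch assumes two user-supplied callables, fit and simulate, as placeholders for a concrete latent class implementation; they are not part of any particular library.

    import numpy as np

    def bootstrap_p_value(fit, simulate, data, n_boot=200, seed=0):
        """Parametric bootstrap of a goodness-of-fit statistic.

        fit(data) -> (model, statistic), e.g. Pearson X^2;
        simulate(model, rng) -> an artificial data set of the same size,
        drawn under the assumption that the fitted model is valid.
        """
        rng = np.random.default_rng(seed)
        model, observed = fit(data)
        stats = [fit(simulate(model, rng))[1] for _ in range(n_boot)]
        # Share of artificial data sets that fit *worse* than the real
        # one; a large share means the model cannot be rejected.
        return float(np.mean(np.asarray(stats) >= observed))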

6.2 Ensuring the Interrater Reliability of the COMET Test Procedure

Reliability is the degree of measurement accuracy of a test procedure. If a test procedure is based on a rating procedure involving a large number of raters, interrater reliability is regarded as an indicator of the reliability of the test procedure. Determining reliability is particularly relevant when measuring occupational competence with open, complex test tasks (as in the COMET project). If it can be proven that a very high interrater reliability is also achieved in international comparative surveys (here Germany, China, South Africa), then this is an important indicator of the quality of the test procedure (see Zhuang & Ji, 2015). The basis for achieving high reliability in the COMET project is a tried and tested concept for rater training (Haasler & Erdwien, 2009; Martens et al., 2011) and the development of solution spaces for the test tasks by their authors (cf. Martens et al., 2011, 104 ff.).

6.2.1 Example: Securing Interrater Reliability (COMET Vol. I, Sect. 4.2, Birgitt Erdwien, Bernd Haasler)

In order to put the proof of interrater reliability on a solid basis, a sample was drawn in advance from the test-person solutions of the main test and submitted to all 18 raters for evaluation. For each of the four test tasks, two test-person solutions were used for the rating. Each rater in the team was therefore confronted with eight solution variants, which had to be evaluated and assessed. The advance rating thus produced a database of 144 individual ratings.


As a result of the response format for the evaluation of the test tasks and the number of raters, the Finn coefficient was chosen as the quality measure for the evaluation of interrater reliability. Strictly speaking, this is a measure that requires an interval scale level, and for its use the data must meet the requirements for calculating analyses of variance. Although the available data are ordinally scaled rating data, a rating scale can be treated as an interval scale provided that the characteristic values are equidistant, i.e. the numbering of the different characteristic values differs by equal steps. In addition, Bortz and Döring (2002, 180 f.) point out that the mathematical requirements for the analysis of variance say nothing about the scale properties of the data. This means that parametric procedures can be used even if the data are not exactly interval-scaled, provided the other prerequisites of the procedure are met.

The criteria (a) independence of observations, (b) normal distribution of observations and (c) homogeneity of variances are the main prerequisites for the feasibility of an analysis of variance. A violation of the independence criterion has serious consequences, whereas the analysis of variance is robust against violations of the normal distribution or variance homogeneity criteria. In the present case, the observations are independent: vocational schools are only the functional units in which all students can be tested together. Practical vocational skills, however, are primarily developed in the training companies due to the dual organisation of training, so that independence is ensured by the allocation of students to different training companies. In addition, the vocational students of the various school classes solve the tasks assigned to them individually, whereby all four test tasks are distributed equally among the students within each class, which prevents attempts at cooperative work or 'copying'. The raters, too, evaluate the task solutions strictly independently of each other; they are not in contact with each other at any point during the assessment process.

An explorative data analysis also showed that 33 of the 40 items meet the criterion of variance homogeneity. The seven non-variance-homogeneous items were nevertheless included in the main analysis; however, once the overall data set was available, they were subjected to another critical examination with regard to their variance homogeneity before further constructive analyses were carried out. The interrater reliabilities were calculated both including and excluding these seven items.

To test empirical data for the presence of a normal distribution, various graphical (e.g. histograms, P-P plots) as well as statistical (e.g. Shapiro-Wilk test, Kolmogorov-Smirnov test) methods can be used. In this study, so-called P-P plots were generated. These represent the expected cumulative frequencies as a function of the actual cumulative frequencies; the normal distribution probability was calculated using the Blom method. Figure 6.1 gives an example of the P-P plots generated in this way. While the P-P plots indicate the existence of normally distributed data, this could not be confirmed by the statistical methods. Due to the robustness of the analysis of variance against violations of the normal distribution criterion, the use of the Finn coefficient for the interrater reliability calculation was, however, not discarded.
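The normality checks described here can be reproduced with standard tools. The following Python sketch (with invented rating data, not the study's data) runs the two statistical tests and builds a P-P plot with Blom plotting positions:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    rng = np.random.default_rng(1)
    ratings = rng.normal(loc=1.5, scale=0.6, size=120)  # invented data

    # Statistical normality tests mentioned in the text:
    print(stats.shapiro(ratings))                       # Shapiro-Wilk
    print(stats.kstest(ratings, "norm",
                       args=(ratings.mean(), ratings.std(ddof=1))))

    # P-P plot: expected vs. observed cumulative frequencies,
    # with plotting positions after Blom: (i - 3/8) / (n + 1/4).
    x = np.sort(ratings)
    observed = (np.arange(1, x.size + 1) - 0.375) / (x.size + 0.25)
    expected = stats.norm.cdf(x, x.mean(), x.std(ddof=1))
    plt.plot(expected, observed, ".")
    plt.plot([0, 1], [0, 1])  # diagonal = perfect normality
    plt.xlabel("expected cumulative probability")
    plt.ylabel("observed cumulative probability")
    plt.show()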
With regard to the use of the Finn coefficient, it should be noted that it is generally regarded as a rather lenient measure. It poses the danger that the reliability it attests can


Fig. 6.1 P-P-Plot for the evaluation of the item ‘Is the solution functional?’

positively distort the degree of actual agreement between the raters. Therefore, the calculation of the intraclass correlation (ICC), as a stricter evaluation method, had to be considered to prove interrater reliability. Here, however, it again proves to be a problem that a small variance of the item mean values means that 'no or no significant reliability can be measured with the ICC for a measuring instrument' (Wirtz & Caspar, 2002, 161). Although a lower ICC value is considered acceptable in this case, it is difficult to determine a clear anchor point for the boundary between satisfactory and inadequate interrater reliability. Compared to the ICC, the Finn coefficient is 'apparently independent of the variance of item averages' (Asendorpf & Wallbott, 1979, 245). The variances of the item averages in the available data prove to be small to very small: they range between 0.00 and 1.02, with a mean dispersion of 0.37. The Finn coefficient is therefore useful here as a reliability measure.

Several models are available for calculating reliability with the Finn coefficient. Because, for the purpose of determining reliability,

1. each subject is assessed on the basis of 40 items, and
2. each rater of the entire rater team makes these assessments, i.e. the raters are not randomly selected from a larger rater population,


a two-factor model of reliability measurement (raters fixed) is selected (cf. Shrout & Fleiss, 1979). Furthermore, a decision must be made as to whether an unadjusted or an adjusted reliability should be used as the measure. The unadjusted reliability reflects the degree of agreement between the raters, while the adjusted reliability does not treat average differences between the raters as a source of error and thus does not take the raters' personal frames of reference into account. According to Wirtz and Caspar (2002) (in relation to the ICC) and Shrout and Fleiss (1979), a decision criterion for the use of the unadjusted or the adjusted reliability calculation lies in the properties of the rater test. Since all raters are to assess all subjects and the reliability statement is to apply exclusively to the raters belonging to the sample, an adjusted value can be used as the reliability measure in this case. A reliable assessment can be assumed if the differences between the subjects (here the students) are relatively large and the variance between the observers with respect to the subjects is relatively small.

The Finn coefficient can take values between 0 and 1.0. A value of 0.0 expresses that there is no correlation between the rater assessments, while a value of 1.0 arises when the raters have both equal mean values and equal variances. The closer the value is to 1.0, the higher the reliability of the assessments. For the Finn coefficient, values from 0.5 to 0.7 can be described as satisfactory and values above 0.7 as good. Due to its low stringency, only Finn values indicating high interrater reliability are considered acceptable in the present study, i.e. only Finn values of at least 0.7 are considered sufficient. Table 6.1 shows the results of the reliability calculations for the eight test-person solutions that were given to the 18 raters for evaluation following the rater training, displayed both including and excluding the seven non-variance-homogeneous items.

Table 6.1 Interrater reliabilities in the advance sample of the population

Subject code   Test task          All 40 items    Excluding the seven non-variance-
                                  (Finn adj.)     homogeneous items (Finn adj.)
H0102          Skylight control   0.70            0.71
H0265          Skylight control   0.76            0.75
H0225          Signals            0.80            0.79
H0282          Signals            0.78            0.78
H0176          Drying space       0.74            0.74
H0234          Drying space       0.71            0.71
H0336          Pebble treatment   0.87            0.86
H0047          Pebble treatment   0.73            0.73


All Finn coefficients thus lie in the range of high reliability, i.e. the target criterion of 0.7 defined for this study is achieved or exceeded. Even if the seven non-variance-homogeneous items are excluded from the calculations, the results remain largely stable. All in all, the interrater reliabilities can be described as satisfactory.
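As an illustration of the measure itself, the following Python sketch computes the basic (unadjusted) Finn coefficient for a small invented rating matrix. The adjusted two-factor variant used in the study additionally removes mean differences between raters; this sketch shows only the basic form.

    import numpy as np

    def finn_coefficient(ratings, k=4):
        """Basic Finn coefficient for a (subjects x raters) matrix.

        1 minus the mean within-subject rater variance, relative to
        the variance expected for purely random ratings on a scale
        with k categories: (k**2 - 1) / 12 (= 1.25 for a 0-3 scale).
        """
        ratings = np.asarray(ratings, dtype=float)
        ms_within = ratings.var(axis=1, ddof=1).mean()
        expected = (k ** 2 - 1) / 12.0
        return 1.0 - ms_within / expected

    # Two raters, six subjects, ratings on the 0-3 scale (invented):
    matrix = [[2, 2], [1, 1], [3, 2], [0, 1], [2, 2], [3, 3]]
    print(round(finn_coefficient(matrix), 2))  # 0.87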

6.3 Latent Class Analysis of the COMET Competence and Measurement Model

Thomas Martens

The psychometric review of a competence model aims to examine whether and how it has been possible to translate the theoretically and normatively grounded competence model into a measurement model. Thomas Martens and Jürgen Rost point to the complexity of the psychometric evaluation of a competence and measurement model for vocational education and training, and to the state of psychometric evaluation practice, which is particularly challenged here. They describe the 'approximation process' between the theoretical creation of models and their gradual verification with various measurement models (Martens & Rost, 2009, 96 ff.; Rost, 2004b):

• competence models as psychometric models of one-dimensional competence levels,
• competence models as psychometric models of cognitive sub-competences,
• competence models as psychometric models of level-related processing patterns.

The central object of the first COMET model experiment was the psychometric examination of the competence model with the aim of developing it into a measurement model (ibid., 95). In COMET Vol. III, Martens and others present the method of an evaluation procedure with which the construct validity of the COMET test procedure was checked (Martens et al., 2011, Ch. 4.3, 109–126) (Fig. 6.2). The competence and measurement model, based on educational theory and normative pedagogy, has all the required psychometric quality characteristics.

In the discussion about the external validity of the test procedure, the question is also whether the modelling of professional competence has been successful and whether the professional skills and knowledge of experts can be adequately represented, in their development from beginner to expert, with the competence model. For this purpose, a comprehensive justification framework was developed in the COMET project, which has proven highly compatible in the international comparative studies conducted to date. To answer the question of how professional competences can be measured, some fundamental questions of test theory and measurement theory must first be discussed.

Fig. 6.2 Source references for the quality of the COMET test procedure (COMET Volumes I to IV):

• Objectivity. COMET Vol. I (2009): Haasler, Erdwien: Ch. 5; Haasler et al.: Ch. 4. COMET Vol. II (2009): Heinemann et al.: Ch. 2; Rauner, Maurer: Ch. 3; Piening, Maurer: Ch. 4. COMET Vol. III (2011): Martens et al.: Ch. 4.1; Maurer et al.: Ch. 5.
• Reliability. COMET Vol. I: Martens, Rost: Ch. 3.5; Haasler, Erdwien: Ch. 5.1, 5.2. COMET Vol. II: Erdwien, Martens: Ch. 3.2. COMET Vol. III: Martens et al.: Ch. 4.2.
• Validity. COMET Vol. I: Rauner et al.: Ch. 1, 2; Martens, Rost: Ch. 3.5. COMET Vol. II: Erdwien, Martens: Ch. 3.2; Rauner, Maurer: Ch. 3.1. COMET Vol. III: Rauner et al.: Ch. 1, 2; Heinemann et al.: Ch. 3; Martens et al.: Ch. 4.3.

6.3.1 On the Connection between Test Behaviour and Personal Characteristics

Rost (2004a) suggests that the subject area of test theory is the inference from test behaviour to the personal characteristic (Fig. 6.3). In many practical applications of test theory, especially in psychological test practice, it is assumed that test behaviour can be translated directly into an empirical relative; for example, 'test task solved' and 'test task not solved' are converted into '1' and '0' and then used as an estimator of a person's ability, for example in the dichotomous Rasch model (Rasch, 1960; see Rost, 1999). However, even this simple relation raises a whole series of questions. Such a structure implies, for example, the property intended by the test designer that there can only be two possible outcomes of a test action, i.e. 'solved' or 'not solved'. This logic does not map the intermediate steps towards the result of the action. How the result came about is simply ignored, for example whether there could be alternative steps leading to an equivalent result. From the perspective of VET and VET research, such a dichotomisation of test behaviour into 'correct' and 'wrong' represents a strong restriction of content validity. The division into individual and independent steps, which can then be regarded as 'solved' or 'not solved' in the sense of a single test action, also seems hardly possible in many contexts of vocational education and training.
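For reference, the dichotomous Rasch model mentioned here ties the probability of solving an item to the difference between person ability (theta) and item difficulty (beta). A minimal sketch, with invented parameter values:

    import math

    def rasch_probability(theta, beta):
        """Dichotomous Rasch model: P(item solved) for a person with
        ability theta on an item with difficulty beta."""
        return 1.0 / (1.0 + math.exp(-(theta - beta)))

    print(rasch_probability(theta=1.0, beta=-0.5))   # ~0.82
    print(rasch_probability(theta=-0.5, beta=1.0))   # ~0.18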


Fig. 6.3 Connection between personal characteristic and test behaviour according to Rost (2004a)

6.3.2 On the Relationship Between the Structure and Modelling of Vocational Competences

This introductory consideration should have made it clear that inferring characteristics of professional competence from test behaviour is by no means trivial. Basically, this is a problem of fit between empiricism and theory (see also COMET Vol. I: Martens & Rost, 2009). Steyer and Eid (2003, 285) explain: 'Measurement models therefore have the task of explaining the logical structure of theoretical quantities and their combination with empirical quantities.' In particular, this is a matter of

(a) theoretical assumptions relating to a structure of professional competence;
(b) mathematical relationships that can be described by a measurement model.

Theoretical model (a) and measurement model (b) should be linked in such a way that one structure can be transferred to the other; according to Suppes and Zinnes (1963), this transfer should be as isomorphic and unambiguous as possible. A number of desirable properties can be added to both the theoretical model and the measurement model.

6.3.3 Mathematical Properties of a Test Model

The desirable properties of a test model cannot be discussed in detail here (see, for example, von Davier & Carstensen, 2007). The characteristic that is particularly controversial from the perspective of vocational education and training is the idea of items that are independent of one another and measure the same personal characteristic. Conceptually, this is described in classical test theory (CTT) as 'essential tau equivalence' and in probabilistic test theory (PTT) as 'local stochastic independence' (cf. Rost, 1999). This property of a test or test model makes it possible to exchange test questions or test items, or to shorten tests (with a loss of reliability). This test


property is also the basis for the development of adaptive tests, in which test items are selected and presented according to the person's ability and the progress of the test.

6.3.4 Characteristics of a Competence Model for Vocational Education and Training

Vocational education and training focuses on the competence to act (cf. COMET Vol. I, 28). This implies above all competences for action in work processes. Optimal processing usually requires several consecutive steps, which then lead to a work result. This also means that there are usually several different strands of action that can lead to an equivalent result. Such steps are generally no longer independent of each other but build on each other systematically. The artificial isolation of these steps will therefore generally severely restrict the content validity of the vocational competence model.

6.3.5 Dilemma: Dependent Versus Independent Test Items

This poses a dilemma: on the one hand, the test-pragmatic requirement for a measurement model containing independent test items; on the other hand, the content-theoretical VET requirement that the individual processing steps of a test should build on one another. Before possible solutions to this dilemma are outlined, the consequences of a lack of convergence between theoretical model and measurement model in professional competence measurement should be pointed out.

If the demand for independent test items or test questions is maintained on the side of the measurement model, only a few professional competences could be tested. It is then up to the respective domain experts to ascertain whether this residual proportion of measurable professional competences is sufficient for an appropriate content validity. In many cases, the answer must certainly be 'no'.

If, on the other hand, a non-measurable specificity of professional competence were derived from content-theoretical requirements, this would also have far-reaching consequences. Measurement would remain confined to expert assessments relating, for example, to observations of work or the evaluation of work products. In particular, the further formal processing of these assessments would then not be assured: individual indicators of the expert assessment could not be meaningfully linked to an overall value, and the test subjects could no longer be compared with each other without a suitable linking rule for the indicators. Only the ranking of the characteristics of individual indicators could be compared with each


other, and even this only under the assumption that a reliable expert assessment is available. An insufficient formalisation of the competence measurement would also contradict the objectivity of implementation and test fairness. There would be hardly any possibility of regarding the expert assessment thus obtained as independent of the situational conditions. For example, both a work sample and a work product would be inseparably linked to the respective practical working conditions. Such confounding with the examination situation could be mitigated by standardising the test situation. But even in this case, test fairness can be violated if the conditions in the training company and those in the test situation resemble each other to varying degrees.

6.3.6 The Search for a Bridge Between Theory and Measurement

The aim is therefore to identify a suitable fit that can build a bridge between the theoretical requirements of competence assessment and the desired requirements of a measurement model. Basically, two approaches can be distinguished:

1. In psychometric modelling, item dependencies can be taken into account by modelling so-called testlets. Monseur, Baye, Lafontaine and Quittre (2011) describe three ways of modelling such item dependencies: as partial credits, as fixed effects or as random effects.
2. The other alternative is to place the theoretical expert model on a better empirical basis. This can be promoted in particular by the following measures:
   (a) largely freeing product development from situational influences,
   (b) linking the rating criteria to a theoretical model of professional competence,
   (c) ensuring interrater reliability through appropriate measures,
   (d) mapping the theoretically derived rating criteria using a suitable psychometric model.

6.3.7 The Example COMET

The empirical procedure within the framework of the COMET project is outlined here as an example of the second approach. One focus of the following presentation is on those steps of the empirical approach that are more closely related to psychometric modelling.

6.3.8 Empirical Procedure

Five evaluation steps were carried out in the COMET measurement procedure:

1. Determination of the interrater reliability of task solutions.
2. Sorting of task solutions.
3. Verification of the homogeneity of the competence criteria (scale analysis).
4. Identification of typical competence profiles.
5. Longitudinal analysis of competence development.

The first step was to check whether the interrater reliability of the individual items is sufficient for further data processing. In the second step, the data matrix was restructured: the assignment of the individual task solutions to the test persons and to the measuring points was dissolved, so that all task solutions could be analysed together. This procedure seems justified inasmuch as one test person may well solve two different tasks at different competence levels. In the third step, the five items that can be assigned to a competence criterion are checked for homogeneity in line with the Rasch model, i.e. it is checked whether these ratings actually measure the same latent dimension. In the fourth step, the person parameters determined with the Rasch model, which correspond to the competence criteria, are entered into a joint analysis; the aim is to identify typical person profiles that correspond to the assumptions of the COMET measurement model. In the fifth step, the data set is returned to its original order: the four task solutions of the first two measuring points are again assigned to a test person so that longitudinal developments can be analysed.

The decisive evaluation step in the empirical approach of the COMET project is the rating of the open solutions by specialists, usually vocational school teachers and trainers. The open solutions are evaluated by means of questions (items), five of which together form one of the eight criteria of the COMET model (→ 4). Before the actual empirical results of the COMET project are presented, two key aspects of test quality are discussed in more detail: the reliability of the rating procedure and the content validity of the derived rating dimensions.

6.3.9 On the Reliability of the COMET Rating Procedure

In order to determine the reliability, two raters evaluate the same task solution. Securing interrater reliability is an absolute prerequisite for any further processing of the data pursuant to a measurement model: without sufficient measurement accuracy (reliability), the data thus obtained would vary randomly and would no longer be meaningful. To ensure interrater reliability, the following measures, systematically


applied during the COMET project (see COMET Volumes I–IV), can be recommended:

• The rating schemes should be formulated as precisely as possible.
• Actual case studies should be used for training.
• Rater training should be accompanied by a continuous review of interrater reliability until a sufficient target criterion is reached.
• The rater training should work with real task solutions.
• The composition of the rater teams should be systematically varied within the training and also in the subsequent investigation.
• A third rating for systematically selected solutions can further increase measurement accuracy.

6.3.10 On the Content Validity of the COMET Rating Dimensions

The transformation of the open solutions into the criteria of the COMET measurement procedure is the most important link in the measurement chain; it must therefore be critically discussed at this point whether content validity can be guaranteed here. Does the procedure really measure what needs to be measured? The most important measure to ensure content validity is to have the rating carried out by experts. These experts ensure that the abstraction of domain-specific solution knowledge is incorporated into the target criteria. The direct involvement of domain experts thus immanently and directly supports the content validity of the COMET measurement procedure. This means in particular that the rating of open task solutions must also be carried out permanently by domain experts. As long as this is the case, the content validity of the rating procedure is also assured in the long term. The content validity of the open tasks and the universal applicability of the eight criteria to the different domains of VET must be discussed elsewhere (see COMET Volumes I–IV).

6.3.11 Population

The basis for the following calculations is a data set obtained in Hesse from electronics technicians in industrial engineering (industry) and electronics technicians for energy and building technology (crafts), collected at two measuring points in the second and third year of training; 178 students completed the open tasks at the first measuring point and 212 at the second. In total, 1560 task solutions were produced and rated accordingly (cf. COMET Volumes I–IV).


6.3.12 Step 1: Determination of Interrater Reliability

The Finn coefficient was used as the reliability measure (see Asendorpf & Wallbott, 1979). With a value range of 0.0 to 1.0, values of 0.5 to 0.7 can be interpreted as satisfactory and values above 0.7 as good reliabilities. The test for this sample showed coefficients between 0.71 and 0.86. The reliability can therefore be classified as consistently good (→ 6.2).

6.3.13 Step 2: Sorting of Task Solutions

After reviewing the interrater reliability, the data matrix was restructured. The assignment of the individual task solutions both to the test subjects and to the measuring points was completely dissolved, so that all task solutions could be analysed together in a vertical data matrix. The analysis units in the procedure described below are therefore no longer the test subjects but the task solutions.
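The restructuring described here corresponds to a wide-to-long transformation. A minimal pandas sketch, with column names and values invented for illustration:

    import pandas as pd

    # Wide format: one row per test person, one column per task solution.
    wide = pd.DataFrame({
        "person": ["P01", "P02"],
        "time":   [1, 1],
        "task1":  [1.2, 0.4],
        "task2":  [2.0, 1.1],
    })

    # Vertical (long) matrix: every task solution becomes its own row ...
    long = wide.melt(id_vars=["person", "time"],
                     var_name="task", value_name="score")
    # ... and is detached from person and measuring point, so that all
    # solutions can be analysed together as independent analysis units.
    solutions = long.drop(columns=["person", "time"])
    print(solutions)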

6.3.14 Step 3: Verification of the Homogeneity of the Competence Criteria

The raw values from the ratings of the evaluation items were then processed further, as already described by Erdwien and Martens (2009) in COMET Vol. II. Each subject was judged by two raters, in a few cases also by three. Although the interrater reliability proved satisfactory (see Step 1), there were, of course, divergent assessments. The rater judgements were therefore averaged before further processing of the data. Mean values between 0 and 3 were calculated according to the rating scale and rounded for the following analyses as shown in Table 6.2.

Table 6.2 Rounding of the rating assessments

Rounding range   Rounded value
0–0.499          0
0.500–1.499      1
1.500–2.499      2
2.500–3          3
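The rule in Table 6.2 corresponds to ordinary 'round half up' on the 0-3 scale; as a small sketch:

    def round_rating(mean_value):
        """Round an averaged rater judgement to 0..3 (Table 6.2)."""
        if not 0 <= mean_value <= 3:
            raise ValueError("rating means must lie between 0 and 3")
        return min(3, int(mean_value + 0.5))

    print([round_rating(v) for v in (0.4, 0.5, 1.49, 1.5, 2.49, 2.5)])
    # -> [0, 1, 1, 2, 2, 3]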


The following analyses first examined whether the evaluation items for a criterion are really homogeneous, i.e. whether they form a scale that measures the same underlying dimension. In particular, it was checked whether the evaluation items of a criterion are so similar that they can be summarised into a scale value. The eight criteria (clarity, functionality, sustainability, economic efficiency, business process orientation, social compatibility, environmental compatibility and creativity) were each calculated using the ordinary Rasch model. On this basis, the reliability (according to Rasch) and the Q indices (cf. Rost & von Davier, 1994) were determined as item fit measures. As the strictest model test, this solution was additionally contrasted with the mixed Rasch model for two subpopulations. Information criteria were used to compare directly whether the mixed Rasch model with two subpopulations was better suited to the data; the Consistent Akaike Information Criterion (CAIC) was primarily used for this (see Bozdogan, 1987; Bozdogan & Ramirez, 1988). The fit was better for the simple Rasch model than for the mixed Rasch model. This ensures that the respective criterion can be represented by a single latent parameter. For individual criteria, however, items had to be excluded from the further analysis due to insufficient homogeneity, measured by the Q indices (see Table 6.3).

After excluding the respective items, the solutions of the simple Rasch model were inconspicuous and matched the data well. An exception is the criterion 'environmental compatibility': here an item was removed because of an 'overfit', i.e. the item characteristic curve was too steep for the simple Rasch model. After eliminating this item, the one-class solution for environmental compatibility was also inconspicuous, even though the resulting reliability was somewhat lower. Table 6.3 gives an overview of the results.

Table 6.3 Overview of the scale analyses

Criterion                                   Reliability        Items         Number of categories
                                            (acc. to Rasch)    eliminated    in the LCA
Clarity/presentation                        0.88               0             9
Functionality                               0.85               1             9
Sustainability                              0.84               0             9
Efficiency                                  0.80               0             7
Orientation on business and work process    0.87               1             9
Social compatibility                        0.77               1             9
Environmental compatibility                 0.73               1*            8
Creativity                                  0.86               1             9

* This rating item was removed because of an 'overfit'; the corresponding explanations can be found in the text.

Thus, for all eight criteria, a satisfactory solution can be identified using the simple Rasch model. This allows each task solution to be included in the further analyses with exactly one value per criterion. The corresponding items thus each form a scale for recording one of the eight criteria of vocational competence. It should be noted that the five items of a rating each relate to the same task solution. Even though this fact was addressed in detail in the rater training courses, it cannot be completely ruled out that the homogeneity of the items is overestimated by this procedure, which could lead to an overestimation of the actual correlations, especially in the further calculation of correlations between the scales. At the same time, averaging the rating assessments leads to an underestimation of homogeneity. The exact effect of the rating procedures on homogeneity would have to be checked with further data simulations. These are the additional reasons


why the following statistical methods concentrate on mixed distribution models and thus above all on qualitative differences of the profiles.
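The CAIC used for these model comparisons is defined as -2 ln L + p(ln n + 1) (Bozdogan, 1987). A small sketch with invented likelihood values shows how a mixed Rasch model with twice the parameters can lose against the simple model despite a better likelihood:

    import math

    def caic(log_likelihood, n_params, n_obs):
        """Consistent Akaike Information Criterion (Bozdogan, 1987):
        CAIC = -2 ln L + p * (ln n + 1); lower values are better."""
        return -2.0 * log_likelihood + n_params * (math.log(n_obs) + 1.0)

    # Invented example for n = 1560 task solutions:
    print(caic(log_likelihood=-5210.0, n_params=40, n_obs=1560))  # simple Rasch
    print(caic(log_likelihood=-5185.0, n_params=80, n_obs=1560))  # mixed, 2 classes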

6.3.15 Step 4: Identification of Typical Competence Profiles

The latent class analysis method was used to identify typical competence patterns in vocational education and training. The analytical procedure is based on that of Martens and Rost (1998). The results of the above-mentioned ordinal Rasch analyses of the competence criteria are entered into the latent class analysis in the form of rounded theta parameters; here, the theta parameter represents the task-specific ability to fulfil the respective criterion well. This two-step procedure has the particular advantage that the data basis for the subsequent latent class analysis becomes more robust against possible distortions caused by individual evaluation items.

Latent class analysis is a method that identifies typical profiles of task solutions. For this purpose, the entirety of the task solutions is broken down into precisely defined subgroups. Each subgroup (class) defined in this way has exactly one characteristic profile across the eight criteria. The challenge of this procedure is, in particular, to determine the correct number of subgroups into which the whole is broken down. With each additional subgroup, the fit of the measurement model to the data necessarily becomes more accurate. On the other hand, there is the fundamental requirement that a measurement model should be as 'simple' as possible, i.e. that as few subgroups as possible should be identified. The two demands of 'data fit' (more subgroups) and 'simplicity' (fewer subgroups) must be carefully weighed against each other. Information criteria such as the CAIC (cf. Bozdogan, 1987) are often used for this weighing. However, the penalty function implemented in the CAIC for additional model parameters is arbitrary and differs from other similar information criteria. To determine the correct number of subgroups, two further criteria were therefore used: bootstrapping (see, for example, von Davier, 1997) and the average assignment probabilities to the subgroups.

Bootstrapping creates a customised test distribution using synthetic samples; the actual sample should not differ significantly from the artificially generated samples. The mean assignment probabilities indicate how well the criteria profiles of the task solutions can be assigned to the subgroups; they should not fall below 0.8, since low assignment probabilities would mean that the profiles cannot be uniquely assigned to the subgroups. The model with the most subgroups which simultaneously has a non-significant bootstrap (P(X > Z) = 0.234 for the test statistic Pearson X²) and provides average assignment probabilities to the subgroups between 0.81 and 0.95 has 10 subgroups. The information criteria indicate that models with a smaller number of groups could be better suited to the data (the CAIC, for example, points to a solution with four subgroups), but this of course only applies if additional model parameters are disproportionately penalised.
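The mean assignment probabilities used as a stopping criterion can be computed directly from the posterior class-membership matrix of a latent class analysis. A sketch, with invented posterior values:

    import numpy as np

    def mean_assignment_probabilities(posterior):
        """Mean class-assignment probability per subgroup.

        posterior: (n_solutions x n_classes) matrix of posterior
        membership probabilities; each solution counts towards the
        class with its highest posterior probability.
        """
        posterior = np.asarray(posterior)
        assigned = posterior.argmax(axis=1)
        return np.array([posterior[assigned == c, c].mean()
                         for c in range(posterior.shape[1])])

    # Five task solutions, two classes (invented posteriors):
    post = [[0.90, 0.10], [0.80, 0.20], [0.30, 0.70],
            [0.15, 0.85], [0.55, 0.45]]
    print(mean_assignment_probabilities(post))  # [0.75  0.775]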


Table 6.4 Characteristic values of the identified subgroups

Subgroup | Size of subgroup (in percent) | Average assignment probability to the subgroups
1  | 19.0 | 0.844
2  | 15.8 | 0.888
3  | 13.9 | 0.816
4  | 12.5 | 0.811
5  |  9.3 | 0.899
6  |  8.0 | 0.949
7  |  6.7 | 0.808
8  |  6.1 | 0.830
9  |  5.2 | 0.856
10 |  3.4 | 0.920

Fig. 6.4 Competency patterns of subgroups 1–10: General overview

The number of subgroups identified by latent class analysis, weighing 'fit' and 'simplicity', is 10 (Table 6.4). Each of the 10 groups has a typical competence profile (Fig. 6.4). The measurement model of latent class analysis assumes that all solutions assigned to a specific subgroup have exactly the same (latent) competence profile. Only the level of the assignment probabilities varies between the task solutions of a subgroup.


When interpreting the following figures, it must also be taken into account that these are typical competence patterns of task solutions, not competence patterns of persons. Our sample therefore includes people whose solutions to different tasks are assigned to different subgroups and thus to different competence profiles. Such intraindividual differences in competence can be caused by personal characteristics, such as learning effects or fatigue, or by task characteristics, such as systematic differences between tasks. Systematic task differences in particular would have to be taken into account very carefully when interpreting the results. In order to present this result more clearly, sub-graphics of the overall figure are shown below for a systematic selection of the subgroups. Figure 6.5 shows the essentially parallel competence patterns of subgroups 1, 2, 3, 5, 6 and 10, which together account for 69.4% of the task solutions. These subgroups differ almost exclusively in their profile heights. Subgroup 6, with a share of 8% of all task solutions, has the lowest competence profile; i.e., according to the ratings, the corresponding tasks were processed with particularly low competence. Conversely, the particularly good task solutions were assigned to subgroup 10 with a share of 3.4%. When interpreting the graphics, it must be borne in mind that these are rounded theta parameters, so the absolute height differences cannot be interpreted directly.

Fig. 6.5 Competency patterns of subgroups 1, 2, 3, 5, 6 and 10


Fig. 6.6 Competency patterns of subgroups 4 and 7

Figure 6.6 again shows two almost parallel competence profile curves. Compared with the competence profiles considered in Fig. 6.5, the two parallel subgroups 4 and 7 have a higher level of clarity/presentation and functionality. Such a competence pattern was theoretically expected and corresponds to the level of 'functional competence': the corresponding task solutions were carried out with a disproportionately pronounced functional competence. Figure 6.7 shows the two remaining subgroups 8 and 9 together with subgroup 1, which was already shown above; like subgroup 1, subgroups 8 and 9 also show an average competence profile. However, subgroup 9 shows a slight drop in the criteria clarity/presentation and functionality compared to subgroup 1, while subgroup 8 does not appear to have a profile very different from that of subgroup 1. The content validity of these two profiles must be checked by further analyses, for example via the distribution of the subgroups across the task sets. The competence patterns of most subgroups therefore run in parallel. In contrast, subgroups 4 and 7 show a competence profile that corresponds to the level of 'functional competence', with a drop at the levels of 'procedural competence' and 'holistic shaping competence'. For subgroups 8 and 9, it is not immediately clear which theoretically expected competence profiles these patterns could correspond to.


Fig. 6.7 Competence patterns of subgroups 8, 9 and 1

6.3.16 Distribution of Tasks Among the Competence Models

In the following, it was examined whether the four different tasks were equally distributed among the identified competence patterns. First of all, the different levels of difficulty of the tasks are striking (Fig. 6.8). This can be seen directly in the proportions of subgroup 6 (the lowest competence profile) and subgroup 5 (the second highest competence profile). Task 2 (skylight control) is, relatively speaking, the easiest, followed by task 3 (drying space) and task 4 (pebble treatment) and finally by the most difficult task 1 (signals). Beyond this, there do not seem to be any task-specific qualitative patterns. In particular, subgroups 4, 7, 8 and 9, which represent qualitative deviations from the 'standard pattern', appear to be more or less equally distributed among the tasks. Only subgroup 4 shows a slightly reduced percentage for task 3 (drying space). The distribution of the competence patterns among the tasks can therefore serve as evidence that the identified patterns or subgroups are not specific competence profiles of individual tasks. This means that the solutions of the four test tasks are very similar in terms of the proficiency criteria in the different subgroups. The implementation of the competence model presented here has proved its worth insofar as there is no task that shows a completely independent solution in the form of a subpopulation of its own.


Fig. 6.8 Distribution of tasks among the competence models

6.3.17 Step 5: Longitudinal Analysis of Competence Measurement

In Step 5, the task solutions were again assigned to the individual persons and ordered in time. For an initial evaluation, the frequencies of the subgroups identified in Step 4 were evaluated over time. The first two tasks (w1 and w2) were solved at the first measurement time and the other two tasks (w3 and w4) at the second measurement time. For the interpretation, it must be taken into account that each position is a mixture of tasks, since the assignment of the tasks to the persons was systematically rotated. Since only test persons were considered who solved tasks at both times, the data basis for the following graphics consists of 151 pupils. In particular, the first solved task (w1) should be compared with the first task solved one year later (w3). This comparison shows a significant increase, especially for the largest subgroup 1. In addition, the two subgroups with the highest competence profiles (subgroups 10 and 5) show an almost equal share, at least when measurement points w1 and w3 are compared. This can be understood as a first indication of the content and curricular validity of the tasks: at least for some of the tasks, a higher competence profile can be shown. At the same time, subgroup 6, which has the lowest competence profile, shows that a fatigue effect occurs within a measurement for part of the sample (increases in frequency both from w1 to w2 and from w3 to w4). Furthermore, a kind of reactance effect can be observed, which manifests itself as an increase of this subgroup between the two measurement points, especially from w2 to w4.


The proportion of these task solutions increases especially in the last task. Probably some of the pupils simply no longer wanted to work on their tasks in a motivated way. This interpretation is also supported by the fact that subgroup 2, with the second lowest competence profile, has a relatively uniform share over time (see Fig. 6.9).

6.3.18 Discussion of the Results

Overall, 89% of the task solutions correspond to the assumptions made in the COMET model for vocational education and training (subgroups 1, 2, 3, 4, 5, 6, 7, 10). Only subgroups 8 and 9 show theoretically unexpected profile progressions. Subgroups 4 and 7 in particular confirm that there are qualitatively different task solutions which are characterised by a higher level of functional competence and correspondingly lower levels of conceptual/procedural competence and holistic shaping competence. The question remains why no significant qualitative profile differences could be identified with regard to the theoretically motivated distinction between conceptual/process-related competence and holistic shaping competence. In the sense of the theoretical competence model, at least one subgroup should have been found in which conceptual/procedural competence (sustainability, economic efficiency, business process orientation) is higher than holistic shaping competence (social compatibility, environmental compatibility, creativity). Various explanations are conceivable. As already indicated above, the fact that the ratings referred to one and the same task solution could tend to lead to certain characteristics of the task solution 'outshining' other characteristics and thus to a uniform assessment of the various criteria (halo effect). Another explanation could be that the subjects studied are at the beginning of their careers. At least a general halo effect can be excluded, because then subgroups 4 and 7 would not have been identified either.

6.3.19 Need for Research

Further research is therefore required to supplement the available data with a sample of test persons who are already more advanced in their professional lives and thus have a higher level of professional expertise. With such a sample, further qualitative level differences could then be found. The longitudinal analyses provide initial indications of the content and curricular validity of the VET competence model presented here. However, it turns out that the designed tasks generate a certain potential for displeasure. This potential can be identified, as shown, with mixed distribution analyses; but when the tasks are used in a non-scientific environment, this potential 'displeasure' must of course be taken into account. However, in order to carry out a more detailed analysis of individual development patterns over time, the sample size would have to be increased and the number of subgroups to be examined reduced.


Fig. 6.9 Development of the competence patterns of the individual subgroups (mc1 to mc10) over time



6.3.20 Prospect

The measurement procedure within the framework of the COMET project demonstrated a possible approach for resolving at least part of the contradiction between measurement model and competence model in vocational education and training. The combination of open tasks and subsequent ratings shown here could of course also be implemented under other framework conditions, for example by computer-supported holistic simulation of tasks, the solutions of which could then be assessed similarly by raters. Various options could be found for calculating the corresponding rating assessments, based on the respective theoretical model (for an overview, see Martens & Rost, 2009). Especially where the theoretical competence model predicts qualitative profile differences, measurement models should be used that can map these accordingly, such as mixed distribution models like the latent class analysis used here or the mixed Rasch model (Rost, 1999). It should not be overlooked at this point that the 'fit' between measurement model and theoretical model demanded in the introduction means that further criteria, such as a particularly efficient measurement of professional competence, cannot be given priority. Adequate consideration of both theoretical and measurement-methodological requirements means that methods that can solve this fitting problem, such as the COMET measurement method, are relatively time-consuming to implement. It must also be emphasised that measurement methods that meet the formulated requirements are not suitable for all practical applications. For example, the use of mixed distribution models prevents the profile heights from being compared directly with each other. This means that at least no simple application for selection processes is possible. This is also directly related to the fact that it is not possible to identify common parameters for an entire population, which makes use in large-scale studies very difficult.

6.4 Confirmatory Factor Analysis

6.4.1 The I-D Model (→ 4, Fig. 4.5)

The identity and commitment model of vocational education and training is evaluated on the basis of two samples (A = 1124, B = 3014) using the methods of explorative and confirmatory factor analysis. In this model, an occupational and an organisational identity can be distinguished from each other. However, it is also assumed that the two are related to each other; in both cases, it is a matter of identity that is shaped by vocational training and work.


According to this model, the development of occupational identity and of identification with the profession in the process of vocational training and professional work tends to go hand in hand with the development of occupational and organisational commitment. Following Carlo Jäger, work morale is defined as the unquestioned willingness to carry out the instructions of superiors (Jäger, 1989).

6.4.2 Original Scales

Vocational Identity

NRW | Saxony | Item
BE1 | SkBE1 | I like to tell others what profession I have/am learning.
BE2 | SkBE2 | I 'fit' my profession.
BE3r | SkBE3r | I am not particularly interested in my profession. (reverse-coded)
BE4 | SkBE4 | I am proud of my profession.
BE5 | SkBE5 | I would like to continue working in my profession in the future.
BE6 | SkBE6 | The job is like a bit of 'home' for me.

Cronbach's Alpha NRW: 0.843 (n = 1124); Cronbach's Alpha Saxony: 0.871 (n = 3014).

In both data sets, the alpha would increase if the reverse-coded item (Sk)BE3r were removed from the scale: in NRW, the alpha increases to 0.859 and in Saxony to 0.889. The corrected item-scale correlation is 0.425 in NRW and 0.447 in Saxony. It remains open whether the reasons for this effect lie in the reverse coding, and whether the trainees deal with this differently than with items coded in the right direction, or whether interest or lack of interest alone does not yet say anything about identification with the profession.
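The scale diagnostics used throughout this section (Cronbach's alpha and alpha-if-item-deleted) can be reproduced in a few lines. A minimal sketch, assuming an item matrix of respondents × items with reverse-coded items already recoded:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of scored responses
    (reverse-coded items such as BE3r must already be recoded)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def alpha_if_item_deleted(items: np.ndarray) -> np.ndarray:
    """Alpha of the scale after removing each item in turn; a value above
    the full-scale alpha flags a poorly fitting item."""
    return np.array([
        cronbach_alpha(np.delete(items, j, axis=1))
        for j in range(items.shape[1])
    ])
```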

Vocational Commitment

NRW | Saxony | Item
ID1 | SkID1 | I am interested in how my work contributes to the company as a whole.
ID2 | SkID2 | For me, my job means delivering quality.
ID3 | SkID3 | I know what the work I do has to do with my job.
ID4 | SkID4 | Sometimes I think about how my work can be changed so that it can be done better or at a higher quality.
ID5 | SkID5 | I would like to have a say in the contents of my work.
ID7 | SkID7 | I am absorbed in my work.

Cronbach's Alpha NRW: 0.767 (note: data for item ID7 were available for only 627 trainees); Cronbach's Alpha Saxony: 0.820 (n = 2985).


Organisational Identity

NRW | Saxony | Item
OC1 | SkOC1 | The company is like a bit of 'home' for me.
OC2 | SkOC2 | I would like to remain in my company in the future, even if I have the opportunity to move elsewhere.
OC3 | SkOC3 | I like to tell others about my company.
OC4r | SkOC4r | I don't feel very connected to my company. (reverse-coded)
OC5 | SkOC5 | I 'fit' my company.
OC6 | SkOC6 | The future of my business is close to my heart.

Cronbach's Alpha NRW: 0.869 (n = 1121); Cronbach's Alpha Saxony: 0.899 (n = 3030).

Organisational Commitment

NRW | Saxony | Item
BetID1 | SkBetID1 | I like to take responsibility in the company.
BetID2 | SkBetID2 | I want my work to contribute to operational success.
BetID3 | SkBetID3 | I am interested in the company suggestion scheme.
BetID4 | SkBetID4 | The work in my company is so interesting that I often forget time.
BetID5 | SkBetID5 | I try to deliver quality for my company.
BetID6 | SkBetID6 | Being part of the company is more important to me than working in my profession.

Cronbach's Alpha NRW: 0.702 (n = 1077); Cronbach's Alpha Saxony: 0.704 (n = 2990).

The item (Sk)BetID6 fits rather poorly with the rest of the scale. However, this is to be expected given the content of the item: here, two concepts are compared with each other. What is asked is the value of the affiliation to the company, not the affiliation itself, which makes this item difficult to understand. The corrected item-scale correlation is only 0.198 for NRW and 0.201 for Saxony. When this item is excluded from the scale formation, the alpha for NRW increases to 0.732 and for Saxony to 0.737. The alpha therefore remains at a rather low level even when the conspicuous item is excluded.

Work Moral

NRW | Saxony | Item
AM1 | SkAM1 | I am motivated, no matter what activities I get assigned.
AM2 | SkAM3 | I am always on time, even if the work does not require it.
AM3 | SkAM2 | I am reliable, no matter what activities I get assigned.

Cronbach's Alpha NRW: 0.592 (n = 1145); Cronbach's Alpha Saxony: 0.687 (n = 3037).

Overall, the work moral scale should be revised, as the alpha is at a very low level. Although the shortness of the scale (only three items) additionally depresses the alpha, the low corrected item-scale correlations of, e.g., AM1 (NRW, 0.401) and AM2 (NRW, 0.331!) also indicate that the scale is ambiguous in terms of content.

6.4.3 Confirmatory Factor Analysis for the Original Model

By means of confirmatory factor analysis in Mplus 7, the model briefly described above is to be checked with regard to its fit to the empirically collected data. This is again done using the data sets from NRW 2013 and Saxony 2014.

NRW (Original Model)

The model was specified in such a way that the latent factors are formed on the basis of the original scales. Accordingly, regressions of the commitment factors and work morale on the identity factors were inserted, and correlations between the identity factors as well as between the commitment factors and work morale were allowed. The model comes to the following conclusion (cf. Fig. 6.10). The factor loadings are in an acceptable range overall. Only some items have low loadings below 0.5 (be3r, am2, betid6); the residual variance of these items is correspondingly high. The main problem is the item betid6, which, with a loading of 0.214, actually does not fit the scale. The correlation between the identity factors is, in line with expectations, quite high at r = 0.616. The regressions of the commitment factors confirm the model: at β = 0.565, the regression of vocational commitment on vocational identity is higher than the regressions of work morale and organisational commitment on this factor. Likewise, the regression of organisational commitment on organisational identity (β = 0.64) is higher than the regressions of vocational commitment and work morale on this form of identity. One problem with the model seems to be that work morale shows very different regressions on the two forms of identity (β = 0.423 for vocational identity and β = 0.194 for organisational identity). Furthermore, the very high correlation between the two forms of commitment seems problematic: with a correlation of r = 0.938(!), these factors are almost identical. These problems are also reflected in the statistical evaluation of the model.

MODEL FIT INFORMATION
Number of free parameters: 91
Loglikelihood: H0 value -36028.260; H1 value -35125.421
Chi-square test of model fit: value 1805.677; degrees of freedom 314; p-value 0.0000
Chi-square test of model fit for the baseline model: value 12319.722; degrees of freedom 351; p-value 0.0000
Information criteria: Akaike (AIC) 72238.519; Bayesian (BIC) 72696.809; sample-size adjusted BIC 72407.766 (n* = (n + 2)/24)
RMSEA (root mean square error of approximation): estimate 0.065; 90% C.I. 0.062–0.068; probability RMSEA ≤ .05: 0.000
CFI/TLI: CFI 0.875; TLI 0.861
SRMR (standardised root mean square residual): 0.052

6.4.4 Explanations

Loglikelihood: H0 means that the specified model is valid; H1 means that an unrestricted model, in which all mean values, variances, etc. are independent of each other, is valid. The absolute figures of the loglikelihood are not easily interpretable or comparable; the statistical check is carried out using the chi-square test.
AIC/BIC/adjusted BIC: Should be as small as possible, but serve only for descriptive comparison.
Chi-square test: Tests the null hypothesis that the covariance matrix in the population is equal to the covariance matrix implied by the model. If the test becomes significant, the model does not fit the data. This is the case in our analysis.
RMSEA: Should be less than 0.05.
CFI/TLI: Should be above 0.95, better still above 0.97.
Chi-square test for the baseline model: Tests the so-called 'baseline model' (also called the null model) for its fit to the data. The baseline model assumes that no valid predictions (regressions) exist between the variables of the data set.
SRMR: Should be below 0.05.
The results of the model specified here are not good. However, the values are not far from those of a good model. We want to check what the model looks like when the problematic item betid6 is excluded.
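Mplus reports these indices directly; purely as an illustration of how they relate to the chi-square statistics, they can be recomputed with the usual formulas (a sketch, assuming N = 1124 for the NRW sample):

```python
import math

def fit_indices(chi2, df, chi2_base, df_base, n_obs):
    """RMSEA, CFI and TLI from model and baseline chi-square statistics."""
    rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n_obs - 1)))
    cfi = 1.0 - max(chi2 - df, 0.0) / max(chi2_base - df_base, chi2 - df, 0.0)
    tli = ((chi2_base / df_base) - (chi2 / df)) / ((chi2_base / df_base) - 1.0)
    return {"RMSEA": round(rmsea, 3), "CFI": round(cfi, 3), "TLI": round(tli, 3)}

# NRW original model, values taken from the Mplus output above:
print(fit_indices(1805.677, 314, 12319.722, 351, 1124))
# {'RMSEA': 0.065, 'CFI': 0.875, 'TLI': 0.861}
```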

6.4.5 Modification

BetID6 is excluded from the analysis; all other items are retained. See Fig. 6.11 for the resulting model. Compared to the first model, which takes all items into account, the latent factor of organisational commitment becomes somewhat clearer. However, apart from minor changes in regression weights and correlation coefficients, there are no major changes.


Fig. 6.10 Factor Analysis NRW (original model)


Fig. 6.11 Factor analysis NRW (first modification)


The statistical indices improve only marginally. The model cannot really be optimised by deleting problematic items. A look at the CFA of the original model based on the data from Saxony should provide further insights.

MODEL FIT INFORMATION
Number of free parameters: 88
Loglikelihood: H0 value -34592.199; H1 value -33751.369
Chi-square test of model fit: value 1681.661; degrees of freedom 289; p-value 0.0000
Chi-square test of model fit for the baseline model: value 12151.190; degrees of freedom 325; p-value 0.0000
Information criteria: Akaike (AIC) 69360.399; Bayesian (BIC) 69803.580; sample-size adjusted BIC 69524.066 (n* = (n + 2)/24)
RMSEA (root mean square error of approximation): estimate 0.065; 90% C.I. 0.062–0.068; probability RMSEA ≤ .05: 0.000
CFI/TLI: CFI 0.882; TLI 0.868
SRMR (standardised root mean square residual): 0.050

6.4.6 Discussion

It remains open why model and data do not fit together well. The partially low loadings already described above provide initial information; these were already noticeable in the reliability analysis. It is therefore necessary to further sharpen the content-related fit of the items to the scales as well as the similarities between the items of a scale. Furthermore, one correlation in this model is at the edge of what is possible: the two forms of commitment correlate at r = 0.938! This means that they measure almost the same thing. It is therefore important to work on the commitment scales. However, it remains to be seen at this stage whether a distinction should be made between two forms of commitment in terms of content.

Saxony (Original Model)

The original model was checked again on a second data set in order to verify the initial results and findings. In the first calculation, all items were taken over and no changes were made. As in NRW, the factor loadings are largely within an acceptable range. Really critical are only the items SkBetID6 (0.182!) and, in part, SkBE3r (0.477). Overall, the loadings are somewhat higher than in NRW. Again, it is striking that the regression of organisational commitment on organisational identity has the greatest overall predictive power for organisational identity (β = 0.59); the same applies to the predictive power of vocational commitment for vocational identity (β = 0.677).


The correlation between the two forms of identity is r = 0.637, and the correlation between the two forms of commitment even exceeds 1 (r = 1.068). The original model was modified as a result. It is also striking that there are very high correlations between work morale and the two forms of commitment: r = 0.708 with vocational commitment and r = 0.818 with organisational commitment (Fig. 6.12). Mplus initially informs us that the model, as already indicated above, cannot continue to exist in this form:

WARNING: The latent variable covariance matrix (PSI) is not positive definite. This could indicate a negative variance/residual variance for a latent variable, a correlation greater or equal to one between two latent variables, or a linear dependency among more than two latent variables. Check the TECH4 output for more information. Problem involving variable BTE.

The statistical key figures are as follows:

MODEL FIT INFORMATION
Number of free parameters: 91
Loglikelihood: H0 value -96043.612; H1 value -93298.023
Chi-square test of model fit: value 5491.179; degrees of freedom 314; p-value 0.0000
Chi-square test of model fit for the baseline model: value 42619.050; degrees of freedom 351; p-value 0.0000
Information criteria: Akaike (AIC) 192269.224; Bayesian (BIC) 192817.218; sample-size adjusted BIC 192528.075 (n* = (n + 2)/24)
RMSEA (root mean square error of approximation): estimate 0.074; 90% C.I. 0.072–0.075; probability RMSEA ≤ .05: 0.000
CFI/TLI: CFI 0.878; TLI 0.863
SRMR (standardised root mean square residual): 0.056

As in the analysis of the NRW data, the statistical parameters indicate that model and data do not match well. The correlation above one that occurs in this model invalidates it.


Fig. 6.12 Factor analysis Saxony (original model)


6.4.7 Explorative Factor Analysis

Since the originally assumed model does not fit either data set ideally, we use explorative factor analysis (principal component analysis) to determine what model the available data suggest.

NRW

First EFA: All Variables Taken into Account. The analysis is carried out with SPSS, and all items are first included in the analysis; no assumptions are made. The number of factors included in the model is determined according to the Kaiser criterion (eigenvalue greater than 1). The solution is rotated using the Varimax method: through successive rotation, each item is assigned as clearly as possible to only one factor, and moderately high loadings on several factors are avoided as far as possible. The factors remain independent (orthogonal) of each other.
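As an illustration of this procedure, the following sketch reproduces the essential steps (correlation matrix, Kaiser criterion, Varimax rotation) using the standard SVD-based rotation algorithm; it is a sketch under these assumptions, not the SPSS routine actually used for the analyses.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Varimax rotation of a loading matrix (standard SVD-based algorithm)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0)))
        )
        rotation = u @ vt
        d_old, d = d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:  # rotation has converged
            break
    return loadings @ rotation

def pca_kaiser_varimax(data):
    """Principal component analysis of the item correlation matrix, keeping
    components with eigenvalue > 1 (Kaiser criterion), Varimax-rotated."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]          # sort eigenvalues descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1.0                       # Kaiser criterion
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
    return eigvals[keep], varimax(loadings)
```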

6.4.8 Results

The principal component analysis results in a solution with 5 factors:

Factor 1 (eigenvalue 9.133): OC1, OC2, OC3, OC4r, OC5, OC6, (BetID4)
Factor 2 (eigenvalue 2.365): BE1, BE2, BE3r, BE4, BE5, BE6, ID7
Factor 3 (eigenvalue 1.706): BetID1, BetID2, ID1, ID2, ID3, ID4, ID5, (BetID3), (BetID4)
Factor 4 (eigenvalue 1.325): AM1, AM2, AM3, BetID5
Factor 5 (eigenvalue 1.003): BetID6, (BetID3)

The 5 factors explain 57.52% of the total variance. The items in parentheses could not be uniquely assigned but load similarly high on at least two factors.

6.4.9 Discussion

The forms of identity emerge very clearly in the analysis: factor 1 corresponds to organisational identity and factor 2 to vocational identity. Work morale can also be recovered (factor 4), although the item BetID5, which was actually intended for organisational commitment, is assigned to this factor.


The original separation between two forms of commitment is not found in the data. Factor 3 forms a kind of general commitment factor, which consists mainly of items of vocational commitment and partly of items of organisational commitment. The reliability analysis yields an alpha of 0.787 if BetID3 and BetID4 are omitted. Factor 5 consists only of items from the area of organisational commitment, namely those items that had already attracted negative attention in the previous analyses. BetID3 refers to the company suggestion scheme, which is not necessarily relevant for all trainees. BetID6 asks for an assessment of the importance of affiliation to the company. In terms of content, factor 5 therefore seems to be a kind of 'residual factor' on which the items load that caused the trainees problems during processing. BetID6 and, if possible, BetID3 should be excluded from further analysis and revised if possible.

Second EFA: Exclusion of BetID6

A new explorative factor analysis (EFA) excluding the variable BetID6, which was identified as inappropriate both to its originally intended scale 'organisational commitment' and to all other scales, should provide information about a factor structure in which a 'residual factor' is avoided. The inclusion criterion is again the Kaiser criterion. The following factor solution results:

Factor 1 (eigenvalue 9.072): OC1, OC2, OC3, OC4r, OC5, OC6, (BetID3), (BetID4)
Factor 2 (eigenvalue 2.239): BE1, BE2, BE3r, BE4, BE5, BE6, ID7
Factor 3 (eigenvalue 1.694): BetID1, BetID2, ID1, ID4, ID5, (BetID3), (BetID4), (ID2), (ID3)
Factor 4 (eigenvalue 1.299): AM1, AM2, AM3, BetID5, (ID2), (ID3)

Overall, the model explains 55.017% of the variance.

6.4.10 Discussion

When the item BetID6 is excluded, the fifth factor disappears, which suggests that it was in fact a kind of residual factor resulting from the difficulty of the item BetID6. The EFA carried out here paints a similar picture to the first EFA with regard to the identity factors (factors 1 and 2): factor 2 has the same structure as in the first EFA, and factor 1 is now only supplemented by the cross-loading of the item BetID3, which no longer loads on a residual factor. Work morale (factor 4) also has a similar structure to the first EFA, with additional items from the commitment area loading on this factor. The commitment factor (factor 3) has become smaller overall. In this analysis, it turns out that the items BetID1 and BetID2 as well as the items ID1, ID4 and ID5 make up the core of this factor. The other commitment items are now spread over various factors. They do not seem to fit commitment alone concretely enough, but instead blur the boundaries to identity and work morale (with the exception of ID7 and BetID5, which do not strictly fit commitment but complement vocational identity and work morale).

6.4.11 Discussion

With a third EFA (exclusion of the conspicuous items BetID6 and BetID3), the share of explained total variance increases significantly, whereas the eigenvalues of the factors become only slightly smaller.

Saxony

An explorative factor analysis was also carried out on the basis of the Saxon data in order to check whether similar patterns arise in this data set.

First EFA: All Variables Taken into Account. The first EFA contains all variables and comes to the following result after Varimax rotation:

Factor 1 (eigenvalue 10.425): SkOC1, SkOC2, SkOC3, SkOC4r, SkOC5, SkOC6, SkBetID4
Factor 2 (eigenvalue 2.332): SkBE1, SkBE2, SkBE4, SkBE5, SkBE6, SkID7, (SkBE3r)
Factor 3 (eigenvalue 1.698): SkBetID1, SkBetID2, SkBetID3, SkID1, SkID2, SkID4, SkID5, (SkID3)
Factor 4 (eigenvalue 1.205): SkAM1, SkAM2, SkAM3, SkBetID5, (SkID3)
Factor 5 (eigenvalue 1.012): SkBetID6, (SkBE3r)

The model explains a total of 61.75% of the total variance.

6.4.12 Discussion

This first explorative factor analysis also arrives at a solution with 5 factors. Again, it is striking that not two commitment factors arise, as assumed in the original model, but a global commitment factor (factor 3) and a kind of residual factor (factor 5), on which precisely those items load that had already caused difficulties in the reliability analysis. Unlike in the NRW solution, the item SkBetID3 finds its place in factor 3; instead, the item SkBE3r becomes blurred, with loadings on factors 2 and 5. In the NRW solution, this item loaded on the vocational identity scale intended for it. This reverse-coded item seems to cause the trainees greater difficulties here, or does not have the same significance for their vocational identity as in NRW. Two further model corrections were analysed (exclusion of SkBE3r and SkID3); these analyses contribute to the conclusion that the scales of organisational commitment and work morale in particular should be revised.

6.4.13 Overall Discussion EFA

Both the analysis of the data from NRW and that of the data from Saxony provide a pattern that can best be described by a 4-factor solution. A clear distinction is made between vocational and organisational identity, while there is no separation between vocational and organisational commitment; these forms blur into each other. Work morale can be found in both data sets in a form similar to the one originally assumed, although it is supported sometimes more, sometimes less strongly by further items from the commitment range. The evaluations carried out here suggest that the separation between vocational and organisational commitment should be reconsidered and that, at the same time, some items should be assigned differently. Thus, item (Sk)ID7 appears consistently as part of vocational identity and item (Sk)BetID5 as part of work morale. The other items, some of which have changed their affiliation to a scale, require vocational-pedagogical justification. It should also be examined whether the separation between the forms of commitment should be abandoned. It remains open whether there is really just one type of commitment, as the factor analysis consistently shows, or whether the scales provided so far are not suitable for validly measuring the construct, in which case the problem lies in the construction of the scales.

6.4.14 Considerations for Further Action

The analyses carried out show that the separation between organisational and vocational commitment could not be confirmed on the basis of the item structure analysed. This may indicate that the two concepts are in fact one concept and can be measured within one construct. Equally, the result may mean that the separation is justified in terms of content, but that the items are not able to measure the constructs. It must therefore be justified in terms of vocational education whether a separation of the two concepts can be assumed, what exactly the core points of the concepts are, and at which points they differ from one another. Against this background, new items can then be found and scales developed (→ 4.7: Tables 4.7 and 4.8). Other results relate to work morale. So far, the scale contains only 3 items with only mediocre reliability values. In the EFA, one or more items are additionally assigned to the scale. It should therefore be examined what exactly constitutes the core of work morale and how it can be more clearly distinguished from commitment. Then, further items can be found, and the existing items can be modified (Table 4.9).

6.5 Validity and Interrater Reliability in the Intercultural Application of COMET Competence Diagnostics

The COMET project is an internationally comparative research project on vocational education and training that was initiated in Germany. On the basis of the three-dimensional model of vocational competences, the level of development of trainees and students is diagnosed with regard to their occupational competences, their occupational commitment and their occupational identity (COMET I, II, III). The aim is to carry out comparative studies based on the evaluation of measurement results in various (higher) schools, regions and countries and to achieve a high degree of reliability and validity in a country comparison.

6.5.1 Method

Guillemin, Bombardier and Beaton (1993) point out that existing measuring instruments must always be adapted to the cultural and linguistic differences and similarities involved when they are used in other cultures and in a different language (Table 6.5). China and Germany differ greatly in terms of vocational training system, language and culture. For this reason, the intercultural adaptation of the COMET measuring instruments and evaluation items to vocational training in China involved not only translation but also cultural adaptation. The overall process included preparation, translation, cultural adaptation and performance evaluation. In addition, appropriate rater training was conducted in China to ensure the reliability and validity of the evaluation of task solutions for the open test tasks.

Table 6.5 Different situations of intercultural adaptation (based on Guillemin et al., 1993)

Differences and similarities between the target group of the new measurement and the source measurement | Translation necessary? | Cultural adaptation necessary?
No difference in language, culture and country | Not necessary | Not necessary
Same country and same language, but different culture (e.g. a group that immigrated to the country of the source measurement a long time ago) | Not necessary | Necessary
Different country and different culture, but same language | Not necessary | Necessary
Same country, but different language and culture (e.g. new immigrants in the country of the source measurement) | Necessary | Necessary
Different country, language and culture | Necessary | Necessary

6.5.2 Preparation and Translation

The Chinese working group received the German version of the measuring concept, the measuring instruments and the evaluation items (COMET Vols. I–IV) from the I:BB of the University of Bremen and worked out the implementation concept for the competence measurement in China together with the German colleagues. During translation, the working group used the two-way translation method, in which a text is translated into the foreign language and then translated back into the source language, in order to achieve high semantic equivalence and to preserve the content and meaning of the tasks in the original instruments. The translation work was carried out jointly by the developers of the measuring instruments and concept at the University of Bremen and by the scientific staff of Beijing Normal University and the Beijing Academy of Education.

6.5.3 Cultural Adaptation

In order to ensure equivalence of content on the basis of the semantic equivalence achieved through translation, the Chinese project group had contents such as test tasks and questionnaires checked and adapted by scientific staff, vocational school teachers and practitioners from companies, among other things by organising workshops. The aim was for the formulation of the measurement concept to correspond to Chinese expressive customs and to be adapted to the special features of vocational training in China and to the Chinese examinees, so that raters and test participants can precisely understand the meaning of the measurement tasks, questionnaires and evaluation items. The adaptation of the context questionnaires and evaluation items took place mainly in the initial phase of the project, while the adaptation of the open test tasks was carried out before each test. First, the formulation of the 40 evaluation items was discussed and determined. Because many further and advanced training measures on curriculum development and implementation in work process systems had already been carried out for teachers and lecturers in several previous innovation projects, such as the 'Project to improve the qualifications of teachers at vocational training centres' of the City of Beijing, the teachers and lecturers were able to quickly understand and accept the basic idea of the COMET measurement concept, the measurement model and the 40 assessment items in the workshop. Agreement could also be reached quickly on the formulation of the evaluation items. Furthermore, the workshop participants interpreted the tasks in the context questionnaire in order to understand the measurement intention. For this purpose, the formulations were adapted to Chinese usage.


For example, 'occupation' was replaced by 'subject/training course' (zhuanye) and 'training company' by 'practical training company'. Some questions that do not correspond to Chinese circumstances were deleted. To ensure the international comparability of the results, the original numbering of the test items was retained: for example, questions 6 and 8 were deleted from the German context questionnaire and simply skipped in the Chinese questionnaire (Rauner, Zhao, & Ji, 2010). Before each test, the Chinese working group organised experienced teachers and practitioners from companies to check and adapt the validity of the open tasks and problem-solving scopes proposed by the German side, especially from the perspective of professional validity rather than teaching validity. It turned out that the teachers were able to agree quite easily on the validity of the test task content, owing to their acceptance of the measurement model for recording professional competence. For example, the experts (including teachers and practical experts) examined and discussed four test tasks proposed by Germany in the course of measuring the vocational skills of trainees in electrical engineering subjects in Beijing and quickly reached an agreement. There was agreement that three of the four tasks did not need to be changed and could be taken over directly as test tasks in China; the corresponding problem-solving scopes did not have to be adapted either. Only the understanding of one test task was discussed for a long time: the point was that, at one place in the description of the situation, the task went beyond the scope of electrical engineering, so that the trainees would also be required to work on the task from an interdisciplinary perspective. At the beginning, the experts disagreed as to whether the interdisciplinary element should be retained in the task. However, after carefully interpreting the task with reference to the COMET competence model, a consensus emerged that task and model were in agreement. It was therefore finally agreed to include this task as a test task (Rauner, Zhao, & Ji, 2010). Teachers and practical experts tested and adapted the test tasks on the basis of the COMET competence model and professional practice in China. This also happened in the subsequent tests for trainees in the automotive service sector. The practice of cultural adaptation of the COMET concept as well as of the test tasks and the context survey shows that the basic idea of the work-process-systematic curriculum and the COMET competence model is accepted by the Chinese teachers involved in the project. This indicates that the COMET competence model has a sound scientific basis for achieving corresponding educational goals and guiding principles for international standards in vocational education and training and that an international comparison can thus take place.

6.5.4 Rater Training

In order to ensure the quality of the assessments and of the country comparison of competence measurement, a training concept for raters was developed in the COMET project to increase interrater reliability. The Chinese project group organised rater training courses for all tests in which the teachers were involved in the evaluation work. A very high interrater reliability was achieved with this rater training. The process of rater training includes:
• Presentation of the COMET model of vocational competence, the measuring procedure and the evaluation procedure;
• Explanation of the eight evaluation criteria and 40 evaluation items for the evaluation of the solution examples;
• Rating exercises using real examples, i.e. trainees' solutions were selected as case studies for each task and evaluated by the raters for the exercise. Each exercise included three parts: an individual assessment, a group rating and a plenary discussion.
For the process of rater training and the test in electrotechnical subjects, see COMET Vol. III, 4.2, Tables 4.5 and 4.6.

6.5.5 Analysis of the Reliability and Validity of the Evaluation Items

Based on the aforementioned processes and on the data from three tests, the structural validity of the evaluation items was evaluated by means of an explorative factor analysis, and the reliability of the evaluation items by means of the internal consistency coefficient. The three tests involved 831 trainees in electrical engineering subjects in 2009, 779 trainees in automotive engineering in 2011 and 724 trainees in automotive engineering in 2012 (Zhuang & Zhao, 2012, 46 ff.).

6.5.6 Results

Effect of Rater Training on Increasing Interrater Reliability. The Chinese working group organised a very extensive rater training and came to the following results:
• In the first trial rating, the assessments diverged widely. The raters relied on their subjective teaching experience and not predominantly on the solution scopes. After the importance of the solution scopes as an interpretation aid for the rating had been discussed in the plenary, the degree of agreement increased sharply at the next trial rating.


• Individual independent evaluation, agreement in the group rating, and reports and discussion in the plenary represent a very effective approach to rater training. In the group rating and the plenary discussion in particular, the evaluations of individual raters were presented to the groups and the plenary in tabular form. The evaluation results of the German raters were also shown as reference values. During the discussion, each rater reflected on his or her own assessment in the light of the evaluations of other experienced raters, the groups and the plenary discussion. In the groups and in the plenary session, the main focus was on the evaluation points with major deviations, in order to agree on evaluation standards and evaluations. To this end, those raters whose assessments differed significantly from others were asked to explain and justify their assessment so that the gap between the assessments of the different raters could be reduced (discursive validity).
• In the evaluation of the first two solutions, the agreement of the raters increased significantly. In the evaluation of the third solution, the agreement of the raters had already reached a high level; the interrater reliability (Finnjust) was 0.8 and higher. After the discussion of the fourth solution, the raters had internalised the evaluation points even more deeply and mastered the evaluation standards, so that the interrater reliability remained at a high level. See also Tables 6.6, 6.7 and 6.8.

Table 6.6 Interrater reliability: Rater training for the preparation of competence measurement in electrical engineering subjects (2009)

Pb-Code | Task | Day 1 | Day 2 morning | Day 2 afternoon | Day 3 morning | Day 3 afternoon (Finnunjust values)
H0282 | Signals | 0.41 | – | – | 0.82 | –
H0225 | Signals | 0.54 | – | – | 0.79 | –
H0176 | Drying space | – | 0.80 | – | 0.84 | –
H0234 | Drying space | – | 0.75 | – | 0.80 | –
H0265 | Skylight control | – | – | 0.86 | 0.82 | –
H0102 | Skylight control | – | – | 0.79 | 0.83 | –
H0336 | Pebble treatment | – | – | – | 0.85 | 0.84
H0047 | Pebble treatment | – | – | – | 0.79 | 0.82

Table 6.7 Interrater reliability: Rater training for the preparation of competence measurement in the field of automotive engineering (2011)

Name of the case study | Number of raters | Finnjust
Answer sheets from Chinese trainees on oil consumption | 29 persons | 0.70
Answer sheets from Chinese teachers on winter check | 30 persons | 0.76
Answer sheets from Chinese trainees on winter check | 30 persons | 0.85
Answer sheets from German trainees on winter check | 30 persons | 0.77


Table 6.8 Interrater reliability: Rater training of the competence measurement of trainees in the field of automotive engineering (2012)

Name of the case study | Number of raters | Finnjust
Window lifter 1 | 25 persons | 0.64
Window lifter 2 | 25 persons | 0.79
Liquified petroleum gas | 25 persons | 0.84
Classic car 1 | 25 persons | 0.78
Classic car 2 | 25 persons | 0.80
Glowing MIL | 25 persons | 0.84
Misfuelling | 25 persons | 0.83

Note: The interrater reliability (Finnjust) is satisfactory at values >0.5 and good at values >0.7
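The Finn coefficient reported in Tables 6.6–6.8 relates the observed disagreement between raters to the disagreement expected under purely random rating. A minimal illustrative sketch, assuming ordinal ratings on a four-category scale; the distinction between the Finnjust and Finnunjust variants reported in the COMET volumes is not reproduced here.

```python
import numpy as np

def finn_coefficient(ratings: np.ndarray, n_categories: int = 4) -> float:
    """Finn's r (1970): 1 minus the observed within-target rating variance
    relative to the variance of a discrete uniform distribution over the
    rating categories ((k^2 - 1) / 12 for a k-point scale).

    ratings: (n_targets, n_raters) matrix of ordinal ratings
    """
    ms_within = ratings.var(axis=1, ddof=1).mean()
    chance_var = (n_categories ** 2 - 1) / 12.0
    return 1.0 - ms_within / chance_var

# Example: three raters, five task solutions, ratings on a 0-3 scale.
r = np.array([[2, 2, 3], [1, 1, 1], [0, 1, 0], [3, 3, 3], [2, 3, 2]])
print(round(finn_coefficient(r), 2))  # 0.84, i.e. high agreement
```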

Analysis of the Structural Validity of the Evaluation Items. The COMET competence model assumes that competence at a higher level includes competence at a lower level. The factor analysis is therefore carried out at the level of functional competence (assessment points 1–10), procedural competence (assessment points 11–25) and shaping competence (assessment points 26–40) (cf. Table 6.9). From the result of the factor analysis, it can be deduced that the 10 evaluation items under 'functional competence' can be combined into one factor and the 15 evaluation items under 'shaping competence' into two factors (the 10 evaluation items under 'social compatibility' and 'environmental compatibility' into one factor and the 5 evaluation items under 'creativity' into another). This ensures good structural validity. The 15 evaluation items under 'procedural competence' can be grouped into three factors; four of the five evaluation items of each of the three criteria 'sustainability', 'economic efficiency' and 'business and work process orientation' can be combined into one factor per criterion. Overall, the COMET evaluation items represent a good structure and essentially correspond to the theoretical hypothesis (Table 6.10).

Table 6.9 Factor analysis of the three levels of professional competence

| Functional competence (assessment points 1–10) | Procedural competence (assessment points 11–25) | Shaping competence (assessment points 26–40)
KMO value | 0.957 | 0.910 | 0.934
χ² value of the Bartlett test for sphericity | 14088.433** | 10648.921** | 26649.283**
Number of common factors extracted | 1 | 3 | 2
Share of explained total variance | 70.724% | 63.176% | 61.972%

Note: ** means P < 0.01
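The KMO values and Bartlett tests in Table 6.9 follow standard formulas; as an illustration (not the software actually used for the analysis), they can be computed as follows:

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data: np.ndarray):
    """Bartlett's test of sphericity: H0 = the items' correlation matrix is
    an identity matrix (i.e. no common factors). Returns (chi2, p-value)."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(corr))
    df = p * (p - 1) // 2
    return chi2, stats.chi2.sf(chi2, df)

def kmo(data: np.ndarray) -> float:
    """Kaiser-Meyer-Olkin measure of sampling adequacy: squared correlations
    relative to squared correlations plus squared partial correlations."""
    corr = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(corr)
    d = np.sqrt(np.diag(inv))
    partial = -inv / np.outer(d, d)        # partial correlation matrix
    np.fill_diagonal(corr, 0.0)
    np.fill_diagonal(partial, 0.0)
    return (corr ** 2).sum() / ((corr ** 2).sum() + (partial ** 2).sum())
```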


Table 6.10 Factor analysis for the 15 evaluation items under 'procedural competence'

Item | Component 1 | Component 2 | Component 3
WM11 | 0.386 | 0.001 | 0.501
WM12 | 0.382 | 0.056 | 0.752
WM13 | 0.200 | 0.106 | 0.781
WM14 | 0.800 | 0.012 | 0.232
WM15 | 0.810 | 0.062 | 0.204
WM16 | 0.623 | 0.304 | 0.105
WM17 | 0.133 | 0.866 | 0.043
WM18 | 0.177 | 0.830 | 0.111
WM19 | 0.091 | 0.529 | 0.521
WM20 | 0.384 | 0.713 | 0.241
WM21 | 0.708 | 0.351 | 0.236
WM22 | 0.728 | 0.308 | 0.251
WM23 | 0.434 | 0.394 | 0.359
WM24 | 0.642 | 0.328 | 0.343
WM25 | 0.135 | 0.312 | 0.665

Extraction method: principal component analysis. Rotation method: orthogonal (Varimax) rotation with Kaiser normalisation; rotation converged after 5 iterations.

Analysis of the Reliability of the Evaluation Items. Reliability was analysed for the overall reliability of 'vocational competence', for the three competence levels 'functional competence', 'procedural competence' and 'shaping competence', and for the eight criteria. It was found that the coefficient Cronbach's α for the overall reliability of 'vocational competence' and for the three competence levels is above 0.9 and that the coefficient α for all eight criteria is above 0.8. Overall, the measurement model has a high internal consistency (see also Tables 6.11 and 6.12).

Table 6.11 Reliability analysis for the three assumed competence levels

| Vocational competence | Functional competence | Procedural competence | Shaping competence
α coefficient | 0.956ᵃ | 0.953 | 0.907 | 0.924

ᵃ Item 20 was deleted from the evaluation scale of the 2009 test and item 3 in the 2011 and 2012 tests, which is why the overall reliability of vocational competence is the calculation result without evaluation item 3. Calculation without evaluation item 20 results in α = 0.971


Table 6.12 Reliability analysis of 8 competence dimensions

Competence level | Competence dimension | Alpha value
Functional competence | Clarity/presentation (items 1–5) | 0.910
Functional competence | Functionality (items 6–10) | 0.902
Procedural competence | Sustainability (items 11–15) | 0.899
Procedural competence | Efficiency (items 16–20) | 0.821
Procedural competence | Orientation on work and business process (items 21–25) | 0.917
Shaping competence | Social compatibility (items 26–30) | 0.837
Shaping competence | Environmental compatibility (items 31–35) | 0.828
Shaping competence | Creativity (items 36–40) | 0.907

Discussion. On the basis of the analysis described, the following can be concluded:
1. The intercultural adaptation of the COMET measurement model, test tasks and evaluation items and the rater training in China were very successful. This result supports the assumption that the COMET competence model has a good scientific basis, complies with the rules and training objectives of VET and provides a good basis for a country comparison.
2. The practice of rater training in China shows that the sequence of individual independent evaluation, group rating and collective discussion in the plenary is an effective training concept. The combination of individual, independent evaluation and reflection on the basis of reference values can effectively promote a shared understanding of the rating. The solution scopes can effectively help the raters both in the evaluation and in reaching agreement among themselves. The Chinese raters were able to accept and discuss different opinions and to agree on common evaluation standards. A high interrater reliability was already achieved after the evaluation of two solutions. The evaluation items of the COMET competence measurement have a good structural validity and a high internal consistency.

Chapter 7

Conducting Tests and Examinations

7.1 How Competence Diagnostics and Testing are Connected

When measuring professional competence development in the form of competence levels and competence profiles, competence is recorded as a domain-specific cognitive disposition and not as professional action competence. What is measured is the ability to solve professional tasks through planning and concepts. This means that, so far, the aspect of the practical execution of a task solved in conceptual-planning terms, which is relevant for the examination of professional action competence, remains unconsidered. By defining competence as a cognitive potential, the aspect of the practical implementation of tasks solved in planning and conceptual terms is excluded for research-pragmatic reasons (time scale, costs, etc.). However, these restrictions do not apply to the performance of examinations in accordance with the regulations of the Vocational Training Act. For example, several days (or even weeks, e.g. in the case of industrial clerks and craftsmen who have adhered to the tradition of the journeyman's project) are available for the part of the examination dealing with the 'company assignment' or 'company project work' as central elements of a modern examination, and the examination boards have the appropriate personnel and time resources for conducting and evaluating the examinations. If competence is measured using the COMET method with the aid of computers, instead of the traditional 'paper-and-pencil' test, then for many occupations which do not focus on manual skills it is possible to measure professional skills in the form of computer-assisted competence diagnostics. This is especially true when programme-controlled work systems shape the content and form of the work process in a profession. For occupations in the commercial and administrative employment sector, for example, this situation is just as frequent as for IT occupations. It also applies increasingly to industrial-technical and artistic professions, in which the professional activity is shaped by the computer-, net- and media-supported working environment.


A typical example is the car mechatronics engineer (Schreier, 2000). As a result, the traditional separation between planning, conceptual and executive activities is losing importance for these occupations and the relevant professional fields of action.

For professions and professional fields of activity for which, in contrast, a fundamental distinction is required between the planning and execution of work orders, and for which only the execution of the planned project allows conclusions about the level of competence and the competence profile, the question arises as to the applicability of the COMET measurement procedure for the examination of professional action competence. The size of the step from the planned solution of a task to its practical implementation must therefore be examined. This includes the question of whether and to what extent the professional competence measured according to the COMET test procedure can also be regarded as an indicator of professional action competence.

Regardless of the individual occupational circumstances, 'final examinations' are about checking the qualification requirements laid down in the job descriptions and training regulations. Competence diagnostics, on the other hand, measures cognitive disposition. As the categories 'qualification' and 'competence' are often used synonymously, it was an obvious step to specify them when establishing the COMET competency model (Table 2.1). This categorical differentiation also means that the two forms of testing vocational skills overlap to a greater or lesser extent and that guiding ideas and forms of competence diagnostics can diffuse into examination practice. On the basis of these considerations and circumstances, it will therefore be examined in the following whether the examination part of company assignments and company project work and the form of 'holistic' and 'complex' tasks can be designed as forms of examination of professional competence with the methods of competence diagnostics according to the COMET test procedure.

7.1.1 Reviews of Professional Competence: The New Examination Practice

Answering this question requires an analysis of examination forms which were initially developed and introduced for technical draftsmen and IT occupations (Borch & Weißmann, 2002; Petersen & Wehmeyer, 2001) and then for the reorganised electrical and metal occupations and numerous other occupations. These examples will be used to investigate the similarities and differences between modern examinations and COMET competence diagnostics and whether the COMET competence model can be used to design examinations. According to § 38 of the BBiG, the function of the examination is 'to determine whether the candidate has acquired the professional action competence. The examinee must prove that he/she has the necessary professional skills, the necessary professional knowledge and abilities and is familiar with the teaching material


Fig. 7.1 Examination structure for IT occupations

which is essential for vocational training and which is to be taught in vocational school lessons' (Fig. 7.1). The examination practice, which was initially introduced for IT occupations, provides for central examination parts that represent typical work and business processes, namely:

• an 'in-house project work' (for IT occupations) or an in-house assignment (for other occupations),
• two 'holistic tasks' or a comparable examination format.

In the project work, current topics from the operational activities of the respective field of application or specialist area of the candidate are to be taken up, which should also be usable for the company if possible (Borch & Schwarz, 1999, 24).

The term 'holistic task' is intended to express that the purpose of this form of examination is to test the understanding and knowledge of context as well as the ability to solve professional tasks in full. This form of examination, introduced in the 1990s, met with a predominantly positive response from both trainees and companies at the very first attempt (Fig. 7.2). The majority of the trainees, on average 70%, rated the examination part of company project work as practice-oriented, and the trainees regard the degree of difficulty of these tasks as appropriate.


Fig. 7.2 Evaluation of the in-company project work by the trainees (Petersen & Wehmeyer, 2001)

Approximately 30% consider this part of the examination too difficult, with only a small minority of 5% feeling undertaxed by this form of examination. In contrast to the positive assessment of the practical relevance and the degree of difficulty of company projects, the trainees had considerable reservations about the assessment of their examination performance. Only slightly more than one in three trainees considered the evaluation of the performance of the project work to be fair. This is mainly due to the fact that between 40 and 60% of the trainees saw insufficient agreement between the training and examination contents. This disagreement was seen as particularly large in the implementation and evaluation of the company project work. Approximately 50% of trainees stated that the assessment was not objective (Petersen & Wehmeyer, 2001, 300 f.). The differentiated assessment of this form of examination by trainees provided important indications for the further development of this form of open examination task (Fig. 7.3).

Characteristics of 'Measuring' and 'Testing' Professional Skills

A comparison of this 'holistic' evaluation concept with the COMET test procedure reveals striking similarities at the concept level.


Fig. 7.3 Evaluation of the assessment of the company project work in the final examination by the trainees (Petersen & Wehmeyer, 2001)

7.1.2 Context Reference: Work and Business Processes

In 1984, Horst Kern and Michael Schumann published their research on rationalisation in industrial production in the much-discussed book Das Ende der Arbeitsteilung? (Kern & Schumann, 1984, 738). Behind this question is a thesis that has since found its way into 'work and technology' research as well as into vocational training research, and which was confirmed in the MIT study 'The Machine that Changed the World' (Womack, Jones, & Roos, 1990). Two findings in particular have since had far-reaching effects on the practice and organisation of business processes and on the qualification of specialists for the directly value-adding work processes:

1. The reduction of the horizontal and vertical division of labour strengthens the productivity and competitiveness of enterprises, and consequently
2. the implementation of lean, process-oriented business concepts depends on vocational training that allows tasks and responsibility to be transferred to the directly value-adding work processes (Fig. 7.4).

The concept of process- and design-oriented vocational training is based on five basic pedagogical principles and guiding ideas that take account of technological and economic change as well as the normative pedagogical models of vocational education aimed at understanding, the ability to reflect, personality development and participation. The categories of 'process' and 'process orientation' replace those of


Fig. 7.4 From a function- to a business process-oriented organisational structure

'function' and 'function orientation' and stand for a variety of process-related issues that shape vocational work and training. The complexity of this guiding principle posed considerable difficulties for VET planning and research in its implementation in training and examination regulations. This became apparent when formulating and establishing a competency model as the basis for structuring the examinations.

The 'extended examination' with parts 1 and 2 replaced the traditional intermediate examination; the first part of the final examination, taken half-way through training, has since then been included in the assessment of the final examination with a share of 30–40% (Fig. 7.5). Part 1 of the final examination is a vocational examination that is tailored to the various training occupations and refers to the qualifications that are taught in the first 18 months of dual vocational training. As an overriding reference, 'the ability to plan, carry out and control independently and to act in the overall operational context' is emphasised. 'This ability must be demonstrated at a level appropriate to the level of training achieved.' The 'complex work tasks' examination format is used to 'focus' on the qualifications required for the planning, creation and commissioning of technical systems or subsystems. In addition to assembly, wiring and connection, this also includes functional testing as well as the search for and elimination of faults. 'Included are specific qualifications for assessing the safety of electrical systems and equipment' (ibid., 7).

The candidate receives a list of subtasks for the analysis and execution of the task (see Fig. 7.6). The evaluation of the examination performance is based on this task list: it is checked whether the sub-competences shown in the task list are 'fully', 'partially' or 'not' achieved. The situation description of this form of complex work tasks differs only slightly from the form of the situation description (scenario) of the COMET test tasks. It is striking that when explaining the concept of


Fig. 7.5 Structure of the final examination of industrial electrical professions (BMBF, 2006b, 6)

the 'complex work task', for the processing of which, as with COMET test tasks, 120 min are provided, a gradual reduction of the 'typical' (complex) work tasks is made. In a first step, this takes the form of reducing complexity by means of restrictive partial orders (BMBF, 2006b, 28). In order to implement this concept of reduced professional competence, an 'examination hardware' is specified (ibid., 29). With an abstract test hardware 'automation technology', the typical characteristics of the operational work processes evaporate. Questions about the usefulness and practical value of a technical solution, its environmental and social compatibility and the question of the creativity of the task solution no longer arise, or only to a very limited extent.

For the construction of the 'complex work task with reduced complexity' in this example, defined errors are to be installed 'to ensure that all examinees meet the same basic requirements for troubleshooting. In addition, the evaluators can carry out the evaluation more easily' (ibid., 30). Similar requirements are proposed for other parts of the examination (e.g. planning the wiring and program changes). The intention is to transform the 'complex work task' into numerous subtasks. The solution scope is limited mainly to the criterion, or the partial competence, of functionality. The essential elements of professional competence, as laid down in the training regulations and the COMET competence model, are no longer in view. The operationalisation of the open situation description leads to an examination format guided by subtasks, which does not make it possible to check the professional competence defined in the training regulations on the basis of the assessments of the examination results. The BMBF's handbook points out this problem: 'Unfortunately, it is possible to agree on a procedure for an evaluation that is highly objective and reliable, but still


Fig. 7.6 Work task (BMBF, 2006b, 31 f.): ‘Your supervisor instructs you to plan the changes, carry out the changes in the control cabinet and test them at a pilot plant’

does not cover what is to be covered! It is about the validity of a test. In this respect one must be aware that one can record and evaluate very different performances of an examinee. A central question always remains whether the examination performance recorded is an appropriate indicator of what is to be examined. […]'.

It was shown that examination requirements with a high affinity to the requirement dimension of the COMET competence model are defined in training regulations. When implementing the training objectives and examination requirements in a manageable examination procedure, the question arises as to the quality criteria of the new examination practice. In its evaluation report on the introduction of the IT occupations, the BIBB points to a serious problem in the implementation of the new form of examination: 'However, the examination practice is quite different. In the first intermediate examination,


sixty tasks were set instead of four – a blatant disregard for the training regulations. The "holistic tasks" are also subdivided and partly programmed [multiple-choice tasks] – but in no case holistic. To date, neither the Federal Government nor the Federal States, as supervisory bodies over the Chambers of Industry and Commerce, have prevented the unlawful examination practice' (Borch & Weißmann, 2002, 21). These conclusions derived from the evaluation results for the implementation of the IT reorganisation procedure (Petersen & Wehmeyer, 2001) were obviously taken into account in the reorganisation procedures of the following years.

Refraining from dissolving the 'holistic' tasks into a multitude of subtasks (according to Aristotle: 'The whole is more than the sum of its parts') brought with it a new problem: the evaluation of the very different solutions to 'holistic tasks', or of the quality of the work results of the wide range of different 'operational tasks'. According to the authors of the IHK NRW handbook Der Umgang mit dem Varianten-Modell of 4 February 2010, a 'still unsolved basic problem of this form of examination' is that 'the examination board must arrive at a substantiated statement at the end of the examination on the basis of a written work assignment' (IHK NRW, 2010, 6). With the COMET test procedure, a task format and a rating procedure were developed with which all quality criteria (a high degree of substantive validity as well as an equally high degree of objectivity and reliability) can be achieved. It is therefore natural to investigate whether the COMET test procedure can be applied to examinations and the 'quality' of the new examinations thus increased.

7.1.3 Levelling of Test Results

With the fragmented division of the complex task (the whole) into a structure of subtasks, the solution scope of the task, as initially created in the situation description, is clearly limited. As a consequence, this leads to a levelling of the assessment of the examination performance. High-performing test participants do not have the possibility to make full use of the solution scope given by the open situation description, because the structure of the task solution is precisely specified. Weak trainees, in turn, receive far-reaching support in solving the task through the question-guided presentation of the task. There is thus a risk that the objectively given heterogeneity of the competence characteristics of the participants will be reduced (Fig. 7.7).

Since task-specific assessment criteria are applied in established examination practice, competence development in the form of competence profiles and competence levels can no longer be compared across tasks. In addition, task-specific evaluation items make it more difficult to qualify the examiners and to achieve a sufficient degree of comparability of examination results across examiners. The COMET measuring method offers a solution. In addition to a standardised measurement model (evaluation grid), the development of task-specific solution spaces is a prerequisite (see above). These have the function of enabling the raters (examiners) to interpret the rating items in a task-specific manner. After a rater


Fig. 7.7 Levelling effect of the tests of 453 test participants (COMET electronic technicians) (TS: total point value COMET test group)

training, the raters are able to evaluate the entire range of highly diverse solutions on the basis of the standardised rating scale even without using the solution scopes. Applying the COMET evaluation procedure to the evaluation of examination performance in solving holistic work tasks simplifies the examination procedure and decisively increases the reliability, accuracy and thus the comparability of examination results. At the same time, there is no need to divide the holistic tasks into a multitude of subtasks. The consequence of introducing an objective examination procedure is that the heterogeneity of professional competences is reliably and objectively recorded.

7.2 The Measurement of Professional Competence

For some training occupations, the training companies are free to choose between the 'operational order' version of the examination and the standardised form of the 'practical task'. Practical tasks are created nationwide and are a form of simulation of real work processes. The relevant 'implementation guides' rarely emphasise in any detail that these are two fundamentally different types of examination. The operational order is characterised by its embedding in the social context of a company, the specific competitive situation and the uncertainties of unforeseeable events. At the same time, this complicates a comparable evaluation of operational orders. It is precisely the central characteristic of these orders that they are singular events; this strengthens their authenticity. However, this form of examination poses the challenge of developing selection and evaluation criteria that ensure the comparability of this element of the examination. Using the training regulations for the occupation of electronics technician for automation technology as an example, the BMBF has presented an implementation guide


that makes it easier for companies to decide for or against this examination element. Initially, reference is made to the qualification requirements to be assessed. The candidate should demonstrate the ability to:

1. Evaluate technical documents, determine technical parameters, plan and coordinate work processes, plan materials and tools,
2. Assemble, dismantle, wire, connect and configure subsystems, comply with safety, accident prevention and environmental regulations,
3. Assess the safety of electrical systems and equipment, check electrical protective measures,
4. Analyse electrical systems and check functions, locate and eliminate faults, adjust and measure operating values,
5. Commission, hand over and explain products, document order execution, prepare technical documents, including test reports (BMBF, 2006a, 2006b, 9).

The examination forms 'operational project' (IT occupations) and 'operational order' (electrical occupations) are de facto identical examination forms, even if an attempt is made, with extensive justifications, to construct an artificial distinction between technical competence and process competence. The latter is assigned to the operational order in the examination regulations for electrical occupations: 'This […] should be a concrete order from the trainee's field of application. What is required is not a "project" specially designed for the examination, but an original professional action in everyday business life. However, the operational order must be structured in such a way that the process-relevant qualifications required from the candidate can be addressed and ascertained for evaluation using practice-related documents in a reflecting technical discussion' (BMBF, 2006a, 2006b, 6).

Arguments in favour of this form of examination say that it is about evaluating process competence in the context of business processes and quality management in the overall operational context. The evaluation encompasses the qualities with which the coordination and decisions determining professional action can be perceived in the work processes (ibid., 4). This serves to express that this is not a question of checking for 'professional competence'. A large number of implementation guides and practical examples have since been used in an attempt to impart the separate assessment of process competence and professional competence to examiners and examination boards. This has not been successful, because the experience of centuries of examination practice, from the journeyman's project to the operational order, speaks against this regulation, and there is no vocational pedagogical justification for this concept either. For example, a recommendation for companies and examiners of North Rhine-Westphalia's Chamber of Industry and Commerce responds to the question of whether 'technical questions' are prohibited in the technical discussion following the operational order by stating: 'The central focus of the new examinations for the operational order is on both the "processability" and "technical character", or "process competence" and technical competence. Although so-called "technical questions" are by no means prohibited during a technical discussion, they should directly relate to the operational order' (IHK NRW, 2010, 14 f.).


Fig. 7.8 Evaluation—operational order (BMBF, 2006a, 2006b, 12)

The work performed during the processing of an operational order is evaluated in accordance with the following scheme (Fig. 7.8). A comprehensive set of instruments was submitted for the evaluation of examination performance, which, in its basic structure, is presented here in simplified form (Fig. 7.9). There are major differences in the differentiation of the qualifications to be demonstrated: they range from a few higher-level qualifications to 50 or more individual items (ibid., 21). A procedure to simplify the evaluation was developed by PAL. In this procedure, the individual items assigned to the four higher-level qualifications Q1 to Q4 were combined to form four items (Fig. 7.8, column 2). This is followed by a determination of the degree to which each summarising criterion is met, differentiated into three levels: the examiners give a 'comprehensive' fulfilment of the criterion a score between 8 and 10, an 'incomplete' fulfilment a score between 5 and 7 and an 'inadequate' fulfilment a score between 0 and 4.
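To make the arithmetic of this three-level procedure concrete, the following Python sketch maps the PAL fulfilment levels to their point ranges and totals the four summarised items Q1 to Q4. The function name, the example ratings and the simple summation are illustrative assumptions, not part of the BMBF/PAL documents.

```python
# Illustrative sketch of the three-level PAL scoring described above.
# Point ranges per fulfilment level (cf. Fig. 7.8, column 2).
LEVELS = {
    "comprehensive": range(8, 11),  # 8-10 points
    "incomplete": range(5, 8),      # 5-7 points
    "inadequate": range(0, 5),      # 0-4 points
}

def check_score(level, points):
    """Validate that the awarded points fall inside the range for the level."""
    if points not in LEVELS[level]:
        raise ValueError(f"{points} points outside the '{level}' range")
    return points

# Hypothetical ratings of the four summarised items Q1 to Q4:
ratings = [
    check_score("comprehensive", 9),  # Q1: handling solution variants
    check_score("incomplete", 6),     # Q2: structuring the order into subtasks
    check_score("incomplete", 5),     # Q3: handling obstacles and faults
    check_score("inadequate", 4),     # Q4: documenting and explaining the result
]
print(f"Total: {sum(ratings)} of 40 points")  # Total: 24 of 40 points
```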

Fig. 7.9 Evaluation form for 'operational orders' (BMBF, 2006a, 2006b, 15 ff.). The form pairs each qualification area (Q1 to Q4) and its qualifications to be demonstrated with a summarising PAL rating item; each PAL item is rated as comprehensive (8–10 points), incomplete (5–7 points) or inadequate (0–4 points):

Q1: Accepting orders, selecting solutions (analysing work orders; procuring information, clarifying technical and organisational interfaces; evaluating and selecting solution variants from technical, economical and ecological perspectives). PAL rating item: Handling solution variants.

Q2: Planning work processes (planning and coordinating order processes; defining subtasks, compiling planning documents; considering work processes and responsibilities at the site of operation). PAL rating item: Structuring the overall order according to subtasks.

Q3: Executing the order and examinations (executing orders; testing and documenting function and safety, observing standards and specifications for the quality and safety of systems; systematically searching for and eliminating errors and defects). PAL rating item: Handling obstacles and faults.

Q4: Order conclusion and evaluation, product hand-over (handing over products, providing specialist information, preparing acceptance reports; documenting and evaluating work results and services, invoicing services; documenting system data and documents). PAL rating item: Documenting and explaining work result.

This procedure limits the possibility of achieving a sufficient level of objectivity and reliability in the evaluation of the examination performance. The decisive weaknesses of this examination procedure do not lie in the form of the 'operational order' as such, but rather:

1. In the incomprehensible attempt not to use this traditional form of practical examination (e.g. the 'journeyman's project') officially to examine professional competence, and to restrict it to a supposedly distinguishable 'process competence' instead. This represents a fundamental difference from the successful examination concept for the IT occupations. It is impossible to convey why professional competence should not also be examinable on the basis of an operational order.


2. In the structure of the operational orders and the evaluation of their solutions according to the concept of complete learning and working. The basic theory underlying the holistic solution of occupational tasks for the modelling of vocational action and shaping competence thereby remains unconsidered. The theory of complete action abstracts from the contents of professional work processes, with the consequence that the individual competences to be taught in modern vocational education and training are partly ignored.

Professional competence should enable people to solve professional tasks comprehensively. In this case, 'comprehensively' also refers to the concept of complete action, but above all to the theory of holistic problem solution. And the latter concerns the requirements dimension of professional competence and therefore its development as a competence level and competence profile. It is therefore evident that the further development of this examination concept should be based on the regulations for the IT occupations (operational project). However, the criticism expressed in the evaluation studies that 'artificial projects' are very frequently developed for examination purposes only, far beyond real operational practice, must be taken seriously. At the same time, it should be remembered that the term 'operational project' also includes an element of prospectivity and therefore points beyond existing practice. In contrast, operational 'routine orders' are oriented to existing practice. All too easily, this could establish adaptation qualification as a guiding idea: qualifying for that which exists. However, this would contradict the fundamental change of perspective introduced in VET in the 1980s with the concept of 'shaping competence'. It is therefore a question of examining professional competence, differentiated according to the progressive competence levels of functional, processual and holistic shaping competence.

A comparison of the qualification grid Q1 to Q4 with the COMET competence model shows that the highly differentiated qualification requirements Q1, Q2 and Q4 are reflected in the modelling of the requirements and action dimensions of the COMET competence model. The high affinity that exists between modern training regulations and the COMET competence and measurement model was documented in detail in COMET Volume II. In this respect, it remains to be examined whether and, if so, how Q3 'Executing orders' can be integrated into the COMET test procedure. The following procedure seems suitable here:

1. The items of the rating procedure are suitable throughout not only for the evaluation of the conceptual-planning solution of a test task, but also for the evaluation of the execution of operational orders. In examination practice, the work result (product) is transferred to the client.
2. The result is handed over together with documentation and an explanation of the order (result) as well as, for example, explanations of how the result is to be handled and managed (e.g. in the form of user training).
3. An expert discussion (30 min) gives the candidate the opportunity to justify the solution of his or her task.


Table 7.1 Items on the sub-competence 'Implementing the plan'

Implementing the plan: Was it possible ...
(1) ... to implement the plan in practice?
(2) ... to react appropriately to obstacles and faults?
(3) ... to verify the solution's customer friendliness?
(4) ... to identify and, if necessary, to remedy the errors?
(5) ... to arrange the handover to the client in a customer-oriented manner?

4. In this regard, there is a parallel to the COMET test procedure. As is the case during the technical discussion and during the handover of their work results, the trainees are asked to give comprehensive and detailed reasons for their proposed solutions.
5. As the execution of an operational order (Q3) includes the examination of functional capability and the complete implementation of all standards and regulations (safety, health, environmental compatibility, etc.), the qualification requirements encompass the identification of faults and defects as well as their elimination.

COMET competence diagnostics does not cover this implementation aspect, as it has so far been limited to measuring the conceptual-planning solution and processing of tasks. It therefore makes sense to supplement the COMET competence and measurement model with the partial competence of 'implementing the plan' (cf. Table 7.1). These assessment criteria can be integrated into an appropriately modified assessment scale. This applies in particular to the sub-competences 'clarity/presentation' and 'functionality/professional solutions' (Table 7.2).

7.2.1 COMET as the Basis for a Competence-Based Examination

The relevant examination regulations provide for an examination structure according to which, for example, in part A (operational order/operational project), vocational competence is examined on the basis of an operational order selected as an example, while part B (two complex tasks of max. 2 × 120 min) is aimed at vocational work process knowledge. The duration of the intermediate examination, or part 1 of the examination, is a maximum of 10 h for the complex work task and a maximum of 120 min for a written examination part. Under this examination arrangement, a comprehensive examination of the qualification requirements defined in the training regulations is possible neither at the level of knowledge nor at the level of ability.


Table 7.2 Adaptation of the assessment criteria to the rating or evaluation of operational orders/projects. Each item is rated on the scale: the requirement is in no way met / not met / partly met / fully met.

(1) Clarity/presentation
• Is the presentation form of the solution suitable for discussing it with the client?
• Is the solution presented appropriately for professionals?
• Was it possible to verify the solution's customer friendliness?
• Is the documentation and presentation technically appropriate?
• Was it possible to arrange the handover to the client in a customer-oriented manner?

(2) Functionality/professional solutions
• Was the 'state of the art' taken into account during planning?
• Was it possible to react appropriately to obstacles and faults?
• Was it possible to implement the plan in practice?
• Was it possible to identify and, if necessary, to remedy the errors?
• Are the solution of the assignment and the procedure adequately justified?

The operational work assignment or the operational project work, as well as the holistic work assignment, represents competences or qualification requirements of the vocational fields of action. The ability to work is assessed on the basis of examination tasks which are developed or selected according to the criteria of representativeness and exemplarity. In the case of safety-relevant professional competences (e.g. mastery of the relevant safety regulations for electrical installations), it makes sense to establish a concept of competence diagnostics that accompanies training. The vocational fields of action and learning are suitable for the temporal structuring of an extended examination (cf. BMWi, 2005, p. 46). At the same time, the great advantage of such training-accompanying competence diagnostics is an extended examination with a strong feedback structure, and this is particularly important for the development of vocational competence (cf. BMWi, 2005, pp. 9 and 46; Hattie & Yates, 2015, 61 ff.). This would be the first time that continuous training guidance based on the recording of vocational competence development would be regulated in a binding manner. Such a procedure, involving the examination and testing practice of vocational schools, would not only strengthen the quality of training but also significantly reduce the burden of the time-consuming final examination process. This would make it easier to justify a one-off final examination to measure the level and profile of competence on the basis of characteristic and representative test or examination tasks.


The objectivity and reliability and, at the same time, the content validity of the examination can be realised at a high level on the basis of the COMET competence and measurement model, provided that the examiners are instructed in the evaluation of examination results. Comparability of the examination would be ensured by complex and authentic (content-valid) examination tasks developed in accordance with the COMET competence model and by a standardised rating procedure that satisfies high quality criteria.

Examination Variant of the Examination Structure (Taking into Account the COMET Examination Procedure)

The testing concept is largely based on Recommendation 158 of 12.12.2013 of the BIBB Board Committee (BAnz AT 13.1.2014 S1; hereinafter referred to as 'E 158').

7.2.2 Examination Format for the Extended Final Examination (GAP)

E 158 is used for this purpose: 'Part 1 of the GAP can therefore only deal with competences which are already part of the professional competence to be considered in the final examination' (cf. E 158, p. 11). This recommendation suggests that the same examination format should be used for parts 1 and 2 of the GAP.

The Operational Order

'The operational order is proposed by the company' (E 158, p. 20). It is based on a customer-oriented situational description. Specifications in the sense of a requirement specification as well as question-guided subtasks are to be avoided, since these already represent essential elements of the solution. With the 'translation' of the operational order formulated from the customer's perspective into a specification (requirement specification), a part of the solution would already be given. A clear distinction must therefore be made between the work order specified (and applied for) by the company and the processing of the order (planning, execution, verification of the result) by the candidate in the examination procedure. An appropriate evaluation of the work result as well as of the work and procedure can only be carried out if the candidate has the possibility (1) to document the order planning process as well as its execution and procedure and (2) to justify it comprehensively and in detail and to weigh alternatives against each other.

Table 7.3 Extended final examination (BBiG): Verification of professional competence (according to COMET quality standards) (Example: electronics technician)


E 158 speaks of the 'execution of a complex task typical for a profession': 'The work/procedure and the result of the work are evaluated' (p. 20). Regarding the operational order, it says: 'The work and procedure are evaluated. The results of the work can also be included in the evaluation.' However, this is only possible if the candidate not only documents, but also justifies, the results of their work and procedure. In the test model, the GAP therefore comprises, in both part 1 and part 2, an operational order (or alternatively a 'practical task') with the following examination parts (Table 7.3):

I. • Conceptual-planning solution/processing of the order including its justification (approx. 3–4 h under supervision).
• Subsequent rating of the conceptual-planning solution by the examiners.

II. • Practical implementation of the plan and quality control (approx. 18–30 h).
• Preparation of the documentation and (if necessary) justification of deviations from the plan (approx. 8 h).

III. • Expert discussion (including presentation) (max. 0.5 h).
• Final team rating and determination of the examination result.

Assessment of the Examination Result: Part A (Operational Order/Practical Task)

The planning of the operational order (OO)/practical task (PT) as well as the justification of the proposed solution and the planned procedure are evaluated by the examination board on the basis of the standardised COMET rating scale (Appendix B) in the form of a team rating.

Fig. 7.10 Evaluation of the planning and justification of an operational order/practical task


Expert Discussion

The expert discussion takes place on the basis of the preliminary assessment result and the documentation of the OO/PT. The examiners are therefore able to check whether the candidate 'knows and can do more' than the preliminary evaluation result shows. The competence profile determined in the rating procedure and the documentation of the OO/PT form the basis for the expert discussion (Fig. 7.10). The preliminary evaluation result shows on which points the expert discussion should concentrate: the weaknesses of the solution and of the procedure identified in the rating procedure are addressed again in the expert discussion. The deviations between the planning and the execution of the OO or PT are also a subject of the expert discussion. After the expert discussion, the examiners supplement their assessment with regard to the criteria of the implementation of the plan on the basis of the corresponding positions on the rating scale. In addition, they can correct ratings in the light of the skills shown.

Practical Task

Variant 2 of the practical examination, a 'practical task', can be retained in modified form for companies or candidates for whom the concept of operational orders cannot be implemented. For these cases, documents (e.g. drawings, construction kits) are made available, making it possible to develop 'practical tasks' in the form of situation descriptions (from a customer perspective), taking into account the criteria resulting from the COMET competence model. The rating procedure is then identical to that for the operational orders.

Holistic Tasks

E 158 states: 'The selected examination instrument(s) for an examination segment must enable the candidate to demonstrate performance that meets the requirements by means of coherent tasks' (p. 16). With the help of holistic tasks, competence can be captured and checked at a high level of validity and reliability. The COMET test procedure can be applied here without restriction. Holistic examination tasks are only possible if they are not broken down into subtasks. These tasks are solved through concepts and plans, and the proposed solutions are, as far as possible, to be explained comprehensively and in a differentiated manner. The COMET rating procedure enables an objective, valid and reliable evaluation of the examination performance.

Evaluation of the Task Solutions by the Examiners (Dual or Team Rating)

The OO and, if applicable, the PT are evaluated using evaluation sheet A, and the complex tasks are evaluated using evaluation sheet B (Appendix B).


Table 7.4 Explanation of the rating scale

0 Unmet: if no or no valid information is provided for an item relevant to the examination task.
1 Rather not met: if the information on a criterion is only very general and not situation-specific and practice-related; if this information is based on factual knowledge but has not really been understood.
2 Partly met: if it is evident that the candidate/test participant can name and justify the specific aspects of the solution without, however, weighing them against each other; e.g. if a technologically high-quality solution is proposed without paying attention to both utility value and costs.
3 Fully met: if a solution aspect is not only well justified from a technical point of view, but is weighed against alternative possible solutions and the candidate weighs competing solution criteria according to the situation, e.g. environmental compatibility versus utility value.

The non-applicable evaluation criteria are deleted for each task or OO/PT. The examiners evaluate each of the remaining criteria according to the gradations explained in Table 7.4.
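As a minimal sketch of how this four-step scale could be applied in practice, the following Python fragment averages the 0–3 ratings of one sub-competence while skipping criteria deleted as non-applicable. The item texts, the example ratings and the aggregation into a mean are assumptions for illustration; they do not reproduce the official evaluation sheets A and B.

```python
# Four-step rating scale from Table 7.4 (verbal grade -> point value).
RATING = {"unmet": 0, "rather not met": 1, "partly met": 2, "fully met": 3}

def sub_competence_score(item_ratings):
    """Average the 0-3 ratings of one sub-competence; criteria deleted as
    non-applicable are rated None and excluded from the average."""
    values = [RATING[r] for r in item_ratings.values() if r is not None]
    return sum(values) / len(values)

# Hypothetical ratings for the sub-competence 'implementing the plan' (Table 7.1):
implementing_the_plan = {
    "plan implemented in practice": "fully met",
    "reacted appropriately to obstacles and faults": "partly met",
    "customer friendliness verified": None,  # not applicable for this OO/PT
    "errors identified and remedied": "partly met",
    "customer-oriented handover arranged": "rather not met",
}
print(f"Score: {sub_competence_score(implementing_the_plan):.2f} of 3")  # 2.00
```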

7.2.3 Procedure for the 'Operational Order' (OO) Examination

Application Process

The training company formulates an operational order according to the following criteria:

• The OO is assigned to a vocational field of action or to an examination area that comprises several vocational fields of action.
• The OO must be at the level of employability (professional competence pursuant to the training regulations).
• The description of the OO contains a description of the situation for which a professional solution has to be developed (planned and justified) as well as implemented and checked (controlled).
• When describing the situation, care must be taken to ensure that the criteria of the complete task solution (Table 7.5) are applied.

The training companies and instructors are familiar with the evaluation sheets A and B (Annex 2). This is sensible in itself, because this evaluation concept represents an important didactic instrument for vocational training: the evaluation sheets are suitable for the self-evaluation of competence development in a form adapted to the projects and learning tasks.


Table 7.5 Brief description of the criteria for a complete task solution (industrial-technical professions)

Functionality: The criterion refers to instrumental professional competence and therefore to context-free expert knowledge. The ability to solve a task functionally is fundamental for all other demands placed on the solution of professional tasks.

Clarity/presentation: The result of professional tasks is anticipated in the planning and preparation process and documented and presented in such a way that the client (supervisor, customer) can communicate about and evaluate the proposed solutions. It is therefore a basic form of vocational work and learning.

Sustainability/utility value orientation: Professional work processes and orders always refer to 'customers', whose interest is a high utility value as well as the sustainability of the task solution. In work processes with a high division of labour, the utility value and sustainability aspects of solving professional tasks often evaporate in the minds of employees. Vocational education and training counteracts this with the guiding principle of sustainable problem solving.

Economy/effectiveness: In principle, professional work is subject to the aspect of economic efficiency. The context-related consideration of economic aspects in the solution of professional tasks distinguishes the competent action of experts.

Business and work process orientation: This criterion comprises solution aspects that refer to the upstream and downstream work areas in the operational hierarchy (the hierarchical aspect of the business process) and to the upstream and downstream work areas in the process chain (the horizontal aspect of the business process).

Social acceptability: The criterion primarily concerns the aspect of humane work design and organisation, health protection and, where appropriate, the social aspects of occupational work which extend beyond the occupational work context.

Environmental compatibility: A relevant criterion for almost all work processes, which is not about general environmental awareness, but about the occupational and subject-specific environmental requirements for occupational work processes and their results.

Creativity: An indicator that plays a major role in solving professional problems. This also reflects the very different scope for design that the solution of professional tasks offers depending on the situation.

The assessment criteria (rating items) that are not relevant from the company's perspective are marked when applying for an OO.

• The solution space to be specified by the client must define the cornerstones for possible solutions. It is not an ideal solution proposal: solution variants must remain possible which have as high a utility value as possible in line with the situation description.
• The situation/order description also includes an overview of the technical and business management options available in the company that are necessary for the performance of the OO. This includes information on ordering and procurement procedures.
• The applicant estimates the duration of the OO.


Approval of the Application

The examination board approves the application taking into account:

• the job description,
• the criterion of professional competence (complexity of the order and the concept of a complete (holistic) task solution),
• the solution space, if necessary making corrections with reference to the competence model, and
• the duration of the project.

Procedure for the OO Examination (Fig. 7.11)

Part 1: The OO is solved conceptually, following a plan, and the solution is justified in detail. This part of the examination lasts 3–4 h and takes place under supervision. The candidate can use the Internet and the relevant work documents provided by the company.

Part 2: On the basis of the justified solution variant and procedure, the candidate prepares the implementation of the plan (orders and other preparatory measures). The available time varies, as it depends very much on the type of OO. This is followed by the implementation of the plan in practice as well as quality control and documentation of the solution. In parallel, the team rating of the candidate's justified solution proposal is carried out in accordance with the COMET evaluation procedure.

Part 3: Expert discussion. The expert discussion is initiated by a short presentation of the results of the work, focusing on the implementation of, and any deviations from, the plan and on an assessment of the quality of the work result and the procedure. The subsequent expert discussion includes questions and discussions:

Fig. 7.11 Examination procedure for the operational order (OO)

216

7 Conducting Tests and Examinations

• on the basis of the rating result,
• on deviations from the plan,
• on alternative solutions.

Finally, the examination board supplements and, if necessary, corrects its ratings and determines the examination result for the OO: the level of competence achieved and the overall score or the examination mark for this part of the examination.

7.2.4 The Examination Result

Each candidate receives a printout of their competence profile, a brief description of their level of competence and the total score (TS) of their task solutions (Fig. 7.12). The homogeneity of the competence profile is given as the coefficient of variation V (Table 7.6).

Total Score (TS)

The total score results from the addition of the individual values for the eight sub-competences. It is a rough indication of the level of competence achieved. Candidates can compare their TS with that of their examination group/class to see where their performance stands relative to the other trainees. An accurate TS takes into account the degree of homogeneity of the competence profile (Fig. 7.13). The example in Fig. 7.13 shows that, taking the competence profiles into account, the same raw TS results in two different (corrected) TS(k) values. This means that the competence levels of the two candidates differ: two candidates with the same raw TS of 42 can reach two different competence levels.
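The arithmetic behind the raw TS and the homogeneity measure can be illustrated with a short Python sketch: the raw TS is the sum of the eight sub-competence scores, and V is the standard deviation of the profile divided by its mean, graded according to the bands in Table 7.6. The example scores and the use of the population standard deviation are assumptions for illustration; the correction of the raw TS to TS(k) is not reproduced here.

```python
import statistics

# Homogeneity bands from Table 7.6 (upper bound of V -> label).
HOMOGENEITY = [
    (0.15, "very homogeneous"),
    (0.25, "homogeneous"),
    (0.35, "less homogeneous"),
    (0.50, "inhomogeneous"),
    (float("inf"), "very inhomogeneous"),
]

def profile_summary(sub_scores):
    """Return the raw TS, the coefficient of variation V and its label."""
    ts = sum(sub_scores)
    # V = standard deviation / mean (population standard deviation assumed).
    v = statistics.pstdev(sub_scores) / statistics.mean(sub_scores)
    label = next(lbl for bound, lbl in HOMOGENEITY if v <= bound)
    return ts, v, label

# Hypothetical competence profile: eight sub-competence scores (K1 to K8).
ts, v, label = profile_summary([7, 6, 5, 6, 5, 4, 5, 4])
print(f"raw TS = {ts}, V = {v:.2f} ({label})")  # raw TS = 42, V = 0.18 (homogeneous)
```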

Conclusion

The application of the COMET examination concept for the implementation of examinations in vocational education and training offers several advantages:

1. The COMET competence and measurement model provides an interdisciplinary procedure for the selection and formulation of holistic examination tasks and operational orders. The content dimension of the competency model must be concretised in each case by the vocational fields of action that are relevant for the contextual description of employability.
2. This also makes examinations based on the ground-breaking examination concept of the operational order comparable on a supra-regional and interdisciplinary basis.
3. The introduction of a scientifically based examination format such as this would considerably simplify the communication between all those involved in vocational education and training.


Fig. 7.12 Documentation of the examination performance

Table 7.6 Degrees of homogeneity of competence profiles measured as coefficient of variation V

V < 0.15: very homogeneous
V = 0.16–0.25: homogeneous
V = 0.26–0.35: less homogeneous
V = 0.36–0.5: inhomogeneous
V > 0.5: very inhomogeneous


Fig. 7.13 Correction of the raw TS values (comparison of the competence characteristics of two commercial occupations)

4. The integrated review of in-company and school-based training would also be solved, as the COMET competence model represents vocational training as a whole. At the same time, the specific contributions of the learning locations to achieving and verifying employability can be identified.
5. The examination results based on the COMET competence model also reveal the strengths and weaknesses of the training performance of the learning locations. The examination results therefore provide a good basis for educational guidance and for quality assurance and quality development in vocational education and training.
6. The evaluation of examination performance based on the COMET competence and measurement model leads to the development of common evaluation standards. This examination practice should prove to be a form of informal rater training for examiners and should be supported by introducing the examiners to the new examination practice.
7. A high degree of interrater reliability (consistency of examiners in the assessment of examination performance) can be achieved by dual or team rating, a procedure tried and tested in COMET projects.

The significance of the examination results obtained in accordance with this examination concept is significantly higher than that of conventional examinations. Not only a score is reported, but also:

• the level of competence achieved,
• the competence profile.

In addition, this examination form satisfies the established quality criteria of competence diagnostics.

7.3 Interrelationship Analyses Between Examinations and Competence Diagnostics for Automotive Mechatronics Technicians

7.3.1 Comparison of the Examination and (COMET) Test Results

In the context of the feasibility study 'COMET testing' (implications of the COMET test procedure for achieving a higher quality (validity, objectivity and reliability) of final examinations according to the Vocational Training Act), which was carried out in cooperation with the COMET project (NRW) and the Chamber of Industry and Commerce (NRW), it made sense to carry out a case study in which the examination results of 96 candidates (motor vehicle mechatronics technicians) were compared with their test results in the COMET test. A correlation-statistical analysis was performed, which made it possible to map the relationship between two variables at a time. To investigate the interrelationships, differentiated scores are available both from the final examinations and from the COMET test. The following final examination values were used for a differentiated analysis:

• mean value from the practical examination part,
• mean value from the written part of the examination,
• the total examination score.

The COMET scores can be divided into the following areas according to the competence model:

• score for the competence level functional competence (FC),
• score for the competence level procedural competence (PC),
• score for the competence level (holistic) shaping competence (DC) and
• total score (TS).

7.3.2 Results of the Statistical Influence Analysis

Correlations can be used to map relationships between two characteristics, with the correlation coefficient r quantifying the strength of the relationship. Correlations from r > 0.2 to r < 0.4 are considered weak, mean correlations lie at 0.4 < r < 0.6 and a strong correlation is indicated from r > 0.6 (cf. Brosius, 2004). For the data of the automotive mechatronics technicians, the two test instruments were first examined separately. It is evident that the elements of the chamber examination (practical part, written part and overall assessment) are strongly interrelated and therefore coherent. The areas of the COMET test are also closely related and therefore measure the same construct. Each test forms a coherent unit in


its own right. A comparison of the two tests revealed only a weak correlation of r = 0.25 (p < 0.05). The differentiated analyses of individual elements of the two tests yielded the following results.¹ The score from the practical examination correlates:

• strongly with the score from the written examination (r = 0.63; p < 0.01),
• not with the total score (TS) COMET (r = 0.17; not significant),
• weakly with functional competence (FC: r = 0.25; p < 0.05; PC: r = 0.2),
• not with the level of procedural competence (r = 0.17; not significant) and
• not with the competence level of shaping competence (DC) (r = 0.06; not significant).

Although the result from the written part of the final examination correlates only at a weak level, it is nevertheless significantly correlated with the COMET values for:

• TS (r = 0.29; p < 0.01),
• functional competence (FC) (r = 0.31; p < 0.01),
• procedural competence (PC) (r = 0.28; p < 0.01).

The degree of correlation between the score of the written examination and the score of the COMET shaping competence (DC) is very weak and statistically insignificant; the calculated correlation coefficient r can represent a random relationship. The overall result of the examination correlates at a low level with the COMET values for:

• TS (r = 0.25; p < 0.05),
• FC (r = 0.29; p < 0.01) and
• PC (r = 0.24; p < 0.05).

There is no demonstrable link between the overall result of the final examination and the (holistic) shaping competence identified in the COMET competence model (r = 0.12, not significant).
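For readers who want to reproduce this kind of analysis on their own data, the following Python sketch computes pairwise Pearson correlations and classifies them using the strength bands cited above (cf. Brosius, 2004). The file name and column names are hypothetical assumptions; the original study data are not reproduced here.

```python
# Minimal sketch of the reported correlation analysis, assuming the scores
# are available as columns of a CSV file with one row per candidate (n = 96).
# The file name and column names are hypothetical, not taken from the study.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("exam_vs_comet.csv")

pairs = [
    ("practical_exam", "comet_ts"),
    ("written_exam", "comet_fc"),
    ("written_exam", "comet_pc"),
    ("written_exam", "comet_dc"),
    ("overall_exam", "comet_ts"),
]

for a, b in pairs:
    r, p = pearsonr(df[a], df[b])
    # Strength bands as used in the text (cf. Brosius, 2004).
    strength = ("strong" if abs(r) > 0.6 else
                "mean" if abs(r) > 0.4 else
                "weak" if abs(r) > 0.2 else
                "negligible")
    print(f"{a} vs {b}: r = {r:.2f} (p = {p:.3f}, {strength})")
```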

Interpretation of the Results

Overall Result (Final Examination): Total Score (COMET)

The weak positive correlation between the overall score of the final examination and the COMET test result indicates that higher scores in the examination tend to be accompanied by higher scores in the COMET test (see Fig. 7.14).

¹ All calculations without extreme values, i.e. written part (chamber) > 0, practical part (chamber) > 0, overall mark (chamber) > 5, functional competence > 0, procedural competence > 0, shaping competence > 0 and TS > 5.


Fig. 7.14 Relationship between examination result (chamber examination) and total score COMET, without extreme values (TS > 5 and overall score > 5; r = 0.25, p < 0.05; R² = 0.09)

The final examination of the chamber as a whole can therefore be partially represented by the COMET examination. However, only 9% of the variation of one test’s values can be explained by the other.

Practical Part of the Examination

It can first be assumed here that this part of the examination hardly correlates with the values of competence diagnostics, as COMET is limited to measuring conceptual-planning competence. The correlation values to the TS with r = 0.17 (not significant), to the competence level of functional competence (FC) with r = 0.21 (p < 0.05) and to procedural competence (PC) with r = 0.17 (not significant) show that there is almost no correlation. Due to the divergence in content between the two forms of examination (practical chamber examination versus written COMET test), an interpretation of the results only partly yields added value. In the practical examinations of the chambers, the trainees are asked to implement the theoretical problem solution, while the COMET test asks them to formulate the solution in writing. This is where translation errors can occur: the purely cognitive solution of a problem does not necessarily mean that the actions are performed according to the calculated procedure.


Fig. 7.15 Relationship between the written examination (chamber examination) and functional competence (COMET), r = 0.31, p < 0.01, R² = 0.1 (without extreme values)

Written Examination
As expected, the most pronounced correlations with the COMET test are found for this part of the examination. This applies above all to functional competence (FC) with r = 0.31 (p < 0.01). A high score in the written part of the final examination therefore goes hand in hand with a high score in the functional competence area of the COMET test.² This weak but still significant correlation is shown in Fig. 7.15. The correlation with procedural competence (PC) is somewhat weaker with r = 0.28 (Fig. 7.16). In contrast, with r = 0.17 (not significant) there is no correlation with shaping competence (DC) (Fig. 7.17).

² Conversely, an upstream COMET test at the level of functional competence would be good preparation for achieving high scores in the written part of the final examination.


Fig. 7.16 Relationship between the written examination (chamber examination) and procedural competence (COMET), r = 0.28, p < 0.01, R² = 0.08 (without extreme values)

Fig. 7.17 Relationship between the written examination (chamber examination) and shaping competence (COMET), r = 0.17, not significant, R² = 0.02

7.3.3 Conclusion

The higher the score in the written examination, the higher the score for functional and procedural competence, and vice versa. Higher scores in the written examination do not, however, indicate holistic shaping competence; this is not covered by the examination. This context analysis therefore confirms the findings of the empirical surveys cited above: the objectives of process-oriented and competence-oriented vocational education and training anchored in the training regulations are not covered by examination practice. With the COMET test format, this deficit can be eliminated for the written part of the examinations. Applying the COMET examination procedure to the entire examination requires a supplemented competence model. The application of the COMET examination concept for implementing examinations in vocational education and training has a number of advantages.

1. The COMET competence and measurement model is an interdisciplinary procedure for selecting and formulating holistic examination tasks and organisational orders. The content dimension of the competence model must be concretised in each case by the vocational fields of action that are relevant for the description of the content of employability.
2. On this basis, examinations also become comparable on a supra-regional and cross-occupational basis for the ground-breaking examination concept of the company mandate.
3. This would introduce a scientifically based examination strategy that would greatly facilitate the understanding of all those involved in VET.
4. The integrated review of in-company and school-based training would thus be solved, since the COMET competence model represents vocational training as a whole. At the same time, the specific contributions of the learning locations to achieving and verifying employability can be identified.
5. The examination results based on the COMET competence model also reveal the strengths and weaknesses of the training performance of the learning locations. The examination results thus provide a good basis for training guidance and quality assurance in training.
6. The evaluation of examination performance on the basis of the COMET competence and measurement model leads to the development of common evaluation standards. This examination practice should prove to be a form of informal rater training for examiners. This should be supported by introducing the examiners to the new examination practice.
7. A high degree of interrater reliability (consistency of examiners in the assessment of examination performance) can be achieved by dual rating (two examiners), a procedure tried and tested in COMET projects (see the sketch after this list).
8. The informative value of the examination results according to this examination concept is significantly higher than that of conventional examinations. It identifies not only the score but also the level of competence achieved and the competence profile. In addition, this examination form satisfies the established quality criteria of competence diagnostics.
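How consistently two examiners rate the same performances (item 7 above) can be checked with simple agreement statistics. A minimal sketch with hypothetical dual ratings on the 0–3 item scale; exact agreement, correlation and mean absolute difference are illustrative choices, not the official COMET reliability measure:

import numpy as np

# Hypothetical item ratings of the same task solutions by two examiners
rater_a = np.array([2, 3, 1, 0, 2, 2, 3, 1, 2, 3, 1, 2])
rater_b = np.array([2, 3, 2, 0, 2, 1, 3, 1, 2, 3, 0, 2])

exact = np.mean(rater_a == rater_b)        # share of identical ratings
r = np.corrcoef(rater_a, rater_b)[0, 1]    # correlation across items
mad = np.mean(np.abs(rater_a - rater_b))   # average rating distance

print(f"exact agreement: {exact:.0%}, r = {r:.2f}, mean |diff| = {mad:.2f}")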

7.4 Measuring the Test Motivation

The influence of test motivation on test behaviour and test results is the subject of different and sometimes contradictory research results. As part of the first PISA test (2000), an additional experimental study was conducted in Germany in order to gain insights into the test motivation of different groups of pupils (distinguished by school types) and the influence of various external incentives on test motivation. The overall result of the experiment is that the different experimental groups do not differ in their willingness to make an effort (Baumert et al.: PISA 2000, 60). The results of the study also suggest that external incentives should not be used, as their effects are negligible. Differences in test motivation between 'Hauptschule' (general school in Germany offering lower secondary education) pupils and 'Gymnasium' (high school in Germany offering higher secondary education) pupils could not be established in this study. Regardless of the type of school, the effort was relatively high throughout, and the various incentives had no significant influence on the test results (ibid., p. 27 ff.). In the first COMET project, a motivation test was therefore not included. However, test practice then suggested that test motivation should be considered as a context variable in the second test as part of the longitudinal study.

7.4.1 Preliminary Study: The Time Scope of the Test as an Influence on Test Motivation

Based on the experience of examination practice in dual vocational training and in final examinations at technical colleges, a longitudinal study with a cross-over design was selected for the COMET Electronics Engineer project (Bremen, Hesse 2007–2009) (Fig. 7.18). Under this design, each test participant had to solve two complex test items at each of the two test times.

Fig. 7.18 Cross-over design for the use of test items in longitudinal section (COMET Vol. I, 144 f)


The test comprised a total of four complex test items (COMET Vol. I, 144 f.). Each test participant had to solve two test items, with a maximum of 120 min available for each test item. After the first test of the one-year longitudinal study, the observations of the teachers involved already indicated that the motivation to work on the test items played a greater role than had initially been assumed. It was therefore examined how the test time of 2 × 120 min was used by the test participants and what the proportion of test refusers was. This yielded first clues regarding the test motivation of the test participants and how to record it.

In a pre-test, it was first examined whether there was a systematic drop in performance when processing the second complex test item. In the evaluation of the test results, a distinction was made between trainees in their second and third year of training and students from technical colleges. The experiences of the first phase of the study already suggested that the motivation to complete the test items played a greater role among the various groups of trainees than assumed. On the one hand, reports by teachers indicated that test motivation among vocational school students varies. On the other hand, some of the trainees only partially exhausted the test time of 2 × 120 min; some pupils did not seriously complete the test items, and they form the group of test refusers.³

Based on these findings, test motivation and test behaviour were recorded at the second test time (March/April 2009), whereby the formulation of the questions is based on PISA test practice (Kunter et al., 2002). In addition, the teachers conducting the tests answered questions on test motivation in the classroom and on the working atmosphere; the results can be used for comparison at class level. Furthermore, the comparison of processing time and test result for the first and second test items allows conclusions to be drawn about the course of motivation during the test.

Results of the Survey on Test Motivation (cf. COMET Vol. III, Sect. 7.7)
The general interest of trainees and students in the COMET test items varies greatly. More than half of the electronics engineers found the first test item interesting to very interesting (55%). This figure is even higher for electronics technicians specialising in energy and building technology (61%) and for students of technical colleges (60%). Overall, all test groups worked on the first task in a concentrated (73%) and careful (65%) manner. On the other hand, it is noticeable that almost every second electronics technician for industrial engineering states that they are less motivated to work on the second task than on the first; this is stated by only every fifth electronics technician specialising in energy and building services engineering and technical college student (Fig. 7.19).

³ A refuser is a participant who achieved a total score of less than five points or who completed both test items together in less than 60 min. In Hesse in 2009, 24 participants were identified as refusers according to this definition, of which ten were E-EG trainees (7%) and fourteen E-B trainees (6%).


Fig. 7.19 Frequency distribution according to test groups: 'I am (a) less motivated, (b) similarly motivated, (c) more motivated to work on the second test item than on the first test item'

Comparison of the Results of the First and Second Test Item
In this context, by comparing the test results of the two test phases (2 × 120 min for two test items), it is possible to examine whether and for which test groups there are significant differences in the test results between the first and second test item. If the test result of a test group is worse for the second test item, this can be interpreted as an indication of decreasing test motivation. This effect is not present for all test groups. For the apprentice electronics technicians specialising in energy and building services engineering, there is no difference between the results of the first and the second test item. This may be because this group has a rather low overall test level. The electronics technician trainees for industrial engineering achieve a significantly better result in the first task than in the second: 15% achieve the highest level of competence in the first test item and only 6% in the second (cf. Fig. 7.20). This also corresponds to the lower motivation of this test group for the second test item described above. For the first task, the risk group is only 10%; for the second task, this figure rises to 23% (cf. Fig. 7.21).


Fig. 7.20 Competence level distribution of the group of electronic technicians for industrial engineering (Hesse), comparison of the results based on the first and second test item 2009

Fig. 7.21 Competence level distribution of the group of technical college students (Hesse), comparison of the results based on the first and second test item 2009

Here, too, a test for mean value differences⁴ shows that the average total score for the first task is significantly higher than for the second task. There is thus a considerable loss of motivation, especially among weaker students. Higher-performing students improve from the first to the second test item; the majority, however, do slightly worse than in the first task. Figure 7.22 illustrates this effect.

⁴ t-test for dependent samples.


Fig. 7.22 Scatter diagram comparing the results for the first and second task (Hesse, electronics technicians for industrial engineering, 2009, n = 297)

Each cross in the diagram represents a trainee; the axes show the total scores achieved for each of the two tasks, and the horizontal and vertical lines mark the average total score (26.5) for both tasks. The diagonal line divides the graphic into two parts: the top part (A) shows the trainees who did better in the second task than in the first, and the bottom part (B) shows those who did worse in the second. Part B contains significantly more participants (63%).

Comparison of the Processing Time for the First and Second Test Item
The recording of the processing time also allows an assessment of the extent to which motivation decreases in the course of the test. For the first test item, the test participants worked an average of 100 min, for the second only 83 min. This can be interpreted as fatigue or decreasing motivation. However, a shorter processing time cannot be attributed exclusively to a lack of motivation from the start of the test; it must also be considered that an excessively demanding task can lead a participant to end the test early. This is contradicted, however, by the fact that there is only a small correlation between the test result and the processing time. This pre-test on the relationship between test motivation, processing time and test results led to the decision to reduce the test scope to a single test item per participant. Only in subsequent projects was test motivation included more comprehensively in the context analysis as a determinant of competence development.
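The paired comparison used throughout this subsection (footnote 4: t-test for dependent samples) can be sketched as follows, with hypothetical scores of the same trainees on both test items:

import numpy as np
from scipy.stats import ttest_rel

# Hypothetical total scores of the same trainees on item 1 and item 2
item1 = np.array([31.0, 28.5, 40.2, 22.0, 35.8, 27.4, 30.1, 25.5])
item2 = np.array([27.5, 29.0, 36.1, 18.2, 33.0, 24.8, 28.3, 21.9])

t, p = ttest_rel(item1, item2)
print(f"mean item 1: {item1.mean():.1f}, mean item 2: {item2.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.3f}")  # a significant drop suggests waning motivation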

7.4.2 Explorative Factor Analysis of the Relationship Between Test Motivation and Test Performance

Capturing the Test Motivation
When recording test motivation, a distinction is made between primary and secondary motivational aspects. The primary motivational aspects are:

• The assessment of the occupational relevance of the test items. It is assumed that for test participants with a developed professional identity, the occupational relevance of the test items has a motivating effect on the processing of the test items.
• The benefit of the test items. The evaluation of the benefit of the test items results from the assessment of the test participants that participation in the test has a positive effect on training.
• The interest in task processing. This motivational aspect is based on the two other primary motivational aspects on the one hand and on the interest in the content of the tasks on the other.

The secondary motivational aspects are:

• commitment,
• concentration,
• diligence and
• task-solving effort

(cf. the test motivational model in Fig. 7.23). The primary motivational aspects represent the evaluation of the test items as relevant for vocational education and training, without this already being associated with a willingness to make an effort in processing the test items. If, for example, a test is conducted just before a final examination, this may lead to a lack of interest in the test, as it is perceived as a disruption of exam preparation. The test motivation is then impaired without affecting the evaluation of the occupational relevance of the test items and their basic benefit for vocational training. The interest in the test items or the test results from the occupational relevance and, at the same time, from the benefit of the test for the training as well as, if applicable, for examination preparation. The secondary motivational aspects result from the primary motivational dimension; the four secondary aspects represent different facets of the willingness to make an effort.

The recording of the processing time for the solution of the test items can be regarded as a dimension of test motivation, as shown by the study cited above. At the same time, it is immediately evident that the processing time is also an indicator of the competence level: the test results show that more efficient test participants use the available test time (120 min) to a greater extent than less efficient ones. The processing time is therefore an indicator of both the competence level and the test motivation.


Questionnaire for Recording Test Motivation

Dear trainees,
We would like to hear from you how you assess the test task you have worked on. For this purpose we would like to ask you for some information. Then please place this sheet in the envelope provided for your task solution. Thank you very much for your cooperation!

How long did you work on the test task?
(less than 1/2 hour; 1/2–1 hour; 1–1 1/2 hours; 1 1/2–2 hours)

The following statements are rated on a five-point scale from 'fully disagree' via 'rather disagree', 'undecided' and 'rather agree' to 'fully agree':
• The processing of the test task was very interesting.
• These types of test items are very useful.
• The test task has a lot to do with my job.
• I worked on the test task with a lot of concentration.
• I worked on the test task very diligently.
• I put a lot of effort into processing the test task.

For things that are very important to you personally, you make a special effort and give your best (e.g. sports, hobbies, ...). In comparison, how much effort did you put into the test task? (Please mark with a cross on a scale from 1 = minimum effort to 10 = maximum effort.)

Fig. 7.23 Item structure and test motivation


The Test Motivational Model: Data Structure of Motivational Variables in the COMET Test Procedure
In the previous COMET projects, test motivation was analysed on the basis of the individual items. An exploratory factor analysis was carried out to check whether connections between the observable motivational aspects can be explained by superordinate dimensions. This makes it possible to uncover non-observable (latent) dimensions that may underlie the observable items. The test motivational model is based on the hypothesis that a perceived occupational reference, benefit and interest in the submitted test items lead to careful, concentrated processing.

Sample
The data were collected during a COMET test of second- and third-year nursing students from a total of six locations in Switzerland (Aarau, Basel, Bern, Lucerne, Solothurn, Zurich/Winterthur). A total of N = 477 persons took part in the survey, 87% of whom were female (n = 417). The items used in the motivation questionnaire (Table 7.7) were subjected to an explorative factor analysis intended to reveal the underlying structure of the items. Due to the correlative character of the items, the assumption is made that possible factors also correlate with each other. Accordingly, a direct (oblique) rotation is used for the factor analysis, which allows for a possible correlation between the factors. As a result, two factors or motivational dimensions can be extracted. The factor loadings are listed in Table 7.8. Factor 1 consists of the items Commitment, Diligence, Concentration and Effort. The statements on Interest, Meaningfulness and Occupational relevance load on factor 2. With regard to the content of the items (cf. Fig. 7.24), factor 2 can be interpreted as meaningfulness (primary motivation): the factor describes the benefit for the professional future identified in the test items and links an interest with them.

Table 7.7 Test instruments for the first and second test time of the first COMET project (COMET Vol. II, 41)

Testing instrument: Use from test time
Open test items: t1 (2008)
Context questionnaire: t1 (2008)
Questionnaire on test motivation for trainees: t2 (2009)
Teacher questionnaire on the test motivation of trainees: t2 (2009)
Rater questionnaire on the weighting of competence criteria: t2 (2009)
Test of basic cognitive ability (subtest 'figure analogies' of the cognitive ability test (CAT)): t2 (2009)


Table 7.8 Results of the factor loadings of the motivational items on the extracted factors (data: nursing staff Switzerland 2014, N = 477)

Item: M, SD, rit, factor 1, factor 2
Commitment: 2.48, 1.00, 0.78, 0.89, 0.01
Diligence: 2.44, 0.95, 0.74, 0.87, 0.01
Concentration: 2.58, 1.01, 0.71, 0.82, 0.03
Effort: 4.86, 2.22, 0.67, 0.74, 0.04
Interest: 2.44, 1.04, 0.65, 0.03, 0.83
Meaningfulness: 2.37, 1.02, 0.6, 0.02, 0.77
Occupational reference: 2.99, 1.11, 0.36, 0.02, 0.49

Comments: factor loadings > 0.30 are marked bold; Bartlett test: χ² = 1709.19 (df = 21), p < 0.001; Kaiser–Meyer–Olkin (KMO) measure = 0.86. N = sample size; M = mean value; SD = standard deviation; rit = selectivity.

Fig. 7.24 Results of the explorative factor analysis (data: nursing staff Switzerland 2014, N = 477); r = correlation coefficient; a = factor loading

Factor 1 describes the behaviour when processing the test items and is referred to as investment (secondary motivation); the term investment refers to the motivational resources used during processing. The two factors have a medium positive correlation (r = 0.66) and together explain 62% of the overall variance.
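The two-factor extraction with an oblique rotation can be reproduced with standard tooling. The following sketch simulates item data from two correlated latent factors (loadings and factor correlation taken from Table 7.8 and Fig. 7.24) and assumes the factor_analyzer package with an oblimin rotation, which is not necessarily the software used in the original study:

import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(3)
n = 300
investment = rng.normal(0, 1, n)                          # latent factor 1
meaningful = 0.66 * investment + rng.normal(0, 0.75, n)   # latent factor 2, r ~ 0.66

def item(latent, loading):
    """One observed item: loading on its latent factor plus unique noise."""
    return loading * latent + rng.normal(0, 0.6, n)

items = pd.DataFrame({
    "commitment": item(investment, 0.89),
    "diligence": item(investment, 0.87),
    "concentration": item(investment, 0.82),
    "effort": item(investment, 0.74),
    "interest": item(meaningful, 0.83),
    "meaningfulness": item(meaningful, 0.77),
    "occupational_reference": item(meaningful, 0.49),
})

fa = FactorAnalyzer(n_factors=2, rotation="oblimin")  # oblique: factors may correlate
fa.fit(items)
print(pd.DataFrame(fa.loadings_, index=items.columns).round(2))
print("cumulative explained variance:", fa.get_factor_variance()[2][-1].round(2))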

The Connection Between Test Motivation and Test Performance
The studies cited above already indicated a connection between test motivation and performance. With regard to the content of the two factors, it is reasonable to assume that meaningfulness functions as the primary motivational dimension and investment as the secondary one, as people who feel that the test makes sense presumably also invest more in test processing. The mediator analysis described below was carried out in order to investigate further the relationship between the two extracted motivational dimensions and their relationship to test performance.


Question and Hypothesis
Based on the results of the factor analysis described above, it was examined whether the two motivational dimensions make an explanatory contribution to the COMET test performance (measured as the total score (TS)). The following hypothesis was formulated for the statistical investigation: the motivation factors meaningfulness and investment causally explain the competence performance in the COMET test procedure; as a mediator, the investment factor mediates the connection between the meaningfulness factor and the performance (TS) in the COMET test procedure.

Method
In order to investigate the hypothesis statistically, a mediator analysis was carried out with the three variables meaningfulness as independent variable, investment as mediator variable and TS as dependent variable. The analysis was carried out in the four steps usual for a mediator analysis (Preacher & Hayes, 2004), in which various linear regression analyses were calculated. In step 1, the regression of the TS on meaningfulness was examined. In the second step, the regression of investment on meaningfulness was examined, and in the third step, the regression of the total score on investment. In the last step, a multiple regression of the total score on meaningfulness and investment was examined. The Sobel test was also carried out to check the statistical significance of the mediation effect found (Preacher & Hayes, 2004).
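The four regression steps and the Sobel test can be sketched with standard regression tooling. The following assumes statsmodels and simulated data for meaning (X), invest (M) and ts (Y), so the numbers will not match those reported below:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 477
meaning = rng.normal(3, 1, n)                   # primary motivation (X)
invest = 0.60 * meaning + rng.normal(0, 1, n)   # secondary motivation (M)
ts = 4.0 * invest + rng.normal(0, 10, n)        # COMET total score (Y)

def ols(y, *xs):
    """OLS with intercept; returns a fitted statsmodels results object."""
    return sm.OLS(y, sm.add_constant(np.column_stack(xs))).fit()

step1 = ols(ts, meaning)           # step 1: Y on X (total effect)
step2 = ols(invest, meaning)       # step 2: M on X (path a)
step3 = ols(ts, invest)            # step 3: Y on M
step4 = ols(ts, meaning, invest)   # step 4: Y on X and M (direct effect, path b)

a, se_a = step2.params[1], step2.bse[1]
b, se_b = step4.params[2], step4.bse[2]
sobel_z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)  # Sobel statistic

print(f"total effect: {step1.params[1]:.2f}, direct effect: {step4.params[1]:.2f}")
print(f"indirect effect a*b: {a * b:.2f}, Sobel z = {sobel_z:.2f}")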

Outcomes
The results of the mediator analysis are shown in Fig. 7.25. The analysis showed that with a corrected R² = 0.043, the variable meaningfulness explains 4.3% of the variance in the total score. Even if this share of variance is small, the regression model from step 1 is significant with F(1;475) = 22.13, p < 0.001. In this model, the relationship between meaningfulness and total score is significant and positive (b1 = 4.31; p < 0.001). The results of the regression from step 2 show that with F(1;477) = 210.48 and p < 0.001, the regression is significant. This model explains 30.5% of the variance of investment. The relationship between meaningfulness and investment is positive and significant (b2 = 0.60; p < 0.001). The regression of the third analysis step yielded a significant result with F(1;475) = 32.53 and p < 0.001. The model explains 6.4% of the variance of the total score. The relationship between investment and total score is also significant and positive (b3 = 4.79; p < 0.001). The regression from step 4 indicates that at 6.8%, a significant share of the total variance of the total score can be explained by the two variables meaningfulness and investment (F(2;474) = 18.28, p < 0.001).


Fig. 7.25 Mediator analysis to determine the relationship between primary and secondary motivation and performance in the COMET test procedure (data: nursing staff Switzerland 2014; N = 477). bi = regression coefficient; * p < 0.05; ** p < 0.01

The relationship between investment and total score remains positive and significant (b4 = 3.72; p < 0.001), while the relationship between meaningfulness and total score is no longer significant (b = 2.12; p = 0.051). The Durbin-Watson value of this model is 0.74, so a strong positive first-order autocorrelation of the residuals must be assumed (Brosius, 2013). The Sobel test was significant in this study, with a test statistic of 5.31 and p < 0.001.

Discussion
The results of the mediator analysis indicate that investment completely mediates the connection between meaningfulness and performance; the result of the Sobel test confirms this. This means that people who see more benefit in the COMET test procedure invest more and in this way achieve better performance. Meaningfulness can therefore be confirmed as primary motivation and investment as secondary motivation. This supports the assumption made in earlier COMET studies that more motivated people also achieve better test results. However, the analysis clarifies that the willingness to invest in the test item depends on how strongly the benefit of the test is perceived. In future, test participants should therefore be advised of the benefit of the COMET test procedure before the test is carried out, in order to avoid poor test performance resulting from a poorly perceived benefit. The relatively low explained variance of the overall model (6.8%) indicates that a large part of the variance of the performance remains unexplained by the present model and that, in addition to the motivational components, numerous other factors influence the test performance. This is immediately obvious,


since achievements depend on knowledge, skills and abilities. A further indication for this assumption is provided by the result of the Durbin-Watson test, which points to a strong positive first-order autocorrelation of the residuals. This could be an indication that important explanatory variables are missing from the calculated model. A consequence of strong autocorrelation of residuals may be that the true standard errors are underestimated, which in turn distorts the results of the significance tests (Brosius, 2013). The available results must therefore be interpreted with caution.

7.4.3 Influence of Processing Time on Overall Performance

Based on the relatively low share of performance variance explained by the two motivational factors in the mediator analysis, the variable processing time could increase the explained variance. Based on the hypothesis that a comprehensive, reflected task solution with detailed justifications (corresponding to the COMET task) inevitably results in a longer processing time, the processing time is still recorded even after the COMET test was shortened to one task, so that this aspect can be investigated further as an indicator of test motivation. The processing time, asked on a four-point scale ('How long did you work on the test item?'), is distributed as follows in this sample:

(1) less than 1/2 h: 16 (3.4%)
(2) 1/2–1 h: 83 (17.4%)
(3) 1–1 1/2 h: 127 (26.6%)
(4) 1 1/2–2 h: 236 (49.5%)
Absent: 15 (3.1%)
Total: 477 (100%)

It is apparent that almost half of the trainees used the full processing time. The correlation of the processing time with the total score of r = 0.53 is significant; there is thus a medium-strong positive correlation between the processing time of the COMET test and the total score achieved. The interaction between the motivational aspects of meaningfulness and investment, the processing time and the total score will now be examined in greater depth. The research interest here is aimed particularly at the influence of processing time on the performance of the trainees in addition to the two motivational dimensions. For this purpose, the factors investment and meaningfulness as well as the processing time are examined in a hierarchical regression analysis with the total score as the dependent variable (Table 7.9).
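The hierarchical procedure amounts to comparing the R² of two nested regression models (step 1: the two motivation factors; step 2: processing time added). A minimal sketch with simulated data, where the gain ΔR² corresponds to the contribution of processing time:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 462
meaning = rng.normal(3, 1, n)
invest = 0.6 * meaning + rng.normal(0, 1, n)
proc_time = rng.integers(1, 5, n).astype(float)   # four-point time scale
ts = 3.0 * invest + 19.0 * proc_time + rng.normal(0, 15, n)

X1 = sm.add_constant(np.column_stack([meaning, invest]))              # step 1
X2 = sm.add_constant(np.column_stack([meaning, invest, proc_time]))   # step 2
m1 = sm.OLS(ts, X1).fit()
m2 = sm.OLS(ts, X2).fit()

print(f"step 1: R^2 = {m1.rsquared:.3f}")
print(f"step 2: R^2 = {m2.rsquared:.3f}, Delta R^2 = {m2.rsquared - m1.rsquared:.3f}")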


Table 7.9 Summary of the hierarchical regression analysis for predicting the variable 'Total score' (n = 462)

Step 1 (R² = 0.077; corrected R² = 0.073):
Meaningfulness: B = 2.47, SE = 1.12, β = 0.12, p = 0.028
Investment: B = 3.62, SE = 1.03, β = 0.19, p = 0.001

Step 2 (R² = 0.296; corrected R² = 0.291):
Meaningfulness: B = 1.30, SE = 0.99, β = 0.06, p = 0.187
Investment: B = 2.69, SE = 0.91, β = 0.14, p = 0.003
Processing time: B = 19.02, SE = 1.60, β = 0.48, p < 0.001

Table 8.4 Rules for the compensation of missing functional competence scores by procedural and shaping competence

Achieved score for 'Functional competence': Minimum score required for CP + CD to reach competence level 1 (∑(PP + PD))
> 11.2: –
10.3–11.2: 3
9.3–10.2: 6
8.3–9.2: 9
≤ 8.2: No further compensation possible

Competence Level 0: Nominal Competence
Test persons who only reach this competence level do not yet have professional competence. This applies whenever the conditions for reaching the first competence level are not met. In Fig. 3.1, this applies to test persons 13 to 17.

Competence Level 1: Functional Competence
In order to achieve this competence level, the following conditions apply:
1. A test person reaches competence level 1 if his/her score for functional competence is higher than 11.2 and the conditions for reaching competence level 2 are not met.
2. If the score for functional competence is less than or equal to 11.2, scores between 8.3 and 11.2 can be compensated by scores achieved in the other two competence dimensions 'Procedural competence' and 'Holistic shaping competence'. Table 8.4 shows the applicable rules.
Table 8.4 also shows that, with a score of 8.2 or less for functional competence, competence level 1 can no longer be reached in any case. In Fig. 3.1, test persons 2, 5 and 6 as well as 8 to 12 reach the first competence level.

Competence Level 2: Procedural Competence
In order to achieve this competence level, the following conditions apply:
1. A test person reaches competence level 2 if both the score for functional competence and the score for procedural competence are greater than 11.2 and the conditions for reaching competence level 3 are not met.
2. If the score for procedural competence is less than or equal to 11.2, the missing points can, analogously to condition 2 for the first competence level, be compensated by scores achieved in the competence dimension 'Holistic shaping competence'. Table 8.5 shows the applicable rules.
3. Table 8.5 also shows that, with a score of 8.2 or less for procedural competence, competence level 2 can no longer be reached in any case.
In Fig. 3.1, test persons 1, 3, 4 and 7 reach the second competence level.


Table 8.5 Rules for the compensation of missing procedural competence scores by shaping competence

Achieved score for 'Procedural competence' (PP): Minimum score at CD to achieve competence level 2 (PD)
> 11.2: –
10.3–11.2: 3
9.3–10.2: 6
8.3–9.2: 9
≤ 8.2: No further compensation possible

Competence Level 3: Holistic Shaping Competence
The third competence level is reached by those who have achieved more than 11.2 points in each of the three competence dimensions. In Fig. 3.2, no test person meets this requirement.
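Since the level assignment with the compensation rules of Tables 8.4 and 8.5 is purely mechanical, it can be expressed as a small function. A minimal sketch (function and variable names are our own; the band edges 8.2/9.2/10.2/11.2 are treated as continuous thresholds):

def required_compensation(score):
    """Points needed from higher dimensions when a score does not exceed 11.2."""
    if score > 11.2:
        return 0.0
    if score > 10.2:
        return 3.0
    if score > 9.2:
        return 6.0
    if score > 8.2:
        return 9.0
    return None  # 8.2 or less: no further compensation possible

def competence_level(cf, cp, cd):
    """cf/cp/cd: scores for functional, procedural and shaping competence."""
    if cf > 11.2 and cp > 11.2 and cd > 11.2:
        return 3                        # holistic shaping competence
    need_cp = required_compensation(cp)
    if cf > 11.2 and need_cp is not None and cd >= need_cp:
        return 2                        # procedural (Table 8.5: CD compensates)
    need_cf = required_compensation(cf)
    if need_cf is not None and cp + cd >= need_cf:
        return 1                        # functional (Table 8.4: CP + CD compensate)
    return 0                            # nominal competence

print(competence_level(cf=10.5, cp=4.0, cd=2.0))   # -> 1 (compensated via CP + CD)
print(competence_level(cf=12.0, cp=10.5, cd=3.5))  # -> 2 (compensated via CD)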

8.2 Graphical Representation of the Test Results

The test results can be displayed in different ways. The representation in Fig. 3.1 summarises the results of a school class. In addition, a network diagram is created for each individual test participant (Fig. 3.8). This representation, which includes not only the three competence dimensions but also the eight competence criteria, emphasises the multidimensional character of the competence model.

8.2.1 Competence Levels

The COMET competence model is based on a concept that treats competence levels as relatively independent competence dimensions. Special attention is paid to this concept in the representation of the test results in the form of network diagrams. The eight competence components allow a complete description of professional competence profiles. According to Schecker and Parchmann, the definition of a hierarchy of competence levels has its origins in the grading interest of educational planning. In contrast, it should be borne in mind that the concept of competence development facilitates the qualitative description of abilities which are not necessarily to be classified on an ordinal scale (Schecker & Parchmann, 2006, 51). In the COMET competence model, functional competence (the first vocational competence level) is therefore not a subordinate or inferior competence, but rather a competence which, on the one hand, has the significance of a basic competence and, on the other hand, also functions as a relatively independent quantity in a multidimensional space of multiple competence.


Fig. 8.2 The graphical representation of competence level distribution in a test group of vocational school students, n = 27

The lack of functional abilities or their inadequate development can be compensated neither by procedural competences (second competence level) nor by abilities assigned to the third competence level. The degree to which competence levels represent successive and interrelated competences, and the degree to which they are dimensions of multiple competence, requires empirical testing. On the basis of the survey data of the first test date (April 2008), the measurement and evaluation procedure was psychometrically analysed (COMET Vol. II, Erdwien & Martens, 2009). This confirmed, among other things, that competence levels represent successive and interrelated skills: 'In compliance with the theoretical model, according to which higher performance in "procedural competence" can only be achieved if "functional competence" is sufficiently developed, and "shaping competence" continues to be developed to a greater extent only if the competence levels "functional competence" and "procedural competence" are sufficiently developed, all types show a tendency towards a pronounced drop in the assessment ratings.' (COMET Vol. II, 80)

The results of the analysis also show that the competence levels and components are relatively independent (COMET Vol. II, 67 ff.). This dual structure of competence components, both as competence levels and as dimensions of competence profiles, considerably expands the possibilities for evaluating test results. It should be borne in mind here that the two forms of representation, competence levels and competence profiles, are complementary evaluation perspectives, each of which taken by itself involves simplifications. For the practical handling of the illustrated test results, it is therefore advisable to interpret both forms of representation in relation to individual results in context. This is exemplified by the results of a test group of students from technical colleges (Fig. 8.2).


Fig. 8.3 Comparison of competence level distribution in several test groups (results from 2009)

Figure 8.2 shows that, at 41%, the first competence level is the most strongly represented; 30% achieve the second and only 26% the third competence level. This could be misread as meaning that only 41% of the test persons reached the first competence level. Read correctly, the diagram states: 96% of the test persons reach the first competence level, 56% of them the second and 26% the highest competence level. The risk group comprises 4%. This means that, as a rule, all test persons who reach the second and third competence levels also have functional competence, as a basic competence, which can be assumed for the second and third competence levels. This also applies to the relationship between the second and third levels: the third competence level includes the skills of the first and second. For the comparative representation of a larger number of test groups, a form of representation is suitable in which the distribution across competence levels is represented as shares of 100% in a bar (Fig. 8.3). This form of representation clearly illustrates the differences between test groups with regard to the competence levels achieved.
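The cumulative reading follows from summing the shares from the highest level downwards; a minimal check with the shares cited above:

# Cumulative reading: everyone placed at a higher level also possesses the
# lower levels. Shares as cited in the text; small deviations (96% vs. 97%)
# stem from rounding of the displayed shares.
shares = {"level 3": 26, "level 2": 30, "level 1": 41}   # risk group: 4%

cumulative = 0
for level in ("level 3", "level 2", "level 1"):
    cumulative += shares[level]
    print(f"at least {level}: {cumulative}%")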

8.2.2 Differentiation according to Knowledge Levels

In the COMET measurement model and in the rating procedure, the differentiation according to knowledge levels is reflected in the evaluation of the solution aspects on the basis of items that are rated on a four-point interval scale (0–3) (Table 8.6).


Table 8.6 Assignment of the interval scale to the levels of work process knowledge

Rating: interval scale value (0–3) – level of work process knowledge
Fully met: 3 – Know why
Partly met: 2 – Know how
Not met: 1 – Know that
In no way met: 0 – (none)

Fig. 8.4 Distribution of overall scores for nominal, functional, procedural and holistic shaping competence

The assignment of the scale values 1–3 to the levels of work process knowledge has a pragmatic justification. An item is evaluated as 'fully met' if the respective solution aspect is not only considered but also 'justified in detail'. Each test task therefore states: 'Justify your solution in full and in detail'. If this is completely successful, this corresponds to the level of action-reflecting or 'Know why' knowledge. An item is 'not (yet) met' if the underlying rules for the complete solution of a task were considered but could not be justified; this corresponds to the level of 'Know that', or the value '1'. An item is 'partly met' if the corresponding solution aspect could be justified in principle, but without adequately taking the situational context into account. The definition of the three successive and interrelated competence levels on which the COMET measurement model is based leads, in test practice, to relatively large intervals in the overall scores (TS). This means that it is possible for subjects with a higher overall score to be placed at a lower competence level. This happens whenever they reach this level at a higher knowledge level. Figure 8.4 shows, for example, that a TS of 45 can correspond both to the competence level of procedural competence (high) and to the competence level of shaping competence (low). This differentiating form of evaluation and representation of the competence characteristics depicts the reality of vocational education and training much more validly and accurately than a score on a continuous competence scale which defines competence levels in accordance with quantitative level differences.


Table 8.7 Differentiation of competence levels according to knowledge levels (model-based values in parentheses)

Competence level: 5% quantile, lower third, mean value, upper third, 95% quantile
Nominal competence: 6.2, 10.7, 13.1, 14.9, 20.8
Functional competence: 17.2, 22.3, 24.7 (25.0), 26.8, 32.9 (34.5)
Procedural competence: 29.2, 33.4 (35.0), 38.3, 40.0 (41.0), 51.1
Holistic shaping competence: 40.0, 47.7, 53.1, 55.9, 71.5

The two forms of representation can be combined for representations aimed at a clear hierarchisation of the test persons. This requires the introduction of an index indicating whether a certain competence level is associated with a relatively high, low or average total score. The introduction of such an additional index makes it possible to classify the test persons more precisely according to the work process knowledge incorporated in their competence (Table 8.7). A percentile band is calculated for each competence level to determine the thresholds for differentiation in accordance with the three levels of work process knowledge: high, medium and low. The 33rd and 66th percentiles are determined for each percentile band. The resulting three equally large subdivisions of the respective competence levels represent the three successive and interrelated levels of work process knowledge (Fig. 8.5). Figure 8.5 shows that the percentiles of the three competence levels (5%, 33%, 50%, 66%, 95%) increase linearly from the first (CF) to the third (CD) competence level. If one applies this model of a linear increase in percentile values from the first to the third competence level to the differentiation according to the three levels of work process knowledge incorporated in vocational competences, the result comprises the limit values given in Table 8.7. The values show great similarities between the empirical and the model-based values for the standardised differentiation according to action-leading (Know that), action-explaining (Know how) and action-reflecting (Know why) knowledge. Depending on the overall score achieved, the results within the competence levels can thus be differentiated into 'low', 'medium' and 'high', further refining the evaluation. This differentiation corresponds to the three levels of occupational work process knowledge (Fig. 8.6):

Work process knowledge: level
Action-leading knowledge: Know that
Action-explaining knowledge: Know how
Action-reflecting knowledge: Know why
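The tertile split described above can be computed directly from the total scores of the test persons placed at a given competence level. A minimal sketch, where the array ts_level holds hypothetical scores:

import numpy as np

# Hypothetical total scores of test persons at one competence level
ts_level = np.array([30.1, 33.4, 35.2, 36.8, 38.3, 39.9, 41.0, 43.7, 45.2])

p33, p66 = np.percentile(ts_level, [33, 66])  # tertile cut-offs
for ts in ts_level:
    band = "low" if ts <= p33 else ("medium" if ts <= p66 else "high")
    print(f"TS {ts}: {band}")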


Fig. 8.5 Standardised subdivision of competence levels into ‘low’, ‘medium’ and ‘high’

Fig. 8.6 Example: Competence levels differentiated according to low/medium/high (NRW carpenters)

8.2.3 Transfer of Competence Levels Differentiated according to Knowledge Levels into a Grading Scale

In many countries, test or examination performance is graded on a scale that usually ranges from '1' (very good) to '5' (inadequate) or '6' (unsatisfactory):

1 ... very good
2 ... good
3 ... satisfactory
4 ... adequate
5 ... inadequate
6 ... unsatisfactory

This grading scale is also used in the examination practice of vocational education and training, where an examination performance that is not at least 'adequate' counts as a failed examination. Figure 8.7 shows the allocation of competence/knowledge levels to grades. For the evaluation of vocational competence, however, marks or scores are of limited significance: a competence classified as a grade says nothing about the competence profile of an examinee or about which professional tasks can be assigned to him or her for independent processing. The same applies to the selection of appropriate further training courses to close gaps in the competence profile. Figure 7.12 (→ 7.2.4) shows one way in which the examination performance can be shown in a test certificate.

Fig. 8.7 Allocation of competence levels to grades

8.3 Competence Development as a Competence Profile

The representation of the same test results in the form of a network diagram (Fig. 8.8) realistically illustrates the characteristics of all eight competence components, but not the effect of the successive and interrelated competence levels. This is why the axes for the competence levels appear here as independent of each other; the length of each axis value represents the extent of competence development in the form of scores. The average score for functional competence (CF) is 14.6 points; the average value of procedural competence (CP) is 11.7 points and that of shaping competence (CD) 9.2 points. These values do not contradict the other forms of representation of occupational competence. Modelling the requirements dimension of the COMET competence model (→ 4.2) is based on

1. the concept of the complete solution of professional tasks,
2. the differentiation according to the three levels of successive and interrelated work process knowledge: action-leading, action-explaining and action-reflecting.

In the scaling model for vocational education and training, the difficulties of the individual test items are not assigned to a competence scale. Instead, the probable ability values of the competence dimensions functional, procedural and shaping competence form three overlapping distributions. For example, average functional competence can be defined as a value between 0.33 and 0.66 on the CF scale (functional competence). Accordingly, a medium to high 'functional competence' value can correspond to a low to medium 'procedural competence' value. Based on more than 7000 test results, the distribution for the test groups was determined in accordance with test levels and the total score. This distribution forms the basis for the definition of low, medium and high competence development at the respective competence level.

Fig. 8.8 Average competence profile of a test group of vocational school students (type 'Vocational education and training'), n = 27


When representing nominal competence (risk group), the differentiation of knowledge levels is omitted, as this level lies below the first competence level of functional competence. This differentiating form of evaluation and representation of the competence characteristics depicts the reality of vocational education and training much more validly and accurately than a score on a continuous competence scale which defines competence levels in accordance with quantitative level differences. This means that in the development of each of the eight competence criteria, as well as of functional, procedural and holistic shaping competence, a distinction can be made between the three levels of work process knowledge. A test task can be solved at the level of shaping competence, for example, by

• taking into account all relevant solution criteria. If the complete task solution is based on the level of action-leading knowledge, this means that the test person knows all relevant rules and can apply them to the task at hand without being able to explain them technically;
• also being able to explain the complete task solution technically, with reference to the relevant professional work process knowledge. This knowledge is the basis for 'understanding what you do' and therefore also for a sense of responsibility and quality as well as for a higher level of shaping competence;
• justifying the complete task solution in relation to the situation, weighing the solution criteria against each other and professionally selecting and justifying the best possible alternative solution. This level of knowledge constitutes the highest level of shaping competence.

These three successive and interrelated knowledge levels can therefore be distinguished for each of the three competence levels. In psychology, stage models serve to describe qualitative differences in development processes, as exemplified by the levels of moral and intelligence development of children and adolescents (Kohlberg, 1969; Piaget, 1973). In curriculum theories, a qualitative distinction is made between successive and interrelated knowledge levels. In natural science didactics, Bybee (1997) introduced a didactically founded level concept that has found its way into the PISA project and into the COMET competence model.

The competence levels identified in the COMET competence model are based on a qualitative justification of competence development. As competence development in vocational work and vocational learning can also be represented as multidimensional competence profiles in accordance with the concept of the complete vocational task solution, with which the quality of the competence development achieved can be quantified, a continuous competence development according to levels and scales is ruled out for the definition and scaling of vocational competence. Professional competence is measured as the ability to exploit the specific scope for solutions and design. The resulting format of open, complex test tasks requires a capability-based measurement model and a corresponding rating procedure.


Professional competence cannot be measured with a large number of more or less difficult test items that can be solved correctly (or incorrectly). In the professional world, what matters is an understanding of the context. For example, electronics technicians in energy and building technology do not deal with correct or incorrect lighting systems, just as cabinetmakers do not produce 'correct' furniture. It is always (and inevitably) a matter of exhausting the respective solution spaces: of searching for adequate and good compromises when weighing up all relevant solution criteria.

8.3.1 Homogeneous versus Selective Competence Profiles

The representation of the competence profiles using network diagrams contains two items of information: the three competence levels and the eight competence components. In addition, a measure of the homogeneity or variance of the competence profiles can be determined independently of the level of the total score: the coefficient of variation. It is quite possible that test persons solve a test task at a relatively low level, but nevertheless with equal consideration of all eight solution aspects. The coefficient of variation V indicates whether the competence profile is balanced or imbalanced: a low coefficient of variation stands for a relatively high homogeneity of the competence profile, while high values stand for low homogeneity (Fig. 8.9). A sketch of this calculation is given at the end of this section.

An analysis of the network diagrams shows that the criteria 'environmental compatibility' and 'social compatibility' in particular have the lowest significance in vocational training. In the world of work, however, this would have a considerable impact as, depending on the task at hand, violations of environmental and social compatibility regulations can have far-reaching consequences. In the demand for the professional execution of a work order, 'professional' is often associated with the categories 'specialist' or 'technical' in the context of scholastic learning. In in-company vocational training, on the other hand, the category 'professional' refers to 'skilled' work. If the vocational school succeeds in designing vocational education and training from a work and business process-related perspective, this also implies a change of perspective in the technical understanding (Bauer, 2006), corresponding to the COMET concept of the complete solution of vocational tasks. If, in contrast, 'professional' is associated with the specialist science, work and business processes are lost sight of; the consequence is that vocational education and training moves away from its function of imparting vocational competence (cf. KMK, 1996).

Conclusion
The COMET test procedure enables the representation of competence not only in the form of competence levels but also of competence profiles. This is mainly due to the didactic quality of the test procedure. Teachers/trainers as well as trainees and students can read directly from the competence profiles which of the eight sub-competences of an educational programme or learning group have been developed and which have been neglected.


Fig. 8.9 Differentiation of the competence profiles according to the total score (ATS) and the coefficient of variation: (a) E-B, class no. 7, n = 26; (b) E-B, class no. 5, n = 18; (c) E-B, class no. 24, n = 19; (d) E-B, class no. 23, n = 17 (results COMET Electronics Engineers (Hesse) 2009)

At the aggregation level of local, regional and national education systems, or in an international comparison of vocational education and training systems, competence profiles can also be used to draw conclusions about the quality of training regulations, training plans and training courses as well as about the strengths and weaknesses of vocational education and training systems. Although the competence profiles recorded in the COMET projects in a large number of very different occupations show that the change of perspective from a subject-systematic and technically structured VET to a VET concept based on work and business processes has largely been implemented in curricula and in the theoretical discussion, the broad implementation of this central idea in VET practice has still to be achieved. An analysis of the test results, especially the competence profiles of the test participants and the test groups, together with the teachers/trainers and lecturers


involved in the project is an essential element of quality development. This is mainly due to the extended (holistic) technical understanding that the teachers acquire in the COMET projects.
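The coefficient of variation used in 8.3.1 as a homogeneity measure is simply the standard deviation of the eight criterion scores divided by their mean. A minimal sketch on a hypothetical competence profile:

import numpy as np

# Hypothetical scores on the eight competence criteria (K1-K8) of one profile;
# a low V indicates a homogeneous profile, a high V a selective one.
profile = np.array([14.2, 13.8, 12.9, 14.5, 11.0, 6.2, 5.8, 13.1])

v = profile.std(ddof=1) / profile.mean()   # coefficient of variation V = sd / mean
print(f"V = {v:.2f}")  # the low environmental/social scores drive V up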

8.4 Heterogeneity of Professional Competence Development

The differences in the vocational school performance of trainees and students are a well-known phenomenon. It is not uncommon for high-school graduates and young people without a lower secondary school leaving certificate to learn the same profession. This occurs more frequently in the craft trades, where, for example, a high-school graduate wants to continue his/her training as a master craftsman after passing the journeyman's examination in order to assume responsibility in the parental company and to be able to perform the function of a trainer. The heterogeneity of performance in these vocational school classes is therefore very high. In occupations under the auspices of the IHK (German Chamber of Industry and Commerce), the proportion of trainees with university entrance qualifications has risen significantly in recent years, in line with the motto: 'Learn a "real" profession first before tackling the jungle of the new study courses'. In 2013, for example, 30% of trainees in IHK occupations in their first year of training had a higher education entrance qualification (cf. Report on Vocational Education and Training 2014, 28 f.).

The heterogeneous performance structure has less of an impact on trainees in the training companies, as companies have the opportunity to select applicants who meet their requirements. This informal selection leads, for example, to the fact that the majority of apprentices in occupations such as media designer, industrial clerk and the IT occupations are high-school graduates ('high-school graduate occupations'). In these occupations, this form of informal selection also tends to reduce the heterogeneity of the performance structure in the classes of vocational schools. In countries with school-based VET systems, a distinction is generally made between two to three successive and interrelated school-based programmes: vocational colleges, technical colleges, higher technical colleges and, more recently, so-called 'vocationally qualifying bachelor's degree programmes' based on them. If the admission requirements for the vocational track or the academic track for higher education programmes are controlled by selection and admission regulations, this reduces the spread of competence development among pupils and students.

The phenomenon of heterogeneity in vocational education and training, the extent of which has so far been underestimated, is presented and analysed below on the basis of empirical results. First, the degree of heterogeneity measured in previous COMET projects is shown; this is followed by an interpretation of the causes of the heterogeneous performance structure in vocational education and training programmes and by considerations on 'dealing with heterogeneity'.

8.4.1 Heterogeneous Levels of Competence

The heterogeneous performance structure becomes particularly clear when the classes participating in a COMET project (of an occupation or occupational group) are differentiated into the proportion of pupils at risk (nominal competence) and the proportion of those who achieve the highest level of competence. This was demonstrated, for example, in the COMET Electronic Technicians project (NRW), in which nine classes of electronics technicians for industrial engineering (E-B) and twelve classes of electronics technicians specialising in energy and building technology (E-EG) took part in 2013/14. The proportion of trainees in the E-B classes who reached the highest level of competence ranges from 48% to 0%; in the E-EG classes, the dispersion of 44% to 0% is similarly large. This is complemented by the pronounced variance in the proportion of trainees assigned to the risk group (nominal competence): this value ranges from 13% to 64% in the E-B classes and even from 17% to 79% in the E-EG classes (Figs. 8.10 and 8.11). Another aspect of heterogeneity is the difference in the distribution of test subjects across competence levels measured in related occupations (Fig. 8.12). The two test groups of the two commercial occupations do not differ with regard to their prior education: the proportion of pupils with a university entrance qualification is roughly the same in both occupations at around 70%. Nevertheless, the distribution across the competence levels differs greatly between the two occupations: 70% of INK trainees, but only 8% of SPKA trainees, reach the third competence level, while 32% of the SPKA trainees, but only 7% of the INK trainees, are risk students.

8.4.2 Percentile Bands

The standard method for representing heterogeneity in the COMET project is the use of percentile bands (Fig. 8.13). The differences and dispersion in competence levels (determined by scores) between test subjects or test groups formed according to different characteristics, such as occupation, state, age and prior schooling, provide information on the degree of heterogeneity in vocational education and training. The percentile bands also used in the PISA studies serve as the form of representation. Percentile bands make it possible to graphically bundle three different types of information on different groups (school locations, sectors, years of training, educational programmes and education systems) (Fig. 8.14). The centre mark (CM) shows the average value of a group; differences in average performance become visible by comparing the averages. Whether these differences are significant can be seen from the grey area around the average value on the bands, the confidence interval. With 95% certainty, this is where the 'true' average value lies, i.e. the projection from the respective group to the population.

Fig. 8.10 Percentage of trainee electronics technicians in the risk group (NRW project 2013/2014); panels: electronics technicians (E-B) and electronics technicians (E-EG), showing the per-class risk-group share and mean total score

Fig. 8.11 Proportion of trainee electronics technicians at the level of holistic shaping competence; panels: electronics technicians (E-B) and electronics technicians (E-EG), showing the per-class share and mean total score


Fig. 8.12 Comparison of the distribution of competences of industrial clerks (INK) and shipping clerks (SPKA) (COMET NRW, 2013)

Fig. 8.13 Example of a percentile band

Fig. 8.14 Sample percentile band (surveys from 2009)


Accordingly, differences between two groups are significant and most likely not accidental if the average of one band lies outside the grey area of another. The third important piece of information conveyed by the percentile bands concerns the spread of the results, i.e. the distance between weaker and better test results. The white areas represent the values for 25–50% and 50–75% of a group; this range covers the half of the test participants grouped around the average value. Finally, the outer grey areas contain those cases which form the lower (10–25%) or upper (75–90%) range. The best and weakest 10% of the results are not captured by the bands, so that individual outliers do not distort their width. The white part of the bands (including the grey confidence interval) therefore indicates the range of the middle 50% of the test results; the entire band shows the range of results of 80% of the participants. The 10% best and worst results lie to the right and left of the band. To avoid major distortions in the representation of class results, test groups with fewer than 15 participants are not included in the representation of percentile bands in this report. The total scores (TS) and the variation range of the percentile bands can also be expressed as learning times and learning time differences (LTD): as vocational training with a duration of 3–3.5 years corresponds to a maximum of approximately 70 points, a training duration of 1 year roughly corresponds to a score of 20 points. The following presents characteristic percentile bands for different vocational education and training programmes. For larger samples, the percentile bands are extended to the 5th and 95th percentiles.

Example The spread of the competences of the second- and third-year test groups of electronics technicians in industrial engineering (E-B) and in energy and building technology (E-EG), and of the full-time and part-time technical college students (electronics technicians; F-VZ and F-TZ), is striking in several respects and was not expected in this form by the responsible project consortium. This mainly concerns:
1. The extraordinarily wide range of variation (spread) of the competences of the test participants within the classes. It often amounts to 40 points or more and therefore corresponds to a learning time difference of two or more years.
2. The large differences in competence levels between the classes. Despite the comparable prior training of E-B and E-EG trainees, the levels of competence differ, in some cases considerably, from one another. The weakest-performing E-B class differs from the highest-performing one by a learning time difference of almost 1 year.
3. The formal difference between the qualification levels for initial vocational education and training and for vocational schooling apparently has hardly any influence on the competence levels measured (Fig. 8.15).
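The construction rules described above translate directly into a few summary statistics per test group. The following minimal sketch (Python with numpy) illustrates them under the stated conventions: a 95% confidence interval approximated as mean ± 1.96 standard errors, a band spanning the 10th–90th percentiles, exclusion of groups with fewer than 15 participants, and 20 score points equated with 1 year of learning time. How the COMET software computes these values internally is not specified here, so the function name and the normal-approximation interval are illustrative assumptions.

```python
import numpy as np

POINTS_PER_TRAINING_YEAR = 20  # ~70 points over 3-3.5 years of training

def percentile_band(scores, small_group_cutoff=15):
    """Summary statistics behind one percentile band.

    Returns None for groups below the cutoff, which the report
    excludes to avoid distortion by individual outliers.
    """
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    if n < small_group_cutoff:
        return None
    mean = scores.mean()
    # 95% confidence interval around the centre mark (normal approximation)
    half_width = 1.96 * scores.std(ddof=1) / np.sqrt(n)
    p10, p25, p75, p90 = np.percentile(scores, [10, 25, 75, 90])
    spread = p90 - p10  # width of the whole band (80% of participants)
    return {
        "centre_mark": mean,
        "ci_95": (mean - half_width, mean + half_width),
        "band": (p10, p25, p75, p90),
        "ltd_years": spread / POINTS_PER_TRAINING_YEAR,
    }
```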


Fig. 8.15 Percentile bands for professional competence across test groups at class level for trainees (results from 2009)

8.4.3 The Heterogeneity Diagram

The evaluation of all empirical values available so far on the spread of the competence of test persons and test groups yields values from 0 to a maximum of 80; theoretically, values of up to 90 are conceivable. In fact, however, values above 80 were only measured very rarely, and the same applies to the value '0'. If the empirical values of the spread are plotted as a learning time difference of 0–3 years on the vertical axis and the corresponding average values of the test groups on the horizontal axis, this results in test-group-specific patterns for the heterogeneity of competence development as well as a characteristic function with which the dependence of the learning time difference on the competence level can be described:

y = aᵢ − (aᵢ/b²)(x − b)²  (a₁ = 2.5, a₂ = 1.5, a₃ = 0.5: the maximum LTD of each heterogeneity level; b: the total score at which the maximum achievable LTD is reached).
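Read this way, the characteristic function is an inverted parabola that passes through the origin, peaks at x = b and falls back to zero at x = 2b, which is consistent with the turning point at TS = 40 and the maximum spread of 80 mentioned in this section. The print rendering of the equation is damaged, so the following sketch implements this reconstructed reading (with the assumed value b = 40) rather than a formula confirmed by the source.

```python
# Reconstructed heterogeneity function: expected learning time difference
# (LTD, in years) as a function of a test group's average total score x.
# a_i is the maximum LTD of the heterogeneity level (high/medium/low);
# b is the score at which the maximum LTD is reached (assumed: 40).
MAX_LTD = {"high": 2.5, "medium": 1.5, "low": 0.5}  # a_1, a_2, a_3

def expected_ltd(x: float, level: str = "medium", b: float = 40.0) -> float:
    """y = a_i - (a_i / b**2) * (x - b)**2; zero at x = 0 and x = 2b."""
    a = MAX_LTD[level]
    return a - (a / b**2) * (x - b) ** 2

# expected_ltd(40, "high") -> 2.5; expected_ltd(0) == expected_ltd(80) == 0.0
```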


Fig. 8.16 Heterogeneity diagram of various occupations (shipping clerks (SPKA), industrial clerks (IK), electronics technicians in China)

According to the heterogeneity diagram, it can be expected that, up to a value of TS = 40, the spread of competence development—and therefore the learning time difference within the learning groups/educational programmes—increases with the competence level, in accordance with the function on which the heterogeneity diagram is based. Once a higher competence level is reached, the spread of the learning time difference, i.e. the variation range of the competence characteristics, decreases again. According to the previous test results of the regional, national and international projects of the COMET network on approximately ten occupations, three levels of heterogeneity (high, medium and low) can be distinguished. The educational courses in the Beijing region (n = 800) participating in the international comparison project 'Electronics Technicians', for example, are characterised by a low level of heterogeneity at an overall low level of competence. In comparison, the heterogeneity level of industrial clerks (IK trainees) is at a medium level and that of shipping clerks (SPKA trainees) at a high level. Comparing the results of the COMET motor vehicle project (China, Hesse, North Rhine-Westphalia) also yields educational course-specific characteristics of heterogeneity (Fig. 8.16).

8.4.4 The Causes of Heterogeneity

The different forms of evaluation and representation of more or less heterogeneous competence characteristics convey an impression of the complexity of the problem. The main determinants of heterogeneous competence—with reference to the state of research in the COMET project—are compiled below. Only when the phenomenon of heterogeneity can be analysed and understood in its situational circumstances and genesis can teachers and trainers react to it with targeted didactic action. The primary aim is not to suppress heterogeneous performance, but to develop didactic concepts with which heterogeneous performance structures in learning groups can be understood as an element of learning opportunities for all.

Prior Schooling The COMET projects confirm that prior schooling has a considerable influence on the development of heterogeneous performance structures. The composition of learning groups in VET programmes is a crucial determinant of heterogeneity; this finding is supported by large-scale studies (cf. Heinemann, Maurer, & Rauner, 2011, 150 ff.). It does not, however, permit the conclusion that, for example, a high proportion of trainees with a lower secondary school leaving certificate in a class determines the level of competence, or that a high degree of heterogeneity in prior schooling in a class causes a correspondingly high degree of heterogeneity in the level of competence. As shown in this report, classes with a comparable structure of trainees in the same occupation and at the same location can achieve very different levels of competence.

Selection and Admission Rules for Vocational Education and Training as Determinants of the Degree of Heterogeneous Performance Structures The more comprehensively and tightly regulated the admission requirements for vocational training programmes, the more homogeneous the development of competence structures. Characteristic examples are the vocational programmes of the upper secondary level and the post-secondary and tertiary vocational programmes of the Chinese vocational training system. In China, admission to the general branch of upper secondary education is decided by a nationwide test procedure; students who do not pass this test are referred to vocational programmes. Access to 'higher vocational education' (at universities) is also regulated by a selection procedure. These admission and access regulations contribute to the fact that the heterogeneity of competence development in Chinese vocational education and training is low to medium at all qualification levels.


The Teacher as an Influencing Factor According to the results of the COMET projects available to date, the teacher plays a decisive role in the competence development of trainees and students. Apart from the curricular structures specific to educational programmes, teachers and trainers are the most influential factor in professional competence development. This is evident above all in the fact that the competence levels of learning groups can differ greatly despite the same prior schooling and the same training programmes, without this affecting the spread of competence development—insofar as the spread does not result from the competence level itself (see above). This result points to the particular challenge teachers and trainers face in dealing with heterogeneity.

The Heterogeneity Diagram The degree of heterogeneity depends on the competence level of the learning groups. In learning groups with a low competence level (TS < 40), the degree of heterogeneity tends to increase—irrespective of whether the learning groups are homogeneous or heterogeneous in composition. In learning groups with a high competence level (TS > 40), the degree of heterogeneity decreases again as the competence level rises further, in accordance with the function underlying the heterogeneity diagram.

Learning Venue Cooperation A further determinant of the degree of heterogeneity of competences in the learning groups (classes) is the quality of learning venue cooperation and the quality of in-company and school-based vocational training. With increasing differences in the quality of training at the two learning venues, the quality of learning venue cooperation decreases and the heterogeneity of competences increases.

Conclusion Recording the heterogeneity of competence development in vocational education and training programmes is an essential prerequisite for the development and implementation of didactic concepts for dealing with heterogeneity. A strategy primarily aimed at restricting the range of variation of competence characteristics could lead to a drop in the qualification level of the test groups (see heterogeneity diagram): up to the intermediate competence level, heterogeneity in the learning groups tends to increase. Attempts to address heterogeneity in the learning groups by giving special support to trainees with learning difficulties should therefore be combined with measures to provide individual support for high-performing learners (cf. Piening & Rauner, 2015g). The COMET test results on the heterogeneity of vocational competences also show that one-sidedness in the professional


understanding of teachers and lecturers not only leads to unbalanced competence profiles among learners, but also tends to impair the development towards a higher level of competence.

8.5 Measuring Identity and Commitment

The research interest in the commitment of trainees and employees consists mainly in identifying, as clearly as possible, the different fields of reference to which commitment relates. As described in the model description (→ 4.7), three main factors can be taken into account here: emotional attachment to the organisation (organisational identity), identification with the occupation (professional identity) and an abstract willingness to perform that is detached from concrete work content (work ethics). Some fundamental assumptions and research questions are reflected in the design logic of the instruments. It cannot be assumed that such 'types' normally occur in pure form; rather, the various forms of commitment interact with each other. Positive experiences in the company influence professional commitment and work ethics, while, conversely, it seems difficult to maintain professional commitment in the face of disappointments in relation to one's own organisation. Relationships such as these can be analysed with the help of the instruments. In addition, these types do not exhaust the possible fields of reference of commitment—the relationship to individual colleagues, teams, certain activities etc. can also play a major role and must be surveyed separately. It is of particular interest for vocational education research whether the process of developing professional identity leads to shifts in the dominant field of motivational reference. The development of professional identity refers to the subjective willingness to develop it. Commitment can be generated from affiliation to the occupation, to the enterprise or even to the work as such. It can be shown that these different normative fields of reference of commitment in turn have repercussions on the development of competence and identity (→ 8.6).

8.5.1 On the Construction of Scales

Since professional identity relates to the respective occupation, an inter-occupational concept cannot base a scale for capturing it on assumptions about the specific content of such an identity. The extent to which a particular professional role has been assumed is therefore not examined. This distinguishes the concept of professional identity used here from others, which aim more at the implicit or explicit knowledge that must be acquired, in addition to professional competence, in order to be a member of a certain occupation and thus to share a certain universe of thoughts and actions. The scale of professional identity focuses


on those cognitive and emotional dispositions that correspond to a development from novice to expert in a subject and lead to a professional capacity to act. Three aspects were identified for this purpose: the interest in placing one's own activities in the overall vocational or operational context (orientation), the interest in helping to shape work and technology (design) and the interest in high-quality performance of one's own work (quality). On the one hand, the step to this meta-level involves the risk of not including essential aspects of growing into a specific professional role; the scale should therefore at most be applied in a complementary manner in studies of that kind. It does not refer directly to professional socialisation processes, but to the subjective disposition to take on the professional role successfully. On the other hand, this risk is offset by various advantages. In addition to making cross-occupational comparisons of this disposition possible, such a concept of professional identity also escapes justified criticism of more conventional formulations. The assumption of a professional role can take place in different contexts and in different ways. In this context, Martin Fischer and Andreas Witzel rightly point out that the term 'professional identity' should not be idealised, for example by inferring a lack of professional competence from an undeveloped professional identity, for instance as a result of career changes (Fischer & Witzel, 2008, 25). It makes sense to emphasise subjective dispositions for assuming the professional role and general occupational values insofar as these can be ascertained independently of the respective qualification path. Which type of training organisation favours or hampers the development of such a professional identity therefore becomes an empirical question. As described (→ 4.7), a variety of approaches exist within commitment research for the empirical capture of professional and organisational commitment. In this context, three questions are of interest for vocational and business education studies.
• How pronounced is the professional willingness to perform?
• Is professional motivation based on intrinsic or extrinsic factors?
• To what extent do professional identity, emotional affiliation to the company and the willingness to (obediently) perform given tasks contribute to a professional willingness to perform?

8.5.2 Calculating the Results

The I-C split scales are required to calculate the I-C diagrams. This form of representation is suitable for a differentiation according to occupations. Like the competence levels, the commitment scales are differentiated into low, medium and high (commitment split scales). To determine the limit values, the 33rd and 66th percentiles of each commitment scale were determined on the basis of the Bremerhaven study (Heinemann, Maurer, & Rauner, 2009) (n = 1560).

Table 8.8 Limit values differentiated according to low, medium and high

Scale                        Low       Medium     High
Organisational identity      0–12      12.1–17    17.1–24
Professional identity        0–14      14.1–19    19.1–24
Professional commitment      0–16      16.1–19    19.1–24
Work ethics                  0–14      14.1–18    18.1–24
Organisational commitment    0–14.4    14.5–17    17.1–24

Table 8.9 Saxony study 2015: limit values for commitment differentiated according to low, medium and high

Scale                        Low       Medium     High
Professional identity        0–16      16.1–20    20.1–24
Professional commitment      0–17      17.1–20    20.1–24
Organisational identity      0–12      12.1–18    18.1–24
Organisational commitment    0–14.4    14.5–17    17.1–24
Work ethics                  0–18      18.1–22    22.1–24

In the context of the 'Saxony study' (Rauner, Frenzel, Piening, & Bachmann, 2016) (n = 3300), the I-C model was extended by a further scale for organisational commitment (Table 8.8). For extensive projects such as the Saxony study, involving more than 3000 trainees, project-specific limit values can also be calculated and applied (Table 8.9).
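Applying the split scales then amounts to mapping each raw scale score (0–24) onto the bands of Table 8.8 or Table 8.9. The following minimal sketch (Python) transcribes the Table 8.8 limit values; the dictionary and function names are illustrative and not part of the COMET software.

```python
# Table 8.8 limits as (upper limit of "low", upper limit of "medium");
# everything above the second value counts as "high" (maximum 24).
LIMITS_BREMERHAVEN = {
    "organisational identity": (12.0, 17.0),
    "professional identity": (14.0, 19.0),
    "professional commitment": (16.0, 19.0),
    "work ethics": (14.0, 18.0),
    "organisational commitment": (14.4, 17.0),
}

def split_scale(scale: str, score: float, limits=LIMITS_BREMERHAVEN) -> str:
    """Map a score on one commitment scale to 'low', 'medium' or 'high'."""
    low_max, medium_max = limits[scale]
    if score <= low_max:
        return "low"
    if score <= medium_max:
        return "medium"
    return "high"

# e.g. split_scale("professional identity", 18.5) -> 'medium'
```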

8.5.3 ‘Commitment Lights’

The commitment split scales are required to calculate the commitment lights. As a rule, the lights are calculated per occupation and presented in a cross-occupational diagram (Fig. 8.17). Diagrams displaying all commitment scales of a profession can also be used (Fig. 8.18).

8.5.4 Commitment Progression

Several line diagrams are generated for the commitment progressions. The average scale values of the total sample are displayed in one line diagram. In five further diagrams, the average values of the second and third training years are shown for each commitment scale to enable a comparison of commitment between these two stages of training.

Example The progression of professional identity in industrial-technical occupations In the occupational group of the industrial-technical industry, industrial mechanics stand out in comparison with other trainees due to their heterogeneous


Fig. 8.17 Example of a commitment light, here for professional commitment, COMET NRW 2014

Fig. 8.18 Exemplary representation of all commitment scales of a professional group

development of identity over the years of training. Industrial mechanics’ identification with the profession drops surprisingly sharply in the second year of training and then rises sharply again in the third. All other occupations show a slight to severe (plant mechanic) drop in occupational identity with increasing duration of training (Fig. 8.19).


Fig. 8.19 Progression of professional identity, group 1, industrial-technical industry (Rauner, Frenzel, Piening, & Bachmann, 2016)

The progression of professional commitment shows a certain similarity with the progression of professional identity for the same occupational group. After a drop in professional commitment in the second year of training, it rises again in the third year in three of these occupations. Only the process mechanics show a steadily increasing development of professional commitment over the duration of their training (Fig. 8.20).

8.5.5 Four-Field Matrices

The z-standardised commitment scales are required for the four-field matrices and enable a comparison of the different professions. The average value of the total sample is 0 and the standard deviation is 1. Values > 0 therefore mean that the subgroup with this value has an above-average result; conversely, values < 0 indicate a below-average result. [...]

An interrater reliability (Finn coefficient) of > 0.7 was achieved by all national rating groups. As a rule, the rater training participants used the solution space of the test tasks only for the first two trial ratings; they then internalised the profession-specific interpretation of the evaluation of an item. This result also explains why the COMET test procedure manages with a total of only three very similar evaluation scales for all professional fields. The rater team of the Chinese project provided a big surprise: nobody in the German-Chinese project consortium had expected that 30 Chinese teachers from vocational schools, technical colleges and higher technical schools in the Beijing region would assess the task solutions of German trainees selected for the rater training at a very high level of agreement by the second trial rating (including the rating results of the German project). This result was the first proof that the COMET method of competence diagnostics for vocational training can be used to carry out international comparative examinations without any problems. The results of two repeat trainings (Beijing, Hesse)—after 1 year—showed that the competence once achieved by the raters—the new technical understanding—is maintained (COMET Vol. III 2011, 107). The sustainable acquisition of a holistic technical understanding and problem-solving competence as a basis for the reliable and objective evaluation of the most varied solution variants of the test participants for open and complex test or


examination tasks does not require lengthy further training. In a one-day training session, it is possible to convey this ability (Sect. 4.6.1). This can only be explained by a 'Eureka' effect, triggered by a spontaneous insight on the part of the participants in the rater training that requires no lengthy justification: professional tasks must always be solved completely ('What else?'). If even one of the solution criteria is not observed, this may entail incalculable risks for the environment, the company or the employees themselves. When outsiders are confronted with this method and the values of interrater reliability it achieves, this usually triggers great astonishment. 'I would not have thought it possible,' said a vocational training expert at an IHK conference at which the results of an international COMET project were presented. When raters apply all 40 items in the evaluation of a profession-specific task solution, it becomes apparent that they are often still far apart from each other in the initial trial rating. No later than the fourth trial rating, they reach a high to very high degree of agreement (interrater reliability): Finn = 0.75–0.85. In the course of the rater training, and at this high level of agreement, the raters learn to interpret the rating items in a profession-specific manner. At this point, the COMET test procedure is being handled professionally. Now at the latest, the subject researchers (teachers/trainers) actively involved in the project are in a position to apply the COMET concept as a didactic model in their teaching. Thomas Scholz: 'The discursive process among the teaching staff that accompanies the project is just as complex and multi-layered as the effect of COMET on teaching. Mutually influencing conversations occur at different levels and in the associated social relationships. Meta-communication is created between all participants. COMET has initiated a pedagogical-didactic discussion with us from the very beginning. However, it took another two years until we understood the COMET concept in all its depth and were able to use it didactically' (Scholz, 2013, 28).
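For readers who want to relate agreement values of the kind cited above (Finn = 0.75–0.85) to the underlying ratings, the following minimal sketch shows one common formulation of the Finn coefficient: observed within-item rating variance compared with the variance expected under purely random rating on an m-point scale. Whether the COMET software uses exactly this formulation is not stated here, and the 4-point scale width in the default is an assumption.

```python
import numpy as np

def finn_coefficient(ratings, m=4):
    """Finn coefficient for ratings of shape (items, raters).

    Agreement = 1 - observed within-item variance / variance expected
    under uniform random rating on an m-point scale, (m**2 - 1) / 12.
    Perfect agreement yields 1.0; purely random ratings yield ~0.0.
    """
    ratings = np.asarray(ratings, dtype=float)
    observed = ratings.var(axis=1, ddof=1).mean()  # mean within-item variance
    expected = (m**2 - 1) / 12.0                   # chance variance
    return 1.0 - observed / expected
```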

9.6.2 The Changed Understanding of the Subject Shapes the Didactic Actions of Teachers

If the standards established for comparative investigations in competence diagnostics are used as a basis, the results of the pre-tests can only be compared with those of the main test to a very limited extent.


9.6.3 Context Analyses: The Subjective View of Learners on the Importance of Learning Venues

The subjective importance that the trainees attach to the vocational school and its teachers for their competence development becomes particularly clear in the context analyses in the BBiG professions, which refer to learning venue cooperation.

The Weighting of Learning Venues When weighting the importance of school and company as learning venues for learning a profession, trainees in BBiG occupations show a clear preference for the company as a learning venue. This applies in particular to industrial-technical training occupations. In the COMET NRW project (cf. Piening, Frenzel, Heinemann, & Rauner, 2014), 71.3% of industrial mechanic trainees and 65% of electronics technicians for industrial engineering 'fully agree' or 'partly agree' with the statement: 'I learn much more at work than at vocational school'. Trainees rate the importance of school-based learning as consistently low. A clear majority of trainees in industrial-technical occupations negate the statement 'Vocational school teaching helps me to solve the tasks and problems of in-company work'. The assessment of the statement 'The vocational school lessons and my everyday work in the company have nothing to do with each other' is similar. Obviously, the fit between theoretical and practical learning content is limited. It is noticeable that trainees differentiate between the relatively highly rated technical and methodological competences of their teachers (Fig. 5.18) on the one hand and their knowledge of company reality, which is considered rather low, on the other (Fig. 5.19).

If one compares the assessment of the learning situation at the vocational school with that of the companies, it becomes clear that in-company vocational training is valued much more highly. The fact that trainees can learn a lot from their trainers is undisputed among those surveyed, irrespective of their occupation. They therefore also come to the conclusion that they learn much more at work than at school (Figs. 9.28 and 9.29). A clear picture emerges when these assessments of school learning and of the teachers are summarised: the trainees rate the significance of the vocational school and its teachers for their competence development as rather low and believe that they learn significantly more for their profession in the company than in the vocational school.

If one compares these assessments of the training quality of the vocational school as a learning venue with the results of the competence survey, it becomes apparent that the learning situation in the vocational school classes is the decisive determinant of the trainees' competence development. Almost every second trainee in Class 5 (Fig. 8.11), training to become an electronics technician for industrial engineering, achieves the highest level of competence. In Class 21, on the other


Fig. 9.28 “Our teachers really know the subject well” (ibid., 115)

Fig. 9.29 “Our teachers have a good overview of organisational reality” (ibid., 115)


hand, none of the trainees reaches this level of competence. According to the state of COMET research, the very pronounced spread of competence development between classes can be attributed to the teacher factor (cf. Rauner & Piening, 2014).⁶ Trainees in occupations with a high level of competence (IC, FLSC, MA and J) rate the quality of teaching significantly higher—with mean values between CW = 3.6 and CW = 3.9—than trainees in industrial-technical occupations (E-B and IM) with their low competence levels of 27.2 (IM) and 28 (E-B), who rate the quality of teaching as below average with CW = 2.5. The assessment of teaching quality correlates with the assessment of teacher competence: the E-B trainees rate the practice-related competence of their teachers as below average (CW = 2.6), while the IC trainees rate their teachers as above average (CW = 4.0). In the following, a further scale on vocational school learning, the vocational school learning environment, is used to examine the apparent contradiction between the empirically proven strong influence of vocational school learning on vocational competence development on the one hand, and the low rating the trainees give the school as a learning venue and its teachers on the other.

Vocational School Learning Environment The vocational school, as a partner of the training companies in the dual system of vocational training, is involved in imparting professional competence and in preparing for the final examinations regulated by the BBiG, in which teachers take part as examiners. However, in the German dual vocational training system, the results of school-based learning are not recorded in the final examinations. The vocational school is therefore experienced by the trainees as a learning venue of lesser importance—as a 'junior partner'. This also has an effect on the learning climate. The evaluation of the statement 'I feel comfortable at school' gives a first indication of the different perceptions of school learning. While electronics technicians specialising in energy and building services engineering and automotive mechatronics technicians largely feel comfortable at vocational school (55.5% and 54.8% respectively), this applies to only 28.4% (!) of industrial mechanics. The reasons the trainees give relate to the lack of cooperation in the learning environment: more than half of the industrial mechanics complain that their classmates show little consideration for other pupils (51.3%). This assessment is not shared to the same extent by the other two occupational groups. For trainees in craft trades, the vocational school has a compensatory function: they perceive and value it as a learning venue that compensates for the weaknesses of their in-company vocational training.

⁶ Also refer to the results of relevant learning research (e.g. Hattie & Yates, 2015).


Fig. 9.30 “What we do in class, I usually find interesting.”

The extent to which teaching plays a role in this is described in more detail below (Fig. 9.30). For the industrial mechanics, the factor 'teaching disrupted by classmates' turns out to be an influential quality aspect. While the risk pupils (trainees who did not exceed the level of nominal competence in the test) and the pupils with a low and very low level of competence do not perceive 'teaching disrupted by classmates' as a problem, the high-performing pupils perceive these disruptions caused by classmates as a serious problem. Two explanations offer themselves for the paradox of the high learning potential of VET schools and their low assessment by trainees.
1. When learning within the work process, the trainees experience their competence development directly, especially in the industrial-technical occupations. The development of their professional skills, which they experience within the work process, is the yardstick for their assessment of the training quality of the learning venues. How the acquisition of vocational work process knowledge in school-based learning contributes to the development of professional skills is not immediately apparent to many trainees. They therefore agree with the statement that their teachers are professionally competent, but are less convinced of their knowledge of the realities of work. Only learners with a higher level of competence are aware that it is above all at school that they (can) acquire the action-explaining and reflective work process knowledge that characterises employability.


2. The company as a learning venue carries significantly more weight in the minds of the trainees, as the training contract is concluded with the training company, which also remunerates the training. While presence in the training company is regulated by labour law and a violation can have far-reaching consequences—up to and including the termination of the training relationship—no comparable regulations apply to 'attendance' at the vocational school. Opinions differ when assessing the importance of vocational school learning for the acquisition of employability. This applies above all where the vocational schools are not equally involved in the final examinations.

Nursing Training at Technical Colleges This example shows that the management of dual vocational education and training 'from a single source' and the equivalence of the learning venues lead to a much more positive attitude among students towards learning in vocational schools: 'Satisfaction with training at the surveyed schools for health and nursing [...] is very high overall. 79 % of respondents are more or less satisfied with their training' (Fischer, 2013, 219). The teachers at the nursing schools are also rated positively: 71% confirm that their teachers have a good overview of professional reality, and 83% consider them to be more or less technically competent and up to date (ibid., 222). It is, therefore, no surprise that learning venue cooperation in dual nursing training is rated significantly more positively than in vocational training regulated under the BBiG: '[...] 70 % of the students are therefore of the impression that the teachers of the technical schools cooperate with the practice instructors and nursing services in the hospital more or less or completely—this statement does not apply to only 3 % of the respondents' (ibid., 237). The students of Swiss nursing training rate the learning venue cooperation 'even more positively than the trainees [of the German vocational schools]' (ibid., 238) (Fig. 9.31). Renate Fischer concludes that the coordination between technical school and practical training, which students at technical colleges assessed positively, and the good cooperation between teachers and practical instructors (e.g. also in joint projects) have a 'highly beneficial effect on the development of professional identity and commitment' (ibid., 272).

9.6.4 Conclusion

The competence surveys carried out within the framework of COMET projects in dual vocational training programmes show very clearly that school as a learning venue and teachers are decisive determinants of professional competence development. This applies above all to achieving the highest level of competence, as provided for in the learning field concept: “the ability to help shape the world of


Fig. 9.31 Learning venue cooperation, comparison of trainees and students on the statement: “My practical workplaces and the school coordinate their training”. (ibid., 239)

work in a socially, ecologically and economically responsible manner' (KMK, 1996, 10). By contrast, the assessment of school-based learning by trainees in dual vocational education and training (Germany) shows that, on average, they rate the quality of learning at school lower than that of in-company learning. The predominantly positive assessment of learning in the training company and the underestimation of school-based learning as a decisive factor for competence development can be attributed to the fact that trainees perceive and experience their increase in professional action competence directly in their professional actions in training practice, whereas they experience the competences acquired in school-based learning less directly, mediated through their action competence in practical professional work. It should also not be underestimated that trainees conclude their training contract—in which the rights and obligations of training are regulated in a legally binding manner—with the training company, and that they receive their training remuneration, the 'reward' for their training activities, from the companies. The educational potential of vocational schools is recognised above all by high-performing trainees, by trainees in occupations with a high average level of competence and by trainees who experience the school as a learning venue with an important compensatory function in their training. Despite the weaknesses of the dual organisation of vocational education and training, which trainees and students see primarily as being caused by the inferior quality of the school as a learning venue, the results of the competence surveys show that the school as a learning venue has a high learning potential. This can be seen


above all in the polarisation of the levels of competence achieved by the classes taking part in the tests. Despite comparable profession-specific educational prerequisites of trainees and students, part of the classes (of a profession) regularly reach a (very) high level of competence and another part a (significantly) lower level of competence. This means that teachers exploit the educational potential of vocational schools to a very different degree. The quality of learning venue cooperation is of overriding importance for exploiting the training potential of schools as a learning venue. Using the example of nursing training in Germany and Switzerland, it has been demonstrated that the management of dual vocational training “from a single source”—and therefore the equal participation of vocational schools in dual vocational training—enhances the quality of their training. This is reflected in the appreciation of vocational (technical) schools by trainees/students.

Comparability of Test Groups A statistical comparison between the test results of the pre-test participants and the participants of the first main test is possible, however, if both test groups represent the test population and if comparable test items are used for both tests. If the pre-test participants are distributed among the training centres participating in the test, then a comparison of the pre-test results with the results of the first main test can be used to examine whether and to what extent competence development has taken place.

Example: Pilot Study (Industrial Clerks) Eighty-two second- and third-year trainees from two vocational colleges (VC) took part in the first main test of the COMET Project NRW pilot study for industrial clerks (cf. Stegemann et al., 2015; Tiemeyer, 2015). Fifty-two trainees from the same VCs took part in the pre-test. The results are therefore not representative of the test population of the federal state. The comparability of the pre-test and main test participants is, however, given, as the number of participants in the two tests hardly differs and both test groups are representative of the industrial clerk trainees at the two vocational training centres (Fig. 9.32). The result impressively shows that the competence level of the trainees increased significantly in the period of about 6 months between the pre-test and the first main test. The increase in the competence level of the test group is reflected above all in a significant increase in the proportion of test participants who reached the highest competence level (Shaping Competence): from 21.8% in the pre-test to 69.5% in the first main test. A similar effect can be seen in the COMET NRW Carpenter project. The high Finn coefficient reached in the rater training indicates that all raters mastered the COMET competence and measurement model following


Fig. 9.32 Distribution of competence in pre-test and main test for industrial clerks (INK-A)

rater training. A comparison of the pre-test and main test groups is also possible here, as both test groups from two VET centres were involved in this pilot project. In the first main test, 77% of the test participants reached one of the two upper competence levels; in the pre-test, this was only 51.8%. The decline in the proportion of risk students from 30% (pre-test) to 17% in the first main test is particularly marked. The participation of the teachers in the pre-test—especially in the rater training and in the rating—evidently enabled them to implement their extended problem-solving patterns and technical understanding in their didactic actions.

Example: COMET Project Nursing Training, Switzerland The example of the training of nursing staff at higher technical colleges in Switzerland also shows a significant increase in the competence level of students at technical colleges in the period between the pre-test and the first main test. The proportion of students who reach the third (highest) competence level has increased significantly, while the proportion of the risk group has decreased significantly (Fig. 9.33). These three examples represent a development that has been demonstrated in almost all COMET projects.


Fig. 9.33 Distribution of competence levels, COMET project nursing training, Switzerland: Pretest 2012 and first main test 2013 (n ¼ 115)

9.6.5 Conclusion

The hypothesis that the active participation of vocational trainers in the development of test items and in their evaluation and optimisation within the framework of a pre-test—including rater training—has a positive effect on their competence development was confirmed. This form of further training takes place as an implicit learning process, as shown by the feedback workshops, in which the project groups, when interpreting the test results, did not recognise the changed didactic actions of the teachers as a (decisive) cause of the increase in their students' competence. That vocational trainers (also) implicitly transfer their specialist knowledge to their pupils/students was demonstrated in an extensive large-scale study in which 80 teachers/lecturers took part in the student test (cf. Zhou, Rauner, & Zhao, 2015; Rauner, Piening, & Zhou, 2015 [A + B-Forschungsbericht Nr. 18/2014]). Thomas Scholz sums up the experiences of the project group, gathered and reflected on during the implementation of the COMET Industrial Mechanic (Hesse) project, as follows: 'With the experiences from the pre-test and the two main tests as well as the development of test tasks, the working groups approached the design of learning tasks with a holistic solution approach. A new dimension of task development opened up, tasks that highlighted COMET's influence on teaching change. The discussion about methodology


and didactics with regard to COMET tasks in the classroom became the focus of the working groups. The group of industrial mechanics decided to introduce this new form of learning: the ability to solve tasks according to the COMET competence model. The introduction of this new learning form, as suggested by the learning field concept, had an impact on the test results. The more advanced the new teaching practice is, the better the test results will be' (Scholz, 2013, 25). The didactic actions of teaching staff are characterised by the tension between their specialist knowledge, shaped by their university studies and developed in their specialist work, on the one hand, and the work process knowledge incorporated into their professional activities on the other (Bergmann, 2006; Fischer & Rauner, 2002). With the acquisition of the COMET competence model, professional action knowledge (work process knowledge) moves into the centre of didactic action, and scientific knowledge becomes background knowledge that retains its significance for reflecting on complex work and learning situations. The theories and research traditions on which the COMET test procedure is based thus demonstrate their fundamental significance for competence diagnostics once again:
• Research into work process knowledge (cf. Boreham, Samurçay, & Fischer, 2002)
• The theory of multiple competence and the associated guiding principle of holistic problem solving (cf. Connell, Sheridan, & Gardner, 2003; Rauner, 2004b; Freund, 2011)
• The novice-expert paradigm and the associated insight that one is always a beginner when learning any profession and that the path to becoming an expert follows the rule that one grows with one's tasks (cf. Dreyfus, 1987; Fischer, Girmes-Stein, Kordes, & Peukert, 1995)
• The theories of 'situated learning' (cf. Lave & Wenger, 1991)
• The concept of practical knowledge (cf. Holzkamp, 1985; Rauner, 2004b)
• The theory of 'developmental tasks' (cf. Gruschka, 1985; Havighurst, 1972) and the related concept of paradigmatic work situations (cf. Benner, 1994)
• The theory of 'cognitive apprenticeship' (cf. Collins, Brown, & Newman, 1989)
• The 'epistemology of practice' (cf. Schön, 1983)
Teachers and trainers who actively participate in the COMET projects as test item developers and as raters are able to assess the professional competence of trainees and students at a high level of interrater reliability after just 1 day of rater training.


Teachers and trainers change their understanding of the subject and their didactic actions in the sense of the COMET competence model by participating in the development of test and learning tasks, in rater training and in the rating of task solutions, as well as by reflecting on and interpreting the test results with their pupils, in their subject groups and with scientific support. This change in thinking and acting does not take place as laborious additional training, but rather casually, as a Eureka effect, and to one's own surprise: 'Oh of course, it's as clear as day' or 'I have the feeling that I've been doing this for years'. The new or expanded understanding of the subject is reflected in the development of the learners' competences. Above all, their competence profiles are an expression of the new quality of training. They challenge teachers and trainers and make it easier for them to reflect on and change the strengths and weaknesses of their own didactic actions. Within a team, this creates a very effective form of learning from each other.

Chapter 10

Measuring Professional Competence of Teachers of Professional Disciplines (TPD)

10.1 Theoretical Framework

The Conference of the Ministers of Education and Cultural Affairs of the Federal States of Germany (KMK) published standards for teacher training (report of the working group) in 2004. In the introduction, Terhart explains: 'An [...] assessment of the impact and effectiveness of teacher training based on competences and standards is [...] the prerequisite for being able to introduce justified improvements if necessary' (KMK, 2004a, 3). His observation that the professional competence of teachers is ultimately reflected in the quality of their teaching is also confirmed by the results of the project 'Competence Diagnostics in Vocational Education and Training' (COMET). In addition to the teacher factor, the prior schooling of trainees and technical college students and the in-company learning environment in dual vocational training have proven to be further determinants of professional competence development (COMET Vol. III, Chap. 8). When measuring the professional competence of vocational school teachers, a distinction must be made between two aspects. It has proven useful to encourage teachers to take part in their students' tests; this has been tried both in PISA studies and in the COMET project. Naturally, this is not enough to measure teacher competence, which requires the development of a competence and measurement model. The linchpin is the definition and operationalisation of the requirement dimensions as well as the justification of competence levels (cf. KMK, 2004b, 3). An extensive quantitative study of occupational competence development revealed extraordinarily large—unanticipated—differences in competence between the 40 test classes (Hesse/Bremen) of trainee electronics technicians and students at technical colleges, 31 of which were from Hesse. Figure 9.6 shows this for the third competence level ('Holistic Shaping Competence'). The heterogeneity of competence development within the test groups (classes) turned out to be just as unexpectedly large (Fig. 10.1).


Fig. 10.1 Percentage of test participants (Hesse) who reach the level of 'Holistic Shaping Competence' per class: E-B = electronics technician for industrial engineering, E-EG = electronics technician for energy and building technology, F-TZ = technical college students, part-time

Within a class, there is often a difference of 2 years of learning time between high-performing and low-performing students. At the aggregation level of the two electronics occupations—electronics technician for industrial engineering (E-B) and electronics technician specialising in energy and building technology (E-EG)—prior schooling proves to be a decisive factor in explaining the higher competence level of the E-B trainees (cf. also Baumert et al., 2001, Chap. 8). The analyses of the context data in relation to the performance differences between classes of the same training occupation (e.g. E-B) are of didactic interest. In addition to the in-company learning environment for trainees and the competence of the trainers, the skills and behaviour of the teachers are the decisive factors in the professional competence development of the pupils/students (Rauner et al., 2015b). If one compares classes that are taught by the same teacher but receive their training in different companies, they are relatively close to one another in terms of their average level of competence—even if the training companies differ considerably in the quality of the training they provide and in their training environments. There are extensive and diverse findings from educational research on the central importance of teachers for the competence development of learners in all forms, courses and systems of education; John Hattie has summarised this state of research (Hattie, 2003, 2011). The extremely large differences in the competence characteristics of the test groups and the phenomenon of teachers transferring their competence profiles to their trainees/students therefore suggest that teacher competence should be


measured. If this succeeds, it would be a major step forward for the quality development of vocational education and training. The aim of gaining deeper insights into the professional competence (development) of vocational school teachers with the methods of large-scale competence diagnostics can be justified on the following grounds:
(1) The results of empirical vocational training research, according to which only very limited success has been achieved to date in enabling vocational school teachers to implement the learning field concept, agreed by the KMK in 1996, in the development of framework curricula for vocational school programmes (cf. Przygodda & Bauer, 2004, 75 f.; Lehberger, 2013, Chap. 2)
(2) The high degree of heterogeneity between the test groups of comparable courses of study that emerged in the vocational competence surveys of trainees/students (see above)
(3) The large proportion of trainees and students who, at the end of their training, are not able to complete professional tasks in a professional and holistic manner

10.2 Fields of Action and Occupation for Vocational School Teachers

In justifying the KMK standards for teacher training, the educational sciences are emphasised as the essential basis for the acquisition of teacher competences. These are above all the educational and didactic segments of the studies and the competences based thereon (KMK, 2004a, 4). From the perspective of general education, this restriction can possibly be justified: particularly in the tradition of humanistic pedagogy, with its paradigm of exemplarity, the meaning of the contents of teaching and learning was reduced to the function of a medium in the educational process (cf. Weniger, 1957). In vocational education and training, on the other hand, training content is of constitutive importance. The job descriptions and training regulations prescribed by the Vocational Training Act (BBiG) specify the knowledge, skills and abilities to be mastered in an occupation. This applies in particular to the examination requirements, which form the basis for the examination of employability in the individual occupations. Skills and knowledge which are necessary (!) for the exercise of a profession are examined separately, as the right to exercise a profession is not infrequently acquired with a qualification. Germanischer Lloyd, for example, verifies the mastery of various welding techniques by industrial mechanics apprentices (specialising in ship mechanics) in accordance with its own quality standards. Whenever safety, environmental and health-related training contents are involved, vocational school teachers and trainers are particularly challenged to communicate the corresponding training contents and to check their safe mastery. It follows that a “vocational school teacher” competence model must have a content dimension.


If one looks at the standards for the initial and continuing training of teachers, the descriptions of the skills that teachers must acquire are found to be largely identical, albeit differently structured. It is striking that, in the majority of training concepts, the dimension of specialist knowledge is often ignored. Fritz Oser (1997), for example, proposes 88 “standards” for teacher training: 12 higher-level training areas (“standard groups”), which are broken down into 88 detailed training objectives (standards). Remarkably, the content dimension of competence is missing in this compilation. In the training guidelines of the “National Board for Professional Teaching Standards” (NBPTS) quoted by Oser, on the other hand, one of the “five ideals for the collection and verification of teaching standards” under b) is “knowledge of the content that is learnt...” (cited from Oser, 1997, 28). In another current compilation of professionalisation standards for teacher training, the subject contents are even given special emphasis. The “Professionalisation standards of the Pedagogical University of Central Switzerland” read: 1. The teacher has specialist knowledge, understands the contents, structures and central research methods of their subject areas and can create learning situations which make these subject-specific aspects significant for the learners (professionalisation standards of the Pedagogical University of Central Switzerland [2011]).

The analysis by Andreas Frey (2006) and Johannes König (2010) of methods and instruments for the diagnosis of professional competences of teachers confirms that competence diagnostics for teachers is primarily aimed at recording interdisciplinary pedagogical-didactic competences. Frey summarises his findings as follows: “The list [of 47 methods and instruments] shows that the social, methodological and personal competence classes are already well covered by instruments. However, the specialist competence class, in particular the various specialist disciplines, is insufficiently documented in the specialist literature. In this case there is a need for scientific research and development” (Frey, 2006, 42).¹ The project of the International Association for the Evaluation of Educational Achievement (IEA) to measure the competence of mathematics teachers (TEDS-M) focused on the subject-related and didactic competence of teachers (cf. Blömeke & Suhl, 2011). However, the format of the standard-based test items and a supplementary questionnaire limit the scope of this test procedure. The “professional competence” of teachers can, therefore, only be recorded to a very limited extent.

¹ Cf. also König (2010).


10.2.1 Proposal for a Measurement Method by Oser, Curcio and Düggeli

Oser, Curcio and Düggeli have developed and psychometrically evaluated a method for measuring competence in teacher training. Methodologically, the concept is based on situation films (multi-perspective shots and film vignettes generated from them) and an expert rating of the competence profiles of teachers (Oser, Curcio, & Düggeli, 2007).² Oser and his research group have good reasons for opting for this methodological middle course between direct observation and self-evaluation procedures, as neither procedure has so far led to the desired results. The method of direct observation is ruled out for test-economic reasons alone. Even if it were possible to develop a reliable rating procedure, it would not cover a decisive dimension of competence: the knowledge on which teacher behaviour is based. Even if one assumes that the action-leading knowledge can be deduced from the observable action, this method leaves open to what extent teachers can reflect on their actions in a well-founded way or whether they act more intuitively and on the basis of practised “skills”. If one wants to capture teacher competence at the level of reflected professional knowledge and action, the observers cannot avoid reflecting on the observed behaviour together with the observed teachers. There are narrow limits to decoding the competences incorporated in observable behaviour as a domain-specific cognitive disposition. This limitation also applies to the “advocatory procedure” proposed by Oser and his team, which is likewise based on observation. In forms of teacher training based on video documents, joint reflection with those observed is, therefore, an essential element. The observed teachers have the opportunity to explain why they behaved in this way and not differently in specific situations (video feedback). Without reflection on the visual documents with the actors, video-based observation methods for measuring competence have only a limited reach. The Oser approach of identifying competence profiles at a medium level of abstraction is interesting because it avoids merely determining competence levels. The profiles were developed and identified according to the Delphi method. An example is competence profile A 2.3 (A 2, “forms of mediation”, is one of nine standard subgroups): “The teacher organises different types of group teaching...” (ibid., 16). With the concept of competence profiles, it is possible to approach the quality of teacher competence to be described and recorded. The project shows the empirical effort involved in developing such a method for measuring vocational school teacher competence.

² The project was carried out with 793 teachers from vocational schools. It is, however, not limited to vocational training.


10.2.2 Competence Profiles

The approach of using competence profiles to record vocational school teacher competence is interesting insofar as it clearly goes beyond the one-dimensional scaling of competence in the form of scores. Competence profiles can be used to represent the quality of competence (ibid., 17 ff.). The representation of competence profiles requires a competence model, primarily a scientific and normative model of the requirement dimension. The difficulty with this method lies in determining the number of profiles. Oser et al. choose a medium degree of abstraction to limit the variety of profiles to 45. There are, therefore, pragmatic reasons for this level of differentiation. As each teacher has their “own” competence profile, which may also vary from subject to subject, this approach requires an examination of the question of a taxonomy or other forms of systematising competence profiles. One way out of the difficulty of condensing the competence profiles of teachers to a certain number of profiles is to identify the competence dimensions (sub-competences) scientifically and normatively, which makes it possible to map arbitrary competence profiles. With this approach, Howard Gardner succeeded in establishing the concept of multiple intelligence as a concept of competence research (Connell, Sheridan, & Gardner, 2003). Abilities can then be conceptualised as functionally integrated intelligence profiles. Through the development of specific intelligences, there is room for competence development and potential abilities (Fig. 10.2).

10.2.3 Validity of the Oser Test Procedure

Here, Oser et al. rightly point to a very sensitive issue of competence diagnostics. The high validity of measurement procedures or test items can only be confirmed if one can prove how teacher competence affects the development of pupils’ competences. “However, whether or not the quality characteristics of a standard are actually recorded with the present diagnostic instrument must be checked with the aid of cross-validation” (ibid., 19). The planned expert rating exhausts the possibilities offered by this procedure as a whole. Ultimately, evidence must be provided as to whether pupil performance is attributable to the competence of the teachers.

10.2.4 The Action Fields for TPD

In teacher training, it is necessary to provide seminars on the manifold conditions and decision-making fields of the didactic actions of teachers—for example, on the teaching methods. This also applies to the acquisition of expertise. Although it is possible to verify whether a teacher/student has methodological and professional


Fig. 10.2 Example of an individual intelligence profile as well as two different intelligence profiles and the different spaces for competence development they define (Connell et al., 2003, 138 and 140 f)

competence, measuring teacher competence requires test tasks or test arrangements to be developed on the basis of a competence model. The content dimension of the competence model identifies “teaching” as a field of action. In order to determine the competence characteristics a teacher applies in their teaching and the characteristic competence profile they have in this field of action, a measurement model is required that covers all relevant aspects of this field of action. This means that the quality of group work must be recorded as one among many other and interacting aspects in the “teaching” field of action. Cooperation in a learning group only gains its didactic significance in the context of the teaching project. The KMK standards for teacher training identify four training and task areas for teachers: • • • •

School as an organisation Planning, implementation and evaluation of lessons Assessment of student performance Cooperation of colleagues in the context of teaching and school development (KMK, 2004a)


For each of these four training and task areas, a distinction is made between the knowledge to be acquired in the course of study and the skills to be acquired or mastered in teacher training and teacher work. If the training and task areas (1) and (4) are combined into one field of action—“participation in school development” (see KMK, 2004b, 3; item 5)—and if it is assumed, in line with the concept of complete action, that teaching also includes the evaluation and review of learning processes and the assessment of student performance (2) and (3),³ then two fields of action remain. For vocational school teachers, the following four task fields can be justified.

Planning, Implementing and Evaluating Vocational Learning Processes

The central task of every teacher at vocational schools is the design of vocational learning processes and their evaluation, including the individual assessment of learners (cf. KMK, 2004b, 3). The COMET competence model is of particular importance in this context. Cooperation with other teachers (e.g. the subject group) and with out-of-school cooperation partners (e.g. trainers) is the rule here.

Development of Educational Programmes

Occupational profiles, framework curricula and training regulations are increasingly designed to be open to development, so that accelerating technical change does not lead to a constant backlog of regulatory instruments in need of updating. The implementation of open training plans and the application of broadly based core occupations (e.g. mechatronics technician, polymechanic (Switzerland), computer scientist (Switzerland), media designer), taking into account the local and regional fields of application of the relevant training companies, mean that the development of training programmes in dual vocational training is one of the core tasks of vocational school teachers. The consideration of the qualification potential of the companies involved in dual vocational training requires a high degree of vocational competence—in addition to vocational pedagogical and didactic skills.

Planning, Developing and Designing the Learning Environment

Study labs and workshops are of particular didactic relevance for the design of vocational training processes. Their quality is a decisive factor in the implementation of “action-oriented” forms of learning. The study labs and their equipment are, therefore, often a “trademark” of specialist departments or vocational schools. How study labs and workshops can be designed under the conditions of technical

³ Teachers carry out their assessment and advisory duties in the classroom [...] (ibid., 3, No. 3).


change and changing qualification requirements so that they have an experimental quality that also allows them to deal prospectively with the vocational world of work is a particular challenge for vocational school teachers in this action field. In addition, the didactic actions of the teachers also depend on the media equipment and the quality of network access.

Participation in School Development

With the shift of operational tasks of school development to the level of vocational training institutions, the participation of vocational school teachers in quality development and assurance is one of their inherent tasks. The extraordinarily large diversity of professions and vocational training programmes and the regionally specific embedment of vocational training in economic structures call for a transformation of vocational schools into regional competence centres (BLK, 2002). It is foreseeable that the emphasis will increasingly shift to continuing vocational education and training. The international development towards “Further Education Colleges” and “Community Colleges” (USA) is already further advanced here. In this context, the traditional concepts of school development are losing importance. The transformation of vocational schools into competence centres requires the development of new forms of organisational development and their institutionalisation alongside universities and general upper secondary levels. When developing test items for a project “Measuring the professional competence of vocational school teachers”, the four action fields should each be represented by at least one complex test item. In the teacher training discussion, reference is made to the different dimensions or sub-competences of professional teachers. This concerns the sub-competences of teaching, educating, counselling, evaluating and innovating, as they were already established by the German Education Council (1970) and reformulated by the KMK Teacher Training Commission (1999). If these dimensions of teacher competence are “taught” in the form of modules as self-contained skills during the phase of familiarisation with teaching activity in seminars, there is a risk that these sub-competences will be misunderstood as mutually isolated characteristics of teacher action. For the standards of teacher training, this means that competence in counselling, teaching, education, etc. can only be demonstrated in the context of the domain-specific design and evaluation of (training) processes, something to which the KMK expressly refers in its standards. The educational task at school is closely linked to teaching. Similarly, assessment and counselling are not isolated fields of action, but tasks that are integrated into teaching (KMK, 2004b, 3). In teacher training structured according to disciplines and modules, disciplinary knowledge can be imparted and tested in exams. The professional competence of teachers, on the other hand, only becomes apparent in the domain-specific concrete fields of action (see above). A special feature of vocational education and training is its overriding guiding objective: “Empowerment to help shape the world of work in socially, ecologically


and economically responsible manner” (KMK, 1991, 196). The central idea of design-oriented vocational training has far-reaching consequences for the professionalisation of vocational school teachers and the realisation of learning environments in the institutions involved in vocational training: vocational schools, training companies and inter-company training centres. For vocational education and training, this means that its contents cannot be obtained by means of the didactic transformation of scientific contents. Professional work process knowledge has a quality of its own. When implementing an electrical lighting system in a residential or office space, a butcher’s shop or the vegetable department of a food discount store, the selection and arrangement of lighting fixtures in terms of brightness and colour temperature is extremely varied, taking into account the respective standards for workplaces, sales rooms, etc., and not least aesthetic criteria as well as ease of operation and repair. The decision for a low-voltage or normal-voltage solution is also a question of weighing competing criteria. If classes were propaedeutically geared to the basics of electrical engineering, the focus would be on switching logic and the functionality of the lighting fixtures. The content “electric lighting” would become a piece of applied natural science. The real world of work with its professional requirements—the actual contents of vocational education and training—as well as the central idea of co-designing the world of work would then be excluded. In the world of professional work, professionals are always faced with the task of exploiting the respective scope for solutions and design—“with social and ecological responsibility” (KMK, 1999, 3, 8). The implications of the learning field concept based on this central idea are obvious for the fields of action of vocational school teachers. The quality of learning environments for vocational education and training must be measured by whether they are designed according to the concept of the holistic solution of vocational tasks. Donald Schoen, in his insightful theoretical work “The Reflective Practitioner”, which corresponds to the category of practical intelligence, has demonstrated the fundamental importance of practical competence and professional artistry as an independent competence not guided by theoretical (declarative) knowledge. At the same time, this leads to a critical evaluation of academic (disciplinary) knowledge as a cognitive prerequisite for competent action. Schoen summarises his research results in the insight: I have become convinced that universities are not devoted to the production and distribution of fundamental knowledge in general. They are institutions committed, for the most part, to a particular epistemology, a view of knowledge that fosters selective inattention to practical competence and professional artistry (Schoen, 1983, p. VII).

In this context, he cites from a study on medical expertise: “85 % of the problems a doctor sees in his office are not in the book”. Schoen sees the deeper cause of the education system’s inability to impart knowledge that establishes vocational competences in disciplinary, subject-systematic knowledge: The systematic knowledge base of a profession is thought to have four essential properties. It is specialized, firmly bounded, scientific and standardized. This last point is particularly


Fig. 10.3 On the relationship between the objectives and theories of vocational education and training, the initial and continuing training of TPD and the design, evaluation and measurement of their competences

important, because it bears on the paradigmatic relationship which holds, according to Technical Rationality, between a profession’s knowledge base and its practice (ibid., p. 23).

He refers to an extensive curriculum analysis by Edgar Schein, who criticises the knowledge concept incorporated in school curricula: Usually the professional curriculum starts with a common science core followed by the applied science elements. The attitudinal and skill components are usually labelled ‘practicum’ or ‘clinical work’ and may be provided simultaneously with the applied science components or they may occur even later in the professional education, depending upon the availability of clients or the ease of simulating the realities that the professional will have to face (Schein, 1973, p. 44, cited in Schoen, 1983, p. 27).

10.3 The “TPD” (Vocational School Teacher) Competence Model

The central task of teachers at vocational schools is to empower pupils and students to help shape the world of work and society in a socially and ecologically responsible manner (KMK, 1991/1999). The guiding ideas and objectives of vocational education and training as well as of teacher training and teacher activity form the explanatory framework for a “TPD” competence model (Figs. 10.3 and 10.4). The development of a “TPD” competence and measurement model is needed to mediate between the guiding principles, goals and theories of vocational education and training and teacher training, to develop test tasks and to describe their solution spaces. The didactic relevance of the competence model can be seen above all in the fact that it is also suitable as a guide—among other things—for TPD training. The “TPD” competence model comprises the usual dimensions of competence modelling:


Fig. 10.4 The “TPD” competence model

• The requirements dimension (competence development/competence levels)
• The contextual dimension
• The behavioural dimension (cf. KMK, 2005; Schecker & Parchmann, 2006, 55; COMET Vol. III, 51 ff.)

10.3.1 The Requirements Dimension

The requirements dimension reflects the interrelated levels of professional competence. They are defined on the basis of the skills resulting from the processing and solution of occupational tasks (see Bybee, 1997; COMET Vol. III, Fig. 3.3, Sect. 3.3). The objective and subjective demands placed on the work of teachers refer directly to their professional and educational skills. The nine criteria of the competence level model, with its three competence levels and the lower level of nominal competence, serve as a framework for interpretation.


Functional Competence

Functional competence is attributed to (prospective) vocational school teachers who, within the framework of their university education, have acquired basic vocational-pedagogical, technical-vocational, vocational/technical-didactical and technical-methodological knowledge. Functional competence is based above all on the relevant pedagogical-didactical and professional action-leading knowledge, without the test persons being able to apply this knowledge in a situation-specific way or to sufficiently justify and reflect on their pedagogical actions.

Procedural Competence

Vocational school teachers with procedural competence are also in a position to apply their vocational knowledge in vocational training practice in a manner appropriate to the situation, to reflect on it and to develop it further. A characteristic feature of this level of competence is the ability to design and organise vocational training processes under the real conditions of school or training practice. These teachers have a vocational-educational work concept. They are part of the practice of their professional group.

Shaping Competence

Building on the previous levels, the highest level of competence represents the ability to solve vocational pedagogical tasks holistically (completely). This includes the criteria of socially compatible teaching as well as the ability to embed vocational training processes socio-culturally. The level of holistic competence includes the ability, with a certain amount of creativity, to weigh up the various demands placed on the holistic task solution in a situation-specific way: for example, between the requirements of the curriculum, the resources available and the greatest possible individual support for learners. The teacher is familiar with the relevant professional and pedagogical-didactic innovations in their field.

Nominal Competence

Nominal competence lies outside the scope of professional competence; it is included in the model design as a criterion for the success of teacher training. (Prospective) vocational school teachers who only reach the level of nominal competence are assigned to the risk group. As defined by PISA, risk students are not up to the demands of successful vocational training and have to reckon with considerable difficulties in the transition to working life (Baumert et al., 2001, 117). This definition can be applied mutatis mutandis to


teacher training and activities. There is, therefore, an urgent need for further training for teachers whose cognitive domain-specific performance disposition (competence) is below the first competence level (functional competence) and who nevertheless work as teachers. The training scope and content can be identified relatively precisely using the COMET test procedure.

10.3.2 The Contextual Dimension

The operationalisation of the contextual dimension is based on the four fields of action for vocational school teachers described above. While the professional fields of action of trainees and professionals differ fundamentally in terms of content—from profession to profession and above all from occupational field to occupational field—a uniform content structure covering all professions and occupational fields can be used as a basis for vocational educators (teachers/trainers) (see p. 6). However, this structure has to be specified in a profession-specific manner when developing test tasks and describing the corresponding solution spaces. The superordinate structuring of the contents in the form of the four fields of action can be justified in terms of vocational pedagogy. Modelling the content dimension at this medium level of operationalisation allows the development of a common competence and measurement model for teachers of professional disciplines and, therefore, makes it possible to compare teacher competence across occupational fields. At the same time, however, the vocational knowledge of the professionals to be qualified is of great importance for the design and organisation of vocational training processes.

10.3.3 The Behavioural Dimension

In justifying the behavioural dimension of the competence model, reference can be made to the justification of the COMET competence model. The concept of the complete working and learning action applies to teacher action in particular. Separating out the review of training success, as provided for by the Vocational Training Act for intermediate and final examinations (or Part 1 and Part 2 of the examination), impairs the professional design of the feedback structure and practice for the vocational school as a learning venue (cf. COMET Vol. 3, 222 et seq.). The operationalisation of the behavioural dimension, therefore, includes the examination of competence development during training. The behavioural dimension of the competence and measurement model takes up the occupational research concept of “complete tasks” (Ulich, 1994, 168):⁴

⁴ Ulich refers to Hellpach (1922), Tomaszewski (1981), Hacker (1986) and Volpert (1987).


• Independent setting of goals that can be embedded in higher-level goals
• Independent preparation for action in the sense of exercising planning functions
• Selection of means, including the necessary interactions, for adequate goal achievement
• Implementing functions with process feedback for any corrective action
• Control and result feedback, with the possibility to check the results of one’s own actions for conformity with the set goals (Ulich, 1994, 168).

It is noteworthy that Ulich emphasises the category of “complete tasks” and thereby establishes a reference to work design as a central object of occupational research. The programmatic significance that the concept of complete action (task design) has acquired in vocational education has one of its roots here. Another lies in the medium degree of operationalisation in the form of differentiating the complete working and learning action into successive action steps. For the didactic actions of teachers and trainers, this scheme offers a certain degree of security. In the meantime, this model of the action structure has also found international acceptance in connection with the introduction of the learning field concept into the development of vocational curricula. A further function is fulfilled by the behavioural dimension in the application of the competence model as didactic guidance for structuring process-related solution aids in the solution of vocational work and learning tasks (cf. Katzenmeyer et al., 2009, 173 ff.). While the requirements and content dimensions of the competence model are “product-related” dimensions, the behavioural dimension represents the process structure of working and learning actions.

10.4 The Measurement Model

10.4.1 Operationalisation of the Requirements Dimension (Fig. 10.5)

The operationalisation of the requirements dimension refers to the three fields of action (1) instruction, (2) development of educational programmes and (3) design of the learning environment, as these can be assigned to the overarching field of action of designing and organising vocational training processes. In action fields (2) and (3), the focus is on conceptual planning competences. If the learning environment—for example, a study lab—is carefully planned, its implementation no longer poses any particular professional challenges. How the quality of lesson planning correlates with that of teaching is one of the central research questions of competence diagnostics in teaching. It is conceivable that a teacher can create excellent lesson designs but does not have the competence to translate them into a successful lesson. For the development of the measurement model, this means developing a rating


Fig. 10.5 The professional competence of teachers of professional disciplines: levels, sub-competences (criteria) and dimensions

procedure with which the conceptual planning competence of the teachers can be measured within the framework of large-scale projects (rating scale A). For the evaluation of teaching, it is necessary to develop a modified variant of the measurement model with which both teaching design and teaching itself can be evaluated— for example, within the framework of demonstration lessons (rating scale B).

10.4.2 The Competence Dimensions

When modelling the requirement dimension, a distinction is made between nine sub-competences, which are assigned to three competence dimensions:

(1) Functional competence (Df) with the sub-competences
• Vocational competence (Kbf)
• Vocational/technical didactics (Kfd)
• Technical methodology (Kfm)
(2) Procedural competence (Dp)
• Sustainability (Kn)
• Efficiency (Ke)
• Teaching/training organisation (Kuo)
(3) (Holistic) shaping competence (Dgk)
• Social compatibility (Ksv)
• Social-cultural embedment (Ksk)
• Creativity (Kk)

10.4.3 The Competence Levels

The competence levels of functional competence (KF), procedural competence (KP) and shaping competence (KG) are interrelated levels of teacher competence. The first competence level (KF), functional competence, comprises the competence components Kbf, Kfd and Kfm. The second competence level (KP), procedural competence, comprises the components of functional competence (Kbf, Kfd, Kfm) as well as the sub-competences Kn, Ke and Kuo. The third competence level (KG), shaping competence, comprises the sub-competences of the competence levels KF and KP (Kbf, Kfd, Kfm, Kn, Ke, Kuo) as well as the sub-competences Ksv, Ksk and Kk.
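The cumulative structure of this level model can be illustrated in a few lines of code. The following is a minimal Python sketch, assuming hypothetical sub-competence ratings on a 0–3 scale and an illustrative fulfilment threshold of 2.0; the actual COMET cut-off values are defined in the measurement model and are not reproduced here.

# Minimal sketch of the cumulative "TPD" competence level model
# (Sects. 10.4.2/10.4.3). The threshold is a hypothetical placeholder,
# not the official COMET cut-off.

LEVELS = {
    "functional competence (KF)": ["Kbf", "Kfd", "Kfm"],
    "procedural competence (KP)": ["Kn", "Ke", "Kuo"],
    "shaping competence (KG)": ["Ksv", "Ksk", "Kk"],
}

def competence_level(scores: dict[str, float], threshold: float = 2.0) -> str:
    """Return the highest level reached; levels are cumulative, i.e. a
    higher level presupposes that all lower levels are fulfilled."""
    reached = "nominal competence"
    for level, subs in LEVELS.items():  # insertion order: KF, KP, KG
        if all(scores.get(k, 0.0) >= threshold for k in subs):
            reached = level
        else:
            break  # a gap at a lower level blocks all higher levels
    return reached

# Usage example with fictitious ratings on a 0-3 scale:
ratings = {"Kbf": 2.4, "Kfd": 2.1, "Kfm": 2.6,
           "Kn": 2.2, "Ke": 2.0, "Kuo": 1.4,
           "Ksv": 1.0, "Ksk": 0.8, "Kk": 1.2}
print(competence_level(ratings))  # -> functional competence (KF)

The sketch encodes only the cumulative logic described above; how the nine sub-competence scores are obtained from the rating items is a matter of the rating procedure (Sect. 10.4.4).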

10.4.4 Operationalisation of Competence Components for Teachers of Professional Disciplines (TPD) (Rating Scale A)

The operationalisation of the requirement dimension is based on the psychometric evaluation of the COMET competence and measurement model, which has a similar basic structure (Erdwien & Martens, 2009, 62 ff.; Rauner et al., 2011, 109 ff.).

10.4.5 Vocational Competence

The contextual reference points for the design of vocational learning processes are the characteristic vocational work processes/tasks and the work process knowledge incorporated therein. This knowledge has an objective component, given by the technical/scientific connections, as well as a pronounced subjective component, given by the action-leading, action-explaining and action-reflecting knowledge (cf. Lehberger, 2013). A particular challenge for teachers at vocational schools is distinguishing between long-lived structural knowledge on the one hand and the “superficial knowledge” found at the surface of technical-economic development on the other. The decision as to whether professional knowledge has to be acquired or whether it is a


question of the ability to acquire this knowledge as the situation requires, using the corresponding media, demands a high level of professional competence from teachers.

• Are the professional contexts presented correctly?
• Does the solution or design space correspond to professional reality?
• Is the objective and subjective knowledge of the work process taken into account?
• Is the functionality of the task solution adequately challenged?
• Is a distinction made between appropriation and research when dealing with professional knowledge?

10.4.6 Vocational/Technical Didactics

The vocational/technical didactic competence of vocational school teachers and trainers determines the ability to transform vocational content into teaching and learning content. With the introduction of the learning field concept, subject-systematic knowledge was replaced by work process knowledge as the reference point for the content design of teaching and learning processes. The didactic reference point for the design of teaching and learning processes is the “significant vocational work situation”. As the technical and scientific training of teachers is usually oriented towards the relevant academic subjects, the implementation of the learning field concept poses a particular challenge.

• Is the professional validity of the educational objectives and contents (in relation to the professional profile) adequately taken into account?
• Has it been possible to select the task for the learners’ respective level of development?
• Is the didactic concept of the holistic task solution taken into account in the planning and evaluation concept?
• Is the didactic concept of the complete learning/working action taken into account?
• Is the curricular validity of the teaching project adequately taken into account?

10.4.7 Technical Methodology (Forms of Teaching and Learning)

The concept of professional action competence and the professional profiles defined in the training regulations require a targeted approach in the selection and application of forms of learning and mediation. For example, the acquisition of safety-related skills requires different forms of teaching and learning than the more general aspects of vocational education and training. Equally important is the consideration of the


connection between knowledge and ability, with emphasis on the importance of “action-oriented learning”.

• Is the methodological concept appropriate for the teaching project (learning tasks, projects, etc.)?
• Are alternative methods of learning and teaching weighed against each other?
• Does the teaching method take into account the heterogeneity of the class?
• Are forms of activity-based learning adequately considered?
• Are pupils given the opportunity to present and evaluate their learning outcomes?

10.4.8 Sustainability

Teaching and training always aim at the sustainable acquisition of skills. This is most likely achieved through a high degree of self-organised learning and when learning is accompanied by strong feedback (cf. Hattie & Yates, 2015, 61 ff.). In project-based learning, the success of the project, the presentation of the results and the experience that a teaching project has “achieved” something are decisive for the acquisition of memorable knowledge as well as of basic skills that underpin the ability to act in a variety of situations.

• Is superficial learning of professional work/knowledge (Know That) avoided?
• Is the aspect of “prospectivity” (future possibilities of professional skilled work) taken into account?
• Is competence regarded as a cognitive disposition (cognitive potential)—and not only as a qualification requirement?
• Are forms of valid and reliable quality assurance used?
• Is the aspect of developing professional identity taken into account?

10.4.9 Efficiency

The optimal use of resources is a particular challenge for the design and organisation of vocational education and training. This concerns, among other things, the equipment and use of the study labs. Cooperation between teachers and between teachers and trainers offers the possibility of increasing not only the quality of teaching and training, but also the efficiency of the planning, implementation and evaluation of teaching and training.

• Is the time and effort required for the preparation of the teaching project appropriate?
• Are the opportunities for teamwork used?
• Are the individual learning outcomes (competence developments) the learners have achieved verified?


• Are media and study labs used in a targeted way?
• Are there good reasons for resorting to tried and tested learning tasks/projects?

10.4.10 Teaching and Training Organisation

In vocational education and training, the teaching and training organisation places particularly high demands on teachers and trainers. This concerns above all the interaction between theoretical and practical learning in dual vocational training and the organisation and design of practicums at vocational and technical schools. The educational contents must be coordinated with each other, and joint projects require a high degree of coordination. With the introduction of learning fields in vocational schools, the demands on cooperation between teachers have increased significantly. When classes are formed in dual vocational training, it must be decided whether company-specific or mixed classes are to be established.

• Are the premises and equipment resources of the school used appropriately?
• Are the opportunities for learning venue cooperation exploited?
• Are the opportunities for cooperation between teachers used?
• Is Internet access ensured for teachers and learners/students?
• Is feedback on learning outcomes adequately established?

10.4.11 Social Compatibility

Social compatibility in teaching refers above all to the aspects of humane work design, health protection and the social aspects of teacher activity that extend beyond the professional work context (e.g. dealing with the most varied interests of school management, education administration, parents, companies and trainees).

• To what extent does the didactic action of the teachers (planning, teaching, follow-up of lessons) correspond to the criteria of humane work design?
• Are aspects of health protection and safety at work (for teachers and learners) taken into account?
• Is the aspect of creating a good learning climate considered?
• Is the handling of disturbances and conflicts (organisation, school, pupils, colleagues) taken into account?
• Does the teaching team consider lesson planning and design a “common cause”?

10.4.12 Social-Cultural Embedment

Teaching is increasingly confronted with questions of the cultural and social context of vocational training. This concerns the family situation of pupils and trainees (e.g. single parents) as well as the economic situation (e.g. poverty) of learners. The migration background of pupils and students (language, social norms, religion, etc.) is a central aspect of teaching, especially in cities and conurbations.

• Are the anthropogenic and socio-cultural preconditions of the participants taken into account in lesson planning?
• Are the circumstances of the social environment taken into account?
• To what extent is the currently expected role behaviour (learning guide, moderator, consultant, organiser, role model) taken into account?
• To what extent is the potential for conflict arising from the learners’ socio-cultural background taken into account?
• Is the economic situation of the learners taken into account?

10.4.13 Creativity

Creativity is a competence that is difficult to assign to a single competence level in terms of educational theory. To a certain extent, it runs transversely to the other competence components. Decisive for its assignment to the third competence level (holistic shaping competence) was the argument that, at the highest competence level, it is important to weigh all solution-relevant criteria against each other in a situation-specific way in the “search” for a good task solution. The special “plus” of a solution results not only from the consideration of all criteria, but also from the ability to creatively design individual solution dimensions and to weight the criteria in a balanced way. This challenge to professional competence is fully met only at the level of holistic competence.

• Does the lesson plan contain original elements that go beyond the usual?
• Are different ideas for lesson planning creatively balanced against each other?
• To what extent are individual and learning group-related learning prerequisites taken into account in planning decisions?
• Is the freedom of design offered by the task exploited in the teaching project, for example, through the use of media?
• Is the planned lesson sensibly integrated into the longer-term teaching context (learning situation)?

10.5 Test Tasks

In the competence diagnostics of teacher competence, a distinction is made between measuring cognitive dispositions in the sense of conceptual-planning competence and measuring professional action competence, which includes didactic action (e.g. teaching). For the evaluation of teaching, that is, the implementation of lesson planning, an appropriately adapted rating scale (B) is used, which contains three elements: (1) lesson preparation in the form of a lesson plan, (2) instruction (the demonstration lesson) and (3) an evaluation interview with the examiners (→ Sect. 10.7). Rating scale A (→ Sect. 10.4 and Appendix B) is used for large-scale competence diagnostics projects: the recording of conceptual-planning competence for the action fields “teaching”, “developing educational programmes” and “designing learning environments”. The competence level and profile of the test participants in carrying out tasks of lesson planning and implementation as well as of designing learning environments can be seen from the competence characteristics (Table 10.1).

10.5.1 Test Tasks for Measuring Cognitive Dispositions (Conceptual-Planning Competence)

As this test procedure does not measure teacher behaviour but teacher competence as a cognitive disposition, the test items for the action field “teaching” refer to the conceptualisation of teaching as it takes place in practice in the form of lesson preparation. Whether teachers with a high level of competence in lesson planning also have a high level of competence in teaching remains an open question; its clarification requires a separate empirical investigation. This restriction does not apply to the action fields “development of educational programmes” and “planning, development and design of the learning environment”, as in these tasks the planning activities determine the quality of the result. Each test item comprises:

Table 10.1 Guidelines for the development of test items

The test tasks:
• Capture a realistic task from one of the four vocational fields of action
• Define the scope for design given for the respective profession and, therefore, enable a multitude of different solution variants of different depth and width
• Are open to design, that is, there is not only one right or wrong solution, but requirement-related variants
• Demand the components of the requirement, action and content dimensions identified in the competence model for their comprehensive solution
• Require a professional-technical and vocational-educational-didactically well-founded approach for their solution. The task is solved on the basis of a concept-related plan. It comprises a comprehensive justification


1. A description of the situation. A realistic or real situation is described for one of the four vocational task fields, so that the (future) teacher can get an exact picture of the task to be solved. The description of the situation specifies the starting points for the task solution. A question-based breakdown of the description of the situation or the task is not provided, as this would already outline the solution.
2. The task. It consists of the invitation to develop a solution proposal that is appropriate to the situation and can be implemented—if necessary, also equivalent alternative proposals—and to justify these in detail. For this purpose, solution aids are specified and provided. It can be assumed that (future) teachers will carry out their planning work with the help of computers and the Internet. When processing tasks, the corresponding sources must be indicated according to the usual rules.

10.5.2 Time Scope of the Test Tasks (for Large-Scale Projects)

The maximum processing time for test items is 180 minutes. The test items are designed in such a way that this time is sufficient to process them without time pressure.

10.6 State of Research

For a standardised competence survey of teachers of professional disciplines (TPD), one difficulty is that their training is organised very differently across the globe (Grollmann, 2005; Grollmann & Rauner, 2007). For a TPD competence model and the design of the test items, this means that the fields of action justified above must be taken as a basis and not a national curriculum or national standards for the training of vocational school teachers. The international connectivity of a competence model can best be investigated in the context of internationally comparable research projects (competence diagnostics). On the basis of the test items, it is most likely possible to show whether and to what extent the actors involved in the initial, continuing and further training of teachers for vocational training courses agree in their explicit and implicit concepts and theories of professionalisation.

10.6.1 A Pilot Study with Student Teachers

Within the framework of a pilot project with a group of student teachers from the professional fields of electrical and metal engineering, two test tasks were used, one


each for the action field of teaching and the action field of designing learning environments. Sufficient time was available to process a test item; the prescribed time frame of 3 h proved to be appropriate. A double rating of the task solutions was carried out by two experienced discipline leaders with extensive rating experience. The very high degree of agreement between the ratings allowed a precise recording of the competence development of the test participants. The most important result of this pilot study was certainly the willingness of the student teachers to take part in this test. The development of the test tasks for teachers of different professional disciplines was relatively simple, as the vocational fields of action are the same for teachers of all professional disciplines (see above). The didactic actions of teachers of different professional disciplines, therefore, do not differ in the structure of the actions, but in their content. All professions, for example, are concerned with the design of study labs (see the third field of action) as “learning venues” for experimental and action-oriented learning; only the occupation-specific teaching/learning contents differ. This can easily be checked using the example of the study lab to be set up. Herein lies a fundamental difference to the competence diagnostics of trainees and professionals, whose fields of action differ from profession to profession. Comparable overriding pedagogical-didactic criteria apply to the study lab to be set up (Fig. 10.6), whether for metal or electrical professions. This also applies to the action field of teaching and is of great advantage for the execution of tests and, above all, for the interdisciplinary comparability of the results.

Test Results

The extraordinarily large differences in competence development, both in the competence levels and in the competence profiles, were expected neither by the discipline leaders nor by the student teachers. In the opinion of all participants, the pilot study has shown that the diagnostic value of this survey method is very high and cannot be achieved with any other evaluation method. The competence profiles of the test participants are shown in Fig. 10.7.

10.6.2 The Research Programme: Competence Development of Teachers and Lecturers in Vocational Education and Training in China

In preparation for an extensive competence diagnostics project for TPD in the professional fields of metal technology and automotive service/technology in China under the direction of Zhao Zhiqun (Beijing Normal University), both the complex test tasks (developed and tested in a pilot project in Germany) and the measurement


Fig. 10.6 Example of a COMET test task for vocational school teachers


Fig. 10.7 Competence profile of student teachers

model were first evaluated didactically. This was based on an analysis of the vocational tasks and fields of action of vocational school teachers using the method of expert workshops for professionals (Kleiner et al., 2002; Spöttl, 2006). The fields of action on which the competence model is based were confirmed, but their content was further differentiated (Zhao & Zhuang, 2012). With reference to this result, the complex test tasks were modified without changing their core (Fig. 10.8).


Fig. 10.8 Objects of research and their logical order

Pretest (China)

The pretest serves to
(a) Verify the quality of the complex test tasks and obtain data for their revision
(b) Evaluate the rating scale with its 45 rating items didactically

Six “expert teachers” were trained as raters in a one-day training session. After four trial consultations, a high interrater reliability of Finn (just) > 0.70 was achieved. The didactic evaluation of the rating scale showed that 40 of the 45 rating items could be retained unchanged, while the other 5 items were editorially adjusted (Zhao & Zhuang, 2012, 6). Overall, the complexity level (level of difficulty) of the situation descriptions of the test tasks was slightly reduced.
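For readers unfamiliar with the agreement statistic reported here: the Finn coefficient compares the observed within-item rating variance with the variance expected if raters scored uniformly at random on a k-point scale. The following minimal Python sketch is illustrative only; the function name and the toy data are not taken from the Chinese pretest.

import numpy as np

def finn_coefficient(ratings: np.ndarray, k: int) -> float:
    """Finn (1970) agreement for an (items x raters) rating matrix on a
    k-point scale (e.g. scores 0..3 -> k = 4): 1 minus the ratio of the
    mean within-item variance to the chance variance (k**2 - 1) / 12."""
    within_var = ratings.var(axis=1, ddof=1).mean()  # observed disagreement
    chance_var = (k ** 2 - 1) / 12.0                 # uniform random rating
    return 1.0 - within_var / chance_var

# Toy example: 6 rating items, 3 raters, scores on a 0..3 scale.
scores = np.array([[2, 2, 3], [1, 1, 1], [3, 3, 2],
                   [0, 1, 0], [2, 2, 2], [3, 2, 3]])
print(round(finn_coefficient(scores, k=4), 3))  # 0.822, above the 0.70 mark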

Main Test

The test was attended by 321 teachers of metal technology and automotive service (technology) from 35 vocational training centres—a representative selection for China. The number of raters was increased to 13. The test results proved to be very informative (Fig. 10.9). By far the highest level of competence in China is held by lecturers at technical colleges (technicians colleges): nearly 85% of these teachers achieve the highest level of competence. The competence level of teachers at vocational colleges is


Fig. 10.9 (Total) Distribution of competences of TPD in metal and automotive service

significantly lower—but still high: 61% reach the second level of competence. By contrast, the competence level of teachers in the vocational branch of high schools (vocational schools) is very low: 43% have no professional competence. For the first time, the competence profiles provide a very precise picture of the competences of Chinese vocational school teachers in the various vocational training programmes (Fig. 10.10).

Test Reliability

When verifying test reliability, values above 0.5 are considered acceptable and values of 0.9 and higher are considered very high. The internal consistency value achieved in the psychometric evaluation of the competence and measurement model is 0.983, and the split-half reliability is 0.974 (ibid., 17). High values were also achieved in the verification of empirical validity (Fig. 10.11 and Table 10.2). Referring to COMET teachers’ professional model, the assessment model and its operational definition on professional competence, we construct a basic factor model, i.e., a first-order 9-factor model consisting of 9 indexes and 45 items (as shown in Fig. 9.10) (. . .) In summary, these results show that the professional competence test exhibited high empirical validity, discriminating teachers with exceptional skills from those with average skills (ibid., 22).
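As an illustration of the two reliability statistics reported above, the following minimal Python sketch computes Cronbach’s alpha (internal consistency) and an odd-even split-half reliability with Spearman-Brown correction for a (persons x items) score matrix. The simulated data are placeholders, not COMET data.

import numpy as np

def cronbach_alpha(x: np.ndarray) -> float:
    """Cronbach's alpha for a (persons x items) score matrix."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def split_half(x: np.ndarray) -> float:
    """Odd-even split-half reliability, Spearman-Brown corrected."""
    odd, even = x[:, 0::2].sum(axis=1), x[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# Placeholder data: 200 persons, 45 items driven by one latent ability.
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))
items = ability + rng.normal(scale=0.8, size=(200, 45))
print(round(cronbach_alpha(items), 3), round(split_half(items), 3))
# Both values are high for this strongly one-dimensional toy data.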


Fig. 10.10 Competence profiles of Chinese vocational school teachers, by training courses

Fig. 10.11 Basic data model for TPD professional competence. (Source: Own compilation)


Table 10.2 Independent t-test on the competence dimensions of TPD

Item | Variance assumption       | Mean value | Standard deviation | t value | Degrees of freedom | Significance (2-tailed)
PF   | Variances assumed equal   | 14.7433    | 3.88015            | 3.880   | 233                | 0.000
PF   | Variances assumed unequal | 16.6982    | 2.84448            | 4.341   | 188.392            | 0.000
PP   | Variances assumed equal   | 13.0890    | 3.97335            | 4.022   | 233                | 0.000
PP   | Variances assumed unequal | 15.1869    | 3.06921            | 4.419   | 180.040            | 0.000
PG   | Variances assumed equal   | 10.3727    | 3.78394            | 3.594   | 233                | 0.000
PG   | Variances assumed unequal | 12.1689    | 3.00513            | 3.911   | 175.612            | 0.000

Notes: PF = functional competence; PP = processual competence; PG = shaping competence
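The paired rows per dimension correspond to Student's t-test (variances assumed equal, integer degrees of freedom) and Welch's t-test (variances assumed unequal, fractional Welch-Satterthwaite degrees of freedom such as 188.392). A minimal sketch of such a two-group comparison (the group data are simulated; only the group sizes are chosen so that the pooled degrees of freedom match the reported 233):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated scores on one dimension (e.g. PF) for two teacher groups;
# 120 + 115 - 2 gives the pooled df of 233 shown in Table 10.2.
g1 = rng.normal(16.7, 2.8, size=120)
g2 = rng.normal(14.7, 3.9, size=115)

t_eq, p_eq = stats.ttest_ind(g1, g2, equal_var=True)       # Student's t-test
t_uneq, p_uneq = stats.ttest_ind(g1, g2, equal_var=False)  # Welch's t-test

# Welch-Satterthwaite degrees of freedom (fractional, as in the table).
v1, v2 = g1.var(ddof=1) / g1.size, g2.var(ddof=1) / g2.size
df_welch = (v1 + v2) ** 2 / (v1 ** 2 / (g1.size - 1) + v2 ** 2 / (g2.size - 1))

print(f"Student: t = {t_eq:.3f}, df = {g1.size + g2.size - 2}, p = {p_eq:.4f}")
print(f"Welch:   t = {t_uneq:.3f}, df = {df_welch:.3f}, p = {p_uneq:.4f}")
```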

10.6.3 Investigating the Link Between Measured Teacher Competence and Quality of Teaching

With the participation of more than 100 teachers in student tests (electronics technicians and automotive mechatronics technicians) within the framework of Chinese COMET projects (cf. Zhuang & Li, 2015; Zhao, Rauner, & Zhou, 2015; Zhao, 2015), a new quality in teacher training was achieved, according to the assessment of the test participants and their school principals. The measured competence development of the teachers, in the form of competence profiles, is of very high diagnostic significance with regard to the technical understanding underlying the teaching activity (Figs. 10.9 and 10.10). If teachers take part in student tests, in rater training and in the rating of student solutions, or if they use the COMET competence and measurement model as a didactic instrument for the design and evaluation of teaching, they acquire the concept of the complete (holistic) solution of professional tasks in a relatively short time. This has a formative effect on the design and organisation of vocational training processes. On the basis of the recognition that teachers and lecturers transfer their professional understanding and problem-solving patterns to their students, it is now possible to infer the professional competence and problem-solving patterns of teachers and lecturers in VET from the competence profiles of their students.
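In the simplest case, this correspondence can be quantified by correlating a teacher's competence profile with the mean profile of their class. A minimal sketch (the profile values and the use of eight COMET criteria are illustrative assumptions; the studies cited above rest on more elaborate analyses):

```python
import numpy as np

# Hypothetical normalised profiles over the COMET competence criteria
# (K1-K8) for one teacher and for the mean of their class.
teacher = np.array([0.8, 0.7, 0.4, 0.5, 0.6, 0.3, 0.4, 0.5])
class_mean = np.array([0.7, 0.6, 0.3, 0.5, 0.5, 0.3, 0.3, 0.4])

# Profile similarity as a Pearson correlation across the criteria.
r = np.corrcoef(teacher, class_mean)[0, 1]
print(f"profile correspondence r = {r:.2f}")  # ~0.96 for these toy profiles
```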

10.7 Evaluation of Demonstration Lessons in the Second Phase of Training Teachers with Professional Discipline (TPD): A Test Model

Demonstration lessons are an essential element in the second phase of teacher training and especially in the context of the second state examination. This exam component includes:

10.7.1 The Lesson Plan

The candidate (TPD-C) prepares a teaching draft for each demonstration lesson within the framework of their state examination, in the vocational specialisation as well as in a general subject or a further focal point within a vocational specialisation, and justifies the embedding of these lessons in the respective educational programme. In dual education programmes, the aspect of learning venue cooperation should be taken into account. The examination regulations limit the scope of these elaborations, since the aim is a realistic examination effort that can be extended to a justifiable extent under the framework conditions of an examination. The lesson plans should be sent to the members of the examination board for evaluation a few days before the demonstration lesson. According to the examination procedure outlined here, the examiners evaluate the lesson draft on the basis of evaluation sheet variant A (Appendix B). A double rating (two examiners) is useful here: the examiners create a group rating on the basis of their individual ratings. This rater practice contributes to ultimately reaching high to very high values of agreement (interrater reliability). The marked items on the rating scale are only used during class observation.

10.7.2 Class Observation

Two rating scales are used for the evaluation of teaching: evaluation sheet variant A, used for the evaluation of the teaching design, and evaluation sheet variant B, which consists of rating scale A and modified rating items. Furthermore, the impression of the observed lesson facilitates the correction of evaluations of the teaching project made on the basis of the lesson plan. The main focus of class observation lies on the evaluation of the social-communicative competence of the TPD-C. The reflections of the Kollegium des Studienseminars für das Lehramt an Berufskollegs [collegium of the study seminar for the teaching profession at vocational colleges] in Hagen (NRW) on "characteristics of good teaching" have been incorporated into the extension of the competence model by an essential component: the social-communicative competence of the TPD. The social-communicative competence of future teachers is a central aspect of their professional competence. In the context of the second state examination, this dimension of professional action competence is observed and evaluated within the framework of the demonstration lessons. The observed teaching behaviour allows a well-founded evaluation of social-communicative competence only if the examiners have the opportunity to question the TPD-C on their behaviour and actions in class. Therefore, an examination interview after the demonstration lesson is necessary (and usual), in which the examinee first has the opportunity to reflect on their didactic actions and then answers the examiners' questions, which are at their core: Why did you act like this in situation X and not differently? For the evaluation of the candidate's social-communicative competence, this interview is the prerequisite for a valid and reliable evaluation of the teaching project (evaluation form variant C, Appendix B). Social-communicative teacher competence comprises five criteria:

1. Building relationships conducive to learning
2. Designing communicative lessons
3. Moderating learning processes
4. Designing lessons to be reflective and giving feedback
5. Fulfilling role expectations and developing personality

Each of these five competence criteria is operationalised with five items.

10.7.3 The Interview (Following the Demonstration Lesson)

The demonstration lesson is followed by an interview regarding the demonstration lesson. It comprises:

• A self-assessment by the candidate, who explains how they succeeded in implementing the lesson plan, whether and how they deviated from the plan (if necessary), how they assess the learning climate/learning situation and whether they achieved the objectives they justified in the lesson plan.
• An examination interview, which mainly consists of questions from the examiners, based on evaluation forms variants B and C. The aim is to gain a deeper insight into the teacher's actions in order to correct the assessments of individual items on this basis, if necessary.

10.7.4 Final Evaluation of the Examination Performance

The examination board evaluates the examination performance on the basis of its ratings (evaluation forms variants B and C). The examiners communicate via a group rating. The "weight" with which the individual examination sections are included in an overall assessment of the second state examination is laid down in the examination regulations (Fig. 10.12).


Fig. 10.12 COMET rating for the assessment of the performance of teacher candidates (TPD-C) in the context of the second state examination

10.8 Development and Evaluation of the Model "Social-Communicative Competence of Teachers"

The heads of the study seminar for the teaching profession at vocational colleges in Hagen (NRW) have, in a vocational-pedagogical discussion process drawing on the relevant vocational-pedagogical research as well as their extensive teaching and training experience, named characteristics and items of successful teaching, from which a model for the sub-competence "social-communicative competence of teachers" was developed. The large number of heads of department involved and the scope of this discussion process form the basis for the only possible methodological validation of this model in the form of discursive validity (cf. Kelle, Kluge, & Prein, 1993, 49 ff.; Kleemann et al., 2009, 49). In a second step, the reliability of the rating scale and the competence criteria was examined on the basis of a two-stage rater training. The rating was based on two video recordings of demonstration lessons. After the individual rating of the first video recording, the rating groups agreed on a group rating. It was expected that the values of interrater reliability achieved in the rating of the second video recording would increase. The results of this rating procedure yielded high interrater reliability values for both groups. With this extension of the COMET model for recording teacher competence (TPD), a set of instruments is available both for competence diagnostics and for teacher training and further education, one with the potential to increase the quality of the didactic actions of these teachers.

10.9 Outlook

For a standardised competence survey of vocational school teachers, one difficulty lies in the fact that their training is organised very differently across the globe (Grollmann, 2005; Grollmann & Rauner, 2007). For a competence model for vocational school teachers and the design of the test items, this means that the justified fields of action (→ 10.2) must be taken as the basis, not a national curriculum or national standards for the training of vocational school teachers. The international connectivity of the COMET competence model for vocational school teachers was demonstrated by the project described above.

10.9.1 Psychometric Evaluation of the Competence Model

The psychometric evaluation of the competence model was a decisive step in the research process "Competence Diagnostics for Vocational School Teachers". The concept of open complex test tasks places high demands on the test methodology. The COMET project was able to show which psychometric evaluation methods are suitable for this research (Martens & Rost, 2009, 96 ff.; Erdwien & Martens, 2009, 62 ff.; Haasler & Erdwien, 2009, 142 ff.).

10.9.2 Investigating the Link Between Measured Teacher Competence and Quality of Teaching

In its introductory standards for teacher training, the KMK rightly emphasises: "The professional quality of teachers is determined by the quality of their teaching" (KMK, 2004b, 3). This fundamental insight requires an exploration of this connection. Measuring teacher competence with the methods of large-scale competence diagnostics, and thereby reducing the abilities of teachers to a domain-specific cognitive disposition (e.g. to lesson planning), only makes sense if the relationship between the measured teacher competence and the quality of teaching can be empirically proven. Only then is the measured level of competence an indicator of the vocational competence of teachers. An external criterion for checking the content validity of the test items is the competence development of the pupils. Once it has been empirically proven that the competence profiles of the teachers correspond to a recognisable degree with those of their pupils, and that the competence profiles of the pupils can therefore be traced back to the problem-solving patterns of their teachers, there is high plausibility for the thesis: "Good teachers train competent pupils" (Zhao, 2015, 443).

Chapter 11

The Didactic Quality of the Competence and Measurement Model

11.1 The Learning Field Concept Provides Vocational Education and Training with an Original, Educational-Theoretical Foundation

For decades, vocational education was torn between two basic guiding principles: science orientation (pure education) versus qualification to suit the needs of the labour market (utilitarianism). The central idea of pure education goes back to Wilhelm von Humboldt. Heinrich Heine sums it up particularly frankly and briefly: "Real education is not education for any purpose, but, like all striving for perfection, finds its meaning within itself". For the implementation of this guiding principle, the orientation of pure education towards the sciences, towards pure scientific expertise, appeared to be the adequate path to be pursued by all education. The German Education Council therefore elevated science orientation to a fundamental didactic principle of all education. For vocational education and training, this promised to cast off the stigma of utilitarianism, that is, of education aimed at usefulness. This, however, posed a new problem for vocational education and training. Attempts to derive vocational knowledge from (academic) scientific knowledge, to use it to develop systematically structured educational plans and thereby to establish vocational competence led to a dead end. The success story of the science system rests on the exponential multiplication of generalisable disciplinary knowledge, based on a system of scientific disciplines with a high division of labour. Scientific knowledge is regarded as pure, which suggests a natural relationship between genuine (pure) education and the pure knowledge produced by the sciences. However, this ignores the fundamental realisation that the historically grown world can only be understood as a process of objectifying purposes and the underlying interests and needs. The world in which we live and work therefore inevitably means interacting and dealing with values and responsibility.



In the 1980s, the insight described above was taken into account with the central idea of empowering those who are to be professionally educated to help shape the world of work in a socially and ecologically responsible manner. It is therefore not scientific abstract knowledge that forms the basis for the development of professional competence, but knowledge of the work process as the basis for competent and responsible professional action (Rauner, 1988, 32–51; Heidegger, Adolph, & Laske, 1997; KMK, 1991, 590–593, 1999). The world in which we live and work, in whose development we participate every day in all spheres of society, as consumers (through our purchasing decisions), as producers of utility values, as voters or members of social movements, constantly, consciously or subconsciously, on both a large and a small scale, is not a pure world. There are no cars, no buildings, no furnishings, no services that are pure or without a purpose. Howard Gardner formulated the pedagogical response to the ideology of pure education as follows: "I want my children to understand the world, but not simply because this world is fascinating, and the human mind is driven by curiosity. I want their insights to enable them to change the world to improve the lives of the people living in it" (Gardner, 2002). When the Subcommittee on Vocational Education and Training (UABBi) of the KMK (1996) established the learning fields as the basis for design-oriented vocational education and training, a fundamental change of perspective in the development of framework curricula, and bindingly established them in 1999, the change of pedagogical fashions in vocational education and training was interrupted. An original pedagogical central idea of vocational education and training was translated into a curriculum development programme. The UABBi had taken up the recommendation of the Enquête Commission of the German Bundestag "Future Education Policy—Education 2000" on vocational education and training and formulated as a new guiding principle the turn away from vocational education and training aimed at adaptation and towards vocational education and training aimed at co-shaping the world of work (and society). Since then, each framework curriculum has defined vocational education and training as "the empowerment to help shape the world of work in a socially and ecologically responsible manner" (KMK, 1999). It is no longer only a question of enabling prospective specialists to understand the (working) world, but also of equipping young people with the ability to participate in shaping the world, on a small and large scale. This change of perspective implemented by the KMK had far-reaching consequences for educational planning. The tradition of pure basic and specialist science-propaedeutic education lost its legitimacy. The curricula derived from scientific subjects, such as the fundamental educational concepts based on social science content in personal services or the concepts of basic industrial and technical education oriented towards the natural sciences, came into conflict with vocational education and training oriented towards learning fields. The learning field concept turns the historically grown (working) world, as an objectification of purposes and goals as well as the interests incorporated therein, that is, as a world with values, into the object of vocational education and training. It is a matter of understanding and exploiting the scope for creativity in order to help shape a world of work that is increasingly dependent on participation. The modernisation of a heating system under consideration of current or even anticipated environmental regulations, state-of-the-art heating technologies, affordable costs, the highest possible operating comfort as well as trouble-free operation requires shaping competence: the ability to solve professional tasks under consideration of all relevant criteria. Learning fields take up "significant professional action situations" as the basis for curriculum development (KMK, 1996). Professional work situations are "significant" when they challenge professional competence development. Herwig Blankertz (1983) and Andreas Gruschka (1985) implemented this theoretical concept of development, which goes back to Havighurst (1972), in the (NRW) college school project, above all in the educational programme for educators: professional development tasks as the basis of a curriculum structured according to developmental logic (Rauner, 1999). Patricia Benner's project from the nursing faculty of the University of California (Berkeley) is still regarded as groundbreaking in the international vocational-pedagogical discussion for training nurses in accordance with the novice-expert paradigm. She describes the "significant" work situations of nurses, which she and her team empirically identified as the basis for curriculum development, as "paradigmatic work situations" (Benner, 1997). Successfully coping with these work situations triggers competence development. Paradigmatic work situations have the quality of development tasks. Successfully "passing" a paradigmatic work situation, as well as reflecting on it, teaches trainees to see their working world from a broader perspective and to take a recognisable step in their competence development. This is the basis for the KMK's change of perspective from an objectivistic-systematic learning tradition to a subject-related structuring of vocational development and learning processes. The development of the ability to solve vocational tasks, more precisely to solve them completely, becomes the yardstick for the logical structuring of the contents of vocational training programmes.


The risk of failing with the ambitious goal of introducing the groundbreaking idea of vocational education and training structured according to learning fields has not been averted. Under the conditions of accelerated social change, the willingness and ability to try out new things is seen as an indicator of the innovative competence of individuals and institutions. Following the rather cumbersome process of introducing learning fields to structure educational plans and processes in vocational education and training, the orientation towards the COMET competence model should open up a new approach to the learning field concept. A challenging project, it also offers a competence model that is suitable for mediating between educational goal and learning task.

The learning field concept is characterised by a number of key terms which should be recalled in order to avoid conceptual misunderstandings in the further explanations (Fig. 11.1).

1. The learning field concept is based on the orientation of vocational training processes towards work situations whose potential for professional competence development is assessed as "significant" by experts in the respective profession.
2. In principle, competence-promoting work situations and tasks are the linchpin for the design and organisation of vocational learning, that is, for the imparting of vocational action and shaping competence. The KMK manual on the learning field concept therefore refers to them as "situations that are significant for the exercise of a profession".
3. The description of work and learning tasks as effective forms of in-company and school learning therefore requires both a description of the competence-promoting (significant) work situations and of the respective work assignments. It is only through this linkage that work and learning tasks challenge targeted vocational action and learning.
4. The distinction between action fields and learning fields points to the fundamental difference between working and learning, and to the fact that both, in vocational education and training, are constitutive for each other. The didactic reference points for the learning fields are the vocational action fields. At the same time, learning fields point, prospectively, beyond vocational practice. While the action fields are concerned with the professional execution of a company order, the learning fields are concerned exclusively with learning. Within the learning fields, it is therefore possible and, pursuant to the educational objective of co-designing in social and ecological responsibility, also necessary for the description of learning tasks to go beyond the limited operational framework of the work situation, in accordance with the formulated characteristics of a learning task (refer to p. 505). In nursing or commercial professions, case situations or case studies are often used, which are characterised by a stronger link between learning and action fields.

Fig. 11.1 Key terms of the learning field concept

The term "learning task" is not used in the learning field concept and therefore requires classification. The learning field concept has produced the blurred term "learning situations", which take up "professional tasks and courses of action" and "didactically and methodically prepare them for implementation in teaching" (KMK, 2011, 32). Learning situations, pursuant to the KMK manual, therefore address both situational (learning task) and process-related (teaching-learning processes) aspects of vocational action. The term "learning task" is intended to describe the situational aspect of learning situations, which goes beyond the limited operational framework of the work situation.

There is increasing evidence that vocational education and training is turning to competence-based educational standards. This provides a set of instruments with the advantage of being internationally established. The Anglo-Saxon tradition of competence-based vocational learning and the methods of developing modularised certification systems and assessment methods based thereon, such as the British system of National Vocational Qualifications (NVQ), promise stronger ground under the feet of those who are looking for tried and tested recipes. In contrast to the seemingly diffuse learning field concept, which after almost two decades since its introduction appears to some as a ruin of innovation, competence-based learning promises a handy formula which, it seems, is also in line with the EU projects of the European Qualifications Framework and the ESCO project (European Skills, Competences and Occupations). One problem with both EU initiatives is the programmatic formula that vocational education and training is defined as a process of acquiring qualifications "irrespective of place and time". In this context, vocational curricula and developed methods of vocational learning are regarded as input factors, and therefore as yesterday's methods. From this perspective, vocational training programmes appear as a considerable source of disruption that stands in the way of establishing a profitable and flexible service sector (in line with a relevant GATS recommendation) (Drexel, 2005). It looks as if the educational policy and planning reception of this qualification concept in Germany is meeting with considerable resistance and as if dual vocational training is being rediscovered internationally, above all as a means of combating youth unemployment. At their meeting at the end of September 2011 in Paris, the G-20 employment ministers emphasised the introduction of dual vocational training systems in their catalogue of recommendations for action to combat youth unemployment. Modern vocational training (Sennett, 1998) based on the concept of European core occupations (Rauner, 2005), vocational training structured according to learning fields and competence diagnostics based on vocational shaping competence (Rauner et al., 2011) are gaining in importance in this context, also internationally. There is, therefore, much to suggest that the learning field concept is still proving to be a highly innovative reform project for vocational education and training.

The working world for which vocational training prepares teaches us that a heating or lighting specialist, a specialist in the retail trade or a specialist in education is always faced with the challenge of balancing a variety of possible solutions and procedures when solving a professional task. The amount of time available, the variety of professionally possible solutions, their practical value and sustainability, their environmental and social compatibility and, not least, their economic feasibility are criteria that must be weighed against each other in every situation. A high level of professional competence is, therefore, characterised by the ability to make astute use of the scope for solutions or design given in each case. The guiding principle of shaping competence, which is anchored in every framework curriculum for vocational education and training with the introduction of the learning field concept, represents the reality of the working world. True education enables us to answer this question: Why are the realities of the working world (and society) like this and not like that? And: Is there another way? True education enables us to help shape the (working) world, which inevitably means facing up to the responsibility associated with it.

11.1.1 Professional Action Fields as a Reference Point for the Development of Learning Fields

Professional work tasks encompass a potential for the development of professional competence and identity if the (prospective) professionals learn to integrate their work tasks into the company's business processes. "Working and learning" are two sides of the same coin. Work tasks can therefore also be described as learning tasks. The work contents are then emphasised as a medium for acquiring professional competence. A form of representation for learning fields breaks down the characteristic professional work tasks and contexts according to the three aspects of work-oriented content of work and learning:

• Subject of (specialist) work
• Tools, methods and organisation of (specialist) work
• Requirements for (specialist) work and technology (listed in Fig. 11.2)

The transformation of a professional action field or a professional task into a learning field (or a learning task) consists in formulating the three content dimensions both from the perspective of the empirically identified professional tasks and from that of the educational objectives. A learning field therefore always includes work process knowledge (see p. 63 ff.). Since learning fields and the related learning tasks/learning situations always aim at the complete and reflected solution of professional tasks, their basic structure corresponds to project-based forms of learning.

Fig. 11.2 Identification and determination of training and teaching content in terms of vocational qualification requirements and educational objectives (Rauner, 2000)

11.2 Designing Vocational Education Processes in Vocational Schools

A widespread misunderstanding among vocational trainers is: "Theoretical knowledge of the subject is the prerequisite for professional action" (Fig. 11.3). Current teaching practice is therefore often oriented towards the following didactic model:

• Theory is first taught by theory teachers.
• In a second step, theory is then applied on the basis of exercises and experiments in the study lab.
• Finally, theoretical knowledge can also be applied in the work process.

What matters instead is the clarification of the training paradox: professional beginners become experts by doing what they want to learn. Learning is one of the basic skills that people possess from birth. We see this most clearly in young children as they develop quickly, learn to speak, learn to move, experience themselves and become increasingly skilful and secure with the objects around them, such as using a spoon properly when eating or learning to ride a bicycle as they grow up. They casually learn what to bear in mind when playing with other children. Most of what a growing child learns, it does not learn according to the scheme: theory and teaching first, then applying what has been learnt. The theory of cycling does not enable a child to cycle. What applies to learning how to swim, to cycle and to manually rework mould sealing surfaces (for a tool mechanic) is also generally significant for professional learning.

Fig. 11.3 From learning to working: a widespread misunderstanding

11.2.1 Professional Knowledge

The acquisition of theoretical knowledge is not a prerequisite for professional action; instead, vocational knowledge arises from professional action processes. Each new work experience is evaluated in the light of previous work experience, and the result of this evaluation is added to the previous experience. If the difference between previous and new work experience is too great, then subjectively no bridge can be built to the new experience, and nothing is learnt (in terms of expanding the fields of meaning of action-relevant concepts). New knowledge only emerges when, on the one hand, the new work experience matches existing connotations and makes them vibrate and, on the other hand, deviates from the existing knowledge to such an extent that the new experience contributes to an extension and deepening of previous connotations and valuations. Work experiences are always made when existing ideas, connotations and expectations have to be questioned, modified and differentiated by the new reality.

11.2.2 The Training Paradox

The concepts of active learning pose a mystery: How do beginners become experts without first acquiring the corresponding knowledge? This may sound paradoxical, but it corresponds exactly to what vocational pedagogy understands by active learning. Beginners in a profession become experts by doing what they want to learn. Trainers support them by confronting learners with work situations that are challenging to master. At the same time, it is also true that professional skills are based on professional knowledge. With the introduction of the learning field concept, the formula "Professional action requires prior theoretical knowledge" is a thing of the past. Gottfried Adolph (1984, 40 f.) reported an informative example from his teaching practice:

"Electronics technicians at the beginning of their second year of training are asked to measure current and voltage in the laboratory using a series connection of two lights. The laws of series connection were previously 'thoroughly tested' in theoretical lessons. The performance test showed the teacher that all students now have the theoretical knowledge that when resistors are connected in series, the voltage drops are proportional to the resistances and the current intensity is the same in all resistors. The students can make the appropriate calculations. The lights that each pupil has connected in series have different wattages at the same rated voltage. Their resistances are, therefore, not the same. They are chosen so that when the circuit is closed, the light with the high rated power does not light up, while the weaker one burns at almost full power."

The subsequent behaviour of the pupils follows an almost law-like pattern: hesitation, surprise, uncertainty about the fact that only one light is "on". A frequent call to the teacher: one of the lights is broken! This is automatically followed by taking the "broken" light and unscrewing it from its socket. The fact that now the other light also goes out leads to renewed, even stronger uncertainty. By inserting and removing the light, the phenomenon is repeated over and over again, as if one needed repetitive [...] confirmation of what is intrinsically "impossible" (exclamation of a student: "This is impossible!").

Gottfried Adolph comments on this typical event: "... Everything that happened was not expected by the students, who expected that a 'correctly' connected light would also light up. If it does not, then it is 'broken'. It is expected that twisting a light in and out of its socket will influence that light and not the other." He therefore concludes: "The preceding theoretical teaching on the series connection of resistors has not changed the expectations effective in practice—the school theory has not reached the personal, secret theory [...]. It turns out that the still widely used organisational model (first so-called 'theory teaching' ... followed by 'practising application' ...) is wrong in its approach" (ibid., 41).

If the teacher had instead asked the pupils to experiment with the series connection of lights of different wattages, then the pupils, possibly supported by the teacher, would in a process of testing and experimenting (in line with experimental cognitive activity) finally have understood not only the laws according to which a series connection works, but also the important aspects of connecting lights in series. The decisive point for this form of acquisition of professional knowledge, however, is that the pupils would not merely be taught formulae for calculating the series connection of ohmic resistors, but would be challenged to experiment and acquire these findings themselves. If these technical findings are simply taught by the teacher, then not only is their value for practical tasks limited, but the teacher has also missed an important learning opportunity, namely, the acquisition of the ability to gain knowledge by experimenting.
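The pupils' surprise follows directly from Ohm's law and the voltage-divider rule. A short worked example with illustrative values (the 230 V mains and the 100 W/25 W lamp pair are assumptions; the hot-resistance approximation R = U²/P actually understates the real effect, since the cold filament of the dark lamp has an even lower resistance):

```python
U_SUPPLY = 230.0  # mains voltage in volts

def lamp_resistance(rated_power: float, rated_voltage: float = 230.0) -> float:
    """Approximate hot resistance from the rating plate: R = U**2 / P."""
    return rated_voltage ** 2 / rated_power

r_100w = lamp_resistance(100.0)  # 529 ohms
r_25w = lamp_resistance(25.0)    # 2116 ohms

# In series, the same current flows through both lamps ...
current = U_SUPPLY / (r_100w + r_25w)  # ~0.087 A

# ... and the voltage divides in proportion to the resistances:
# the high-resistance 25 W lamp takes most of the voltage.
for label, r in (("100 W lamp", r_100w), (" 25 W lamp", r_25w)):
    print(f"{label}: {current * r:5.1f} V, {current ** 2 * r:4.1f} W dissipated")
# -> 100 W lamp:  46.0 V,  4.0 W  (stays dark)
# ->  25 W lamp: 184.0 V, 16.0 W  (burns at roughly two thirds of rated power)
```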

11.2.3 Designing Learning Tasks

The linchpin of professional learning is competence-promoting company work situations and tasks which, in the form of learning tasks, represent the starting point for vocational training processes.

What Distinguishes Learning Tasks from Work Tasks

Only work tasks that encompass a potential for the learner's competence development have the quality of "development tasks" and can be transferred into learning tasks. Learning tasks can be completed in a few hours if, as with open test tasks, they are restricted to conceptual planning. This distinguishes learning tasks from projects. Projects always have two results:


A "Product"

For example, a class of electronics technicians specialising in energy and building technology develops modern lighting for the school canteen, or prepares an excursion to a regional wind farm and evaluates this excursion.

A Learning Outcome

The learning outcome is the main concern of a project within training and must, therefore, not be lost sight of. It is important to exchange ideas at the beginning of a project and to ascertain what can be learnt in the planning and implementation of the project. Learning situations aim at professional competence development. They belong to the project-based forms of learning, as the underlying learning tasks are realistic and complex. They are, therefore, also based on the concept of a complete task solution. If learning tasks are also solved practically, then it is useful to speak of "work and learning tasks". Learning situations offer a practical advantage: project-based learning is maintained, especially if the didactic concept of the complete task solution is observed, while the organisational and temporal framework conditions for the implementation of learning situations remain uncomplicated. This also means that learning tasks can be worked on and justified in varying depth and breadth. The following explanations serve as orientation for the step-by-step design of learning tasks on the basis of work situations/tasks (Fig. 11.4).

Fig. 11.4 Steps to design learning tasks

Step 1: Identifying Competence-Promoting Work Situations/Tasks

The first point of reference for identifying competence-promoting (significant) work situations/tasks is the apprenticeship profiles. In addition to the professional qualification profiles (professional skills), they describe the professional action fields (Table 11.1). Possible sources for selecting "significant work situations" include:

1. The job portal
2. Company exploration (Fig. 11.5)

Table 11.1 Example of professional action/activity fields

Shipping clerks: import-export assignments; procurement; marketing/proposal creation; forwarding and logistics business processes/controlling
Automotive mechatronics technicians: service/maintenance; repairs; conversion and retrofitting; diagnostics
Nursing professions: nursing as part of the nursing process; implementation of initiated measures; training, instruction and consultation of patients and relatives; administrative tasks/management

The trainees of a vocational school class usually complete their practical training in the companies of the region. The spectrum of the business fields of the local training companies represents the vocational fields of action of their training professions. The concrete vocational work processes and situations in the companies fit into the context-free descriptions of the job profiles and training regulations. The real work situations/tasks, on the other hand, are expressions of company-specific production and service processes and of the contents and forms of the professional work processes determined by them. Knowing these is very important for teachers, as their trainees gain their work experience and are trained practically in these company-specific contexts.

In many cases, it will make sense to conduct a more in-depth investigation of the work situation or task together with the trainees in order to explore the company or the company's expertise. For this purpose, a detailed questionnaire or an exploration grid with the most important aspects to be considered should be developed. Only such an instrument turns an unsystematic inspection into a targeted exploration. Despite thorough preparation, however, the operational events can never be fully recorded. In addition to the consideration of overall company contexts, it is therefore particularly important to track down details at the specific workplace and to obtain additional information from the employees on site. This requires a preselection based on the following questions, particularly in the case of larger companies:

• Which (typical or unusual) work situations/tasks of professionals are of interest to us?
• Which jobs of professionals should we examine in more detail?

Fig. 11.5 Grid for exploring company work situations/tasks:
• Types of services (repair, maintenance, installation, assembly, consultancy, documentation, presentation, etc.)
• Used technology, systems, machines, tools, auxiliaries
• Work organisation, working methods, work processes (assembly, installation, set-up, equipping, etc.)
• Manufactured products/sub-products, their use and field of application
• Introduction of new technologies, products, forms of work
• Requirements for the products and sub-products/services, the manufacturing process, the organisation of the work (processes), the employees/professionals
• Alternatives to the technology used and to the organisation of work (processes)
• Problems/"hot spots" in production/service and the organisation of work

In smaller companies, the boss can usually spontaneously name tasks that are a little out of the ordinary, demanding and suitable for training purposes. Suggestions from part-time trainers who are integrated into the company's work and business processes and therefore have a great deal of background knowledge can also be valuable. As a rule, they can provide information about current innovations, but also about company focal points and problems. Of course, the activities, work situations/tasks and difficulties of the employees with which the trainees will also be confronted in their future profession are of particular interest. The objectives of the exploration are, of course, discussed with the company. It should become clear that it is a matter of becoming acquainted with professional work. In this case, experience has shown that corporate managers are happy to support such measures.

Experience from Training Practice
Experience has shown that training companies, even within a learning venue network, have a very positive and open attitude towards the exploration process. It obviously contributes to the qualitative improvement of their training, leads to a stronger practical orientation and can help to smooth the transition of young professionals to their role and function as skilled professionals.
The management should be sufficiently informed about the training activities (of the network) by the part-time and full-time trainers. This means that the objectives of an exploration have been agreed in advance with the respective company. Any reservations about this measure can, therefore, be avoided from the outset.

The following example of a checklist for the selection of suitable work tasks contains selection criteria that can be checked by means of partial questions answered with "Yes" or "No" (Table 11.2):

Table 11.2 Checklist for verifying the suitability of in-company work tasks for training purposes

Trainees
• Do the trainees have sufficient previous knowledge and practical experience to cope with the task?
• Can the trainees learn anything while working on the tasks in line with their training?
• Is the time and organisational effort required to complete the work task clear and manageable for the trainees?
Trainers and teachers
• Do the trainers and teachers possess the necessary technical, social and methodological-didactic competences, or can they acquire missing competences?
Companies
• Is it possible to reconcile work-task processing by trainees with the interests of the training companies?
• Is there any benefit for the training companies or for the learning venue network?
• Are the burdens for the training companies distributed fairly?
• Can the production or the service be taken out of the company's time-critical process for training purposes?
• Do those responsible in the company agree?
• Is there enough time available for the part-time trainers?
Vocational school
• Do those responsible at the school agree?
• Are the colleagues whose lessons may also be affected informed, and do they agree?
Resources
• Are the necessary resources available or can they be procured?
• Are there suitable learning and work locations available for processing the work task?
Framework curricula
• Can a reference be made to the framework curricula?
• Is the work task relevant for examinations?
Possibilities for design and potential
• Does processing the task allow alternative approaches and solutions?
Skilled work/craftsmanship
• Does the processing of the work task place exemplary demands on the trainees in terms of skilled or craft work?
Financing
• Can any necessary funding be raised?

Basically, work processes are always learning processes.
Trainees, but also all professionals, gain experience and confidence in handling specific professional tasks (exercise effect), learn how to deal with mistakes and solve unforeseeable problems, and work together with colleagues, superiors and other trainees. They also usually become aware of the consequences of their own actions regarding:

• The superordinate work result
• The clients
• The team

In this respect, work tasks are always associated with work experience. It depends on the in-company training (the trainers and the company practice group) whether and to what extent the work experience is reflected upon.

Questions to Reflect on the Work Experience
• What was new and what was already routine?
• What was particularly important to meet the quality requirements?
• Did I have to take into account new rules and new knowledge?
• Did I understand everything?
• What room for manoeuvre was given and how was it used?

These questions all remain within the context of the operational circumstances and the given scope for design. Nevertheless, the reflection on the operational work and the exchange of ideas with the operational actors are a first step towards transforming a work task into a process of generalising the situational work experience. The result is knowledge that detaches itself from the work process and opens up the possibility of dealing prospectively with the specific work processes in technical discussions with colleagues, trainers and teachers: What could be improved in the implementation of the work processes?

At school, the relationship between work and learning, between the work task and the learning task, changes fundamentally. It is no longer a matter of professionally carrying out a work task embedded in a company work process. It is exclusively about learning. In this respect, it is consistent to speak here of learning tasks and learning situations. The term "work and learning tasks", which is occasionally used, is intended to remind us that learning tasks are directly related to concrete work tasks. That would speak for this designation. It should, however, be reserved for projects which are carried out in cooperation between schools and companies and which are embedded in real work processes. School-based learning tasks, on the other hand, have as their reference point "significant work situations" or work tasks and processes which teachers consider characteristic of the profession and adequate for the respective stage of the learner's competence development. For learning tasks, it is therefore not important that they are based on the subjective experience of the trainees, but that the trainees are able to build on their own work experience when working on a learning task in the process of school learning.


Step 2: Developing and Describing Learning Tasks from Work Situations/Tasks

The following six design features for learning tasks can be derived from the COMET competence model and the theoretical foundation of the learning field concept.

Prospectivity: Transcending Professional Reality

Trainees from different companies have similar or different experiences in the same professional action field. Taken together, these experiences point beyond the problem-solving horizon of individual companies. The school, therefore, has the potential to think and experiment prospectively, beyond the current company situation. When designing the learning situations, it is very important to make full use of the experimental possibilities for a prospective interpretation of the learning tasks. To this end, study labs must be equipped accordingly. Unfortunately, they rarely have this quality: they are usually intended for experimentally comprehending and applying theoretically acquired knowledge.

Example: An alternative TÜV [the German technical inspection association's roadworthiness test]
In the model experiment of action-oriented technical instruction in motor vehicle mechatronics classes in Melsungen (at the beginning of the 1980s), one learning task was to develop an alternative TÜV and to test it practically and experimentally in an integrated study lab. The results of the project were presented to the local TÜV officers and discussed with them. One comment: "If I had the opportunity to do so, I would transfer much of what I have seen here today to our TÜV".

The Concept of Holistic Task Solution

The central idea of a design-oriented vocational education and training system suggests placing learning tasks at the level of work process knowledge (refer to p. 58 ff.) in order to avoid slipping into a subject-systematic structuring of "specialist tasks". The ability to understand vocational tasks in their complexity and to solve them completely presupposes that the working contexts are not subdivided into a structure of context-free subtasks. Learning tasks always refer to a professional/company reality that is always socially constituted. The solution of these tasks, therefore, also involves weighing up values and interests.


Example: Control system for a hotel lift
When designing a lift control system for a hotel, the description of the situation also referred to the rooms distributed over the eight floors (fitness centre, luxury apartments, conference rooms, hotel management offices, etc.). The task was not only to design a functioning control system, but one that was also adequate for the hotel's situation.

Action Consolidation and Accentuation

Learning tasks allow and suggest highlighting certain aspects of work situations and neglecting other, less learning-relevant, aspects, as long as the authenticity and objectivity of the work situation are not affected. This achieves a certain dramatisation of the work situation/task, which strengthens the motivation of the learners to deal with the given task with commitment.

Solution and Design Scopes

Learning tasks are formulated with reference to realistic work situations from the perspective of "customers". The learners are thus challenged to conduct a problem analysis based on the customer's description and ultimately to develop a professional procedure and solution for the task. This concept of open tasks requires a more or less wide scope for design, created through the form of the situation description and taking into account the criteria of the concept of the complete task solution.

Representativity

The learning task represents work situations that are typical for the profession and contain problems with adequate learning and development potential. They have the quality of development tasks. Focal points of operational organisational development, for which there are no fixed solutions, are also suitable.

Competence Development

"Growing into" a profession follows the novice-expert paradigm. Vocational training has the function of supporting this development from professional beginner to expert, with the logical developmental structuring of learning as the consequence. The respective educational plans should be interpreted in line with the novice-expert paradigm when selecting and formulating the learning tasks.


The following structure has proved useful for the description of learning tasks that are intended to challenge the complete solution of tasks (Lehberger, 2015, 67); a data-model sketch of this structure follows the list:

• A specification of the learning task, which shows the reference to the action
• A description of the situation, which relates to a typical and problematic professional work situation (if necessary, with illustrations), is formulated from a customer perspective and is open to alternative solutions, in line with professional practice
• A task that clarifies the perspective from which the situation is to be viewed and from which the objective of the action is to be derived
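For teams that publish learning tasks on shared platforms (see the following subsection), this three-part structure, together with an extensible solution space, can be captured in a small data model. A hedged sketch; all field names are illustrative, and the example draws on the hotel lift task quoted above:

```python
from dataclasses import dataclass, field

@dataclass
class LearningTask:
    """Three-part description of a learning task plus its solution space."""
    specification: str   # reference to the professional action
    situation: str       # customer-perspective description, open to alternatives
    task: str            # perspective and objective of the action
    solution_space: list[str] = field(default_factory=list)  # expanded by new users

lift_control = LearningTask(
    specification="Design a lift control system (electronics technicians)",
    situation=("A hotel with eight floors (fitness centre, luxury apartments, "
               "conference rooms, hotel management offices) needs a new lift "
               "control system."),
    task="Develop a control concept adequate for the hotel's situation.",
    solution_space=["priority call logic for conference floors",
                    "energy-saving standby operation"],
)
```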

Publication of Learning Tasks

Experience from the COMET projects shows that active use is made of the option of publishing tried and tested learning situations/tasks via the Internet, for example, using net-based groupware. As these are open learning situations/learning tasks and not conventional teaching designs, no standardisation of the teaching-learning processes takes place, and the situation-specific peculiarities remain unaltered. In this respect, there is every reason to establish such platforms. However, one condition should be fulfilled before learning tasks are "published": each learning task should include a solution space, so that teachers can recognise the learning potential in a learning task from the perspective of the developers. In principle, solution spaces cannot be complete. However, they outline the possible solutions, related to all aspects of the solution. In the course of time, the solution spaces are therefore expanded by new users. With some practice, experienced teachers are able to develop learning tasks virtually "on a continuous basis". Practice shows that whenever learning tasks have not been tested and are tasks that the authors have only somehow thought up, the quality suffers considerably (cf. Lehberger, 2015, 213 f.). Learning tasks that are placed on the Internet and published should, therefore, always have been tested in class and include a description of the solution space. It is recommended to develop learning tasks in a team; according to all experience, this increases the quality of the tasks. It is, of course, particularly interesting if the results of the self-evaluation are documented in addition to the learning tasks.


Step 3: Identifying Learning Opportunities and Diagnosing Competence Development

Learning Tasks: With Solutions Possible at Different Levels

Vocational education and training is particularly challenging because of the very heterogeneous prerequisites of trainees (previous education, interests, talents, etc.). Teachers are, therefore, faced with the task of dealing professionally with these different prerequisites. However, the content structure of vocational education and training is helpful to teachers here. While a large number of subjects in general schools are concerned with finding the one right solution, for example, to a mathematical problem, via a predetermined solution path, the solution of professional work tasks depends on making full use of the respective solution spaces. Learning tasks have a solution space, as the working world is always about the search for good and situation-adequate solutions. For example, when designing office lighting, two trainees can present solutions that are of equal quality. However, the competence levels of the two solutions can differ considerably if one solution is explained at the level of action-guiding knowledge and the other at the level of action-reflecting knowledge.

Possible Differentiation in the Evaluation of Task Solutions
Based on the standardised COMET test procedure, it is possible to record the distribution of the test subjects (e.g. of a location) across the competence levels (Fig. 11.6). The distribution of the test persons across the competence levels shows that each competence level can be reached at a low, medium and higher level.

Fig. 11.6 Distribution of competence levels across two locations (shipping clerks)
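The following minimal sketch shows how such a distribution across competence levels could be tabulated from criterion ratings. It is not the official COMET scoring algorithm: the grouping of the criteria K1–K8 into the three requirement levels, the 0–3 rating scale and the threshold value are illustrative assumptions only.

from statistics import mean

# Illustrative assumptions, not the official COMET scoring rules:
# each solution is rated 0-3 on the criteria K1-K8, and the criteria
# group into the three requirement levels of the competence model.
GROUPS = {
    "functional": ["K1", "K2"],              # clarity, functionality
    "processual": ["K3", "K4", "K5"],        # sustainability, efficiency, work process
    "holistic shaping": ["K6", "K7", "K8"],  # social/environmental compatibility, creativity
}
LEVEL_ORDER = ["functional", "processual", "holistic shaping"]
THRESHOLD = 1.5  # assumed cut-off per criterion group

def competence_level(ratings):
    """Assign the highest requirement level whose criterion group reaches
    the assumed threshold; if none does, the result is 'nominal competence'."""
    level = "nominal competence"
    for name in LEVEL_ORDER:
        if mean(ratings[k] for k in GROUPS[name]) >= THRESHOLD:
            level = name + " competence"
        else:
            break  # higher levels presuppose the lower ones
    return level

def level_distribution(test_group):
    """Count how many test persons of a group reach each competence level."""
    counts = {}
    for ratings in test_group:
        level = competence_level(ratings)
        counts[level] = counts.get(level, 0) + 1
    return counts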


The Teaching Objective: Dealing with Heterogeneity Aims at the Individual Promotion of Professional Competence
Formulating and justifying educational goals without programming the learning process, but rather exploiting the potential of a subject-related learning task, is what distinguishes the pedagogical skill of the teaching staff. In one form or another, every educational plan encompasses not only the content to be conveyed but also the educational objectives. In modern terms, these are described in the form of educational standards and competences that are expected after completing defined sections of an educational programme. Structured VET is not possible without clearly identified educational objectives, irrespective of how they are described. However, the idea that only educational outcomes that can be checked or even measured embody didactic value is misleading. Promoting professional self-awareness, creativity and democratic behaviour are among the many important educational objectives that cannot be squeezed into the templates of learning-outcome taxonomies or educational standards and that largely elude the established forms of measuring professional competences. It is precisely for this reason that they must be kept in mind when designing and organising vocational training processes. A widespread practice of designing educational processes is based on the idea of teaching oriented towards learning outcomes: the didactic actions of the teacher are to be structured on the basis of the educational goals. In other words, according to the central idea of learning-outcome orientation, there is a deductive interrelation between the educational objectives and the didactic structure. However, this idea of planning, designing and evaluating teaching is problematic because it ignores the reality of vocational education and training (Fig. 11.7).

Fig. 11.7 Developing competences, achieving learning objectives. The figure contrasts two paradigms of tuition:

Developing competences (competence-oriented tuition): Trainees grow into a profession by learning to solve increasingly complex professional tasks completely and responsibly; professional skill and the understanding of, and responsibility for, what one does form an indissoluble connection. The potential of the learning task to trigger competence development is, therefore, particularly important: professional competence arises from reflected work experience. The degree to which prospective professionals (trainees/students) are able to exploit the scope for solutions or design given by vocational tasks and to justify their solutions is the indicator for the development of professional competence. Teachers act as “development supporters” and “learning consultants” (schematic representation of work-process knowledge).

Achieving learning objectives (goal-oriented tuition): Teachers define the learning objectives for their lessons: the target learning behaviour of the pupils (lesson planning). They organise learning by the optimal arrangement of learning steps, attempting to achieve the target learning behaviour of the pupils (lesson organisation); the teacher then checks whether pupil S has become pupil S′ as a result of learning (learning control). The didactic action of the teacher is based on the type of purposeful rational action and a corresponding didactics, as expressed, for example, in the concept of programmed learning. Teachers act as a “teaching system” (schematic representation of learning objective-oriented teaching according to Möller 1969, p. 20).

Developing Competences
The COMET competence model offers a solution that is oriented towards the guiding principle of imparting professional competence by working on and solving work tasks that have the quality of development tasks. The overarching educational objective, the ability to completely solve work tasks, cannot be called into question, because incompletely solved work tasks entail risks to a greater or lesser extent. Empirical competence research shows that the great heterogeneity within and between the test groups (e.g. classes) persists even if the teacher succeeds in raising the competence level of their class (Fig. 11.8). If one depicts the professional competence (development) of trainees or technical college students in the form of competence profiles (Fig. 11.9), then learners and teachers can answer important questions such as:
• Has the trainee/student completely solved the work/learning task?
• If not, which aspects of the solution were not or insufficiently considered?
• Is the level of competence similar in all learners?

• If this is the case, then the teacher is challenged to promote the sub-competences developed, for example, by means of corresponding learning tasks.
• On what level of knowledge were the task solutions based?
The competence profiles of the trainees/students are a good basis for the design of differentiated teaching (individual support). This form of diagnostics (evaluation) also shows the level of knowledge at which trainees/students can solve work tasks/learning tasks: at the level of action-leading, action-explaining or action-reflecting knowledge of the work process.


Fig. 11.8 Percentile bands for professional competence across test groups at class level for apprentices (results 2009)

The strengths and weaknesses of task solutions or project results can be discussed, so that every learner/student can see how their solution or the project result of their working group should be classified. The standards are always the same:
• Has it been possible to solve the task completely in line with the situation description?
• Were the solution aspects to be considered weighed against each other in relation to the situation?
• How differentiated was the justification of the task solution with regard to the result and the procedure?

Conclusion: Competence (Development) Diagnostics Instead of Controlling Learning Objectives
Competence diagnostics measures how the professional competence of trainees/students develops qualitatively and quantitatively and how the ability of learners can be promoted so that they can solve vocational tasks completely, “with social and ecological responsibility”. Learning objective-oriented tests, by contrast, serve to check whether the “learning objectives” defined by the teacher have been achieved, measured in the form of scores or marks. What is reviewed is the conveying of teaching material and skills; whether and how these contribute to professional competence development remains out of sight.


Fig. 11.9 (a) Average competence profile of a test group of vocational school students (type “VET”), n = 27 and (b) differentiation of the competence profiles according to the total score (TS) and the coefficient of variation: (a) E-B, Class No 7, n = 26; (b) E-B, Class No 5, n = 18; (c) E-B, Class No 24, n = 19; (d) E-B, Class No 23, n = 17 (results 2009).
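The two summary statistics named in Fig. 11.9 can be made concrete with a small sketch. It assumes that a competence profile is the vector of criterion scores K1–K8 of a test person or test group; reading the total score (TS) as the sum of these scores is an assumption for illustration, while the coefficient of variation (standard deviation divided by mean) expresses how homogeneous, i.e. how balanced, a profile is.

from statistics import mean, pstdev

def total_score(profile):
    # TS taken here as the sum of the eight criterion scores (assumption)
    return sum(profile)

def coefficient_of_variation(profile):
    # CV = standard deviation / mean; a small CV indicates a balanced profile
    m = mean(profile)
    return pstdev(profile) / m if m else float("inf")

# hypothetical profile (eight criterion scores of one test group)
profile = [2.1, 1.8, 1.2, 1.5, 0.9, 1.1, 0.8, 1.0]
print(round(total_score(profile), 1))
print(round(coefficient_of_variation(profile), 2))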


The learning field concept places professional competence development at the centre of subject-oriented VET didactics.

An Informative Incident from a Pilot Project for the Introduction of a Teaching Practice Oriented Towards the Guiding Principle of Shaping Competence
A group of pupils (second-year electronics technicians specialising in energy and building technology) decided on a project: electrical building protection for a residential building. The description of the situation was based on relatively unspecific requirements of the house owner, so that there was a great deal of scope for the design of the project. The solution of the project was presented and discussed by the group of students at a pilot event. The workshop participants (teachers, members of the education administration, training experts from chambers, vocational training researchers) were extremely impressed by both the project outcome and the presentation. The solution and its professional justification were convincing. The trainees were able to answer technical questions about possible alternatives with confidence. But the answer to the question of how the pupils assessed their project in the context of their vocational training came as a big surprise. The students agreed: “The project was great fun! We occasionally carried on after hours, but learning was a little neglected.”

It is obvious that this evaluation of the project caused some amazement and head-shaking among the workshop participants. There is a simple reason for the differing assessment of what was learnt during this project by the group of pupils on the one hand and the vocational training experts on the other: the learning concept of the pupils is shaped by their learning experiences in general school. They apparently did not yet have a professional learning concept. The acquisition of a professional learning concept by this group of pupils was also hampered by the fact that the first half of their in-company training was characterised by course-based learning in an apprenticeship workshop. This group of pupils experienced “learning” as the acquisition of teaching material and skills.

Such experiences show that in vocational education and training it is important from the outset to reflect with the trainees on the results of learning or, more precisely, of professional competence development. The question of the learning opportunities a learning task/learning situation contains already arises at the planning stage. Then, during the evaluation of the learning task/learning situation, the next question can be answered: What did we learn while working on the learning task?


11.2.4 Designing Teaching-Learning Processes
The planning and preparation of project-based forms of learning are confronted with a dilemma. Detailed planning largely determines the goals, contents and course of didactic action. However, lesson planning is only good if it opens up leeway for the learners/students. A first hint on how to deal with this dilemma is provided by the training paradox already considered: professional beginners become experts by doing what they want or should learn. The teacher is, therefore, not allowed to spoon-feed the students what they are supposed to learn. This is where the new role of teachers comes into play, namely, opening up design and decision-making spaces for trainees/students. The saying “from knowledge mediator to learning process facilitator” first becomes concrete here. The traditional role of the teacher/trainer as a knowledgeable person who passes on his or her specialist knowledge to the trainees, using more or less good teaching methods, is a thing of the past. Gottfried Adolph made the pointed statement: “Teaching is destructive” (Fig. 11.10).

Fig. 11.10 The structure of the working and learning process

The following, therefore, deals with the design and organisation of teaching-learning processes which enable the individual learner/student to deal with learning tasks that have a suitable learning potential. In order for them to learn something, it is particularly important that they can contribute their work experience and determine where they need to learn something in order to be able to work on the tasks they have derived from the learning tasks. The individual engagement with a learning task includes cooperation with other trainees/students if the work or learning process requires it.

Step 1: Selection of a Customer Order with “Suitable” Learning Potential and Formulation of a Learning Task
The selection of suitable customer orders and their formulation as learning tasks play a central role in the design and organisation of vocational training processes. Two sources for the selection of customer orders have already been described in the form of the works survey and the job portal. Based on the selected assignments, the teacher can create learning tasks with corresponding situation descriptions. If there is a job portal, then the learners/students can of course also choose learning tasks that correspond to their level of development and that hold the right potential for something novel. The question from the learner’s point of view is: Where do I see the challenges that the task holds in store for me? The learner will only find an answer to the question of what they can actually learn by dealing with the new situation once the task has been solved. At this point, the teacher has a special responsibility to ensure that the learners are able to learn something while completing the learning task.


In order to fulfil this responsibility, the teacher must clarify very precisely what experiences the (individual) learners have already had and what knowledge they have acquired. Only then can the challenge be described through whose accomplishment they can gain new work experience. In current teaching practice, it can be observed time and again that learners are usually able to master learning tasks through the use of their existing work and learning experience. Very often nothing new is learnt! The selection of a suitable learning task is less critical than the heterogeneity between learners or between different learning groups might suggest, as this form of learning leaves open the depth and breadth to which individual learners or learning groups work on the task. There is, therefore, not only a task-specific learning potential or a learning potential related to a competence level that can be summarised in “learning objectives”. Just as in sports, an improvement in the long jump from, for example, 4.20 m to 4.40 m may be a great personal success for some, while the 5.20 m mark is not a success for others if they have already jumped 5.40 m. Learning tasks, with their possible solutions, leave open the level at which they can be solved. They hold an individual development potential for the trainees/students.

What teachers should bear in mind when taking this first step:
• The learning task must be selected so that it has an appropriate learning potential for the learning group and all trainees/students on their way to achieving employability (also refer to the job profiles and vocational curricula).
• The learning task should be a challenge for both the weak and the strong learners and offer correspondingly challenging solutions.
• The learning task must be described from the customer’s perspective (refer to p. 541 and p. 549).

In the case of extensive tasks, the question arises as to a division of labour in groups. This form of learning organisation is very demanding because the coordination of learning based on a division of labour involves cooperation between groups, and all participants should benefit from the learning processes and outcomes:
• The combination of sub-solutions and new knowledge must be carefully planned.
• How should the groups inform each other about what they have learnt (refer to p. 508)?


Step 2: Analysing and Functionally Specifying the Customer’s Situation Description
In this step, it is particularly important that the teacher succeeds in getting the trainees/students to adopt the respective learning task as their own. For this purpose, they first clarify the customer’s (externally or internally) formulated requirements and wishes on the basis of the situation description. This analysis gives the trainee/student an initial orientation regarding what the result of processing the learning task should be from a technical perspective and what needs to be done (tasks) in order to achieve appropriate solutions (technical specification). At this point, it is also a matter of identifying possible solutions and deciding which approaches “remain in play” for the time being, that is, which will be pursued further. All learning tasks are described from the customer’s perspective. The task of the learners—as prospective professionals—is then to:
• Check the customers’ requirements for their feasibility
• Check whether individual requirements contradict each other and how these contradictions can be resolved by weighing all requirements against each other
• Check whether the customer has overlooked requirements that are technically feasible, for example

The most important thing then is to incrementally translate the customer’s wishes into a specification. It remains to be seen whether the specification formulated in a first step will prove feasible and whether the further steps of the task solution will result in new insights and “better” solution options. It is likely that an initially formulated specification will only take its final form when the procedure and its justification are documented. If a learning task is given by the teacher in the form of a specification, then the trainee/student becomes the mere executor of the detailed solution, as the following example shows:

Manufacture Two Grippers (Material: 1.2842) from 20 × 15 Flat Steel According to the Drawing

How teachers and trainers can hinder the process of competence development:
• When they formulate situation descriptions that do not reflect the customer’s requirements and wishes
• If they give the trainee/student learning tasks in the form of specifications, thereby telling them exactly what they have to do


• If they do not reflect with the trainees/students on the learning opportunities contained in learning tasks—also with reference to the training objective: employability
• If they limit learning to the acquisition of theoretical knowledge—and, therefore, lose sight of the vocational action and learning fields
• If they underchallenge the trainees/students and, therefore, do not challenge their competence development
• If they do not take the trainees/students seriously with their specific competence development—and, therefore, also with their strengths and weaknesses
• If they do not identify with their lessons

In practice, teachers frequently set tasks without considering what their trainees/students can learn from them. Their professional task concept may be based on a misconception, at least if it is the task concept of operational work preparation (WP), which ensures through detailed specifications that the task solution is implemented as planned. This unintentionally destroys the learning opportunities of trainees/students. Contrary to this misconception, the aim of the lesson is achieved not when the learning tasks have merely been solved, but when learning tasks with a learning potential identified in advance by the teacher are solved and the trainees/students “learn” to pursue this potential while reflecting on the work experience. It is, therefore, part of the professionalism of the didactic actor to estimate the degree of difficulty of learning tasks so that the trainee/student is neither over- nor underchallenged. With heterogeneous learning groups, it may be difficult to find the “right level of difficulty”. Here, the concept of the “open learning task” requires a rethink. It is not a matter of adjusting the level of difficulty of a learning task, as there cannot be an appropriate level of difficulty for all learners in a learning group! Rather, the teacher formulates realistic learning tasks that have development potential in learning a profession. These are learning tasks that:
• Are appropriate for the “level of training” (beginner, advanced beginner, etc.)
• Do not restrict the scope for design
• Enable trainees/students to base their task solutions on a level of knowledge corresponding to their individual competence development

The solution variants of the individual learners and those of the working group, as well as the depth and breadth of their justifications, then represent the competence level and the competence profiles of the trainees. When learners give their best, there is no underchallenging of the stronger. The challenge for the teacher is to provide “process-related help”, so that the weaker also find a solution to the problem.


In a class that feels committed to the individual promotion of vocational competence development, achieving “the goal” does not mean putting all trainees/students “in one basket” (the same “learning objectives”). After the analysis and technical specification of the learning task, the trainees/students are able to reflect on their learning opportunities together with the teacher. With the learning task analysed in this way, the trainees/students now link the two types of objectives: “learning objectives” and “work objectives”. These have been concretised to such an extent that they represent the orientation basis for working on the task—alone or in a team. Questions for reflection could include:
• What do I already know?
• What do I need to learn? (What is the challenge for me?)
• Can I overcome the challenge with the available tools?
• For which tasks do I need the help of the teacher?
• Which tasks are best suited for cooperation with fellow learners?

Of course, the trainees/students can only answer the preceding questions if the analysis of the situation description is successful: Have the customer’s requirements and wishes become clear to them, and could they make an initial technical specification? In tuition practice, however, it is not uncommon for trainees/students to have difficulty understanding the description of the situation. They then have no access to the learning situation: “I don’t understand the task and don’t know what to do.” The challenge for the teachers now is to help the trainees/students without depriving them of the chance to find access themselves. This is where process-related help is an obvious option, for example, in the form of questions and requests that open up the learner’s own access to the task at hand. Possible questions and requests to trainees/students:
• What does the customer want?
• Which customer requirements and wishes did you recognise?
• What exactly remains unclear?
• What would you do first?
• Remember the last learning situation: How did you proceed in that case?
• Create a sketch that illustrates the facts.


Step 3: Development and Definition of Evaluation Criteria
Once it has been clarified what the approximate result of solving the task or processing the project will be, and which alternative solutions and procedures must be weighed up, it is necessary to define the evaluation criteria for the task solution. The COMET rating procedure can be used as an orientation framework here. The didactic benefit of this step is obvious: learners now know exactly what is important when developing a task solution (Table 11.3). The results of empirical teaching research show that the development of evaluation criteria (and their application in the self-evaluation of work and learning outcomes) increases learning success in terms of:
• The scope and evaluation of alternative solutions
• The possibilities for the design and organisation of the task solution (work process)
• The reflection on what was learnt and on the learning process

As the evaluation criteria describe not only the expectations of the result but also of the task-solution process, they are a good basis for reflecting on what has been learnt and on the quality of the task solution. In this phase of teaching, teachers are challenged to become aware of their expectations of the learners’ individual competence development and to assess the learning potential of the learning task on the basis of the questions listed after Table 11.3:

Table 11.3 Evaluation criteria for task solution, approach and competence

Criteria for evaluating the task solution:
• Does the task solution have an appropriate utility value for the “customer” (client)?
• Was the task solved completely?
• Was there a well-founded balance between alternative solutions?
• Was the presentation of the results successful (for whom)?
• How is the quality of the task solution—based on the evaluation criteria—assessed?

Criteria for evaluating the method:
• Has the planned procedure proven worthwhile?
• Was it possible to translate the situation description into technical specifications?
• Was it necessary to deviate from the client’s requirements—if so, why?
• In which steps did prior knowledge not suffice to solve the task?
• What aids were used to solve the task?
• Which errors and dead ends occurred and how were they corrected?

Criteria for evaluating the acquisition of new competences:
• What work experience and knowledge could be drawn on?
• What knowledge and skills had to be acquired in order to solve the task?
• Where and how was the knowledge and know-how of the teacher used?
• What means were used to solve the task (reference books, internet, etc.)?
• Did the know-how of individual pupils (pupils learn from pupils) help?
• What role did trying out and experimenting play in the acquisition of new competences?


• What new work-process knowledge does a learning task contain?
• How must the situation description be formulated to result in a realistic solution space for the learning task and a wide scope for the learner?
• Which competences and which prior knowledge does the learning task require?
• Will the trainees/students succeed in translating the situation description into technical language, that is, into a specific task?
• Do the learners keep an eye on the utility value of the work result for the customer?
• Do the trainees/students succeed in recognising the need to acquire new knowledge?
• Do trainees/students resort to good sources and effective forms of learning when expanding their professional knowledge?
• Do the trainees/students check the concepts already available to them for their usability in the current context?
• How do the learners solve the task?
• Do they already have a professional work concept?
• Is the level of challenge appropriate for the learner?
• At which competence level is the learning task solved?

When observing and advising learners on these questions, it is always important to keep an eye on the competences and the competence development of each individual. The requirement level of a task and the level at which it is solved are different for each trainee/student. This is where the great advantage of open tasks comes into play: open tasks can be solved at very different levels of knowledge and competence. The evaluation criteria for solving open tasks make it possible to make the learner’s competence development transparent.
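Where teaching teams want to record the agreed criteria systematically, an evaluation sheet can be kept in a simple structured form. The following sketch is one possible representation, following the three perspectives of Table 11.3; the class names and the abridged criterion texts are illustrative assumptions, not an official COMET data format.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Criterion:
    question: str
    met: Optional[bool] = None  # None = not yet assessed
    comment: str = ""

@dataclass
class EvaluationSheet:
    task: str
    # perspective (cf. Table 11.3) -> list of criteria
    criteria: dict = field(default_factory=dict)

    def open_items(self):
        """Return all criteria that have not yet been assessed."""
        return [(perspective, c.question)
                for perspective, items in self.criteria.items()
                for c in items if c.met is None]

sheet = EvaluationSheet("Designing office lighting", {
    "task solution": [Criterion("Appropriate utility value for the customer?"),
                      Criterion("Task solved completely?")],
    "method": [Criterion("Situation description translated into a specification?")],
    "new competences": [Criterion("Which knowledge had to be acquired?")],
})
print(sheet.open_items())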

Step 4: Provisional Definition (Rough Plan) and Implementation of the Task-Solving Procedure: Development of Vocational Concepts of Learning and Working
Planning while processing a learning task or a project always has a provisional character, because experience is gained as the task is solved and unforeseeable difficulties arise that have to be overcome. It is not uncommon for the newly acquired knowledge to suggest a modified approach. Practical experience, therefore, provides the basis for decision-making for further planning. This possibility of decision-making is particularly important for challenges to which competence-developing potential is ascribed. Planning, execution and evaluation—on the way to the solution—are, therefore, always alternating steps in the processing of learning tasks and in the execution of projects.


This “gradual approach” (Böhle, 2009) is an explanation for the dissolution of the described training paradox in connection with action learning (refer to p. 495) (Fig. 11.11).

Fig. 11.11 Steps to solve challenging situations

When observing current teaching practice, it is noticeable that in the practical implementation of the theory of complete action this very “gradual approach” is often ignored as a possibility of knowledge acquisition. Instead, the phases of the complete action are used to structure the entire work and learning process in advance, and it is assumed that all the knowledge required for planning can be made available beforehand via a one-off procurement of information. This practice takes the concept of action learning ad absurdum, because it draws only on objectively available knowledge and excludes learning through the reflection of experience. The preceding explanations naturally do not exclude the possibility that, at the beginning of the engagement with a task, initial planning decisions can be made and approaches to solutions developed by accessing the available knowledge. The development of professional competence is not only about obtaining missing information, but especially about developing concepts for:
• Professional learning
• Professional work
• Professional cooperation (Bremer, 2006, 293 ff.)

It usually takes some time for learners to understand how work and learning are interrelated and that they are two sides of the same coin. Teachers and trainees/students are challenged in vocational education and training to understand what distinguishes work-process knowledge and the vocational skills based thereon. The concept of collegial cooperation is based on experiences of cooperation in operational work processes. The possibilities for dealing with a new challenge, which initially appears to be an insurmountable hurdle to solving the problem, are manifold. First of all, the reflection on the learning experience when solving new tasks—under the guidance of the teacher—is an essential part of tuition. This is about the development of a professional learning concept.


It is not enough for this to emerge at random; rather, trainees/students must become aware of their possibilities for learning on the path to employability. Forms of learning for acquiring professional learning competence:
• Perplexity and mistakes: allowing mistakes is an important prerequisite for learning from mistakes. It is also about the insight of the trainees/students “that it depends on themselves” when it comes to mastering a situation. Blaming others and the circumstances is not the answer!
• Encouragement and self-confidence are important prerequisites for mastering new situations.
• The concept of open learning tasks and the possibility of solving the tasks at very different levels of competence are beneficial for the trainee/student.
• “I’ll try!”
• Testing and experimenting help to cope with new situations, which also includes detours. What went right or wrong, and at which points, becomes apparent at the end.
• Group dialogue and internet research, of course, also help.
• The teacher, the textbook and the relevant specialist literature are ultimately available.

The support provided by the teachers should be process-related and not product-related. References to sources of information, methods of learning, experimental possibilities, software tools or even mathematical methods belong to the process-related aids which give the trainees the opportunity to solve the task themselves. Process-related support also includes requests or questions expressed by teachers to learners. Product-related support, on the other hand, is aimed directly at solving a task or a problem. Requests or questions could include:
• What exactly prevents you from processing the task further?
• What could help you to overcome this “edge”?
• What is the first thing to do?
• Can you remember a similar situation in the company? How did you deal with it?
• Just try it out; you can also learn from mistakes.


Learning Within a Group
Group learning is particularly important in vocational education and training because working in “semi-autonomous groups” is highly prioritised in the working world. Group work is appreciated by management and employees alike. The advantages of group work from the two perspectives can be summarised as follows.

For the management, group work enables:
• Shifting responsibility and tasks to directly value-adding processes (increases labour productivity)
• Shifting the elements of quality control to the work processes: producing instead of controlling quality (increases labour productivity)
• Increasing the flexibility of work planning and organisation and, therefore, work productivity
• Increasing job satisfaction and operational commitment, thereby promoting work productivity

For the employees:
• More responsibility means more interesting work
• Less control through work preparation and more personal freedom strengthen self-esteem and job satisfaction
• “We control our own work”: this experience strengthens professional self-confidence and commitment
• Experiencing work contexts strengthens the interest in co-designing work processes

In school learning processes, the trainees/students tie in with their own experiences or those of their fellow learners. It is, therefore, important to understand that in vocational education and training, “group work” is not, for teachers and trainees/students, a question of varying the social form, as is often the case in textbooks. If cooperation in groups does not result from the learning task/learning situation, or at least appears advantageous, then the decisive basis for group work or group learning is missing. It is not unusual for trainees/students to exclaim “not group work again!” when teachers prescribe group work according to the principle of method change in order to practise the ability to work together. If cooperation in a working or learning group is to be lived and experienced subjectively as meaningful, then working and learning for a common cause is the prerequisite. If learners are aware of this and have adopted the corresponding learning task as their own, then it also becomes a question of how the learning process can be shaped together.


Group work as “cooperation for a common cause”—for example, in the implementation of a project—results from the content and complexity of the projects. It is not uncommon for a learning task to suggest that, after joint planning, the work should be divided into sub-tasks, which are later brought together so that the final result can be evaluated according to jointly defined criteria. With this form of cooperation, the insight grows that professional tasks can be solved better in a team. Someone who carries out a sub-task in one place and knows how to contribute to the success of an overall project thereby also proves to be capable of cooperation (cf. Rauner & Piening, 2014, 34). This is where the teacher comes in, who can draw on the results of learning research regarding the organisation and design of group work. In in-company vocational training, trainees are assigned different functions—consciously or subconsciously—namely, that of:
• Minions
• Spectators, observers
• Workers’ assistants
• Employees or colleagues

These functions can ultimately result in stable roles with a lasting impact on the success or failure of training. Trainees who remain in the role of the assistant for too long and become accustomed to someone always telling them what to do and how to do it run the risk of not achieving the objective of vocational training: “professional action competence”. Very similar traps lurk in the implementation of learning situations and projects at school. Teachers and trainers, therefore, have the important task of making trainees aware of these traps.

Cooperative Learning
In the practical manual on “Cooperative Learning” by Brüning and Saum (2006), the methods of group work are presented vividly and in detail on the basis of extensive international experience and research. Some central elements of cooperative learning should, therefore, be pointed out here. The most important principle in advance: “Individual work is a core element of cooperative learning” (Fig. 11.12).


Fig. 11.12 Principles of cooperative learning according to Brüning and Saum (2006, 15)

The three basic principles of cooperative learning are the sequence of • Think: In this phase, all students work alone. • Pair: Results are compared, deviating results are discussed, etc. in pairs or small groups. • Share: The group results are presented, discussed, improved, corrected, etc. in class.

11.2.5 Dealing with Heterogeneity
In no other sector of our educational system are teachers confronted with such a pronounced heterogeneity of learning groups. The formation of working groups is, therefore, of particular importance (Table 11.4). Experience with the strengths and weaknesses of working with heterogeneous groups clearly shows that, contrary to mainstream opinion, these groups have an extraordinarily high learning potential. In fact, the heterogeneity of the learning group offers completely new opportunities, which can be exploited through appropriate teaching arrangements (learning arrangements). However, they also require a corresponding methodology on the part of the learning process facilitator (ibid., 209).


Table 11.4 Comparing the advantages and disadvantages of homogeneous and heterogeneous learning groups (Bauer, 2006, 208)

Homogeneous learning groups
Advantages:
• Tendency to favour high-performing learners
• Classroom-style teaching is “sufficient”, therefore less pedagogical effort
• Reduced complexity
• Teachers feel less overwhelmed
• High resistance to pedagogical errors
Disadvantages:
• Real existing differences threaten to be ignored
• Learners with learning disabilities miss out
• Pupils with high social status are favoured/supported more strongly
• Developmental inequality is promoted and consolidated
• Sustained Pygmalion effects, early fixation on a certain performance level
• Fear of loss of control
• Teacher-centred

Heterogeneous learning groups
Advantages:
• Favouring learners with learning disabilities
• More equal opportunities
• Better support of individual personality development
• Familiarisation with different perspectives and life plans
• Confrontation with other perspectives
• Promotion of social learning, formation of social competences
• Reflection on one’s own positions
• Improved preparation for modern social challenges
• Basis for the use of versatile methods
• Learning-centred
Disadvantages:
• Cannot be mastered with classroom-style teaching
• Higher pedagogical effort
• Less resistance to pedagogical errors

Important steps and rules for learning/working in groups:
• It must first be clarified “in plenary” whether a learning project is to be worked on in groups or whether phases of individual and group work should alternate. If the (sub-)tasks are not specified by a project or the teacher, the group must agree on the task. It is important that the respective task is accurately understood by everyone.
• Then—in the second step—within a period of time to be agreed upon, everyone thinks about possible solutions on their own and outlines their proposed solutions.
• The individual results are shared in the third step.


This sharing phase requires the establishment of rules. “If no rules are introduced and the discussion and evaluation process remains unregulated, then the speaking time allotted to the contributions is often distributed very unequally among the members of the group. Some tend to be more reluctant to contribute to discussions; others ‘talk faster than they think’. Therefore, rules that allow for a balanced exchange are important. The eloquent group members learn to listen and to take a back seat, and the reserved ones are given the (learning) chance to argue, discuss and present” (Rauner & Piening, 2014, 43). Good experiences have been made with a method of group work tested in Canada: so-called talking chips (Johnson & Johnson, 1999, 69). Each student receives an equal number of talking chips. In the exchange phase, the rule applies that members of the group may only speak if they hand in one of their chips. If a group member’s supply of chips is used up, then they can only participate again in a new round of talks (with new chips). Talking chips ensure “that the speaking-time slots of group members balance themselves, while they also have an educative effect, as both the reserved and the eloquent pupils very quickly become aware of their speaking behaviour” (Brüning & Saum, 2006, 34).
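The chip rule itself is simple enough to be stated precisely. The following minimal sketch models it under illustrative assumptions (three members, three chips per round); it is only meant to make the mechanics of the rule explicit, not to replace classroom practice.

def make_talking_chips(members, chips_per_round=3):
    """Each member starts a round with the same number of chips;
    every contribution costs one chip (assumed variant of the rule)."""
    chips = {m: chips_per_round for m in members}

    def speak(member):
        if chips[member] == 0:
            return f"{member} must wait for the next round"
        chips[member] -= 1
        if all(c == 0 for c in chips.values()):
            # everyone has used their chips: a new round begins
            for m in chips:
                chips[m] = chips_per_round
            return f"{member} speaks; new round, chips replenished"
        return f"{member} speaks ({chips[member]} chips left)"

    return speak

speak = make_talking_chips(["Ana", "Ben", "Cem"])
print(speak("Ana"))
print(speak("Ana"))
print(speak("Ana"))
print(speak("Ana"))  # Ana's chips are used up: she must wait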

11.2.6 Step 5: Evaluating the Task Solution (Self-Assessment)
Finding a solution to the problems only means that half the distance has been covered in the processing of the learning tasks. Now it is important to evaluate the quality of the task solution. It generally turns out that different solutions were developed by individual trainees/students or working groups. The competence profiles (Fig. 11.13) show where the differences lie. This is where the teacher comes into play, who can show how the solution space given for a learning task has been exploited by the individual solutions. The results obtained in different ways are to be evaluated and assessed by the learners with the help of the agreed evaluation criteria. The primary focus here is on the utility value of the results for the customer: it is about developing a professional work concept. Assessment forms have proven useful as a tool for self-assessment in teaching practice (Table 11.5). If the tasks are not solved to the satisfaction of the learners themselves (not OK), the task-solving procedure must be reconsidered. The assessments then lead, if necessary, to corrections or additions within the planned and implemented work and learning processes.


Fig. 11.13 Different exploitation of the solution space (Final Report 2010, Hesse)

The trainees/students also decide to what extent their results meet the requirements of a complete task solution. Only when the tasks have been satisfactorily solved (OK) does it make sense to reflect in context on the work and learning processes carried out. Both the learners (self-assessment) and the teachers (external assessment) can use the assessment form specially developed in the COMET project for use in teaching as a tool for evaluating and assessing the task solution. Only documented facts can be evaluated (do not “read between the lines”)! The assessment form can be modified for use in a comparison of self-assessment and external assessment. The assessment results can be transferred into a network diagram for illustrative presentation in the plenum. The result that the learners judge to be viable is then prepared for presentation in the plenum.
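As one possible concretisation of the network diagram mentioned above, the following sketch transfers the ordinal ratings of the assessment form into a radar chart with matplotlib. The numeric mapping of the verbal scale and the abridged criterion labels are illustrative assumptions, not part of the COMET instruments.

import math
import matplotlib.pyplot as plt

# assumed mapping of the verbal rating scale to numbers
SCALE = {"fully met": 3, "partly met": 2, "not met": 1, "in no way met": 0}

def radar(ratings, title="Self-assessment"):
    labels = list(ratings)
    values = [SCALE[ratings[label]] for label in labels]
    angles = [2 * math.pi * i / len(labels) for i in range(len(labels))]
    values += values[:1]   # close the polygon
    angles += angles[:1]
    ax = plt.subplot(polar=True)
    ax.plot(angles, values)
    ax.fill(angles, values, alpha=0.25)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(labels)
    ax.set_yticks(range(4))
    ax.set_title(title)
    plt.show()

radar({"clarity": "fully met", "functionality": "partly met",
       "utility value": "partly met", "economic efficiency": "not met",
       "work/business process": "partly met", "social compatibility": "fully met",
       "environmental compatibility": "not met", "creativity": "partly met"})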

11.2.7 Step 6: Reflection on Work and Learning Processes
In vocational education and training, learning requires the reflection and systematisation of work and learning experiences. For this purpose, the procedure for processing the task is to be retraced in thought by the trainees/students, in order to visualise the experiences made in the process. The experience gained with the approaches that have led to viable and complete solutions to the new challenges is particularly important. These reflected experiences of action generate the work-process knowledge that underpins competence development. The following questions should help to consider different perspectives in the reflection process.


Table 11.5 Assessment form for use in class (e.g. electronics technician) (Katzenmeyer et al., 2009, 202). Each criterion/indicator is rated on the scale “fully met”, “partly met”, “not met”, “in no way met”, with space for comments.

CLARITY
1. Presentation appropriate for client? For example: description, operating instructions, cost plan, component list
2. Representation appropriate for specialists? For example: circuit diagrams, installation diagrams, terminal diagram, cable diagram, programme printout with comments
3. Solution illustrated? For example: technology scheme, site plan, sketches
4. Structured and clear? For example: cover page, table of contents, page numbers, company contact info, customer contact info

FUNCTIONALITY
5. Functional capability? For example: dimensioning/calculation o.k., fuse protection, necessary interlocks, limit switch
6. Practical feasibility considered? For example: electrical and mechanical design possible?
7. Are presentations and explanations correct, and is the state of the art considered?
8. Solution complete? For example: are all required and necessary functions in place?

UTILITY VALUE
9. Utility value for client? Are useful and helpful functions considered? Possible automatic error detection, interventions and changes
10. User friendliness? Operability, operator guidance, clarity, alarm and operating displays
11. Low susceptibility to faults considered? For example: preventive error information, redundancy, partial running capacity; are material properties optimal for the application?
12. Longer-term usability and expansion options considered?


Table 11.5 (continued)

ECONOMIC EFFICIENCY
13. Actual costs economical? For example: time and personnel resources, material use
14. Follow-up costs considered? For example: electricity costs, maintenance costs, downtime costs in the event of control failure
15. Operational and economic aspects considered? For example: downtime costs in the event of component failure weighed against production costs?

WORK AND BUSINESS PROCESS
16. Process organisation in own company and at the customer’s? For example: time and resource planning, general conditions for installation work clarified?
17. Work process knowledge (work experience)? For example: does the solution have a structure allowing the identification of the workflow? Are upstream and downstream processes taken into account?
18. Limits of own professional work exceeded? For example: structural changes, orders for other trades, foundation for switch cabinet, scaffolding for sensor assembly planned

SOCIAL COMPATIBILITY
19. Humane work and organisational design? For example: ergonomics, serviceability
20. Health protection considered? For example: toxic fumes, radiation, noise, injury hazards detected and prevented
21. Actions possible and explained in an emergency? Risk analysis performed for assembly, operation, service, malfunction and disassembly? Occupational safety and accident prevention considered? For example: working on ladders and scaffolding, PPE, instruction of third-party companies, hazard warnings, hazardous material labels


Table 11.5 (continued)

ENVIRONMENTAL COMPATIBILITY
22. Recycling, reusability, sustainability? For example: RoHS material, PVC-free material, prevention, reduction and recycling of waste
23. Energy saving and energy efficiency? For example: energy-saving lamps, EFF class for motors, minimising standby losses, displays with LEDs instead of lamps

CREATIVITY
24. Does the solution show problem sensitivity? For example: customer request fully recorded and implemented?
25. Is the scope for design exploited? For example: useful additional functions planned?
26. Business and economic aspects considered? For example: downtime costs during component failure weighed against production costs?

Questions on the Concretisation of the Reflection Process
• How did I proceed in processing the task (what was my first, second, etc. step)?
• What were the most important tools for solving the tasks?
• What was I able to solve easily based on my previously gained knowledge?
• What did I consider specifically for processing this task?
• How exactly did I proceed within the respective steps?
• Why did I proceed like this and what are the reasons for my decisions?
• Which methods did I apply?
• Which “hurdles” did the task contain and what new things did I have to learn?
• Which new learning methods did I have to learn to fully complete the task?
• How do the estimated challenge and the actual gain in learning match?
• How did I organise my/our work and what will I do differently next time?
• At which “hurdles” did I take advantage of the teacher’s support?
Note: Depending on how task processing is organised, the questions must be answered individually and/or in the group.


11.2.8 Step 7: Presenting and Evaluating the Task Solution, Work and Learning Process as Well as Learning Outcomes (External Evaluation)
The presentation and evaluation of task solutions, work and learning processes as well as learning outcomes are of high didactic value. Among other things, this helps to:
• Clarify the satisfaction/dissatisfaction of the “customer” with the offered task solution
• Have the problem solution assessed by expert listeners (co-learners, teachers, trainers and the like)
• Clarify whether the procedure (including the methods used) has proven itself, where there were problems and how it can be optimised for use in the next learning situations/projects
• Evaluate explanations, justifications and considerations of alternative solutions and procedures according to whether they are professional and conclusive
• Exchange experiences with different learning methods and work-organisation structures
• Describe and evaluate gains in knowledge, new experience and new abilities, considering the question: “What else did you learn?”

In this case, much depends on:
• The methodical procedure
• The capacity for cooperation
• The reflection on conflict settlement

It is also an opportunity to reflect on the importance of vocational learning for the non-working world. The evaluation is based on the evaluation criteria defined at the beginning (refer to Table 2.1, p. 568). It may also turn out that the evaluation concept contains weaknesses that can be avoided in the next learning situation/project. In this tuition phase, teachers must decide on the role they want to play:
• Do they leave the assessment of the work results to the trainees/students?
• Do they assume a steering role?
• Do they evaluate the work results themselves?
• In the case of group work, it also makes sense for the working groups to evaluate each other’s documentation.

When working on tasks in small groups, all group members should be involved in the presentation and reporting. A prerequisite is that one group member takes over the moderation of the presentation and that the roles in the presentation are precisely agreed upon. This also includes the form of presentation. “Ad hoc” reports and presentations should be avoided, as they tend to discredit project learning. When presenting learning and work results, students with weaker language skills should be given the opportunity to present and demonstrate their results in a practical manner.


The documentation and presentation of project results should meet high formal standards. They should be presentable to “customers”. The more successful this is, the more likely the participants are to identify with their learning outcomes. This strengthens the self-confidence and motivation of the learners. In the case of outstanding projects, the exhibition of project results at school or in public is also an option. The experience that pupils proudly show presentable learning and work results to family members and friends is an indication that the form of documentation and presentation (in addition to the learning and work results themselves) contributes considerably to the development of professional identity and, therefore, also to the strengthening of self-esteem. This technical aspect of task/project learning is, therefore, of considerable social-pedagogical importance.

In the preceding remarks, reference is again made to the requests and requirements of the customer, which are expressed here in the satisfaction or dissatisfaction with the presented task solution. This again points to the high formal standards that the presentation has to meet. At this point, it is only about the result—the objective dimension of the learning process and its evaluation (product-related presentation). The other learners and the teacher can take on the role of the “customer” in this phase of the presentation and give feedback from this role to the presenter(s). In the other phases of the presentation, the other learners and the teachers are addressed as “experts”. This deals with:
• The completeness of the task solution
• Technically sound justifications (action-explaining knowledge) and a conclusive balancing between different solution variants (action-reflecting knowledge)
• Working and learning concepts and concepts for cooperation, and the reflection on the experience gained in their use
• Unresolved questions or questions that arose during the presentation, and finally the question: What did I/we learn?

The second phase of the presentation (process-related presentation) deals with learning and the competences acquired—the subjective dimension of the learning process. The co-learners and the teacher, therefore, take on the role of teachers. In terms of content, they refer to the knowledge of the individual work process and the previously agreed evaluation criteria (refer to Table 11.3, p. 520). The teacher also has their own described solution space in mind. The question “What have I/we learnt?” has a special meaning, because the students’ understanding of “learning” is shaped by general schooling.


11.2.9 Step 8: Systematising and Generalising Learning Outcomes
While the presentation of the work and learning outcomes reveals the individual work-process knowledge of the trainee/student in relation to the current learning task, this phase of tuition addresses the specific task of the vocational school: to generalise this knowledge—knowledge that can be traced back to the reflection of the experiences made while working on the learning task. “In the process of dual vocational training, the incremental generalisation of professional experience and knowledge ultimately leads to concepts and theories that are available to the individual as generalised ‘tools of thinking’ as well as of communicating and reflecting and at the same time refer to the real context from which they emerged” (Rauner, 2007, 244). Generalisation is about uncoupling the work experience gained from the concrete learning task and the task solution achieved in order to make it available for subsequent customer orders. It is now up to the teachers to ensure that their pupils become aware of their broader understanding of the subject and are able to use it in their thinking, acting and skills in a professional and appropriate manner. Practical experience with numerous learning groups shows that where this generalisation is absent, the use of the developed task-solving approaches remains limited to the learning task for which they were developed. As a result, subsequent customer orders are often not considered in the light of previous experience but are treated as completely new challenges.

The experience that already known technical terms and already available action concepts gain extended significance, and that connections between initially independent concepts become conscious, characterises the profession-related extension of the fields of meaning of action-relevant concepts, which in their sum and combination constitute work-process knowledge (Lehberger, 2013) and the technical language based thereon. For example, a nurse at the beginning of his/her training expands his/her prior understanding of how to put on a bandage with ever new aspects of meaning in dealing with the diversity of bandages in equally diverse and always different individual cases. “Bandaging” as a semantic field quickly develops into a comprehensive and professional concept of acting, thinking and ability. The rudimentary prior understanding of a tool mechanic apprentice of the surface quality of tools—and how to achieve this quality—is expanded, through the alternation of reflected work experience and the extension of the semantic field of the concept “high-quality surfaces of tools”, to a cognitive disposition of professional action and shaping competence.


“high-quality surfaces of tools” to a cognitive disposition of professional action and shaping competence. Here is a detailed example: For most learners, teamwork means sitting in a group with other learners at the beginning—and often also towards the end of their training—and somehow working together on a task. This contradicts the principle: “You do not acquire the ability to work in a team simply by [learners] working in a team as often as possible”. For learners to develop a viable concept for working and learning in a team, it is always a question of expanding and changing the semantic field that characterises the professional concept of teamwork. Important aspects of meaning can be identified:
• Team-specific social competences (these include, among other things, communicating and actively listening in a friendly and respectful manner)
• Teamwork (in the sense of a goal-oriented and method-oriented approach, the quality of which must be evaluated)
• Methods of cooperative work and learning (the phases of think-pair-share must be implemented methodically)
• Solving tasks in a team, where the aim is to perform the roles necessary for successful teamwork, for example, moderator, process observer, timekeeper, minutes recorder
• The formation of groups, whereby a team composition capable of work and learning must be ensured (that is, a team needs team members who come up with ideas, those who pay attention to quality, those who think strategically, etc.)
• Team development, that is, an appropriate realisation of the phases of team development (forming, storming, norming and performing).

Since only certain subjects of work and only certain aspects of them are taken into consideration depending on the situation and task, the aim of this phase of tuition is to convey that the development of work-process knowledge is a process of subjective development of vocational concepts with their semantic fields, which the learners have to place within their work-process knowledge. The dimensions of working and learning offer a possibility of systematisation oriented towards the vocational work process. In this phase of tuition, the teacher must steer the learning process so that the learners feel challenged to realise the processes of generalisation and systematisation in such a way that their individual professional concepts are further developed. This also applies to processes of social learning within the framework of teamwork (Fig. 11.14).


Fig. 11.14 Semantic fields of teamwork

11.3 COMET as a Didactic Concept in Nursing Training at Higher Technical Colleges in Switzerland: Examples of Teaching and Examinations

Karin Gäumann-Felix and Daniel Hofer

11.3.1 The Higher Vocational Nursing Schools in Switzerland

In the Swiss educational landscape, there are currently two options for training in the professional care of and assistance for people. On the one hand, studies can take place at a higher technical college (HF). In addition to a successful entrance examination, the admission requirements are a vocational or school-leaving certificate at secondary level II.1 The second option is to study at a university of applied sciences (FH). The admission requirement is a Matura degree (university entrance qualification) or completed HF training. Switzerland, therefore, has two equivalent variants of higher (tertiary) continuing vocational education and training for nurses, one more practice-oriented and one more academically oriented.

1 After compulsory schooling (9 years), young people enter upper secondary education. Secondary level II can be subdivided into general education (grammar schools and technical secondary schools) and vocational training (learning a trade in a training company with supplementary schooling).


The training of qualified HF nurses (Höhere Fachschule) is a joint task of the federal government, the cantons and the OdA (organisations in the world of work). The three partners in the network jointly assume responsibility for vocational training and the quality of training. The higher vocational schools have an educational concept that leads directly to employability. In contrast to “academic” courses of study, this makes it unnecessary for students to gain practical experience in their profession after completing their studies. With its genuine duality, nursing training requires didactics that are based on authentic professional situations and convey the competence to master them professionally. The study to become a qualified nurse takes 3 years. With a relevant apprenticeship as a health specialist (FaGe), the training can be shortened by 1800 learning hours. The learning hours are divided into three learning areas: practice, school and Training & Transfer (LTT). A nationwide curriculum framework2 regulates the temporal proportions of the learning areas. Half of the training takes place in school and half in professional practice. Both learning venues each contribute approx. 10% of their time resources to the design and organisation of the third learning area, LTT. The practical assignments take place in various fields of health care.3 The school lessons impart the necessary breadth of knowledge of the work process. At the third learning location (LTT), authentic professional tasks are processed and reflected upon in a protected setting, in their respective complexity and in consideration of their diverging requirements. LTT offers both the possibility of taking up current cases relevant to training independently of the curriculum and of taking the individual needs of learners into account. Following Dehnbostel, we distinguish three types of work-related learning (Dehnbostel, 2005): learning in the work process (practical learning area), acquiring knowledge of the work process on the basis of reflected work experience and the relevant specialist knowledge (scholastic learning area), as well as practical and practice-related learning outside the workplace context (LTT) in specialist practice rooms and at school. The concrete descriptions of the competences4 to be achieved and our portfolio system, which is platform-based in all three learning areas, serve as a connecting element. In 2012, this concept was expanded by the COMET competence model. The examples and experiences documented below show the successful implementation of the COMET competence model as a didactic concept in the training of qualified nursing staff at the Bildungszentrum Gesundheit und Soziales in Olten.5

2 http://www.sbfi.admin.ch/bvz/hbb/index.html?lang=de&detail=1&typ=RLP&item=17
3 Practical training covers six fields of work in the care and support of: (1) people with long-term illnesses, (2) children, adolescents, families and women, (3) mentally ill people, (4) people in rehabilitation, (5) somatically ill people, (6) people at home.
4 https://www.bz-gs.ch/bildungszentrum/lernen-am-bz-gs-1/konkrete-kompetenzen
5 The Bildungszentrum Gesundheit und Soziales [health and social education centre] (BZ-GS) is part of the Berufsbildungszentrum Olten (BBZO). The BBZO is a regional vocational training centre with over 4200 apprentices and students in 28 professions. Refer to http://www.bbzolten.so.ch/startseite/ and http://www.bbzolten.so.ch/bz-gs-olten/


11.3.2 COMET in the Context of the BZ-GS [Health and Social Education Centre]

In 2012, the BZ-GS, together with five other Swiss educational centres6 in the health and social sectors, launched the first Swiss COMET project under the title “Surveying and imparting vocational competence, professional identity and professional commitment in the training occupations of nursing in Switzerland” (cf. Gäumann-Felix & Hofer, 2015).

Example Lesson: Pain and Temperature Regulation

This example from the first year of an HF training course in somatics uses a rather simple case description to introduce the students to the COMET method and to give them an understanding of the criteria and items of the competence dimensions. After a brief introduction to the eight competence criteria (sub-competences), the students received the case description of Ms. G. (Table 2.2). In a first step, they analysed the case study in working groups with the aid of instructions in which the competence criteria and their breakdown into five items each are presented. They tried to assign the information in the case description to the competence criteria and to discuss first possible interventions (Table 11.6). After this initial processing of the case in groups, the results were presented and discussed in the plenum. The following questions were examined:

• What are the family, social and cultural factors influencing Ms. G.?
• What influence do they have on her current state of health?
• What would be interventions that support sustainable exit planning?
• What other services should be included?
• How does the student manage the situation, as human resources are scarce due to absenteeism?
• How does the student set priorities? What kind of interventions are necessary?

During this expert discussion, the COMET competence criteria were gradually filled with content. The holistic view of the patient’s situation became increasingly clear, and this triggered important “Eureka!” moments among the students. Due to the various starting points and possibilities provided by the situation description, it was also possible to take into account the heterogeneous previous education of the students in the educational programme. All were able to build on their current state of knowledge and experience and reflect on their individual competence development.

6 In addition to Solothurn, the Cantons of Aargau, Basel, Berne, Lucerne/Central Switzerland and Zurich.


Table 11.6 Ms. Graziano as a case study

You have the early shift on a surgical ward when you are notified of an emergency admission. That’s all you need... It is already very hectic, because a senior nurse is sick. There are three of you: the group leader, one FaGe* student and you. Reason for admission: severe stomach pains. As you are assigned to admit Ms. Graziano, you take her over from the emergency nurse, who tells you that Ms. Graziano has been suffering from upper abdominal pain for two days, which is getting worse all the time. In addition, a stubborn cough has been plaguing her for days. With this little information, she leaves the ward again. When you turn to Ms. Graziano, she lies crying in bed with her legs pulled up. You greet her and measure her vital signs (BP 125/90, P 80, T 37.5 °C). You ask her how she is, whereupon she tells you in broken German about her last 2 days: her husband is currently abroad for a week for work; she misses him very much and feels very alone. When the pain started, she did not tell him on the phone so that he would not worry unnecessarily. So he does not yet know she is in hospital either. She tells you that until 5 years ago they lived in her native country of Italy, where her husband also comes from. After their wedding, they moved to Switzerland for professional reasons, but he is always abroad on business. She has hardly slept the last two nights because of the severe pain and only managed to come to hospital with difficulty because she hates hospitals. Ms. G. is very afraid of a malignant disease, because her mother died of a carcinoma many years ago and, according to Ms. G., had suffered excruciating pain in the end. In the following conversation, you learn that she is “experienced in pain”: she has been suffering from severe backache due to a herniated disc for 3 years. She has so far resisted surgery—the focus has always been on pain therapy. For several months, she has been taking 4 × 1 g Dafalgan tablets and 4 × 20 drops of Tramal for backache. During the conversation, she suddenly begins to shiver and you notice that she has chills. You also tell her that she really must be in a lot of pain and offer to get her a painkiller.

* FaGe = Health Specialist

In the setting described, the competence criteria and items were used as working aids with the help of a simplified representation (Fig. 11.15). The learning outcome was discussed together. As the aim was to enable an initial examination of the COMET competence model, no written evaluation was carried out. It turned out that all students benefited from this setting in several ways. On the one hand, they learnt to view a situation holistically with the help of the competence criteria and to recognise “blind spots” in their knowledge and skills. On the other hand, they were able to build on their current state of knowledge and experience and identify topics for further in-depth study in subsequent lessons. This form of tuition has meanwhile been tested in a variety of ways. For example, in an open-book test,7 modelled on the test tasks in the COMET project, a situation was processed and assessed with slightly adapted rating items.

7 Open-book examinations allow students to use all available documents during the examination. They have free access to their own documents and books. They have free Internet access with their laptops and, therefore, also access to the online library and other resources. The only thing forbidden is mutual exchange among the students.


Fig. 11.15 Simplified presentation of criteria and items. (This representation of the requirements dimension of the COMET competence model, adapted to nursing training, was simplified linguistically and graphically for the students; cf. Fischer, Hauschildt, Heinemann, & Schumacher, 2015.)

11.3.3 Example Lesson: Nursing Relatives and Palliative Care

This example is set in the third academic year, 6 months before the diploma examinations. During a whole week, the students dealt with the topics of nursing relatives, caring and palliative care. The initial situations were 20-minute encounter sequences, which the students carried out with simulation patients.8 In a fully equipped (teaching) hospital room, an initial contact was simulated with a bedridden cancer patient and his wife, who had taken care of him at home until the current emergency occurred. The students were confronted with this simulation without a long preparation time. Table 11.7 shows the preliminary information the students received shortly before their assignment. As there was little preparation time available, they had to rely on their previous knowledge and experience. The 20-minute simulation sequence was recorded on video and then handed over to the students. Each student received a video documentation of their own sequence on a USB stick for further processing and reflection during the week of the event.

8 At the BZ-GS, we use amateur actors to simulate (“play”) certain given roles. Depending on the lesson, they receive a more or less detailed script. Within this teaching setting, they orient themselves to a basic starting position, supplemented with possible questions and topics.


The questions described in the initial situation for the simulation patients already show how the sequences were designed. The following teaching days were oriented towards the eight competence criteria of the COMET competence model. The simulated case study, which the students experienced very realistically, was viewed, analysed and discussed from different perspectives. Gaps in specialist knowledge were also dealt with, and the students were able to reflect on their own recorded situation. At the end of the week, a similar sequence was played and again recorded on video. The students were impressed by the increase in their competence in “nursing caring relatives and palliative care” during this week: how they learnt to analyse the simulated cases in all their complexity and to derive conclusions for their caring actions from their analyses.

Conclusion

After this week, the students reported significant learning progress. Precisely because this teaching took place shortly before the diploma examinations, it was important for the students to be able to assess their own level of knowledge and experience. Because the settings with the simulation patients were experienced as very real, and thanks to the video recordings, the identification of competences that still needed to be acquired was multi-layered. With the help of the video recordings, the students were able to reflect on their appearance and behaviour (appearance, interaction with patient and wife, reproduction of information, communication, technical language, facial expressions, gestures, etc.). They were constantly confronted with their expertise (Which questions was I able to answer? Where did I lack expertise?). In addition, the targeted holistic approach based on the eight COMET criteria drew their attention to other problems that they would otherwise not have “discovered”. Through this continuous process of reflection throughout the week, a variety of interrelated topics could be explored in depth. The differences between the video recordings at the beginning and at the end of the week were impressive. As central to their learning process, the students stressed that they did not work on “strange” learning examples, but that their own experienced examples formed the basis for the teaching week (Tables 11.7 and 11.8).

Table 11.7 Initial situation for students
• You work in the medical-oncological department.
• You appear at 07:00 for the early shift.
• The night shift reports on Mr. X*, who was admitted 2 hours ago, accompanied by his wife.
• Diagnosis: metastatic bronchial carcinoma, known for 1 year. Mr. X has undergone chemotherapy, which he discontinued a month ago due to his poor general condition and the severe side effects.
• Mrs. X takes care of Mr. X at home; they live in a detached house.
• Mr. X was admitted because his condition was steadily worsening, with weakness and shortness of breath becoming a problem. Breathlessness can hardly be alleviated with the portable oxygen device at home; weakness hardly permits mobility.
• This is all the information you have for the moment. You are responsible for Mr. X today and now go to his room.

* May also be Mrs. X.


Table 11.8 Initial situation for simulation patients

Initial situation (set default)
• It is 07:00 in the morning.
• Mr. X came in two hours ago, accompanied by his wife.
• Diagnosis: metastatic bronchial carcinoma, known for one year. Mr. X has undergone chemotherapy, which he discontinued a month ago due to his poor general condition and the severe side effects.
• Mrs. X takes care of Mr. X at home; they live in a detached house.
• Mr. X was admitted because his condition was steadily worsening, with weakness and shortness of breath becoming a problem. Breathlessness can hardly be alleviated with the portable oxygen device at home; weakness hardly permits mobility. Due to the increasing bed rest, there is an acute danger of decubitus (bed sores). Mr. X already has a red right heel that hurts him. (Note on shortness of breath: the shortness of breath should not be in the centre, because it is mainly about communication. Mr. X can cough and have some breathing problems, but not for the whole 20 minutes, because otherwise there is too much weight on the shortness of breath and the conversation can stagnate.)
• Mr. and Mrs. X have seen the nurses of the emergency ward (admission at night) as well as the nurse who has the night shift on the medical-oncological ward. She briefly showed them the room and gave Mr. X oxygen by nasal probe. Then, she referred to the nurse of the day shift, who will take care of the further admission formalities and give the couple further information.

Freely designable elements (discussed by each couple X with each other in advance)
• Description of living conditions (wheelchair accessible? at ground level? stairs?)
• Description of other family members (who? relationship? relationship with each other? good? tense? quarrels? other associates—not related, e.g. friends, neighbours, work colleagues? ...)
• Who supported care at home? (family members? Spitex? meal service? etc.)
• How has dying and death (and burial, if any) been addressed so far? (do the spouses talk about it? is “foreseeable death” accepted? do they rage against death? what else do you want to do from a medical perspective?)
• Are you considering moving to another institution? (e.g. hospice?)

Mandatory questions/topics for the nurse (topics should be addressed in some form if possible, but there is also room for manoeuvre here)
• Mrs. X wants to take her husband back home because she promised him that he could die in peace at home. Or:
• Mr. X absolutely wants to die at home, but Mrs. X is currently overwhelmed by the idea. Or:
• Mrs. X is visibly disappointed with herself that she was no longer able to look after her husband at home and struggles a lot with the fact that he now must be in hospital “because of her”. Accordingly, she is very critical of all nursing activities and often “complains”.
• Mr. and Mrs. X ask about support possibilities at home (“technical” support such as a nursing bed, but also “spiritual” support such as a self-help group, pastoral care, opportunities for conversation, etc.).
• Mr. and Mrs. X have heard that palliative care is offered here on the ward. However, they do not know what this term means and ask the nurse.
• Questions about dying, death, grief, burial, ...

Possible questions to the nurse (can be included depending on the situation)
• Possible questions regarding assisted dying.
• Possible questions about problems within the family (disagreement about how to proceed, some find that chemotherapy should definitely be continued, others confirm that Mr. X stopped chemotherapy, etc.).
• There may be a family member with whom the spouses are in dispute, and the spouses X do not want this person to be given information about the state of health.
• Various possible questions/anxieties about dying, dying phases, pain, shortness of breath, nutrition, infusions, life-extending measures, legal issues, death, grief, ...
• And much more ...

11.3.4 Example Lesson: CPR—Cardiopulmonary Resuscitation

Resuscitation courses are regular units during the 3 years of study. As a rule, the units on basic life support (BLS) and advanced life support (ALS) are offered together with other topics over 2 days. The instructions for the resuscitation measures are primarily characterised by flow charts and algorithms. This suggests that there can be no doubt about what is right and what is wrong. However, the dimension of ethical decision-making alone already makes it clear that resuscitation, too, is about standard-oriented, clever solutions. It quickly became apparent to us that the COMET dimensions were suitable structuring aids in this area as well. Here is an example of one of three BLS/ALS units in the second academic year: the students were provided with a document with two pictures. The pictures are starting points for teaching about shock management, cardiac arrhythmias, cardiac output and resuscitation measures. The students were given the task of forming groups of a maximum of four people and then describing a realistic story, a situation they had experienced, which matched the pictures. We hoped, among other things, that the narratives would explicate “implicit” knowledge. The students also had the opportunity to use their portfolios or the patient documentation tool to draw on a concrete situation that they had already described. Then the students had to choose one of the stories to work on. The questions shown in Table 11.9, most of which were based on the COMET dimensions and items, were to be dealt with. After the discussion within the groups, the examples, answers and findings were discussed and further deepened. Very soon, it became clear that the at first glance rather “technical” situation of a resuscitation covers all dimensions in a broad and complex manner. Various questions were, therefore, discussed and transferred to the dimensions. As an example, it became clear how important it is, within the dimension of work-process orientation, to argue with the involved services (e.g. medical service) using the correct technical terms. After this sequence, the students reported great learning success at various levels.


Table 11.9 Questions to be dealt with in relation to the stories

Questions to be dealt with in relation to your story
1. What caused you to favour this story?
2. Problems:
• Presentation of the “main problems” arising from the overall situation of the chosen story.
• Which reasons and reference standards arise for the defined problems?
• What additional significance do the following criteria have in this story: efficiency, costs/benefits, personnel resources/skill and grade mix, follow-up costs?
• Which hygienic features must be observed?
• What is important regarding personal health protection?
• Which relevant issues concerning the social/sociocultural level might have an influence on the situation?
3. Solutions:
• Which solution approaches are suitable to stabilise or improve the overall situation?
• Which work processes are particularly important in this situation?
• What related reference standards do I know?
4. Resources:
• Which knowledge and skills do I need to activate in order to cope with the situation/problems sustainably?
• What do I know for sure—what do I want to deepen and differentiate?
5. Communication:
• Visualise everything in a suitable form.
• How do I present the situation to the other groups so that it is easy to understand?
• Note any questions that may have arisen during processing.

The criteria for a complete task solution

Clarity/presentation
• What situations could I be confronted with in my field of work that could lead to resuscitation?
• What infrastructure is available to me?
• What tasks can I assume during resuscitation?
• What tasks would I like to assume during resuscitation?
• Which tasks during resuscitation do I feel unsure about?

Functionality/professional solutions
• How is resuscitation performed?
• What variations are there?
• How can I prevent resuscitation?
• Can I evade/refuse to help during resuscitation?
• When is a person dead?
• Which score systems can I apply?

Sustainability
• When is resuscitation mandatory?
• When is resuscitation not performed?
• How are living wills observed?
• Organ donation and resuscitation: what should I pay special attention to?

Efficiency/cost-effectiveness
• How many people does it take to resuscitate?
• What assistance is available?
• How long does one resuscitate?

Work-process orientation
• Who determines what during resuscitation?
• Which partner services are helpful for resuscitation?
• Who do I involve in which situations?
• Process following resuscitation—from the patient’s perspective?
• Process following resuscitation—from the expert’s perspective?

Social and environmental compatibility
• What standards must I adhere to during resuscitation?
• From which standards can I deviate during resuscitation?
• What protection do I need?

Family/sociocultural context
• In what situations can relatives be present during resuscitation?
• How do I communicate what and when with relatives, the press, etc.?
• Which cultural/ethical/philosophical peculiarities are to be considered (e.g. a “DNR” stamp)?

Creativity
• What creative aspects can be required of me during resuscitation?
• What aesthetic aspects can be important during resuscitation?

11.3.5 Examinations

The examples described above show a wide range of possibilities for integrating COMET into examination settings. Two variants, which demonstrate the creative use of the basic COMET principles very well, are explained here as examples.

Synthesis Examination as an Example

The first example is an oral synthesis examination in the second year of study. The basis was the tuition of the entire 12-week school block as well as the knowledge imparted since the beginning of studies. Based on a real, complex patient situation,9 the students dealt with the COMET competence model during the preparation period and described their thoughts and assessments with concrete reference to the competence criteria (see the information on preparation, general conditions and assessment criteria for the students in Table 11.10). For example, the synthesis expert discussion focused on work-process orientation, social and environmental compatibility, sustainability, cost-effectiveness and other competence criteria. The discussions revealed a broad thematic diversity, which was summatively assessed with the help of the items. With this and similar examination settings over the course of the three academic years, we prepare the students for the examination interviews that take place at the end of training.

9 The written 3-page patient situation contains information on medical history, diagnosis, admission reason and situation, procedure, medication, treatment plan, nursing diagnoses and previous course of hospitalisation. The personal data are anonymised for data protection reasons, but originate from a real situation, which is why the content is not reproduced here.


Table 11.10 Preparation and general conditions of the synthesis examination
• Introduction: As part of an introductory sequence, students receive relevant information about the exam setting as well as a case study (a real, anonymised patient example) 2 weeks before the exam day.
• Preparation: Each student prepares for the interview individually. The focus is on the clinical picture, medication and care process. Nursing diagnoses are listed in the case study; goals and interventions of a selected diagnosis are formulated by the students themselves. In addition, all subject areas previously taught in class may be used.
• Presentation: For this exam, we give the students free rein regarding creativity (PowerPoint presentations, mind maps, flip charts, slides, collages, case notes, etc.).
• The assessment is based on the competence level at the end of the second training year and on the competence criteria and items of the COMET model.
• The expert discussion takes place in groups of two with one teacher and lasts 60 minutes. Approx. 25 minutes are available per student. The interview is opened with a five-minute presentation of the patient’s situation in accordance with the preparation assignment; this is then continued in accordance with the evaluation criteria. The last 10 minutes are available for spontaneous questions, transfer questions, clarifications and additions or discussions.
• Students receive their results on the following day.
• Assessment criteria: see Table 11.11.

Diploma Examination as an Example

The final examination interview is explained here as a second example. This was certainly one of the most decisive moments for the training construct: demonstrating the stringency and, ultimately, the credibility of our competence-oriented training. The oral interview is one of three elements of the final qualification process,10 lasts 40 minutes and takes place in the last 12 weeks of the last year of training. The training companies are also involved in the examination interview and its evaluation by an expert. The basis for the interview at our school is a real patient situation from the field of work of the person to be tested. This creates the framework within which the candidate can give a broad presentation of his or her planning and reasoning skills. In this form of final examination, complexity is not trivialised, and knowledge content is not atomised and broken down into subjects. Given the steering effect that final qualification methods have on teaching and learning, their design will largely determine what those teaching will teach and what those learning will learn. This is also where the stringency of a genuinely competence-oriented education becomes apparent in the final analysis. The dimensions that are evaluated:

• The description of the situation contains the essential information and is presented systematically.

10 The other two parts are a practice-oriented written diploma or project thesis on the one hand and the practical training qualification on the other: the final practical assessment is conducted by the training company in the second half of the last practical training period.


Table 11.11 Assessment criteria for synthesis examinations (each criterion is rated from 0 to 3 points; a comments columna is provided)

1. Clarity/presentation
• To whom do I have to present the solution?
• Do I use an adapted form of communication?
• How can I present the solution comprehensibly?
• Is the presentation clear and structured?
• Do I use suitable assessment tools, reports, etc. for communication/presentation?

2. Work-process orientation
• Where and how does the concrete situation influence the organisation and leadership?
• Which work processes are also affected?
• Which persons, groups and organisations are also affected?
• What competences does this require?
• Is cooperation inter-, intra- or transdisciplinary?

3. Family/sociocultural context
What aspects need to be considered regarding:
• The family context?
• The institutional-social context?
• General conditions?
• The social environment?
• The cultural context?

4. Sustainability
• What does a sustainable solution look like (avoiding a revolving-door effect)?
What do I have to consider regarding:
• Health promotion and prevention?
• Autonomy?
• Social integration?
• What must the basic effect be?

5. Efficiency/cost-effectiveness
• What does an efficient strategy look like?
• What resources are available (time, personnel, financial, etc.)?
• What level of quality is required?
• Which expenditure must/can be applied?
• What follow-up costs could arise?
• How is society/the system affected?

6. Functionality/professional solutions
• What is the justification for my strategy/solution?
• What are the current technical findings (evidence)?
• Is the ..... feasible (solution/intervention)?
• What are the professional contexts?
• How can I present the .... correctly?
• Are my statements well-founded?

7. Creativity
• What scope for design is offered and how can I exploit it?
• What are the “main problems”?
• Is my solution approach:
  – Sensible?
  – Broadly based?
  – Aesthetic?
  – Creative?

8. Social and environmental compatibility
Significance for:
• Environment, hygiene and health protection?
• Occupational safety and accident prevention?
• Ergonomics?
• Work and organisational design?
• What consequences do I expect on the social/sociocultural level?

a Comments are mandatory if the rating is 0 or 1 point.

• Nursing problems, focal points and objectives are identified and justified.
• Natural and social science problems, focal points and objectives are identified and justified.
• Activating, preventive and/or health-promoting measures are identified and a position is taken on their possibilities and limits.
• Professional policy problems, focal points and objectives are identified and justified.
• Management problems, focal points and objectives are identified and justified.
• Ethical aspects are critically reflected.
• Concepts, models and theories are used for analysis, planning and justification, including evidence.
• Linguistic expression is differentiated, and professional terminology is used correctly.

If we compare our items with the COMET model’s criteria of the complete (holistic) solution of professional tasks, we can assign all evaluation groups to the COMET criteria. We have described the solution space with “fulfilment standards”. Consequently, we can also use the COMET criteria to evaluate the final examinations.
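To illustrate how an assessment sheet like Table 11.11 might be handled digitally, here is a minimal sketch (Python; the class and function names are our own invention, not part of any COMET software). It represents one rater’s sheet and enforces the footnote rule that a comment is mandatory whenever a criterion is rated 0 or 1 point:

from dataclasses import dataclass
from typing import Optional

@dataclass
class CriterionRating:
    criterion: str              # e.g. "Clarity/presentation"
    points: int                 # 0..3, as in Table 11.11
    comment: Optional[str] = None

def validate(sheet):
    # Returns a list of problems; an empty list means the sheet is complete.
    problems = []
    for r in sheet:
        if r.points not in (0, 1, 2, 3):
            problems.append(f"{r.criterion}: invalid score {r.points}")
        elif r.points <= 1 and not r.comment:
            # Footnote a of Table 11.11: comments are mandatory for 0 or 1 point
            problems.append(f"{r.criterion}: a comment is required for a rating of {r.points}")
    return problems

sheet = [CriterionRating("Clarity/presentation", 2),
         CriterionRating("Sustainability", 1)]   # mandatory comment missing
print(validate(sheet))  # ['Sustainability: a comment is required for a rating of 1']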

Examinations: Conclusion

The two examples above and our experience with examination settings during the course of training and in the oral diploma examinations at the end of the three-year course of study in HF nursing confirm that the COMET competence criteria and the corresponding items meet the quality criteria of validity, objectivity and reliability to a high degree. Our experience with the quality criteria is thus clearly in line with the statements made by Rauner et al. (2015a) in their “Feasibility study on the use of the COMET test procedure for examinations in vocational education and training”. Based on our experience, we also clearly agree with the conclusion written by


Rauner et al. (cf. Rauner et al., 2015a, 31–32). The inclusion of the basic ideas of the COMET model confirms our conviction that our examination settings can comprehensively capture the competences of our students. It should be mentioned that the teaching team is also convinced that the acquisition of factual knowledge is an indispensable part of competence development. However, competence-oriented teaching should ideally not only ask for factual knowledge in exam questions, since holistic problem solving requires valid, open examination tasks. Consequently, there cannot simply be a right or wrong; rather, there are acceptable justifications and strategies within a defined solution space. We implement this continuously and successfully in our understanding and design of examinations.

11.3.6 Suggestion for Lesson Preparation

As can be seen from the examples, the COMET method is not only a scientific evaluation tool, but also a didactic model for learning what can be understood in the discussions by the term “the whole”. Since the start of the project in 2012 (Gäumann-Felix & Hofer, 2015), the COMET competence model has also found its way into the didactic concept for lesson planning, as the following example shows. The “suggestion for lesson preparation” is composed as follows:

• Personal reference
  – My reference, my resources and competences in relation to the topic/problem
• Meaningfulness of schooling for learners/students
  – What relevance does the topic/problem have in the specific occupational field?
  – What relevance does the topic/problem have in general?
  – Significance of the topic/problem: in the past, currently, in the future, from the practical perspective, from the theoretical perspective
• General conditions
  – Curricular guidelines and focal areas
  – What needs to be tested?
• The situation
  – Which current and concrete situation/problem fits, with regard to the work area and with regard to the learners?
• Key points of the situation and the solution space (according to COMET, incl. the 40 items), with regard to:
  – Clarity/presentation
  – Functionality/professional solutions
  – Sustainability
  – Efficiency/cost-effectiveness
  – Work-process orientation
  – Social and environmental compatibility
  – Family/sociocultural context
  – Creativity
• Structure/methodology concerning the participants
  – Time budget and priorities
  – Appropriate problem confrontation (in addition to the situation)
  – Methods (activities)
  – Documents for the information and data

Portfolios and Patient Documentation Tool

In the above examples, it has not yet been mentioned that the BZ-GS works intensively with portfolios managed by the students and with an electronic patient documentation tool. Both are instruments based on real-life situations and patient examples. Here, too, there are innumerable variants for integrating COMET. The aim of our patient documentation tool is to provide teachers and students with an instrument with which real patient situations can be recorded, processed, further developed and reflected upon. The patient documentation tool is available at all learning venues (school & LTT school, practice & LTT practice) and can be used in various fields (at the BZ-GS Canton of Solothurn, specifically in acute somatics, psychiatry, long-term care and Spitex). The password-protected tool facilitates:

• Recording of new cases
• Processing and further developing existing cases
• Reflection on individual steps of the process and making considerations regarding the individual steps transparent for others
• The design of examinations—module degrees as well as the final qualification procedure
• Feedback—teachers/students or “peer-to-peer”

In this project phase, the COMET competence criteria are integrated into the tool. This makes it possible to illuminate real or fictitious patient situations with the help of the corresponding criteria. Here, too, COMET enables quality assurance in the sense of a holistic approach to patient situations. Two real extracts in Table 11.12 show how the competence criteria are examined in the context of the documentation of real practical situations for the final oral expert discussions (part of the diploma examination).

Table 11.12 Excerpts from described competence criteria: examples of COMET references in the patient documentation tool
• Sustainability dimension: It is highly probable that the incremental structure and exact implementation of the staged anorexia concept will lead to long-term success.
• Efficiency/cost-efficiency dimension: The recommended exit solution (assisted living) could not be implemented because the family’s financial situation was difficult at that time.
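The internal structure of the BZ-GS tool is not documented here. Purely as an illustration, a case record that links learning venue, field of practice and criterion-related reflections (as in Table 11.12) could be modelled along the following lines (Python; all field names are hypothetical):

from dataclasses import dataclass, field

@dataclass
class CaseRecord:
    # All names are hypothetical sketches, not the actual BZ-GS data model.
    title: str
    learning_venue: str        # "school", "LTT school", "practice" or "LTT practice"
    care_setting: str          # e.g. "acute somatics", "psychiatry", "long-term care", "Spitex"
    description: str           # anonymised patient situation
    criterion_notes: dict[str, str] = field(default_factory=dict)  # keyed by COMET criterion
    feedback: list[str] = field(default_factory=list)              # teacher or peer-to-peer

record = CaseRecord(
    title="Staged anorexia concept",
    learning_venue="LTT school",
    care_setting="psychiatry",
    description="(anonymised case text)",
)
# A criterion-related reflection in the style of Table 11.12:
record.criterion_notes["Sustainability"] = (
    "The incremental structure of the staged concept makes long-term success highly probable."
)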


11.3.7 Conclusion

The examples illustrate how the COMET competence model can be used for the planning, implementation and evaluation of teaching and examinations. Students learn to look at situations from different perspectives, including ones that they had previously often forgotten or neglected and that at first glance do not appear to be common “everyday topics” in student practice. Teachers experience the COMET method as an ideal supplement for the preparation and implementation of vocational training based on the guiding principle of holistic education, which helps them not to lose sight of the interrelationships between complex professional tasks. The teachers further emphasise that the COMET method prevents the complexity of professional fields from being trivialised and the knowledge content from being atomised and broken down into subjects. The facts are not taken out of context. This is also ensured by the concrete and authentic working and learning situations that form the centre of the lessons. The results of the competence measurements carried out within the framework of the COMET project prove the success of the strategy documented in the teaching examples: to take the complexity of the work and learning processes seriously and to understand the heterogeneity in the educational programmes as a resource and a challenge. For teachers, this also means meeting the contextual requirements of educational practice with a high degree of didactic creativity and flexibility. The results of the COMET project confirmed our competence orientation, which has been anchored in our work for years. Consequently, it is not surprising that even after completion of the project, the COMET model remains an integral part of our everyday work. The eight competence criteria provide an optimally suited reference standard for the construct of “holism”, one that can guide many processes at a school (cf. Gäumann-Felix & Hofer, 2015).

Appendix A: The Four Developmental Areas

Developmental Area 1: Orienting Work Tasks—Knowledge for Orientation and Overview

Job starters already have some prior experience of and knowledge about their occupation, which they selected not least on the basis of this prior knowledge. At the beginning of their training, they are introduced to orienting work tasks that give them an opportunity to gain an overview of the work in this occupation. Novices work on these tasks systematically and in accordance with existing rules, prescriptions and quality standards. This first learning area is thus characterised by the acquisition of professional knowledge for orientation and overview that allows the trainees to become aware of the structure of the training occupation from a professional perspective. At the same time, they experience the diverse requirements of work processes and the integration of these processes into the development and innovation processes in the enterprise. Work and technology are thus experienced also as phenomena that can be structured by the people involved.

Developmental Area 2: Systemic Work Tasks—Integrated Professional Knowledge

The advanced beginner, who already has concrete ideas of the occupation from the perspective of application and utilisation and who has acquired some relevant competences, now encounters systemic work tasks for the development of integrated professional knowledge (perspective of systems architecture). The relationship and interaction of skilled worker, technology and work organisation also requires an integrated view. The mastering of systemic tasks means that the trainees fulfil these tasks with a view to the context and in consideration of the systemic structure of technology and work organisation.


At this second level of vocational learning, the basic concept of the occupation formulated at the first level and the integrated professional knowledge can lead to a reflected professional identity when the educational potentials of the corporate work environment are exploited.

Developmental Area 3: Problem-Oriented Special Work Tasks—Knowledge of Details and Functions

The professional knowledge for orientation and overview, the integrated knowledge and the ability to solve tasks systematically enable the trainees at the third level to work on problem-oriented special work tasks. The solution of these tasks is no longer possible on the basis of pre-defined rules and patterns. The task includes some novelty that is not fully covered by the problem-solving strategies applied to former tasks. The trainees need to analyse the task first and to identify the problem in order to plan their activities. The paradigm of the holistic and complex work activity, which was developed in the 1980s, and the associated capacity for independent planning, implementation, control and evaluation of professional work tasks correspond to the third step of the logical structuring of vocational education. At this level, the professional identity leads to professional responsibility as a condition for performance (intrinsic motivation) and quality awareness as an essential condition for the fulfilment of complete work tasks in problematic work contexts.

Developmental Area 4: Unpredictable Work Tasks—Experiential and Systematic In-Depth Knowledge

When the trainees have developed a sufficient understanding of the tasks of professional work, they can gain experience with the handling of non-routine situations and problems. Unpredictable work tasks that are too complex to be fully analysed in the concrete work situation, so that they cannot simply be mastered systematically, put high demands on the trainees on their way to the level of competent professionals. Competence in this case is based on knowledge about previous tasks where the constellation was at least similar, on the anticipation of possible strategies, on theoretical knowledge and practical skills as well as on intuition. Problems are solved in a situative way without the necessity to calculate the activity with all its preconditions and consequences in detail. The aim at the fourth level of this model of vocational education is to integrate reflected professionalism with subject-specific competence in order to open the opportunity for higher education. The aptitude for higher education emerges from an extended self-conception, which is not so much rooted in a narrowly defined occupational profile, but rather in a career path that is associated with this occupation.

The four developmental areas according to which vocational training courses can be arranged in a developmentally logical manner.

Appendix B: Rating Scale

Rating sheet COMET South Africa 2011–2016 (Code: Teacher). Each item is rated on a four-point scale: requirement is not met at all / rather not met / rather met / fully met.

(1) Clarity/presentation
• Is the solution’s presentation understandable for the client/orderer/customer/employer?
• Is the solution presented on a skilled worker’s level?
• Is the solution visualised (e.g. graphically)?
• Is the presentation of the task’s solution structured and clearly arranged?
• Is the presentation adequate (e.g. theoretically, practically, graphically, mathematically, causative)?

(2) Functionality
• Is the solution operative?
• Is the solution state of the art?
• Are practical implementation and construction considered?
• Are the relations to professional expertise adequately presented and justified?
• Are presentations and explanations correct?

(3) Use value/sustainability
• Is the solution easy to maintain and repair?
• Are expendabilities and long-term usability considered and explained?
• Is countering susceptibility to faults considered in the solution?
• How user-friendly is the solution for the direct user?
• How good is the solution’s practical use value (e.g. of some equipment) for the orderer/client?

(4) Cost-effectiveness/efficiency

• Is the solution efficient and cost-effective?
• Is the solution adequate in terms of time and persons needed?
• Does the solution consider the relation between time and effort and the company’s benefit?
• Are follow-up costs considered?
• Is the procedure to solve the task (work process) efficient?

(5) Orientation on business and work processes
• Is the solution embedded in the company’s work and business processes (in company/at the client)?
• Is the solution based on work experiences?
• Does the solution consider preceding and following work/business processes?
• Does the solution express skills related to work processes that are typical for the profession?
• Does the solution consider aspects that go beyond the particular profession?

(6) Social compatibility
• To what extent does the solution consider possibilities of a humane work organisation?
• Does the solution consider aspects of health protection?
• Does the solution consider ergonomic aspects?
• Does the solution follow the relevant rules and regulations regarding work safety and prevention of accidents?
• Does the solution consider social consequences?

(7) Environmental compatibility
• Does the solution consider the relevant environmental regulations?
• Do the materials used comply with criteria of environmental compatibility?
• To what extent does the solution consider an environmentally friendly work organisation?
• Does the solution consider recycling, reuse and sustainability?
• Does the solution address possibilities of energy saving and better energy efficiency?

(8) Creativity
• Does the solution include original aspects in excess of the solution space?
• Have different criteria been weighted against each other?
• Has the solution some creative quality?
• Does the solution show awareness of the problems?
• Does the solution tap the task’s leeway?

Rating sheet as used in the tests COMET South Africa 2011–2016
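The verbal four-point scale of these rating sheets corresponds to the numeric 0–3 columns used in the nursing evaluation sheet below. As a purely illustrative sketch (Python; the averaging rule is our assumption, not the scoring procedure defined for the COMET studies), the item ratings of one criterion could be aggregated as follows:

# Maps the four verbal rating categories to points, matching the
# 0-3 columns of the nursing rating scale that follows.
SCALE = {"not met at all": 0, "rather not met": 1, "rather met": 2, "fully met": 3}

def criterion_score(item_ratings):
    # Simple mean over the (typically five) items of one criterion;
    # an illustrative aggregation, not the official COMET scoring.
    points = [SCALE[rating] for rating in item_ratings]
    return sum(points) / len(points)

# Example: the five "Clarity/presentation" items of one test taker
clarity = criterion_score(
    ["fully met", "rather met", "rather met", "fully met", "rather not met"]
)
print(round(clarity, 2))  # 2.2 points out of a maximum of 3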

Evaluation sheet (rating scale) for large-scale projects in the field of nursing and health care professions: measurement of cognitive dispositions. Each item is rated on a four-point scale: requirement is not met at all (0) / rather not met (1) / rather met (2) / fully met (3).

(1) Clarity/presentation
• Is the presentation form of the task solution suitable for discussing it with clients, patients, parents, relatives, etc.?
• Is the task solution adequately presented for experts (colleagues, superiors)?
• Is the solution of the task illustrated (e.g. by means of risk assessment scales, documentation sheets, etc.)?
• Is the presentation of the task’s solution structured and clearly arranged?
• Is the presentation of the task solution appropriate to the issue (e.g. technical, technical-practical, language-based, etc.)?

(2) Functionality
• Is the task solution technically justified?
• Is the state of technical knowledge taken into account?
• Is practical feasibility taken into account?
• Are the professional contexts adequately presented and justified?
• Are the descriptions and explanations correct?

(3) Sustainability/use value
• Is the task solution aimed at long-term success?
• Are aspects of health promotion and prevention taken into account?
• Is the task solution aimed at encouraging self-determined, autonomous action?
• Does the task solution aim at fundamental effects?
• Is the aspect of social inclusion taken into account?

(4) Cost-effectiveness/efficiency
• Is the implementation of the solution economical in terms of material costs?
• Is the implementation of the solution appropriate (justified) in terms of time and human resources?
• Is the relationship between effort and quality considered and justified?
• Are the follow-up costs of implementing the solution variant considered and justified?
• Is the efficiency of the solution also considered in terms of social costs at system level?

(5) Orientation on business and work process
• Is the solution embedded in the process and organisational structure of the institution?
• Are the upstream and downstream tasks and processes considered and justified in the solution?
• Does the solution include the transfer of all necessary information to everyone involved in the care process?
• Is the solution an expression of typical work process-related skills?
• Does the solution take into account aspects that go beyond the boundaries of one’s own professional work (involvement of other professionals)?

(6) Social compatibility
• To what extent does the proposed solution take into account aspects of humane work and organisational design?
• Are the relevant rules and regulations of hygiene and health protection considered and justified?
• Are ergonomic design aspects considered and justified in the proposed solution?
• Are the relevant rules and regulations for work safety and accident prevention considered?
• Are aspects of environmental protection and sustainable management considered and justified?

(7) Family/social-cultural context
• Is the family context taken into account in the analysis and development of a proposed solution?
• Are the institutional social framework conditions taken into account?
• Are the task-related aspects of the social milieu taken into account?
• Are the cultural aspects of the task (e.g. migration background) analysed and taken into account in the justification of the task solution?
• To what extent are social/socio-cultural consequences also taken into account in the solution?

(8) Creativity
• Does the solution contain elements that extend beyond the expected solution space?
• Is an unusual and at the same time sensible solution being developed?
• Does the solution have a design (e.g. aesthetic) quality?
• Does the solution show problem sensitivity?
• Is the creative leeway offered by the task exhausted in the solution?


Only items that differ from rating scale A are listed.

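To make the use of the four-level scale concrete, the following minimal Python sketch shows one way rater scores could be aggregated into a competence profile. The equal weighting of items and criteria is an assumption for illustration only; the authoritative COMET scoring and normalisation rules are defined in the measurement model itself.

from statistics import mean

# The eight criteria of the evaluation sheet above.
CRITERIA = [
    "Clarity/presentation", "Functionality", "Sustainability/use value",
    "Cost-effectiveness/efficiency", "Orientation on business and work process",
    "Social compatibility", "Family/socio-cultural context", "Creativity",
]

def competence_profile(ratings):
    """ratings maps each criterion to its item ratings on the 0-3 scale."""
    # Assumption: a criterion score is the plain mean of its five items.
    return {criterion: mean(ratings[criterion]) for criterion in CRITERIA}

# Hypothetical ratings for one candidate's solution (five items per criterion):
example = {criterion: [2, 2, 1, 3, 2] for criterion in CRITERIA}
profile = competence_profile(example)
total_score = sum(profile.values())  # simple unweighted total across the criteria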

Appendix C: Examples for Test Tasks

Note The representation of the solution spaces is deliberately not standardised. The solution spaces for the different learning situations therefore have different forms of presentation. This gives readers the opportunity to try out the different variants when creating their own learning situations.

Example Millwright

COMET Test Task: Signals

Situation
A railway crossing without barriers in a remote location is to be secured by light signals. The signals shall supplement the existing four St. Andrew's crosses (see Fig. C.1). The level crossing is located in a water protection area. The power supply for the signals and the controls has a voltage of 24 V DC (±20 %). The next public power line connection is 2 km away.

Between 7:00 am and 8:00 pm, two trains per hour operate on the single-track railroad; for the rest of the day, one train per hour passes the crossing. The maximum speed of trains on this part of the line is 80 km/h. In accordance with the relevant regulations, the signals have to start operating 30 seconds before a train reaches the crossing. The signalling consists of an initial phase of yellow light and a subsequent phase of red. The estimated power requirements of the system are as follows:


Controls: 5 W continuously
For each passage of a train: 5 seconds yellow light, then ca. 1 minute red light
Power consumption for each light: 60 W

Fig. C.1 Map and close-up of the signals

The operator of the railway line expects a guaranteed independent power supply for the system over a period of at least 7 days.

Assignment
Prepare documentation of the system that is as complete as possible. If you have further questions, e.g. to the client or workers from other trades, please note them down in preparation for a meeting. Please give a detailed and comprehensive explanation of your proposal, taking into account the following criteria:
• The functionality of a good complete solution.
• The clear presentation of your solution so as to be able to discuss it with customers and work superiors.
• The utility and economy of the proposal.
• The aspects of environmental compatibility and related regulations.
• The effectiveness of the process and its integration into the business operations.
• The aspects of (work) safety and health.
• Finally, you are always encouraged to show your creativity.

Auxiliary material
You may use all of the standard materials such as table manuals, textbooks, your own notes and a pocket calculator.


Solution Space Test Task: Signals

Indicator 1: Clarity/presentation
• The structure of the system is clearly described (ground plan).
• The method of power supply is clear (circuit diagram).
• The work steps are clear (work plan).
• The functioning is clear (programme).
• The functioning of the system is described.
• The necessary components are indicated (bill of materials).

Indicator 2: Functionality

• Specification of the energy requirements (50 trains per day, 125 W maximum power, approx. 240 Wh per day)
• The method of power supply guarantees independent functionality for 7 days, for example, solar panels with backup battery
• Dimensioning of the photovoltaic modules (two in line for 24 V)
• Estimated energy conversion efficiency of the solar modules (monocrystalline): η_S = 0.17
• Estimated energy conversion efficiency of the charge controller: η_C = 0.95
• Worst case: winter month with an insolation of approx. E = 0.9 kWh/m²
• Module surface: A = Q_d / (η_S · η_C · E) = 0.24 kWh / (0.17 · 0.95 · 0.9 kWh/m²) ≈ 1.65 m²

Alternatively:
• Standard module with 40 cells (10 × 10 cm each), output approximately 50 W under favourable conditions.
• A desired maximum output of 125 W would require three modules plus one module in order to allow for series circuits with two modules each.
• Input with uninterruptible power supply or generator (here, the voltage difference must be considered as well as the separation of circuits).
• The power supply delivers a maximum of 250 W.
• Appropriate protective means were selected (difference between network-powered and solar-powered operation).
• Appropriate control sensors were selected and adequately placed, for example, induction loops for detecting the axles (minimal distance s = 80 km/h × 30 s ≈ 667 m).
• Functionality of the control system is guaranteed.
• Disturbances are taken into consideration.
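The figures in this functionality indicator can be checked with a short calculation. In the Python sketch below, the assumption that two signal lights burn per passage is inferred from the stated 125 W peak and is not given explicitly in the task; everything else follows from the task data and the solution-space formula.

# Worked check of the "Signals" solution-space figures.
CONTROL_POWER_W = 5          # controls run continuously (from the task)
LIGHT_POWER_W = 60           # per light (from the task)
N_LIGHTS = 2                 # assumption: two lights per passage (matches the 125 W peak)
TRAINS_PER_DAY = 50          # figure used in the solution space
SIGNAL_SECONDS = 5 + 60      # 5 s yellow, then ca. 1 minute red

peak_w = CONTROL_POWER_W + N_LIGHTS * LIGHT_POWER_W             # 125 W
daily_wh = (CONTROL_POWER_W * 24                                # controls: 120 Wh/day
            + TRAINS_PER_DAY * SIGNAL_SECONDS / 3600 * N_LIGHTS * LIGHT_POWER_W)
# ≈ 120 + 108 ≈ 228 Wh/day, consistent with the 0.24 kWh/day in the formula above

ETA_SOLAR, ETA_CHARGER = 0.17, 0.95   # module and charge-controller efficiencies
INSOLATION_KWH_M2 = 0.9               # worst-case winter day
module_area_m2 = 0.24 / (ETA_SOLAR * ETA_CHARGER * INSOLATION_KWH_M2)  # ≈ 1.65 m²

print(peak_w, round(daily_wh), round(module_area_m2, 2))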


Indicator 3: Utility
• The system can be adapted to other situations (e.g. time adjustment, transferability to other systems)
• Maintenance is possible by client staff, instructions of use were provided
• Malfunctions are signalled appropriately

Indicator 4: Economy
• The system requires little maintenance.
• Standard components are used.
• Procurement/production costs, operational costs, maintenance costs, follow-up costs.

Indicator 5: Work and business process
• It is clearly indicated what needs to be arranged with the client for the installation (At what time can the work be done? Who is responsible for the safety? Will the railroad be closed during the installation works?)
• Does the work plan take into account more than one worker?
• Third-party suppliers were included, for example, for preparing the foundations.
• There are suggestions for a maintenance plan.

Indicator 6: Social compatibility
• The safety of the workers is guaranteed (protective equipment, posts).
• Appropriate technical equipment, for example, hoists, barriers
• Is the status of the system communicated to the train driver?
• Were alarms taken into consideration?

Indicator 7: Environmental compatibility
• No hazardous material (e.g. battery acid, fuel) can leak into the environment.
• If hazardous material is used, instructions are given for safe use and disposal.
• Application of LED instead of light bulbs

Indicator 8: Creativity
• The system is equipped with an automatic signalling and alarm system.
• The design and structure of the system suit the surroundings.
• The system detects when a train is stopping on the railway crossing.


Example Electrician

Skylight Control Test Task

Background situation
A company produces equipment for aircraft kitchens in a two-shift mode (Monday to Friday from 6 am to 10 pm, Saturday from 6 am to 2 pm). Up to now, the four skylights of the heated assembly hall have been opened and closed decentrally by means of four separate crank handles (see Fig. C.2). Due to this time-consuming manual control, it sometimes happens that workers forget to close the skylights at the end of the day. There have also been incidents in which open skylights were damaged during stormy weather. The management requests a new skylight control system that is safer and more comfortable. In a meeting, further specifications are formulated:
• "The skylights ought to be opened and closed centrally."
• "When the temperature in the workspace within the hall gets too high, the skylights have to open."
• "There is an enlargement of the assembly hall scheduled for the next year."

Assignment
Prepare complete documents for the revision of the system. If you have further questions, for example, to the client, the users or workers from other trades, please note them down in preparation for a meeting. Please give a detailed and comprehensive explanation of your proposal, taking into account the following criteria:
• The functionality of a good complete solution.
• The clear presentation of your solution so as to be able to discuss it with customers and work superiors.
• The utility and economy of the proposal.
• The aspects of environmental compatibility and related regulations.
• The effectiveness of the process and its integration into the business operations.
• The aspects of (work) safety and health.
• Finally, you are always encouraged to show your creativity.

Auxiliary material
You may use all of the standard materials such as table manuals, textbooks, your own notes and a pocket calculator.


Fig. C.2 Close-up of a skylight and sketch of the assembly hall

Solution Space: Skylight Control

Indicator 1: Clarity/presentation
• Did the candidate present a technology scheme or any other sketch with explanations?
• Did the candidate prepare a clear bill of materials (e.g. table) showing the required components or materials?
• Are there appropriate circuit diagrams?
• Are special features highlighted by colours?

Indicator 2: Functionality
Realisation is possible by means of contactors, a programmable logic controller (LOGO, Easy) or EIB. Solutions that use a programmable logic controller should be given better marks with a view to the convenient enlargement options.

Bill of materials (example):
• 4 × button for opening skylight
• 4 × button for closing skylight
• 1 × wind sensor
• 1 × rain sensor


• 1 × temperature sensor
• 1 × controller
• 4 × motors with clockwise and counterclockwise rotation (possibly with automatic stop, in which case end sensors are unnecessary)
• 4 × relay for clockwise rotation
• 4 × relay for counterclockwise rotation
• Wiring material
• Installation material
• Fuses or circuit breakers as necessary

The control can be installed in the existing distribution board.

Wiring (example):
• Distribution board to skylight motors (4 wires)
• Wind sensor (on the roof) to distribution board (4 wires)
• Rain sensor (on the roof) to distribution board (4 wires)
• Temperature sensor (at a representative place in the hall) to distribution board (3 wires)
• Button panel (near the door) to distribution board (9 wires)

When a programmable logic controller is used, the integrated time switch can be used for closing the skylights at the end of the working day (the control behaviour is sketched after the solution space below).
• Would a proposed skylight control be operative from a technical point of view?
• Are the explanations and diagrams correct from a technical point of view?
• Have the stop switches been implemented correctly?
• Have the sensors (wind, temperature) been implemented correctly?
• Is it possible to open and close the skylights?

Indicator 3: Utility
• Easy operation, easy adaptation to changing requirements by programmable controller, choice and placement of sensors, instructions for maintenance (e.g. for the motors)
• Can the explanations and diagrams be understood by a non-expert as well?
• How convenient is the operation of the skylights for the user?
• Are there status signals and alert signals?
• Is a time switch (integrated into the controller) used for changing between the weekday/Saturday/Sunday modes?

Indicator 4: Economy
• Easy expansibility of the system in the event of an enlargement of the assembly hall, cost-efficient use of a programmable logic controller, capacity of the control system, application of standard sensors, use of existing equipment


• Were costs and workload of different control systems taken into account?
• Is the solution economical?
• Was the cost-benefit ratio taken into account?

Indicator 5: Work and business process
• Compliance with the requirements of the management, coordination with master/foreman, use of existing equipment.
• Have the requirements of the client been taken into account?
• Does the proposal refer to particular circumstances of the installation (e.g. installation during holidays)?
• Does the proposal foresee the involvement of professionals from other departments in the installation works (e.g. installation of the motors by in-house mechanics)?
• Has the handing over to the client been planned?
• Is there a time schedule?

Indicator 6: Social compatibility
• Consideration of work safety, automatic opening of the skylights in case of high temperature.
• Does the proposal comply with particular work safety regulations, for example, with regard to the installation of components on the roof?
• Does the proposal comply with safety regulations for electrical equipment?
• Is there a kill switch?
• Is there a feature for closing the skylights in case of fire?

Indicator 7: Environmental compatibility
• Saving energy by appropriate opening and closing of the skylights.
• Does the proposal refer to environment-friendly materials (e.g. wires without PVC or halogen)?
• Does the proposal consider energy-saving measures (e.g. opening the skylights only for a short time when the outdoor temperature is below zero)?

Indicator 8: Creativity
• Proposals for the extension of the control system, for example, integration of the rolling gate, projected enlargement of the hall, integration of the heating control
• Ideas that go beyond the assignment, for example, to use the roof for the installation of solar panels
• Did the students come up with special functions for the control system?
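As referenced in the solution space, the control behaviour described there (central buttons, temperature-triggered opening, weather override, time-switch closing) can be outlined in a few lines of logic. The Python sketch below is illustrative only: the temperature threshold, the priority ordering and the 6:00/22:00 working hours are assumptions, and an actual solution would implement this in the programmable logic controller.

# Illustrative decision logic for the skylight control (not the reference solution).
def skylight_command(open_btn, close_btn, temp_c, wind_alarm, rain, hour):
    if wind_alarm or rain:       # weather protection overrides everything
        return "close"
    if hour >= 22 or hour < 6:   # time switch: outside working hours (assumed)
        return "close"
    if temp_c > 28:              # assumed "too high" hall temperature threshold
        return "open"
    if open_btn:                 # central open button
        return "open"
    if close_btn:                # central close button
        return "close"
    return "hold"                # no change; motors stop via end sensors

# Example: stormy weather wins over an operator pressing "open".
print(skylight_command(open_btn=True, close_btn=False,
                       temp_c=21, wind_alarm=True, rain=False, hour=10))  # close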


Example Welder

Welding Lifting Lug Test Task

Background situation
A ship repair company operating in Cape Town has been using a lifting lug fabricated from a 50 mm plate in the form of a washer. It has been welded to a flat base, which is in turn bolted to the structure. The diagram shows the configuration. The ring was fillet welded to the base using a fillet weld all round (see Figs. C.3 and C.4). While the weld joint material was sufficient for the load lifted, the lug failed in duty. Review the drawing and give your assessment of why the lug failed. How would you redesign the lug to ensure success, given that it must be able to lift a load of 2 tons?

Assignment
Prepare the required documentation that thoroughly explains and covers the task at hand. Write down any additional steps to implement the client's requirement(s). Write down additional questions and/or suggestions to be asked or made in follow-up meeting(s) with the client or with workers from within or outside the trade. Provide a comprehensive explanation of your proposed solution, taking into account the clear presentation of your solution so as to be able to discuss it with customers and your work supervisors, as well as the following criteria:
(a) The functionality of a good complete solution.
(b) Aspects of value in use and over time.
(c) The cost-effectiveness and cost-efficiency of your solution.
(d) The effectiveness of the process and its integration into business operations.
(e) The social responsibility, including aspects of work, health and safety.
(f) The aspects of environmental compatibility and related regulations.
(g) You are encouraged to show your creativity.

Additional material
You may use all standard materials such as tables, manuals, textbooks, calculators, the Internet, your own notes and the applicable regulations and standards, including but not limited to the South African National Standards (SANS) and the Occupational Health and Safety Act (85) of 1993 with the various amendments.


Fig. C.3 Drawing of failed lug design

Fig. C.4 Photo of failed lug design

Welding Lifting Lug Solution Space

1. Clarity/Presentation
• Did the candidate prepare a detailed project plan outlining, amongst others, the project timelines, project human resources and financial resources/budget?
• Has a drawing showing the new design been submitted?
• Is the drawing technically correct, using the correct symbols and notations (a first angle orthographic projection drawing is given)?
• Has the candidate clearly identified the problem with the original lug?
• Does the presentation make clear the choice of materials and consumables, processes and types of welds to be used?


• Is there enough information for a qualified welder to be able to correctly weld the new design of lifting lug?
• Is there enough information, covering all aspects, to be able to implement the plan without gathering additional information; or has the candidate produced a comprehensive list of questions to be addressed at a follow-up meeting?
• Does the presentation convincingly and logically motivate the solution to the correct "client"?

2. Functionality
• Will the solution hold under load? (A simplified numerical check of this question is sketched after the solution space.)
• Does the solution discuss forces expected to be encountered by the proposed lug?
• Does the method of joining meet the required standard?
• Does the solution comply with safety regulations for lifting equipment (Working Load Limit and Safe Working Limit)?
• Is the solution practical for lifting?
• Does the solution cover adequate non-destructive testing?
• Does the plan cover risk assessment and risk management?

3. Use Value/Sustainability
• Is the new design suitable for many different uses?
• Are the materials proposed readily available?
• Are the materials proposed long lasting in normal conditions of use?

4. Cost-effectiveness/Efficiency
• Is the suggested design the most cost-effective for the task?
• Does the design avoid over-engineering?

5. Business and Work Process Orientation
• Does the design work for common forms of lifting and lifting equipment?
• Does the solution use skills related to work processes that are typical of lifting gear?
• Does the solution consider aspects that go beyond the welding occupation?
• Does the solution include reference to using correct communication channels to relevant people (according to company procedures) in order to achieve the solution?


6. Social Responsibility
• Does the explanation of the failure of the original design talk about safety implications?
• Will the solution be safe (i.e. hold under load and allow secure attachment of the lifting gear)?
• Does the solution specify (or make mention of the need to specify) the load capacity of the lug?
• Is there provision for safely testing the new design?
• Does the solution discuss safety precautions while making the lug?
• Does the solution comply with or make reference to all relevant codes and standards?

7. Environmental Responsibility
• Does the solution consider responsible use and disposal of materials?
• Does the solution consider recycling/reuse?

8. Creativity
• In examining the reasons for the failure of the first design, has the candidate explored multiple options?
• Does the solution consider a wide range of options (processes, materials, designs)?
• Does the solution include original aspects in excess of the solution space?
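The load-bearing question flagged under Functionality can be made concrete with a simplified static shear check of an all-round fillet weld, sketched below in Python. All numerical values (ring diameter, weld leg size, allowable stress) are illustrative assumptions rather than values from the drawing, and a real design must be verified against the applicable SANS codes. That this simple check passes mirrors the task's own hint that the weld material was sufficient for the load, directing the candidate towards other failure modes.

import math

# Rough static check: average shear in an all-round fillet weld under a 2 t load.
LOAD_KG = 2000                    # 2 t working load (from the task)
F_N = LOAD_KG * 9.81              # ≈ 19.6 kN
RING_DIAMETER_MM = 100            # assumption (drawing not reproduced here)
WELD_LEG_MM = 6                   # assumed fillet leg size
TAU_ALLOW_MPA = 90                # assumed allowable shear stress incl. safety factor

throat_mm = 0.707 * WELD_LEG_MM           # effective throat of the fillet
length_mm = math.pi * RING_DIAMETER_MM    # weld runs all round the ring
tau_mpa = F_N / (throat_mm * length_mm)   # average shear stress in the weld

print(f"weld shear {tau_mpa:.1f} MPa vs allowable {TAU_ALLOW_MPA} MPa")  # ≈ 14.7 MPa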

Appendix D: Four-Field Matrix (Tables)

Profession | Occupational identity | Organisational identity
Warehouse operator | 0.6706285 | 0.4078155
Specialist employee for bathing establishments | 0.0477596 | 0.0624348
Warehouse logistics specialist | 0.1034256 | 0.0417342
Specialist for hospitality industry | 0.1010839 | 0.0747539
Specialist for vehicle operations | 0.4443006 | 0.6287455
Hairdresser | 0.2843638 | 0.2770461
Cook | 0.1661394 | 0.1675460
Carpenter | 0.3340340 | 0.0104548
Painter and varnisher | 0.1822839 | 0.1487034
Stonecutter | 0.4405951 | 0.0334404
Duct builder | 0.2861519 | 0.1998749
Gardener | 0.1556112 | 0.2201187
Farmer | 0.5670278 | 0.3551654
Fully qualified groom | 0.3597894 | 0.0794280
Office clerk | 0.2458676 | 0.0943638
Management assistant in hotel and hospitality | 0.1632624 | 0.1650111
Management assistant in real estate | 0.6913549 | 0.1859677
Salesman | 0.0324006 | 0.0263969
Retail dealer | 0.0478673 | 0.1696661
Car mechatronic | 0.2626955 | 0.1527297
Surface coater | 0.3876961 | 0.3424532
Vehicle painter | 0.3774316 | 0.2295652
Glass constructor | 0.7582568 | 0.3130811
Plant mechanic | 0.6334552 | 0.4081456
Industrial mechanic | 0.2211778 | 0.0704398
Mechatronic | 0.1196560 | 0.1086569

Process mechanic | 0.3424487 | 0.0048912
Cutting machine operator | 0.2805183 | 0.2104140

Occupational and organisational identity: List of professions in the four-field matrix

Profession | Occupational commitment | Organisational commitment
Plant mechanic | 0.58 | 0.48
Industrial mechanic | 0.26 | 0.08
Mechatronic | 0.27 | 0.07
Process mechanic | 0.24 | 0.03
Cutting machine operator | 0.20 | 0.15
Car mechatronic | 0.21 | 0.15
Surface coater | 0.28 | 0.34
Vehicle painter | 0.18 | 0.02
Glass constructor | 0.42 | 0.45
Office clerk | 0.11 | 0.02
Management assistant in hotel and hospitality | 0.11 | 0.01
Management assistant in real estate | 0.36 | 0.30
Salesman | 0.06 | 0.12
Retail dealer | 0.05 | 0.05
Gardener | 0.12 | 0.29
Farmer | 0.18 | 0.30
Fully qualified groom | 0.31 | 0.38
Warehouse operator | 0.79 | 0.50
Specialist employee for bathing establishments | 0.02 | 0.04
Warehouse logistics specialist | 0.06 | 0.14
Specialist for hospitality industry | 0.07 | 0.14
Specialist for vehicle operations | 0.13 | 0.36
Hairdresser | 0.21 | 0.03
Cook | 0.23 | 0.21
Carpenter | 0.20 | 0.13
Painter and varnisher | 0.18 | 0.07
Stonecutter | 0.40 | 0.07
Duct builder | 0.12 | 0.09

Occupational and organisational commitment: List of professions in the four-field matrix
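For readers who want to reproduce the four-field assignment from these tables, the following Python sketch classifies a few professions by whether their two values lie above or below a cutoff. The 0.25 cutoff and the selection of professions are assumptions for illustration; the book itself defines the actual field boundaries.

# Sketch: reading the four-field matrix from the commitment table above.
data = {                           # (occupational, organisational) commitment
    "Plant mechanic": (0.58, 0.48),
    "Warehouse operator": (0.79, 0.50),
    "Office clerk": (0.11, 0.02),
    "Gardener": (0.12, 0.29),
}

CUTOFF = 0.25                      # assumed field boundary for illustration

for profession, (occ, org) in data.items():
    field = (("high" if occ >= CUTOFF else "low") + " occupational / "
             + ("high" if org >= CUTOFF else "low") + " organisational")
    print(f"{profession}: {field}")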

Appendix E: Correlation Values for the Correlation Between Occupational Competences and I-C Averages

Correlations: car mechatronics (correlation table not reproduced here)
** Correlation is significant at level 0.01 (two-sided)


Correlations: electronics technicians for industrial engineering (correlation table not reproduced here)
* Correlation is significant at level 0.05 (two-sided)
** Correlation is significant at level 0.01 (two-sided)


Correlations: electronics technicians for energy and building technology (correlation table not reproduced here)
* Correlation is significant at level 0.05 (two-sided)
** Correlation is significant at level 0.01 (two-sided)


Correlations: medical assistant (correlation table not reproduced here)
** Correlation is significant at level 0.01 (two-sided)


Correlations: carpenter (correlation table not reproduced here)
* Correlation is significant at level 0.05 (two-sided)
** Correlation is significant at level 0.01 (two-sided)


Correlations: logistic clerks (correlation table not reproduced here)
** Correlation is significant at level 0.01 (two-sided)
Data: Total score (TS) of occupational commitment (OC)
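The significance markers used in these tables (two-sided tests at the 0.05 and 0.01 levels) can be reproduced for any pair of score series with a Pearson correlation, for example using SciPy as sketched below. The two arrays are placeholders for illustration, not the study data.

from scipy.stats import pearsonr

# Placeholder series: competence total scores and commitment totals per test-taker.
competence_scores = [45.2, 38.1, 52.7, 41.0, 60.3, 35.8, 48.9, 55.4]
commitment_totals = [3.1, 2.4, 3.6, 2.9, 4.0, 2.2, 3.3, 3.8]

r, p = pearsonr(competence_scores, commitment_totals)  # p-value is two-sided
flag = "**" if p < 0.01 else "*" if p < 0.05 else ""
print(f"r = {r:.3f}{flag} (p = {p:.4f})")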


List of References

This COMET Handbook is based on a large number of publications on the COMET test procedure and its application in regional, national and international comparative projects. These publications also consistently document the development, application and evaluation of the methods of competence diagnostics and development based on the COMET competence and measurement model. The compilation of the sources, which is structured according to chapters, avoids overloading the text with a large number of literature references. The short form "COMET Vol. I" (etc.) is used for the book series "Measuring Vocational Competences", Volumes I to V, and the short form "A + B" (e.g. A + B 11/2013) for the "Research Reports on Work and Education".

Chapter | Reference
2 | COMET IV: 1.1; 1.4
3 | COMET I: 2; COMET III: 1; COMET IV: 2.1
4 | COMET I: 3; COMET III: 2
4.7 | A + B 01/2016; Rauner, F.; Frenzel, J.; Piening, D.; Bachmann, N. (2015): Engagement und Ausbildungsorganisation. Einstellungen sächsischer Auszubildender zu ihrem Beruf und ihrer Ausbildung. Eine Studie im Rahmen der Landesinitiative Steigerung der Attraktivität, Qualität und Rentabilität der dualen Berufsausbildung in Sachsen (QEK). Bremen: Universität Bremen, I:BB
5.1 | Kleiner, M.; Rauner, F.; Reinhold, M.; Röben, P. (2002): Curriculum design I: Identifizieren und Beschreiben von beruflichen Arbeitsaufgaben. Arbeitsaufgaben für eine neue Beruflichkeit. In: Berufsbildung und Innovation – Instrumente und Methoden zum Planen, Gestalten und Bewerten, Band 2. Koblenz: Christiani
5.2 | COMET IV: 3.2; 3.3.3
5.3 | COMET IV: 2.4
5.5.1 | COMET III: 4.2–4.3; COMET IV: S. 53–56


5.6 | COMET IV: 2.5
5.7 | COMET IV: 2.5; S. 67–74
6 | COMET IV: Abb. 16 (S. 64)
6.1 | COMET I: 3.5
6.2 | COMET I: 5.1; COMET III: 4.2
6.3 | COMET III: 4.3
6.4 | Kalvelage, J.; Heinemann, L.; Rauner, F.; Zhou, Z. (2015): Messen von Identität und Engagement in beruflichen Bildungsgängen. In: M. Fischer, F. Rauner, Z. Zhao (Hg.). Münster: LIT, 305–326
6.5 | Zhuang, R.; Li, J. (2015): Analyse der interkulturellen Anwendung der COMET-Kompetenzdiagnostik. In: M. Fischer, F. Rauner, Z. Zhao (Hg.). Münster: LIT, S. 341–350
7.1–7.3 | Rauner, F.; Frenzel, J.; Piening, D. (2015): Machbarkeitsstudie: Anwendung des KOMET-Testverfahrens für Prüfungen in der beruflichen Bildung. Bremen: Universität Bremen, I:BB
7.4 | A + B 16/2015
7.5.3 | A + B 18/2014
8.1–8.2 | COMET III: S. 53–56
8.3 | COMET II: 3; A + B 14/2014; COMET IV: 78–91
8.4 | COMET III: 6.2; A + B 14/2014
8.5 | COMET IV: 6.3; A + B 15/2014
8.7 | A + B 01/2016; COMET III: 3.5
9.1 | A + B 01/2016
9.5 | A + B 18/2015
9.6 | Fischer, M.; Rauner, F.; Zhao, Z. (2015): Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET auf dem Prüfstand. Berlin: LIT
10.1–10.4 | Rauner, F. (2015): Messen beruflicher Kompetenz von Berufsschullehrern. In: Fischer, M.; Rauner, F.; Zhao, Z. (Hg.): Kompetenzdiagnostik in der beruflichen Bildung – Methoden zum Erfassen und Entwickeln beruflicher Kompetenz. COMET auf dem Prüfstand. Münster: LIT; Piening, D.; Frenzel, J.; Heinemann, L.; Rauner, F. (2014): Berufliche Kompetenzen messen – Das Modellversuchsprojekt KOMET NRW. 1. und 2. Zwischenbericht; A + B 11/2013
10.5 | COMET III: 4.2; A + B 11/2013
11 | Rauner, F. (2015): Messen beruflicher Kompetenz von Berufsschullehrern. In: Fischer, M.; Rauner, F.; Zhao, Z. (Hg.) (2015): Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz. COMET auf dem Prüfstand. Münster: LIT, S. 413–436; Zhao, Z. (2015): Schritte auf dem Weg zu einer Kompetenzentwicklung für Lehrer und Dozenten beruflicher Bildung in China. In: Ebd., S. 437–450; Lehberger, J.; Rauner, F. (2014): Berufliches Lernen in Lernfeldern. Ein Leitfaden für die Gestaltung und Organisation projektförmigen Lernens in der Berufsschule. Bremen: Universität Bremen, I:BB

Bibliography

Adolph, G. (1984). Fachtheorie verstehen. Reihe Berufliche Bildung, Band 3. Wetzlar: Jungarbeiterinitiative an der Werner-von-Siemens-Schule. Projekt Druck. Aebli, H., & Cramer, H. (1963). Psychologische Didaktik. Didaktische Auswertung der Psychologie von Jean Piaget. Stuttgart: Klett-Cotta. Akaike, H. (1987). Factor analysis and AIC. Psychometrika, 52, 317–332. Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing. New York: Longman. Asendorpf, J., & Wallbott, H. G. (1979). Maße der Beobachterübereinstimmung: Ein systematischer Vergleich. Zeitschrift für Sozialpsychologie, 10, 243–252. Baethge, M., Achtenhagen, F., Babie, F., Baethge-Kinsky, V., & Weber, S. (2006). BerufsbildungsPISA. Machbarkeitsstudie. München: Steiner. Baethge, M., Gerstenberger, F., Kern, H., Schumann, M., Stein, H. W., & Wienemann, E. (1976). Produktion und Qualifikation: eine Vorstudie zur Untersuchung von Planungsprozessen im System der beruflichen Bildung. Hannover: Schroedel. Baruch, Y. (1998). The rise and fall of organizational commitment. Human Systems Management, 17, 135–143. Bauer, W. (2006). Einstellungsmuster und Handlungsprinzipien von Berufsschullehrern. Eine empirische Studie zur Lehrarbeit im Berufsfeld Elektrotechnik. Bielefeld: W. Bertelsmann. Bauer, W. (2013). Conceptual change research in TVET. In L. Deitmer, U. Hauschildt, F. Rauner, & H. Zelloth (Eds.), The architecture of innovative apprenticeship (pp. 219–229). Dordrecht: Springer Science + Business Media. Baumert, J., Klieme, E., Neubrand, M., Prenzel, M., Schiefele, U., Schneider, W., et al. (Eds.). (2001). PISA 2000: Basiskompetenzen von Schülerinnen und Schülern im internationalen Vergleich. Opladen: Leske + Budrich. Beck, U. (1993). Zur Erfindung des Politischen. Zu einer Theorie reflexiver Modernisierung (1st ed.). Frankfurt/Main: Suhrkamp. Beck, U., Giddens, A., & Lash, S. (1996). Reflexive Modernisierung. Eine Kontroverse. Frankfurt/ Main: Suhrkamp. Becker, M. (2003). Diagnosearbeit im Kfz-Handwerk als Mensch-Maschine-Problem. Konsequenzen des Einsatzes rechnergestützter Diagnosesysteme für die Facharbeit. Dissertation, Bielefeld: W. Bertelsmann. Becker-Lenz, R., & Müller-Hermann, S. (2013). Die Notwendigkeit von wissenschaftlichem Wissen und die Bedeutung eines professionellen Habitus für die Berufspraxis der sozialen Arbeit. In R. Becker-Lenz, S. Busse, G. Ehlert, & S. Mueller (Eds.), Professionalität in der


sozialen Arbeit. Standpunkte, Kontroversen, Perspektiven (3rd ed., pp. 203–229). Wiesbaden: VS Verlag für Sozialwissenschaften. Benner, P. (1984). From novice to expert. Excellence and power in clinical nursing practice. Menlo Park: Addison-Wesley. Benner, P. (1994). Stufen der Pflegekompetenz. From novice to expert. Bern u. a. O.: Huber. Benner, P. (1997). Stufen zur Pflegekompetenz. From novice to expert. (2. Nachdruck). Bern u.a.: Huber. Bergmann, J. R. (1995). “Studies of work” – Ethnomethodologie. In U. Flick, E. von Kardorff, H. Keupp, & L. von Rosenstiel (Eds.), Handbuch Qualitative Sozialforschung. Grundlagen, Konzepte, Methoden und Anwendungen (pp. 269–272). Weinheim: Beltz. Bergmann, J. R. (2006). Studies of work. In F. Rauner (Ed.), Handbuch Berufsbildungsforschung (2nd ed., pp. 640–646). Bielefeld: wbv. Blankertz, H. (1972). Kollegstufenversuch in Nordrhein-Westfalen – das Ende der gymnasialen Oberstufe und der Berufsschulen. DtBFsch, 68(1), 2–20. Blankertz, H. (1983). Einführung in die Thematik des Symposiums. In: Benner, D., Heid, H., Thiersch, H. (Hg.) Beiträge zum 8. Kongress der Deutschen Gesellschaft für Erziehungswissenschaften vom 22–24. März 1982 in der Universität Regensburg. Zeitschrift für Pädagogik, 18. Beiheft. 139–142. Blankertz, H. (Ed.). (1986). Lernen und Kompetenzentwicklung in der Sekundarstufe II. Abschlussbericht der wissenschaftlichen Begleitung Kollegstufe NW. 2 Bde. Soest: Soester Verlagskontor. BLK. (2002). Kompetenzzentren – Kompetenzzentren in regionalen Berufsbildungsnetzwerken Rolle und Beitrag der beruflichen Schulen, BLK-Fachtagung am 3/4. Dezember 2001 in Lübeck. Heft 99. Bonn. BLK (Bund-Länder-Kommission für Bildungsplanung und Forschungsförderung). (1973). Bildungsgesamtplan, Bd. 1. Stuttgart: Klett-Cotta. Blömeke, S., & Suhl, U. (2011). Modellierung von Lehrerkompetenzen. Nutzung unterschiedlicher IRT-Skalierungen zur Diagnose von Stärken und Schwächen deutscher Referendarinnen und Referendare im internationalen Vergleich. Zeitschrift für Erziehungswissenschaft, 13, 473–505. Böhle, F. (2009). Weder rationale Reflexion noch präreflexive Praktik – erfahrungsgeleitetsubjektivierendes Handeln. Wiesbaden: Springer. Böhle, F., & Rose, H. (1992). Technik und Erfahrung. Arbeit in hochautomatisierten Systemen. Frankfurt a. M., New York: Campus. Borch, H., & Schwarz, H. (1999). Zur Konzeption und Entwicklung der neuen IT-Berufe. In Bundesinstitut für Berufsbildung (Ed.), IT-Best-Practise, Gestaltung der betrieblichen Ausbildung. Bielefeld: W. Bertelsmann. Borch, H., & Weißmann, H. (2002). IT-Berufe machen Karriere. Zur Evaluation der neuen Berufe im Bereich Information und Telekommunikation. In Bundesinstitut für Berufsbildung (Ed.), IT-Best-Practise, Gestaltung der betrieblichen Ausbildung. Bielefeld: W. Bertelsmann. Boreham, N. C., Samurçay, R., & Fischer, M. (Eds.). (2002). Work process knowlege. London, New York: Routledge. Bortz, J., & Döring, N. (2003). Forschungsmethoden und Evaluation für Human- und Sozialwissenschaftler (3. Auflage). Berlin, Heidelberg: Springer. Bozdogan, H. (1987). Model selection and Akaike’s information criterion (AIC): The general theory and its analytical extensions. Psychometrika., 52(3), 345–370. Bozdogan, H., & Ramirez, D. E. (1988). FACAIC: Model selection algorithm for the orthogonal factor model using AIC and CAIC. Psychometrika., 53(3), 407–415. Brater, M. (1984b). Künstlerische Übungen in der Berufsausbildung. In Projektgruppe Handlungslernen (Hg.), Handlungslernen in der beruflichen Bildung (pp. 62–86). 
Wetzlar: W.-von Siemens-Schule, Projekt Druck. Brand, W., Hofmeister, W., & Tramm, F. (2005). Auf dem Weg zu einem Kompetenzstufenmodell für die berufliche Bildung. Erfahrungen aus dem Projekt ULME. In: bwp@-Berufs- und Wirtschaftspädagogik. Online. 8 (Juli 2005)


Brater, M. (1984). Künstlerische Übungen in der Berufsausbildung. In Projektgruppe Handlungslernen (Ed.), Handlungslernen in der beruflichen Bildung (pp. 62–86). Wetzlar: W.-von Siemens-Schule. Projekt Druck. Braverman, H. (1974). Die Arbeit im modernen Produktionsprozess (Übersetzung von: Labor and monopoly capital. The degradation of work in the twentieth century. New York, London: Monthly Review Press 1974). Frankfurt/Main, New York: Campus. Bremer, R. (2001). Entwicklungslinien wesentlicher Identität und Kompetenz vom Anfänger zum Experten. In W. Petersen, F. Rauner, & F. Stuber (Eds.), IT-gestützte Facharbeit – gestaltungsorientierter Berufsbildung. Bildung und Arbeitswelt (Vol. 4, pp. 269–282). BadenBaden: Nomos. Bremer, R. (2004). Zur Konzeption von Untersuchungen beruflicher Identität und fachlicher Kompetenz – ein empirisch-methodologischer Beitrag zu einer berufspädagogischen Entwicklungstheorie. In K. Jenewein, P. Knauth, P. Röben, & G. Zülch (Eds.), Kompetenzentwicklung in Arbeitsprozessen. Bildung und Arbeitswelt (Vol. 9, pp. 107–121). Baden-Baden: Nomos. Bremer, R. (2006). Lernen in Arbeitsprozessen – Kompetenzentwicklung. In F. Rauner (Ed.), Handbuch Berufsbildungsforschung (2nd ed., pp. 282–294). Bielefeld: W. Bertelsmann. Bremer, R., & Haasler, B. (2004). Analyse der Entwicklung fachlicher Kompetenz und beruflicher Identität in der beruflichen Erstausbildung. In: Bildung im Medium beruflicher Arbeit. ZfPäd, 50(2), 162–181. Bremer, R., & Jagla, H.-H. (Eds.). (2000). Berufsbildung in Geschäfts- und Arbeitsprozessen. Bremen: Donat. Bremer, R., Rauner, F., & Röben, P. (2001). Der Experten-Facharbeiter-Workshop als Instrument der berufswissenschaftlichen Qualifikationsforschung. In F. Eicker & A. W. Petersen (Eds.), Mensch-Maschine-Interaktion. Arbeiten und Lernen in rechnergestützten Arbeitssystemen in Industrie, Handwerk und Dienstleistung (HGTB 1999) (pp. 211–231). Baden-Baden: Nomos. BLK (Bund-Länder-Kommission für Bildungsplanung und Forschungsförderung). (1973). Bildungsgesamtplan (Bd. 1). Stuttgart: Klett-Cotta. Brödner, P., & Oehlke, P. (2008). Shaping work and technology. In F. Rauner & R. Maclean (Eds.), Handbook of technical and vocational education and training research. Berlin: Springer. Brosius, F. (2013). SPSS 21 (1st ed.). Heidelberg: mitp. Brown, A., Kirpal, S., & Rauner, F. (Eds.). (2007). Identities at work. Dordrecht: Springer. Bruner, J. S. (1977). Wie das Kind lernt, sich sprachlich zu verständigen. Zeitschrift für Pädagogik, 23, 153 ff. Brüning, L., & Saum, T. (2006). Erfolgreich unterrichten durch Kooperatives Lernen. Strategien zur Schüleraktivierung. Essen: Neue Deutsche Schule Verlagsgesellschaft mbH. Bundesministerium für Bildung und Forschung (BMBF). (2006a). Berufsbildungsbericht 2006. Teil I, Anhang. Bundesministerium für Bildung und Forschung (BMBF) (Hg.) (2006b). Umsetzungshilfen für die Abschlussprüfungen der neuen industriellen und handwerklichen Elektroberufe. Intentionen, Konzeption und Beispiele (Entwicklungsprojekt). Stand: 30.12.2005. (Teil 1 der Abschlussprüfung); Stand: 09.01.2006. (Teil 2 der Abschlussprüfung). Manuskript. Bundesministerium für Wirtschaft und Energie (BMWi). (2005). Abschlussbericht: Was muss ich an Ausbildungsordnungen ändern, damit Unternehmen mehr ausbilden? (Oktober 2005). Erstellt von Ramböll Management. Bund-Länder-Kommission für Bildungsplanung. (1973). Bildungsgesamtplan I. Stuttgart: Ernst Klett. Bungard, W., & Lenk, W. (Eds.). (1988). Technikbewertung. Philosophische und psychologische Perspektiven. 
Frankfurt/Main: Suhrkamp. Butler, P., Felstead, A., Ashton, D., Fuller, A., Lee, T., Unwin, L., et al. (2004). High performance management: a literature review. Leicester: University of Leicester. Bybee, R. W. (1997). Achieving scientific literacy: from purposes to practices. Portsmouth, NH: Heinemann.


Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally. Carey, S. (1985). Conceptual change in childhood. MIT Press. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale: Erlbaum. Cohen, A. (1991). Career stage as a moderator of the relationship between organizational commitment and its outcomes: A meta-analysis. Journal of Occupational Psychology, 64, 253–268. Cohen, A. (2007). Dynamics between occupational and organizational commitment in the context of flexible labour market: A review of the literature and suggestions for a future research agenda. Bremen: ITB-Forschungsbericht 26/2007. Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing and mathematics. In L. B. Resnick (Ed.), Knowing, learning and instruction (pp. 453–494). Hillsdale, NJ: Erlbaum. COMET-Konsortium (in Zusammenarbeit mit dem Hessischen Kultusministerium und der Senatorin für Bildung und Wissenschaft der Freien Hansestadt Bremen). (2010). Berufliche Kompetenzen messen: Das Projekt KOMET der Bundesländer Bremen und Hessen. Zweiter Zwischenbericht der wissenschaftlichen Begleitung – Ergebnisse 2009. Forschungsgruppe I: BB: Universität Bremen. Connell, M. W., Sheridan, K., & Gardner, H. (2003). On abilities and domains. In R. J. Sternberg & E. L. Grigorenko (Eds.), The psychology of abilities, competencies and expertise (pp. 126–155). Cambridge: Cambridge University Press. Cooley, M. (1988). Creativity, skill and human-centred systems. In B. Göranzon & J. Josefson (Eds.), Knowledge, skill and artificial intelligence (pp. 127–137). Berlin, Heidelberg, New York: Springer. Corbett, J. M., Rasmussen, L. B., & Rauner, F. (1991). Crossing the border. The social and engineering design of computer integrated manufacturing systems. London u. a. O.: Springer. Crawford, M. (2010). Ich schraube, also bin ich: Vom Glück, etwas mit den eigenen Händen zu schaffen. Berlin: Ullstein. Crawford, M. B. (2016). Die Wiedergewinnung des Wirklichen. Eine Philosophie des Ichs im Zeitalter der Zerstreuung. Berlin: Ullstein. Dehnbostel, P. (1994). Erschließung und Gestaltung des Lernorts Arbeitsplatz. Berufsbildung in der wissenschaftlichen Praxis, 23(1), 13–18. Dehnbostel, P. (2005). Lernen-Arbeiten-Kompetenzentwicklung. Zur wachsenden Bedeutung des Lernens und der reflexiven Handlungsfähigkeit im Prozess der Arbeit. In G. Wiesner & A. Wolter (Eds.), Die lernende Gesellschaft. Juventus: Weinheim. Deitmer, L., Fischer, M., Gerds, P., Przygodda, K., Rauner, F., Ruch, H., et al. (2004). Neue Lernkonzepte in der dualen Berufsausbildung. Bilanz eines Modellversuchsprogramms der Bund-Länder-Kommission (BLK). Reihe: Berufsbildung, Arbeit und Innovation (Vol. 24). Bielefeld: W. Bertelsmann. Dengler, K., Matthes, B. (2015). Folgen der Digitalisierung für die Arbeitswelt. Substituierungspotentiale von Berufen in Deutschland. IAB-Forschungsbericht 11/2015. Deutsche Forschungsgemeinschaft (DFG). (1998). Sicherung guter wissenschaftlicher Praxis. Denkschrift. Empfehlungen der Kommission “Selbstkontrolle in der Wissenschaft”. Weinheim: WILEY-VCH. (ergänzende Auflage 2013). Deutscher Bundestag (11. Wahlperiode). (1990). Berichte der Enquête-Kommission „Zukünftige Bildungspolitik – Bildung 2000“. Drucksache 11/7820. Bonn. Dewey, J. (1916). Democracy and education. The middle works of John Dewey 1899–1924 (Vol. 9). Edwardsville: Southern Illinois University Press. Dörner, D. (1983). 
empirische Psychologie und Alltagsrelevanz. In G. Jüttemann (Ed.), Psychologie in der Veränderung (pp. 13–30). Beltz: Weinheim. Drescher, E. (1996). Was Facharbeiter können müssen: Elektroinstandhaltung in der vernetzten Produktion. Bremen: Donat.


Drexel, I. (2005). Das Duale system und Europa. Ein Gutachten im Auftrag von ver.di und IG Metall. Berlin: Hausdruck. Dreyfus, H. L., & Dreyfus, S. E. (1987). Künstliche Intelligenz. Von den Grenzen der Denkmaschine und dem Wert der Intuition. Reinbek bei Hamburg: Rowohlt. Dybowsky, G., Haase, P., & Rauner, F. (1993). Berufliche Bildung und betriebliche Organisationsentwicklung. Reihe: Berufliche Bildung (Vol. 15). Bremen: Donat. Efron, B., & Tibshirani, R. J. (1994). An introduction to the Bootstrap. Boca Raton: Chapman & Hall/CRC. Embretson, S. E., & Reise, S. P. (2013). Item response theory for psychologists. Hoboken: Taylor and Francis. Emery, F. E., & Emery, M. (1974). Participative design. Canberra: Centre for Continuing Education. Australian National University. Erdwien, B., & Martens, T. (2009). Die empirische Qualität des Kompetenzmodells und des Ratingverfahrens. In Rauner, F. u. a.: Messen beruflicher Kompetenzen. Bd. II. Ergebnisse COMET 2008. Reihe Bildung und Arbeitswelt. Münster: LIT. Erpenbeck, J. (2001). Wissensmanagement als Kompetenzmanagement. In G. Franke (Ed.), Komplexität und Kompetenz. Ausgewählte Fragen der Kompetenzforschung (pp. 102–120). Bielefeld: W. Bertelsmann. Euler, D. (2011). Kompetenzorientiert prüfen – eine hilfreiche Version? In E. Severing & R. Weiß (Eds.), Prüfungen und Zertifizierungen in der beruflichen Bildung. Anforderungen – Instrumente – Forschungsbedarf (pp. 55–66). Bielefeld: W. Bertelsmann. Fischer, M. (2000a). Arbeitsprozesswissen von Facharbeitern – Umrisse einer forschungsleitenden Fragestellung. In J.-P. Pahl, F. Rauner, & G. Spöttl (Eds.), Berufliches Arbeitsprozesswissen. Ein Forschungsgegenstand der Berufsfeldwissenschaften (pp. 31–47). Baden-Baden: Nomos. Fischer, M. (2000b). Von der Arbeitserfahrung zum Arbeitsprozesswissen. Rechnergestützte Facharbeit im Kontext beruflichen Lernens. Opladen: Leske + Budrich. Fischer, M. (2002). Die Entwicklung von Arbeitsprozesswissen durch Lernen im Arbeitsprozess – theoretische Annahmen und empirische Befunde. In M. Fischer & F. Rauner (Eds.), Lernfeld: Arbeitsprozess. Ein Studienbuch zur Kompetenzentwicklung von Fachkräften in gewerblichtechnischen Aufgabenbereichen (pp. 53–86). Baden-Baden: Nomos. Fischer, M., & Witzel, A. (2008). Zum Zusammenhang von berufsbiographischer Gestaltung und beruflichem Arbeitsprozesswissen. In M. Fischer, & G. Spöttl (Hg.), Im Fokus: Forschungsperspektiven in Facharbeit und Berufsbildung. Strategien und Methoden der Berufsbildungsforschung (pp. 24–47). Frankfurt a. M.: Peter Lang. Fischer, R. (2013). Berufliche Identität als Dimension beruflicher Kompetenz. Entwicklungsverlauf und Einflussfaktoren in der Gesundheits- und Krankenpflege. Reihe Berufsbildung, Arbeit und Innovation (Vol. 26). Bielefeld: wbv. Fischer, B., Girmes-Stein, R., Kordes, H., & Peukert, U. (1995). Entwicklungslogische Erziehungsforschung. In H. Haft & H. Kordes (Eds.), Methoden der Erziehungs- und Bildungsforschung. Band 2 der Enzyklopädie Erziehungswissenschaft (pp. 45–79). Stuttgart: Klett. Fischer, R., Hauschildt, U., Heinemann, L., & Schumacher, J. (2015). Erfassen beruflicher Kompetenz in der Pflegeausbildung europäischer Länder. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz (pp. 375–392). Münster: LIT. Fischer, M., Jungeblut, R., & Römmermann, E. (1995). 
“Jede Maschine hat ihre eigenen Marotten!” Instandhaltungsarbeit in der rechnergestützten Produktion und Möglichkeiten technischer Unterstützung. Donat: Bremen. Fischer, M., & Rauner, F. (Eds.). (2002). Lernfeld: Arbeitsprozess. Ein Studienbuch zur Kompetenzentwicklung von Fachkräften in gewerblich-technischen Aufgabenbereichen. Reihe: Bildung und Arbeitswelt (Vol. 6). Baden-Baden: Nomos.


Fischer, M., Rauner, F., & Zhao, Z. (2015). Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET auf dem Prüfstand. Münster: LIT. Fischer, M., & Röben, P. (2004). Arbeitsprozesswissen im Fokus von individuellem und organisationalem Lernen. Ergebnisse aus Großbetrieben in vier europäischen Ländern. Zeitschrift für Pädagogik, 2(2004), 182–201. Flick, U. (1995). Handbuch qualitative Sozialforschung: Grundlagen, Konzepte, Methoden und Anwendung. Beltz: Weinheim. Frank, H. (1969). Kybernetische Grundlagen der Pädagogik. Baden-Baden: Kohlhammer. Frei, F., & Ulich, E. (Eds.). (1981). Beiträge zur psychologischen Arbeitsanalyse. Bern: Huber. Freund, R. (2011). Das Konzept der multiplen Kompetenz auf den Analyseebenen Individuum, Gruppe, Organisation und Netzwerk. Hamburg: Verlag Dr. Kovac. Frey, A. (2006). Methoden und Instrumente zur Diagnose beruflicher Kompetenzen von Lehrkräften – eine erste Standortbestimmung zu bereits publizierten Instrumenten. In: Allemann-Gheonda, C., Terhard, E. (Hg.). Kompetenzen und Kompetenzentwicklung von Lehrerinnen und Lehrern: Ausbildung und Beruf. Zeitschrift für Pädagogik. 51(Beiheft): 30–46 Frieling, E. (1995). Arbeit. In U. Flick et al. (Eds.), Handbuch Qualitative Sozialforschung (2nd ed., pp. 285–288). Weinheim: Beltz. Ganguin, D. (1992). Die Struktur offener Fertigungssysteme in der Fertigung und ihre Voraussetzungen. In G. Dybowski, P. Haase, & F. Rauner (Eds.), Berufliche Bildung und betriebliche Organisationsentwicklung (pp. 16–33). Bremen: Donat. Ganguin, D. (1993). Die Struktur offener Fertigungssysteme in der Fertigung und ihre Voraussetzungen. In G. Dybowsky, P. Haase, & F. Rauner (Eds.), Berufliche Bildung und betriebliche Organisationsentwicklung (pp. 16–33). Bremen: Donat. Gardner, H. (1991). Abschied vom IQ: die Rahmentheorie der vielfachen Intelligenzen. Stuttgart: Klett-Cotta. Gardner, H. (1999). Intelligence reframed: multiple intelligences for the 21st century. New York, NY: Basic Books. Gardner, H. (2002). Intelligenzen. Die Vielfalt des menschlichen Geistes. Stuttgart: Klett-Cotta. Garfinkel, H. (1967). Studies in Ethnomethodology. Englewood Cliffs, N.J.: Prentice-Hall. Garfinkel, H. (1986). Ethnomethodological Studies of Work. London u. a.: Routledge & Kegan Paul. Gäumann-Felix, K., & Hofer, D. (2015). COMET in der Pflegeausbildung Schweiz. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET auf dem Prüfstand (pp. 93–110). Münster: LIT. Georg, W., & Sattel, U. (1992). Einleitung: Von Japan lernen? In Dies (Ed.), Von Japan lernen? Aspekte von Bildung und Beschäftigung in Japan (p. 7ff). Weinheim: Deutscher Studien Verlag. Gerecht, M., Steinert, B., Klieme, E., & Döbrich, P. (2007). Skalen zur Schulqualität: Dokumentation der Erhebungsinstrumente. Pädagogische Entwicklungsbilanzen mit Schulen (PEB). Frankfurt/Main: Gesellschaft zur Förderung Pädagogischer Forschung. Deutsches Institut für Internationale Pädagogische Forschung. Gerstenmaier, J. (1999). Situiertes Lernen. In C. Perleth & A. Ziegler (Eds.), Pädagogische Psychologie. Bern: Huber. Gerstenmaier, J. (2004). Domänenspezifisches Wissen als Dimension beruflicher Entwicklung. In F. Rauner (Ed.), Qualifikationsforschung und Curriculum (pp. 151–163). Bielefeld: W. Bertelsmann. Giddens, A. (1972). In A. Giddens (Ed.), Introduction: Durkheim’s writings in sociology and social psychology (pp. 1–50). 
Cambridge: Cambridge University Press. Girmes-Stein, R., & Steffen, R. (1982). Konzept für eine entwicklungsbezogene Teilstudie im Rahmen der Evaluation des Modellversuchs zur Verbindung des Berufsvorbereitungsjahres (BVJ) mit dem Berufsgrundschuljahr (BGJ) an berufsbildenden Schulen des Landes NW. Münster: Zwischenbericht.


Glaser, B., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine Publisher Company. Granville, G. (2003). “Stop making sense”: Chaos and Coherence. In the formulation of the Irish qualifications framework. Journal of Education and Work, 16(3), 259–270. Gravert, H., & Hüster, W. (2001). Intentionen der KMK bei der Einführung von Lernfeldern. In P. Gerds & A. Zöller (Eds.), Der Lernfeldansatz der Kultusministerkonferenz (pp. 83–97). Bielefeld: W. Berlelsmann. Griffin, P., Gillis, S., & Calvitto, P. (2007). Standards-referenced assessment for vocational education and training in schools. Australian Journal of Education, 51(1), 19–38. Grob, U., & Maag Merki, K. (2001). Überfachliche Kompetenzen: Theoretische Grundlegung und empirische Erprobung eines Indikatorensystems. Bern u. a. O.: Peter Lang. Grollmann, P. (2003). Professionelle Realität beruflichen Bildungspersonals im institutionellen Kontext ausgewählter Bildungssysteme. Eine empirische Studie anhand ausgewählter Fälle aus den USA, Dänemark und Deutschland. Bremen: Institut Technik und Bildung der Universität. Grollmann, P. (2005). Professionelle Realität von Berufspädagogen im internationalen Vergleich: eine empirische Studie anhand ausgewählter Beispiele aus Dänemark, Deutschland und den USA. Berufsbildung, Arbeit und Innovation (Vol. 3). Bielefeld: W. Bertelsmann. Grollmann, P., Kruse, W., & Rauner, F. (2003). Scenarios and Strategies for VET in Europe (Vol. 130). Dortmund: Landesinstitut Sozialforschungsstelle Dortmund. Grollmann, P., Kruse, W., & Rauner, F. (Eds.). (2005). Europäisierung beruflicher Bildung. Bildung und Arbeitswelt (Vol. 14). Münster: LIT. Grollmann, P., & Rauner, F. (Eds.). (2007). International perspectives on teachers and lecturers in technical and vocational education. Dordrecht: Springer. Grollmann, P., Spöttl, G., & Rauner, F. (2006). Europäisierung Beruflicher Bildung – eine Gestaltungsaufgabe. Reihe: Bildung und Arbeitswelt (Vol. 16). Münster: LIT. Grollmann, P., Spöttl, G., & Rauner, F. (Eds.). (2007). Europäisierung beruflicher Bildung – eine Gestaltungsaufgabe. Münster: LIT. Gruber, H., & Renkl, A. (2000). Die Kluft zwischen Wissen und Handeln: Das Problem des trägen Wissens. In G. H. Neuweg (Ed.), Wissen – Können – Reflektion. Ausgewählte Verhältnisbestimmungen (pp. 155–174). Innsbruck: Studien-Verlag. Grünewald, U., Degen, U., & Krick, H. (1979). Qualifikationsforschung und berufliche Bildung. Ergebnisse eines Colloquiums des Bundesinstituts für Berufsbildung (BIBB) zum gegenwärtigen Diskussionsstand in der Qualifikationsforschung. Heft 2. Berlin: BIBB. Gruschka, A. (1983). Fachliche Kompetenzentwicklung und Identitätsbildung im Medium der Erzieherausbildung – über den Bildungsgang der Kollegschule und zur Möglichkeit der Schüler, diesen zum Thema zu machen. In D. Benner, H. Herd, & H. Thiersch (Eds.), Zeitschrift für Pädagogik 18 (pp. 142–152). Beiheft: Beiträge zum 8. Kongreß der Deutschen Gesellschaft für Erziehungswissenschaft. Gruschka, A. (Ed.). (1985). Wie Schüler Erzieher werden. Studie zur Kompetenzentwicklung und fachlichen Identitätsbildung. (2 Bände). Wetzlar: Büchse der Pandora. Gruschka, A. (2005). Bildungsstandards oder das Versprechen, Bildungstheorie in empirischer Bildungsforschung aufzuheben. In L. A. Pongratz, R. Reichenbach, & M. Wimmer (Eds.), Bildung - Wissen - Kompetenz (pp. 9–29). Bielefeld: Janus Presse. Guillemin, F., Bombardier, C., & Beaton, D. (1993). 
Cross-cultural adaptation of health-related quality of life measures: literature review and proposed guidelines. Journal of Clinical Epidemiology, 46(12), 1417–1432. Guldimann, T., & Zutavern, M. (1992). Schüler werden Lernexperten. Arbeitsberichte. Forschungsstelle der Pädagogischen Hochschule des Kantons St. Gallen. Band 9. Pädagogische Hochschule St. Gallen. Haasler, B. (2004). Hochtechnologie und Handarbeit – Eine Studie zur Facharbeit im Werkzeugbau der Automobilindustrie. Bielefeld: W. Bertelsmann Verlag. Haasler, B., & Erdwien, B. (2009). Vorbereitung und Durchführung der Untersuchung. In F. Rauner, B. Haasler, L. Heinemann, & P. Grollmann (Eds.), Messen beruflicher Kompetenzen.


Bd. 1. Grundlagen und Konzeption des KOMET-Projekts. Reihe Bildung und Arbeitswelt. Münster: LIT. Haasler, B., Heinemann, L., Rauner, F., Grollmann, P., & Martens, T. (2009). Testentwicklung und Untersuchungsdesign. In F. Rauner, B. Haasler, L. Heinemann, & P. Grollmann (Eds.), Messen beruflicher Kompetenzen. Bd. I. Grundlagen und Konzeption des KOMET-Projektes (2. Aufl.) (pp. 103–140). Bielefeld: W. Bertelsmann. Haasler, B., & Rauner, F. (2012). Lernen im Betrieb. Konstanz: Christiani. Hacker, W. (1973). Allgemeine Arbeits- und Ingenieurspsychologie. Bern: Huber. Hacker, W. (1986). Arbeitspsychologie. Psychische Regulation von Arbeitstätigkeiten. Bern: Huber. Hacker, W. (1992). Expertenkönnen – Erkennen und Vermitteln. Göttingen: Verlag für Angewandte Psychologie. Hacker, W. (1996). Diagnose von Expertenwissen. Von Abzapf-(Broaching-) zu Aufbau-([Re-] Constuction-)Konzepten. In Sitzungsberichte der sächsischen Akademie der Wissenschaften zu Leipzig. Bd. 134. Heft 6. Berlin: Akademie-Verlag. Hackman, J. R., & Oldham, G. R. (1976). Motivation through the design of work. Test of a theorie. Organizational Behaviour of human Performance, 60, 250–279. Hastedt, H. (1991). Aufklärung und Technik. Grundprobleme einer Ethik der Technik. Frankfurt/ Main: Suhrkamp. Hattie, J. A. (2003). Teachers make a difference: What is the research evidence? Australian councel for education, research annual conference on: Building Teacher Quality. Hattie, J. A. (2011). Influences on students’ learning. www.arts.auckland.acoz/education/staff. Hattie, J., & Yates, C. R. (2015). Lernen sichtbar machen aus psychologischer Perspektive. Hohengehren: Schneider. Hauschildt, U., Brown, H., Heinemann, L., & Wedekind, V. (2015). COMET Südafrika. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET auf den Prüfstand (pp. 353–374). Berlin: LIT. Havighurst, R. J. (1972). Developmental Tasks and Education. New York: David McKay. Heeg, F. J. (2015). Stellenwert des COMET-Kompetenzmodells für duale Ingenieur-studiengänge. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET auf den Prüfstand (pp. 111–126). Berlin: LIT. Heid, H. (1999). Über die Vereinbarkeit individueller Bildungsbedürfnisse und betrieblicher Qualifikationsanforderungen. ZfPäd, 45(2), 231–244. Heid, H. (2006). Werte und Normen in der Berufsbildung. In R. Arnold & A. Lipsmeier (Eds.), Handbuch der Berufsbildung (2nd ed., pp. 33–43). Wiesbaden: VS Verlag für Sozialwissenschaften. Heidegger, G., Adolph, G., & Laske, G. (1997). Gestaltungsorientierte Innovation in der Berufsschule. Bremen: Donat. Heidegger, G., Jacobs, J., Martin, W., Mizdalski, R., & Rauner, F. (1991). Berufsbilder 2000. Soziale Gestaltung von Arbeit, Technik und Bildung. Opladen: Westdeutscher Verlag. Heidegger, G., & Rauner, F. (1997). Reformbedarf in der Beruflichen Bildung für die industrielle Produktion der Zukunft. Düsseldorf: Ministerium für Wirtschaft und Mittelstand, Technologie und Verkehr NRW. Heinemann, L., Maurer, A., & Rauner, F. (2011). Modellversuchsergebnisse im Überblick. In F. Rauner, L. Heinemann, A. Maurer, L. Ji, & Z. Zhao (Eds.), Messen beruflicher Kompetenz. Bd. III. Drei Jahre KOMET Testerfahrung (pp. 150–209). Münster: LIT. Heinemann, L., & Rauner, F. (2008). 
Identität und Engagement: Konstruktion eines Instruments zur Beschreibung der Entwicklung beruflichen Engagements und beruflicher Identität. A+B Forschungsberichte 1. Universität Bremen: IBB. Heinz, W. R. (1995). Arbeit, Beruf und Lebenslauf. Eine Einführung in die berufliche Sozialisation. München: Juventa.
Heinz, W. R. (2002). Self-socialization and post-traditional society. In R. A. Settersten & T. J. Owens (Eds.), Advances in life course research. New frontiers in socialization (pp. 41–64). New York: Elsevier. Heinz, W. R. (2006). Berufliche Sozialisation. In F. Rauner (Ed.), Handbuch Berufsbildungsforschung. 2. aktualisierte Auflage (pp. 321–329). Bielefeld: W. Bertelsmann Verlag. Heinz, W. R. (2012). Die Perspektive des Lebenslaufs. In B. Dippelhofer-Stiem (Ed.), Enzyklopädie Erziehungswissenschaften Online (EEO). Weinheim u. Basel: Beltz/Juventa. Heinze, T. (1972). Zur Kritik an den Technologisierungstendenzen des Unterrichtsprozesses. Die Deutsche Schule, 6, 347–361. Hellpach, W. (1922). Sozialpsychologische Analyse des betriebstechnischen Tatbestandes “Gruppenfabrikation”. In R. Lang & W. Hellpach (Eds.), Gruppenfabrikation (pp. 5–186). Berlin: Springer. Heritage, J. (1984). Garfinkel and Ethnomethodology. Cambridge: Polity Press. Hirtt, N. (2011). Education in the “knowledge economy”: Consequences for democracy. In: Aufenanger, S., Hamburger, F., Ludwig, L., Tippelt, R. (Hg.) Bildung in der Demokratie: Beiträge zum 22. Kongress der Deutschen Gesellschaft für Erziehungswissenschaft. Schriftenreihe der Deutschen Gesellschaft für Erziehungswissenschaft (DGfE). Budrich. Hoey, D. (2009). How do we measure up? Benchmarking the world skills competition. In R. McLean & D. Wilson (Eds.), International handbook of education for the changing world of work (Bridging academic and vocational learning) (Vol. 6, pp. 2827–2840). Hoff, E.-H., Lappe, L., & Lempert, W. (1991). Persönlichkeitsentwicklung in Facharbeiterbiographien. Bern, Stuttgart: Huber. Hofstede, G. (2001). Culture’s consequences: Comparing values, behaviors, institutions and organizations across nations. Thousand Oaks, Calif.: Sage Publications. Holzkamp, K. (1985). Grundlagen der Psychologie. Frankfurt/Main, New York: Campus. Howe, F., & Heermeier, R. (Eds.). (1999). Abschlussbericht (MV GOLO): Gestaltungsorientierte Lern- und Arbeitsaufgaben. Bremen: ITB Universität Bremen. Industrie- und Handelskammer in Nordrhein-Westfalen. (2010). Eine Handreichung für Unternehmen und Prüfer. Industrielle Metall- und Elektroberufe: Der Umgang mit dem varianten Modell. Jäger, C. (1989). Die kulturelle Einbettung des europäischen Marktes. In M. Haller, H.-J. Hoffmann-Nowotny, & W. Zapf (Eds.), Kultur und Gesellschaft: Verhandlungen des 24. Deutschen Soziologentages, des 11. Österreichischen Soziologentages und des 8. Kongresses der Schweizerischen Gesellschaft für Soziologie in Zürich 1988. Frankfurt/Main und New York: Campus. Jäger, C., Bieri, L., & Dürrenberger, G. (1987). Berufsethik und Humanisierung der Arbeit. Schweizerische Zeitschrift für Soziologie, 13(1987), 47–62. Johnson, D., & Johnson, R. (1999). Learning together and alone: cooperative, competitive, and individualistic learning. Boston: Allyn and Bacon. Jongebloed, H.-C. (2006). Vorwort. In T. Retzmann (Ed.), Didaktik der berufsmoralischen Bildung in Wirtschaft und Verwaltung. Eine fachdidaktische Studie zur Innovation der kaufmännischen Berufsbildung (pp. VII–XIV). Norderstedt: Books on Demand. Kalvelage, J., Heinemann, L., Rauner, F., & Zhou, Y. (2015). Messen von Identität und Engagement in beruflichen Bildungsgängen. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung – Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET auf dem Prüfstand (pp. 305–326). Münster: LIT. Kanungo, R. N. (1982). Work alienation: An integrative approach.
New York: Praeger Publishers. Kerlinger, F. N. (1964). Foundations of Behavioral Research. New York: Holt, Rinehart and Winston. Katzenmeyer, R., Baltes, D., Becker, U., Gille, M., Hubacek, G., Kullmann, B., et al. (2009). Das COMET-Kompetenzmodell in der Unterrichtspraxis. In F. Rauner et al. (Eds.), Messen
beruflicher Kompetenzen. Bd. II. Ergebnisse COMET 2008. Reihe Bildung und Arbeitswelt (pp. 161–205). Münster: LIT. Kelle, U., Kluge, S., & Prein, G. (1993). Strategien der Geltungssicherung in der qualitativen Sozialforschung. Zur Validitätsproblematik im interpretativen Paradigma. Arbeitspapier Nr. 24. Hg. Vorstand des Sfb 186. Universität Bremen. Kern, H., & Sabel, C. F. (1994). Verblasste Tugenden. Zur Krise des Deutschen Produktionsmodells. In N. Beckenbach & W. v. Treeck (Eds.), Umbrüche gesellschaftlicher Arbeit. Soziale Welt, Sonderband 9 (pp. 605–625). Göttingen: Schwartz. Kern, H., & Schumann, M. (1970). Industriearbeit und Arbeiterbewusstsein. Eine empirische Untersuchung über den Einfluss der aktuellen technischen Entwicklung auf die industrielle Arbeit und das Arbeiterbewusstsein (Vol. I, II). Frankfurt/Main: Europäische Verlagsanstalt. Kern, H., & Schumann, M. (1984). Das Ende der Arbeitsteilung? Rationalisierung in der industriellen Produktion. München: Beck. Kleemann, F., Krähnke, U., & Matuschek, I. (2009). Interpretative Sozialforschung. Eine praxisorientierte Einführung. Wiesbaden: VS Verlag für Sozialwissenschaften. Kleiner, M. (2005). Berufswissenschaftliche Qualifikationsforschung im Kontext der Curriculumentwicklung. Studien zur Berufspädagogik 18. Hamburg: Dr. Kovac Verlag. Kleiner, M., Meyer, K., & Rauner, F. (2001). Berufsbildungsplan für den Industriemechaniker. ITB-Arbeitspapier Nr. 32. Bremen: ITB. Kleiner, M., Rauner, F., Reinhold, M., & Röben, P. (2002). Curriculum-Design I. Arbeitsaufgaben für eine moderne Beruflichkeit – Identifizieren und Beschreiben von beruflichen Arbeitsaufgaben. In: Berufsbildung und Innovation – Instrumente und Methoden zum Planen, Gestalten und Bewerten (Vol. 2). Konstanz: Christiani. Kliebard, H. (1999). Schooled to Work. Vocationalism and the American Curriculum, 1876–1946. New York, NY: Teachers College Press. Klieme, E., Avenarius, H., Blum, W., Döbrich, P., Gruber, H., Prenzel, M., et al. (2003). Zur Entwicklung nationaler Bildungsstandards: Eine Expertise. Berlin: Bundesministerium für Bildung und Forschung. Klieme, E., & Hartig, J. (2007). Kompetenzkonzepte in den Sozialwissenschaften und im empirischen Diskurs. Zeitschrift für Erziehungswissenschaft. Sonderheft, 08, 11–29. Klieme, E., & Leutner, D. (2006). Kompetenzmodelle zur Erfassung individueller Lernergebnisse und zur Bilanzierung von Bildungsprozessen. Beschreibung eines neu eingerichteten Schwerpunktprogramms der DFG. Zeitschrift für Pädagogik, 53(6), 876–903. Klotz, V. K., & Winther, E. (2012). Kompetenzmessung in der kaufmännischen Berufsausbildung: Zwischen Prozessorientierung und Fachbezug. Eine Analyse der aktuellen Prüfungspraxis. bwp@-Ausgabe Nr. 22. Juni 2012. Universität Paderborn. URL: http://www.bwpat.de/ausgabe22/klotz_winther_bwpat22.pdf (Stand: 03.09.2014). Klüver, J. (1995). Hochschule und Wissenschaftssystem. In: Huber, L. (Hg.) Enzyklopädie Erziehungswissenschaft. Bd. 10. Ausbildung und Sozialisation in der Hochschule. 78–91. KMK – Kultusministerkonferenz. (1999). Handreichungen für die Erarbeitung von Rahmenlehrplänen der Kultusministerkonferenz (Köln) für den berufsbezogenen Unterricht in der Berufsschule und ihre Abstimmung mit Ausbildungsordnungen des Bundes für anerkannte Ausbildungsberufe, Bonn (Stand: 05.02.1999). KMK – Kultusministerkonferenz. (2004a). Standards für die Lehrerbildung – Bildungswissenschaften, Bonn (Stand: 16.12.2004). KMK – Kultusministerkonferenz. (2004b).
Argumentationspapier Bildungsstandards der Kultusministerkonferenz, Bonn (Stand: 16.12.2004). KMK – Kultusministerkonferenz. (2005). Bildungsstandards im Fach Physik (Chemie/Biologie) für den mittleren Schulabschluss. München: Luchterhand. KMK – Sekretariat der Ständigen Konferenz der Kultusminister der Länder in der Bundesrepublik Deutschland. (1991). Rahmenvereinbarung über die Berufsschule. Beschluss der Kultusministerkonferenz vom 14./15.3.1991. ZBW, 7, 590–593.
KMK – Sekretariat der Ständigen Konferenz der Kultusminister der Länder in der Bundesrepublik Deutschland (Hg.) (1996). Handreichungen für die Erarbeitung von Rahmenlehrplänen der Kultusministerkonferenz für den berufsbezogenen Unterricht in der Berufsschule und ihre Abstimmung mit Ausbildungsordnungen des Bundes für anerkannte Ausbildungsberufe, Bonn. Kohlberg, L. (1969). Stage and sequence: The developmental approach to moralization. New York: Holt. König, J. (2010). Lehrerprofessionalität – Konzepte und Ergebnisse der internationalen und nationalen Forschung am Beispiel fächerübergreifender und pädagogischer Kompetenz. In J. König & B. Hoffmann (Eds.), Professionalität von Lehrkräften – was sollen Lehrkräfte im Lese- und Schreibunterricht wissen und können (pp. 40–105). Berlin: DGLS. Kruse, W. (1976). Die Qualifikation der Arbeiterjugend. Eine Studie über die gesellschaftliche Bedeutung ihrer Veränderung. Frankfurt/Main: Campus. Kruse, W. (1986). Von der Notwendigkeit des Arbeitsprozeßwissens. In J. Schweitzer (Ed.), Bildung für eine menschliche Zukunft (pp. 188–193). Weinheim, Basel: Juventa Verlag. Kunter, M., Schümer, G., Artelt, C., Baumert, J., Klieme, E., Neubrand, M., et al. (2003). Pisa 2000 – Dokumentation der Erhebungsinstrumente. Berlin: MPI für Bildungsforschung. Kurtz, T. (2001). Aspekte des Berufs in der Moderne. Opladen: Leske + Budrich. Kurtz, T. (2005). Die Berufsform der Gesellschaft. Weilerswist: Velbrück Wissenschaft. Lamnek, G. (1988/89). Qualitative Sozialforschung. Bde. 1/2. Methodologie. München. Laur-Ernst, U. (Ed.). (1990). Neue Fabrikstrukturen – veränderte Qualifikationen. Ergebnisse eines Workshops des Bundesinstituts für Berufsbildung. Berlin: BIBB. Lave, J., & Wenger, E. (1991). Situated Learning. Legitimate Peripheral Participation. New York: Cambridge University Press. Lechler, P. (1982). Kommunikative Validierung. In G. L. Huber & H. Mandl (Eds.), Verbale Daten (pp. 243–258). Weinheim: Beltz. Lehberger, J. (2013). Arbeitsprozesswissen – didaktisches Zentrum für Bildung und Qualifizierung. Ein kritisch-konstruktiver Beitrag zum Lernfeldkonzept. Berlin: LIT. Lehberger, J. (2015). Berufliches Arbeitsprozesswissen als eine Dimension des COMET-Messverfahrens. In M. Fischer, F. Rauner, & Z. Zhao (Hrsg.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET auf dem Prüfstand. Bildung und Arbeitswelt (Bd. 30, pp. 209–224). Münster: LIT. Lehberger, J., & Rauner, F. (2014). Berufliches Lernen in Lernfeldern. Ein Leitfaden für die Gestaltung und Organisation projektförmigen Lernens in der Berufsschule. Bremen: Universität Bremen, I:BB. Lehmann, R. H., & Seeber, S. (Eds.). (2007). ULME III. Untersuchungen von Leistungen, Motivation und Einstellungen der Schülerinnen und Schüler der Berufsschulen. Hamburg: Behörde für Bildung und Sport. Lempert, W. (1995). Berufliche Sozialisation und berufliches Lernen. In R. Arnold & A. Lipsmeier (Eds.), Handbuch der Berufsbildung. Opladen: Leske + Budrich. Lempert, W. (2000). Berufliche Sozialisation oder was Berufe aus Menschen machen. Eine Einführung (2nd ed.). Baltmannsweiler: Schneider Verlag. Lempert, W. (2006). Berufliche Sozialisation. Persönlichkeitsentwicklung in der betrieblichen Ausbildung und Arbeit. Baltmannsweiler: Schneider Verlag. Lempert, W. (2007a). Vom “impliziten Wissen” zur soziotopologisch reflektierten Theorie. Ermunterung zur Untertunnelung einer verwirrenden Kontroverse. Zeitschrift für Berufs- und Wirtschaftspädagogik, 103(4), 581–596.
Lempert, W. (2007b). Nochmals: Beruf ohne Zukunft? Berufspädagogik ohne Beruf? Postskriptum zur Diskussion des Buches von Thomas Kurtz “Die Berufsform der Gesellschaft”. Zeitschrift für Berufs- und Wirtschaftspädagogik, 103(3), 461–467. Lenger, A. (2016). Der ökonomische Fachhabitus – professionstheoretische Konsequenzen für das Studium der Wirtschaftswissenschaften. In G. Minnameier (Ed.), Ethik und Beruf. Interdisziplinäre Zugänge (pp. 157–176). Bielefeld: wbv. Lenk, H., & Ropohl, G. (Eds.). (1987). Technik und Ethik. Stuttgart: Reclam.
Lenzen, D., & Blankertz, H. (1973). Didaktik und Kommunikation: Zur strukturalen Begründung der Didaktik und zur didaktischen Struktur sprachlicher Interaktion. Frankfurt am Main: Athenäum. Lüdtke, G. (1974). Harmonisierung und Objektivierung von Prüfungen. PAL Schriftenreihe Bd. 1. Konstanz: Christiani. Lutz, B. (1988). Zum Verhältnis von Analyse und Gestaltung der sozialwissenschaftlichen Technikforschung. In F. Rauner (Ed.), “Gestaltung” – eine neue gesellschaftliche Praxis. Bonn: Neue Gesellschaft. Martens, T. (2015). Wie kann berufliche Kompetenz gemessen werden? Das Beispiel COMET. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz. COMET auf dem Prüfstand. Berlin, Münster: LIT. Martens, T., Heinemann, L., Maurer, A., Rauner, F., Ji, L., & Zhao, Z. (2011). Ergebnisse zum Messverfahren [COMET]. In: Rauner, F. et al. Messen beruflicher Kompetenzen. Bd. III. Drei Jahre COMET-Testerfahrung, 90–126. Martens, T., & Rost, J. (1998). Der Zusammenhang von wahrgenommener Bedrohung durch Umweltgefahren und der Ausbildung von Handlungsintentionen. Zeitschrift für Experimentelle Psychologie, 45(4), 345–364. Martens, T., & Rost, J. (2009). Zum Zusammenhang von Struktur und Modellierung beruflicher Kompetenzen. In F. Rauner, B. Haasler, L. Heinemann, & P. Grollmann (Eds.), Messen beruflicher Kompetenzen. Bd. I. Grundlagen und Konzeption des COMET-Projekts (pp. 91–95). Münster: LIT. Mayring, P. (1988). Qualitative Inhaltsanalyse. Grundlagen und Techniken (2. Auflage). Weinheim: Deutscher Studien Verlag. McCormick, E. (1979). Job analysis. Methods and applications. New York: Amacom. Meyer, P. J., & Allen, N. J. (1991). A three-component conceptualization of organizational commitment. Human Resource Management Review, 1, 61–89. Meyer-Abich, K. N. (1988). Wissenschaft für die Zukunft. Holistisches Denken in ökologischer und gesellschaftlicher Verantwortung. München: Beck. Minnameier, G. (2001). Bildungspolitische Ziele, wissenschaftliche Theorien und methodisch-praktisches Handeln – auch ein Plädoyer für “Technologieführerschaft” im Bildungsbereich. In H. Heid, G. Minnameier, & E. Wuttke (Eds.), Fortschritte in der Berufsbildung? ZBW. Beiheft 16 (pp. 13–29). Stuttgart: Steiner. Monseur, C., Baye, A., Lafontaine, D., & Quittre, V. (2011). PISA test format assessment and the local independence assumption. IERI Monographs Series. Issues and Methodologies in Large-Scale Assessments, 4, 131–158. http://hdl.handle.net/2268/103137 Müller, W. (1995). Der Situationsfilm – Ein Medium partizipativer Organisationsentwicklung. In G. Dybowski, H. Pütz, & F. Rauner (Hg.), Berufsbildung und Organisationsentwicklung. „Perspektiven, Modelle, Forschungsfragen“ (pp. 333–344). Bremen: Donat. Müller-Fohrbrodt, G. (1973). Wie sind Lehrer wirklich? Ideale – Vorurteile – Fakten. Stuttgart: Klett. National Automotive Technicians Education. (1996). ASE certification for automobile technician training programs. Herndon, VA. Nehls, H., & Lakies, T. (2006). Berufsbildungsgesetz. Basiskommentar. Frankfurt: Bund. Neuweg, G. H. (1999). Könnerschaft und implizites Wissen. Münster: Waxmann. Neuweg, G. H. (Ed.). (2000). Wissen – Können – Reflexion. Ausgewählte Verhältnisbestimmungen. Innsbruck, Wien, München: Studien-Verlag. Nickolaus, R., Gschwendtner, T., & Abele, S. (2009). Die Validität von Simulationsaufgaben am Beispiel der Diagnosekompetenz von Kfz-Mechatronikern. Stuttgart: Institut für Berufspädagogik. Nida-Rümelin, J. (2011).
Die Optimierungsfalle. Philosophie einer humanen Ökonomie. München: Irisiana. Norton, R. E. (1997). DACUM handbook. The National Center on Education and Training for Employment. Columbus, OH: The Ohio State University.
OECD. (2009). Länderbericht zur Berufsbildung in der Schweiz. Learning for Jobs, OECD Studie zur Berufsbildung Schweiz. http://www.bbt.admin.ch/themen/internationales/01020/index.html?lang=de (Zugriff 11.01.2016). Oser, F. (1997). Standards der Lehrerbildung. Teil 1. Berufliche Kompetenzen, die hohen Qualitätsmerkmalen entsprechen. Beiträge zur Lehrerbildung, 15(1), 26–37. Oser, F., Curcio, G. P., & Düggeli, A. (2007). Kompetenzmessung in der Lehrerbildung als Notwendigkeit – Fragen und Zusammenhänge. Beiträge zur Lehrerbildung, 25(1), 14–26. Ott, B. (1998). Ganzheitliche Berufsbildung. Theorie und Praxis handlungsorientierter Techniklehre in Schule und Betrieb (2nd ed.). Stuttgart: Steiner. Pätzold, G. (1995). Vermittlung von Fachkompetenz in der Berufsbildung. In R. Arnold & A. Lipsmeier (Eds.), Handbuch der Berufsbildung (pp. 157–170). Opladen: Leske + Budrich. Pätzold, G., Drees, G., & Thiele, H. (1998). Kooperation in der beruflichen Bildung. Zur Zusammenarbeit von Ausbildern und Berufsschullehrern im Metall- und Elektrobereich. Baltmannsweiler: Schneider Verlag Hohengehren. Wirtschaft und Berufserziehung, 4, 89/98. Pätzold, G., & Walden, G. (Eds.). (1995). Lernorte im dualen System der Berufsbildung. Reihe: Berichte zur beruflichen Bildung, Heft 177. Hg. vom BIBB. Bielefeld: W. Bertelsmann. Petermann, W. (1995). Fotografie und Filmanalyse. In U. Flick, E. von Kardoff, H. Keupp, L. von Rosenstiel, & S. Wolff (Hg.), Handbuch qualitative Sozialforschung. Grundlagen, Konzepte, Methoden und Anwendungen (2. Aufl., pp. 269–272). Weinheim: Beltz. Petersen, A. W., & Wehmeyer, C. (2001). Evaluation der neuen IT-Berufe. Forschungskonzepte und Ergebnisse der bundesweiten BIBB-IT-Studie. In A. W. Petersen, F. Rauner, & F. Stuber (Eds.), IT-gestützte Facharbeit. Gestaltungsorientierte Berufsbildung. Reihe: Bildung und Arbeitswelt (Vol. 4, pp. 283–310). Baden-Baden: Nomos. Piaget, J. (1973). Äquilibration der kognitiven Strukturen. Stuttgart: Klett. Piening, D., Frenzel, J., Heinemann, L., & Rauner, F. (2014). Berufliche Kompetenzen messen. Das Modellversuchsprojekt KOMET NRW. Zweiter Zwischenbericht (Juli 2014). IBB, Universität Bremen. http://www.ibb.uni-bremen.de. Piening, D., & Rauner, F. (2010). Umgang mit Heterogenität. Eine Handreichung des Projektes KOMET. Bremen: Universität Bremen I:BB. Piening, D., & Rauner, F. (2014). Kosten, Nutzen und Qualität der Berufsausbildung. Berlin: LIT. Pies, I. (2016). Individualethik versus Institutionenethik? – Zur Moral (in) der Marktwirtschaft. In G. Minnameier (Ed.), Ethik und Beruf. Interdisziplinäre Zugänge (pp. 17–39). Bielefeld: Bertelsmann Verlag. Polanyi, M. (1966a). The tacit dimension. London: Routledge & Kegan Paul. Polanyi, M. (1966b). The tacit dimension. Garden City: Doubleday & Company. Polanyi, M. (1985). Implizites Wissen. Frankfurt/Main: Suhrkamp (orig.: The Tacit Dimension. 1966). Posner, G. J., Strike, K. A., Hewson, P. W., & Gertzog, W. A. (1982). Accommodation of a scientific conception: towards a theory of conceptual change. Science Education, 66(2), 201–227. Preacher, K. J., & Hayes, A. F. (2004). SPSS and SAS procedures for estimating indirect effects in simple mediation models. Behavior Research Methods, Instruments, & Computers, 36(4), 717–731. Prenzel, M., Baumert, J., Blum, W., Lehmann, R., Leutner, D., Neubrand, M., et al. (Eds.). (2004). PISA 2003. Der Bildungsstand der Jugendlichen in Deutschland – Ergebnisse des zweiten internationalen Vergleichs. Münster: Waxmann. Przygodda, K., & Bauer, W. (2004).
Ansätze berufswissenschaftlicher Qualifikationsforschung im BLK-Programm “Neue Lernkonzepte in der dualen Berufsausbildung”. In F. Rauner (Ed.), Qualifikationsforschung und Curriculum. Analysieren und Gestalten beruflicher Arbeit und Bildung. Reihe: Berufsbildung, Arbeit und Innovation (Vol. 25, pp. 61–79). Bielefeld: W. Bertelsmann. Rademacker, H. (1975). Analyse psychometrischer Verfahren der Erfolgskontrolle und der Leistungsmessung hinsichtlich ihrer didaktischen Implikationen. In Programmierte Prüfungen:
Problematik und Praxis. Schriften zur Berufsbildungsforschung (Vol. 25, pp. 63–100). Hannover: Schroedel. Randall, D. M. (1990). The consequences of organizational commitment. Administrative Science Quarterly, 22, 46–56. Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests (2nd ed.). Chicago: University of Chicago Press. Rauner, F. (1986). Elektrotechnik Grundbildung. Überlegungen zur Techniklehre im Schwerpunkt Elektrotechnik der Kollegschule. Soest: Landesinstitut für Schule und Weiterbildung. Rauner, F. (1988). Die Befähigung zur (Mit)Gestaltung von Arbeit und Technik als Leitidee beruflicher Bildung. In G. Heidegger, P. Gerds, & K. Weisenbach (Eds.), Gestaltung von Arbeit und Technik – Ein Ziel beruflicher Bildung (pp. 32–51). Frankfurt/Main, New York: Campus. Rauner, F. (1995). Gestaltung von Arbeit und Technik. In R. Arnold & A. Lipsmeier (Eds.), Handbuch der Berufsbildung (pp. 50–64). Opladen: Leske + Budrich. Rauner, F. (1997). Automobil-Service im internationalen Vergleich. In F. Rauner, G. Spöttl, & W. Micknass (Eds.), Service, Qualifizierung und Vertrieb im internationalen Automobil-Sektor: Ergebnisse des Automobil-Welt-Congresses am 15. und 16. Oktober 1996 in München (pp. 35–47). Bremen: Donat-Verlag. Rauner, F. (1999). Entwicklungslogisch strukturierte berufliche Curricula: Vom Neuling zur reflektierten Meisterschaft. Zeitschrift für Berufs- und Wirtschaftspädagogik (ZBW), 95(3), 424–446. Stuttgart: Franz Steiner Verlag. Rauner, F. (2000). Zukunft der Facharbeit. In J.-P. Pahl, F. Rauner, & G. Spöttl (Eds.), Berufliches Arbeitsprozesswissen (pp. 49–60). Baden-Baden: Nomos. Rauner, F. (2002a). Qualifikationsforschung und Curriculum. In M. Fischer & F. Rauner (Eds.), Lernfeld: Arbeitsprozess (pp. 317–339). Baden-Baden: Nomos. Rauner, F. (2002b). Berufliche Kompetenzentwicklung – vom Novizen zum Experten. In P. Dehnbostel, U. Elsholz, J. Meister, & J. Meyer-Henk (Eds.), Vernetzte Kompetenzentwicklung. Alternative Positionen zur Weiterbildung (pp. 111–132). Berlin: Edition Sigma. Rauner, F. (2004). Eine transferorientierte Modellversuchstypologie – Anregung zur Wiederbelebung der Modellversuchspraxis als einem Innovationsinstrument der Bildungsreform (Teil 2). Zeitschrift für Berufs- und Wirtschaftspädagogik, 100, 424–447. Rauner, F. (2004a). Qualifikationsforschung und Curriculum. Analysieren und Gestalten beruflicher Arbeit und Bildung. In Berufsbildung, Arbeit und Innovation (Reihe). Band 25 Forschungsberichte. Bielefeld: W. Bertelsmann Verlag. Rauner, F. (2004b). Praktisches Wissen und berufliche Handlungskompetenz. Reihe: ITB-Forschungsberichte, Nr. 14. Universität Bremen: ITB. Rauner, F. (2005). Offene dynamische Kernberufe als Dreh- und Angelpunkt für eine europäische Berufsbildung. In P. Grollmann, W. Kruse, & F. Rauner (Eds.), Europäische Berufliche Bildung (pp. 17–31). Münster: LIT. Rauner, F. (2006). Qualifikations- und Ausbildungsordnungsforschung. In F. Rauner (Hg.), Handbuch Berufsbildungsforschung. 2. aktualisierte Auflage (pp. 240–247). Bielefeld: W. Bertelsmann. Rauner, F. (2007). Praktisches Wissen und berufliche Handlungskompetenz. In Europäische Zeitschrift für Berufsbildung. Nr. 40, 2007/1 (pp. 57–72). Thessaloniki: Cedefop – Europäisches Zentrum für die Förderung der Berufsbildung. Rauner, F. (2015a). Messen beruflicher Kompetenz von Berufsschullehrern. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung – Methoden zum Erfassen und Entwickeln beruflicher Kompetenz.
KOMET auf dem Prüfstand (pp. 413–436). Münster: LIT. Rauner, F. (2015b). Machbarkeitsstudie. Anwenden des KOMET-Testverfahrens für Prüfungen in der beruflichen Bildung. (Unter Mitarbeit von Klaus Bourdick, Jenny Frenzel, Dorothea Piening). Bremen: Universität Bremen, I:BB.
Rauner, F. (2017). Grundlagen der beruflichen Bildung. Mitgestalten der Arbeitswelt. Bielefeld: wbv. Rauner, F. (2018a). Der Weg aus der Akademisierungsfalle. Die Architektur paralleler Bildungswege. Münster: LIT. Rauner, F., & Piening, D. (2014). Heterogenität der Kompetenzausprägung in der beruflichen Bildung. A+B-Forschungsberichte Nr. 14/2014. Bremen: Universität Bremen, I:BB. Rauner, F., Bourdick, H., Frenzel, J., & Piening, D. (2015). Anwendung des COMET-Testverfahrens für Prüfungen in der beruflichen Bildung. Machbarkeitsstudie. Bremen: I:BB. Rauner, F., & Bremer, R. (2004). Bildung im Medium beruflicher Arbeitsprozesse. Die berufspädagogische Entschlüsselung beruflicher Kompetenzen im Konflikt zwischen bildungstheoretischer Normierung und Praxisaffirmation. In: Bildung im Medium beruflicher Arbeit. Sonderdruck. ZfPäd, 50(2), 149–161. Rauner, F., Frenzel, J., Piening, D., & Bachmann, N. (2015). Engagement und Ausbildungsorganisation. Einstellungen Sächsischer Auszubildender zu ihrem Beruf und ihrer Ausbildung. Eine Studie im Auftrage der Landesinitiative “Steigerung der Attraktivität, Qualität und Rentabilität der dualen Berufsausbildung in Sachsen”. Bremen: Universität Bremen I:BB. Rauner, F., Grollmann, P., & Martens, T. (2007). Messen beruflicher Kompetenz(entwicklung). ITB-Forschungsbericht 21. Bremen: Institut Technik und Bildung. Rauner, F., Schön, M., Gerlach, H., & Reinhold, M. (2001). Berufsbildungsplan für den Industrieelektroniker. ITB-Arbeitspapiere 31. Bremen: Universität Bremen, ITB. Rauner, F., & Spöttl, G. (2002). Der Kfz-Mechatroniker – Vom Neuling zum Experten. Bielefeld: Bertelsmann. Rauner, F., Zhao, Z., & Ji, L. (2010). Empirische Forschung zum Messen Beruflicher Kompetenz der Auszubildenden und Studenten. Beijing: Verlag Tsinghua Universität. Reckwitz, A. (2003, August). Grundelemente einer Theorie sozialer Praktiken: eine sozialtheoretische Perspektive. Zeitschrift für Soziologie, 32(4), 282–301. Ripper, J., Weisschuh, B., & Daimler Chrysler, A. G. (1999). Ausbildung im Dialog: das ganzheitliche Beurteilungsverfahren für die betriebliche Berufsausbildung. Konstanz: Christiani. Röben, P. (2004). Kompetenzentwicklung durch Arbeitsprozesswissen. In K. Jenewein, P. Knauth, P. Röben, & G. Zülch (Eds.), Kompetenzentwicklung in Arbeitsprozessen (pp. 11–34). Baden-Baden: Nomos. Röben, P. (2006). Berufswissenschaftliche Aufgabenanalyse. In F. Rauner (Ed.), Handbuch Berufsbildungsforschung. 2. aktualisierte Aufl. (pp. 606–611). Bielefeld: W. Bertelsmann. Rost, J. (1999). Was ist aus dem Rasch-Modell geworden? Psychologische Rundschau, 50(3), 171–182. Rost, J. (2004a). Lehrbuch Testtheorie – Testkonstruktion (2nd ed.). Bern: Huber. Rost, J. (2004b). Psychometrische Modelle zur Überprüfung von Bildungsstandards anhand von Kompetenzmodellen. Zeitschrift für Pädagogik, 50(5), 662–678. Rost, J., & von Davier, M. (1994). A conditional item-fit index for Rasch models. Applied Psychological Measurement, 18(2), 171–182. Roth, H. (1971). Pädagogische Anthropologie. Bd. II: Entwicklung und Erziehung. Grundlagen einer Entwicklungspädagogik. Hannover: Schroedel. Sachverständigenkommission Arbeit und Technik. (1986). Forschungsperspektiven zum Problemfeld Arbeit und Technik. Bonn: Verlag Neue Gesellschaft. Sachverständigenkommission Arbeit und Technik. (1988). Arbeit und Technik. Ein Forschungs- und Entwicklungsprogramm. Bonn: Verlag Neue Gesellschaft. Schecker, H., & Parchmann, I. (2006). Modellierung naturwissenschaftlicher Kompetenz.
Zeitschrift für Didaktik der Naturwissenschaften (ZfDN), 12, 45–66. Scheele, B. (1995). Dialogische Hermeneutik. In U. Flick, E. von Kardoff, H. Keupp, L. von Rosenstiel, & S. Wolff (Hg.), Handbuch Qualitativer Sozialforschung (pp. 274–276). Weinheim: Beltz. Schein, E. (1973). Professional education. New York: McGraw-Hill.
Schelten, A. (1994). Einführung in die Berufspädagogik (2nd ed.). Stuttgart: Franz Steiner. Schelten, A. (1997). Testbeurteilung und Testerstellung. Stuttgart: Franz Steiner. Schmidt, H. (1995). Berufsbildungsforschung. In R. Arnold & A. Lipsmeier (Eds.), Handbuch der Berufsbildung (pp. 482–491). Opladen: Leske+Budrich. Schoen, D. A. (1983). The reflective practitioner. How professionals think in action. New York: Basic Books, HarperCollins Publishers. Scholz, T. (2013). Beitrag des Koordinators der Industriemechaniker-Arbeitsgruppe. In: Forschungsgruppe Berufsbildungsforschung (I:BB): Berufliche Kompetenzen messen – das Modellversuchsprojekt KOMET (Metall). Abschlussbericht. Bremen: Universität Bremen, I:BB. Scholz, T. (2015). Warum das KOMET-Projekt “Industriemechaniker (Hessen)” eine so unerwartete Dynamik entfaltete. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: KOMET auf dem Prüfstand (pp. 149–161). Berlin: LIT. Schreier, N. (2000). Integration von Arbeiten und Lernen durch eine arbeitsprozessorientierte Qualifizierungskonzentration beim Einsatz tutorieller Diagnosesysteme im Kfz-Service. In: Pahl, J.-P., Rauner, F., Spöttl, G. (Hg.) Berufliches Arbeitsprozesswissen. Ein Forschungsgegenstand der Berufsfeldwissenschaften (pp. 289–300), Baden-Baden. Sennett, R. (1998). Der flexible Mensch. Die Kultur des neuen Kapitalismus (Originalausgabe: The Corrosion of Character. New York). Berlin. Sennett, R. (2008). Handwerk. Berlin: Berlin-Verlag (aus dem Amerikanischen übersetzt; Original: The Craftsman. New Haven and London: Yale University Press). Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86(2), 420–428. Skowronek, H. (1969). Lernen und Lernfähigkeit. München: Juventa. Skule, S., & Reichborn, A. N. (2002). Learning-conducive work. A survey of learning conditions in Norwegian workplaces. Luxembourg: Office for Official Publications of the European Communities. Spöttl, G. (2006). Experten-Facharbeiter-Workshops. In F. Rauner (Ed.), Handbuch Berufsbildungsforschung (2nd ed., pp. 611–616). Bielefeld: W. Bertelsmann. Stegemann, C., von Eerde, K., & Piening, D. (2015). KOMET Nordrhein-Westfalen: Erste Erfahrungen in einem kaufmännischen Berufsfeld. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: KOMET auf dem Prüfstand (pp. 127–136). Berlin: LIT. Sternberg, R. J., & Grigorenko, E. L. (Eds.). (2003). The psychology of abilities, competencies, and expertise. Cambridge: Cambridge University Press. Steyer, R., & Eyd, M. (2003). Messen und Testen. Berlin: Springer. Straka, A., Meyer-Siever, K., & Rosendahl, J. (2006). Laborexperimente und Quasi-Experimente. In F. Rauner (Ed.), Handbuch Berufsbildungsforschung. 2. aktual. Aufl. (pp. 647–652). Bielefeld: wbv. Stuart, M. (2010). The national skills development handbook 2010/11. Rainbow SA. Suppes, P., & Zinnes, J. L. (1963). Basic measurement theory. In R. D. Luce et al. (Eds.), Handbook of mathematical psychology. I (pp. 1–76). New York: Wiley. Taylor, I. A. (1975). An emerging view of creative actions. In I. A. Taylor & J. W. Getzels (Eds.), Perspectives in creativity (pp. 297–325). Chicago, IL: Aldine. Tenorth, H.-E. (2009). Ideen und Konzepte von Bildungsstandards. In R. Wernstedt & M. John-Ohnesorg (Eds.), Bildungsstandards als Instrument schulischer Qualitätsentwicklung (pp.
13–16). Berlin: Friedrich-Ebert-Stiftung. Terhart, E. (1998). Lehrerberuf. Arbeitsplatz, Biografie und Profession. In H. Altrichter et al. (Eds.), Handbuch der Schulentwicklung (pp. 560–585). Innsbruck, Weinheim: Studienverlag. Tiemeyer, E. (2015). Nordrhein-Westfalen klinkt sich ein. Ziele und erste Erfahrungen mit einem ambitionierten COMET-Projekt. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: KOMET auf dem Prüfstand (pp. 73–91). Berlin: LIT.
Tomaszewski, T. (1981). Struktur, Funktion und Steuerungsmechanismus menschlicher Tätigkeit. In T. Tomaszewski (Ed.), Zur Psychologie der Tätigkeit (pp. 11–33). Berlin: Deutscher Verlag der Wissenschaft. Ulich, E. (1994). Arbeitspsychologie (3rd ed.). Stuttgart, Zürich. Ulrich, O. (1987). Technikfolgen und Parlamentsreform. Plädoyer für mehr parlamentarische Kompetenz bei der Technikgestaltung. In: Aus Politik und Zeitgeschichte. Beilage zu „Das Parlament“ v. 9.5.1987, 15–25. VDI. (1991). Verband deutscher Ingenieure. Technikbewertung. Begriffe und Grundlagen. VDI 3780, März 1990. Volpert, W., Oesterreich, R., Gablenz-Kollakowicz, S., Grogoll, T., & Resch, M. (1983). Verfahren zur Ermittlung von Regulationserfordernissen in der Arbeitstätigkeit (VERA). Köln. Volpert, W. (1987). Psychische Regulation von Arbeitstätigkeiten. In U. Kleinbeck & J. Rutenfranz (Eds.), Enzyklopädie der Psychologie, Themenbereich D, Serie III (Vol. 1, pp. 1–42). Göttingen: Hogrefe. Volpert, W. (2003). Wie wir handeln, was wir können. Ein Disput als Einführung in die Handlungspsychologie (3rd ed.). Sottrum: artefact. von Davier, M. (1997). Bootstrapping goodness-of-fit statistics for sparse categorical data. Results of a Monte Carlo study. Methods of Psychological Research Online, 2(2), 29–48. von Davier, M., & Carstensen, C. H. (Eds.). (2007). Multivariate and mixture distribution Rasch models – extensions and applications. New York: Springer. von Davier, M., & Rost, J. (1996). Die Erfassung transsituativer Copingstile durch Stimulus-Response-Inventare. Diagnostica, 42(4), 313–331. Vosniadou, S. (2008). International handbook of research on conceptual change (2nd ed.). New York: Routledge. Weber, M. (1922). Gesammelte Aufsätze zur Religionssoziologie I (5. Auflage 1982). Tübingen: UTB. Wehner, T., & Dick, M. (2001). Die Umbewertung des Wissens in der betrieblichen Lebenswelt: Positionen der Arbeitspsychologie und betroffener Akteure. In Wissen in Unternehmen. Konzepte, Maßnahmen, Methoden (pp. 89–117). Berlin: Erich Schmidt. Weinert, F. E. (1996). “Der gute Lehrer”, die “gute Lehrerin” im Spiegel der Wissenschaft. Beiträge zur Lehrerbildung, 14(2), 141–150. www.bzl-online.ch. Weinert, F. E. (2001). Concept of competence: A conceptual clarification. In D. S. Rychen & L. H. Salganik (Eds.), Defining and selecting key competencies (pp. 45–65). Seattle: Hogrefe & Huber. Weiß, R. (2011). Prüfungen in der beruflichen Bildung. In: Severing, E., Weiß, R. (Hg.). Prüfungen und Zertifizierungen in der beruflichen Bildung, Bonn. http://www.bibb.de/dokumente/pdf/a12_voevz_agbfn_10_weiss_1.pdf Weniger, E. (1957). Die Eigenständigkeit der Erziehung in Theorie und Praxis. Weinheim: Beltz. Winther, E., & Achtenhagen, F. (2008). Kompetenzstrukturmodell für die kaufmännische Ausbildung. Adaptierbare Forschungslinien und theoretische Ausgestaltung. Zeitschrift für Berufs- und Wirtschaftspädagogik, 104, 511–538. Wirtz, M., & Caspar, F. (2002). Beurteilerübereinstimmung und Beurteilerreliabilität. Göttingen: Hogrefe. Womack, J. P., Jones, D. T., & Roos, D. (1990). The machine that changed the world. New York, Oxford, Singapore, Sydney: Macmillan. Womack, J. P., Jones, D. T., & Roos, D. (Eds.). (1991). Die zweite Revolution in der Automobilindustrie: Konsequenzen aus der weltweiten Studie aus dem Massachusetts Institute of Technology. Frankfurt/Main, New York: Campus Verlag. Wosnitza, M., & Eugster, B. (2001). MIZEBA – ein berufsfeldübergreifendes Instrument zur Messung der betrieblichen Ausbildungssituation?
Eine Validierung in der gewerblich-technischen Ausbildung. Empirische Pädagogik, 15(3), 411–426. Wyman, N. (2015). How to find wealth and success by developing the skills companies actually need. New York: Crown Business.
Young, M. (2007). Auf dem Weg zu einem europäischen Qualifikationsrahmen: Einige kritische Bemerkungen. In P. Grollmann, G. Spöttl, & F. Rauner (Eds.), Europäisierung beruflicher Bildung – eine Gestaltungsaufgabe. Hamburg: LIT. Young, M. (2009). National qualifications frameworks: Their feasibility for effective implementation in developing countries. Skill Working Paper No. 22. Geneva: ILO. Zentralverband der Elektrotechnischen Industrie (ZVEI). (1973). Ausbildungs-Handbuch für die Stufenausbildung elektrotechnischer Berufe (Vol. 7, 2nd ed.). Frankfurt/Main: ZVEI-Schriftenreihe. Zhao, Z. (2014). KOMET-China: Die Schritte auf dem Weg zu einem nationalen Schlüsselprojekt der Qualitätssicherung in der Beruflichen Bildung. Zeitschrift für Berufs- und Wirtschaftspädagogik, 110(3), 442–448. Zhao, Z. (2015). Schritte auf dem Weg zu einer Kompetenzentwicklung für Lehrer und Dozenten beruflicher Bildung in China. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung (pp. 437–449). Münster: LIT. Zhao, Z., Rauner, F., & Zhou, Y. (2015). Messen von beruflicher Kompetenz von Auszubildenden und Studierenden des Kfz-Servicesektors im internationalen Vergleich: Deutschland – China. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET auf dem Prüfstand (pp. 393–410). Berlin: LIT. Zhao, Z., Zhang, Z., & Rauner, F. (2016). KOMET based professional competence assessment for VET teachers in China. In M. Pilz (Ed.), Youth in transition from school to work – vocational education and training (VET) in times of economic crises. Dordrecht: Springer. Zhao, Z., & Zhuang, R. (2012). Research and development of the curriculum for the secondary teachers’ qualification. Education and Training, 5, 12–15. Zhao, Z., & Zhuang, R. (2013). Messen beruflicher Kompetenz von Auszubildenden und Studierenden berufsbildender (Hoch)Schulen in China. Zeitschrift für Berufs- und Wirtschaftspädagogik, 109(1), 132–140. Zhou, Y., Rauner, F., & Zhao, Z. (2015). Messen beruflicher Kompetenz von Auszubildenden und Studierenden des Kfz-Service-Sektors im internationalen Vergleich: Deutschland – China. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz. COMET auf dem Prüfstand (pp. 393–410). Münster: LIT. Zhuang, R., & Ji, L. (2015). Analyse der interkulturellen Anwendung der COMET-Kompetenzdiagnostik. In M. Fischer, F. Rauner, & Z. Zhao (Eds.), Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz: COMET auf dem Prüfstand (pp. 341–352). Berlin: LIT. Zhuang, R., & Zhao, Z. (2012). Empirische Forschung zum Messen Beruflicher Kompetenz der Auszubildenden und Studenten. Peking: Verlag Tsinghua Universität. Zimmermann, M., Wild, K.-P., & Müller, W. (1999). Das “Mannheimer Inventar zur Erfassung betrieblicher Ausbildungssituationen” (MIZEBA). Zeitschrift für Berufs- und Wirtschaftspädagogik, 95(3), 373–402. Zöller, A., & Gerds, P. (Eds.). (2003). Qualität sichern und steigern. Personal- und Organisationsentwicklung als Herausforderung beruflicher Schulen (pp. 333–355). Bielefeld: Bertelsmann.

List of COMET Publications

Bd. I A+B 01/2008 Heinemann, L., Rauner, F. „Identität und Engagement: Konstruktion eines Instruments zur Beschreibung der Entwicklung beruflichen Engagements und beruflicher Identität“
A+B 01/2016 Rauner, F., Frenzel, J., Heinemann, L., Kalvelage, J., Zhou, Y. (2016). „Identität und Engagement: ein Instrument zur Beschreibung und zum Messen beruflicher Identität und beruflichen Engagements. A+B Forschungsberichte“ (2. vollständig überarbeitete Auflage des 01/2008) A+B 02/2009 Rauner, F., Heinemann, L., Haasler, B. „Messen beruflicher Kompetenz und beruflichen Engagements“ A+B 04/2009 Maurer, A., Rauner, F., Piening, D. „Lernen im Arbeitsprozess – ein nicht ausgeschöpftes Potenzial dualer Berufsausbildung“ A+B 10/2012 Rauner, F. „Multiple Kompetenz: Die Fähigkeit der holistischen Lösung beruflicher Aufgaben“ A+B 11/2012 Rauner, F. „Messen beruflicher Kompetenz von Berufsschullehrern“ A+B 12/2013 Rauner, F. „Überprüfen beruflicher Handlungskompetenz. Zum Zusammenhang von Prüfen und Kompetenzdiagnostik“ A+B 14/2014 Rauner, F., Piening, D. „Heterogenität der Kompetenzausprägung in der beruflichen Bildung“ A+B 15/2014 Fischer, M., Huber, K., Mann, E., Röben, P. „Informelles Lernen und dessen Anerkennung aus der Lernendenperspektive – Ergebnisse eines Projekts zur Anerkennung informell erworbener Kompetenzen in Baden-Württemberg“ A+B 16/2014 Rauner, F., Piening, D. „Kontextanalysen im KOMET-Forschungsprojekt: Erfassen der Testmotivation“ A+B 17/2014 Rauner, F., Piening, D., Frenzel, J. „Der Lernort Schule als Determinante beruflicher Kompetenzentwicklung“ A+B 18/2014 Rauner, F., Piening, D., Zhou, Y. „Stagnation der Kompetenzentwicklung – und wie sie überwunden werden kann“ A+B 19/2015 Rauner, F., Piening, D., Scholz, T. „Denken und Handeln in Lernfeldern. Die Leitidee beruflicher Bildung – Befähigung zur Mitgestaltung der Arbeitswelt – wird konkret“ A+B 20/2015 Rauner, F., Piening, D. „Die Qualität der Lernortkooperation“ A+B Forschungsberichte: Forschungsgruppe Berufsbildungsforschung (I:BB) (Hg.), Universität Bremen. KIT – Karlsruher Institut für Technologie, Institut für Berufspädagogik und Allgemeine Pädagogik. Carl von Ossietzky Universität Oldenburg, Institut für Physik/Technische Bildung. Pädagogische Hochschule Weingarten, Professur für Technikdidaktik. Blömeke, S., & Suhl, U. (2011). Modellierung von Lehrerkompetenzen. Nutzung unterschiedlicher IRT-Skalierungen zur Diagnose von Stärken und Schwächen deutscher Referendarinnen und Referendare im internationalen Vergleich. Zeitschrift für Erziehungswissenschaft, 13(2011), 473–505. Brüning, L., & Saum, T. (2006). Erfolgreich unterrichten durch Kooperatives Lernen. Strategien zur Schüleraktivierung. Essen: Neue Deutsche Schule Verlagsgesellschaft mbH. Fischer, M., Rauner, F., & Zhao, Z. (Eds.). (2015b). Kompetenzdiagnostik in der beruflichen Bildung. Methoden zum Erfassen und Entwickeln beruflicher Kompetenz. COMET auf dem Prüfstand. Münster: LIT. Rauner, F. (2018b). Berufliche Kompetenzdiagnostik mit COMET. Erfahrungen und Überraschungen aus der Praxis. Bielefeld: wbv.
Rauner, F. (2019a). Ausbildungsberufe. Berufliche Identität und Arbeitsethik. Eine Herausforderung für die Berufsentwicklung und die Berufsausbildung. Münster: LIT. Rauner, F. (2019b). Kreativität. Ein Merkmal der modernen Berufsbildung und wie sie gefördert werden kann. Münster: LIT. Rauner, F. (2020). Berufliche Umweltbildung zwischen Anspruch und Wirklichkeit. Eine Systemanalyse. Bielefeld: wbv. Rauner, F., Haasler, B., Heinemann, L., & Grollmann, P. (2009). Messen beruflicher Kompetenzen. Bd. 1. Grundlagen und Konzeption des KOMET-Projekts. Reihe Bildung und Arbeitswelt. Münster: LIT. Rauner, F., & Hauschildt, U. (2020). Die Stagnation der beruflichen Kompetenzentwicklung – und wie man sie überwinden kann. Grundlagen der Berufs- und Erwachsenenbildung (Vol. 87). Baltmannsweiler: Schneider Verlag Hohengehren. Rauner, F., & Heinemann, L. (2015). Messen beruflicher Kompetenzen. Bd. IV. Eine Zwischenbilanz des internationalen Forschungsnetzwerkes COMET. Reihe Bildung und Arbeitswelt. Münster: LIT. Rauner, F., Heinemann, L., Martens, T., Erdwien, B., Maurer, A., Piening, D., et al. (2011). Messen beruflicher Kompetenzen. Bd. III. Drei Jahre KOMET-Testerfahrung. Reihe Bildung und Arbeitswelt. Münster: LIT. Rauner, F., Heinemann, L., Maurer, A., Haasler, B., Erdwien, B., & Martens, T. (2013). Competence development and assessment in TVET (COMET). Theoretical framework and empirical results. Dordrecht, Heidelberg: Springer. Rauner, F., Frenzel, J., & Piening, D. (2015a). Machbarkeitsstudie: Anwendung des KOMET-Testverfahrens für Prüfungen in der beruflichen Bildung. Bremen: Universität Bremen, I:BB. Rauner, F., Frenzel, J., Piening, D., & Bachmann, N. (2015b). Engagement und Ausbildungsorganisation. Einstellungen Sächsischer Auszubildender zu ihrem Beruf und ihrer Ausbildung. Eine Studie im Auftrage der Landesinitiative „Steigerung der Attraktivität, Qualität und Rentabilität der dualen Berufsausbildung in Sachsen“. Bremen: Universität Bremen I:BB. Rauner, F., Heinemann, L., Piening, D., Haasler, B., Maurer, A., Erdwien, B., et al. (2009). Messen beruflicher Kompetenzen. Bd. II. Ergebnisse KOMET 2008. Reihe Bildung und Arbeitswelt. Münster: LIT. Rauner, F., Lehberger, J., & Zhao, Z. (2018). Messen beruflicher Kompetenzen. Bd. V. Auf die Lehrer kommt es an. Reihe Bildung und Arbeitswelt. Münster: LIT.

COMET-Berichte

Bortz, J., & Döring, N. (2003). Forschungsmethoden und Evaluation für Human- und Sozialwissenschaftler (3. Auflage). Berlin, Heidelberg: Springer. Brown, H. (2015). Competence measurement in South Africa: Teachers’ reactions to feedback on COMET results. In E. Smith, P. Gonon, & A. Foley (Eds.), Architectures for apprenticeship. Achieving economic and social goals (pp. 91–95). North Melbourne: Australian Scholarly Publishing. Bundesministerium für Bildung und Forschung (BMBF) (Hg.). (2006). Umsetzungshilfen für die Abschlussprüfungen der neuen industriellen und handwerklichen Elektroberufe. Intentionen, Konzeption und Beispiele (Entwicklungsprojekt). Stand: 30.12.2005. (Teil 1 der Abschlussprüfung); Stand: 09.01.2006. (Teil 2 der Abschlussprüfung). Manuskript. Dreyfus, H. L., & Dreyfus, S. E. (1987). Künstliche Intelligenz. Von den Grenzen der Denkmaschine und dem Wert der Intuition. Reinbek bei Hamburg: Rowohlt.
Fischer, M., & Witzel, A. (2008). Zum Zusammenhang von berufsbiographischer Gestaltung und beruflichem Arbeitsprozesswissen. In M. Fischer, & G. Spöttl (Hg.), Im Fokus: Forschungsperspektiven in Facharbeit und Berufsbildung. Strategien und Methoden der Berufsbildungsforschung (pp. 24–47). Frankfurt a. M.: Peter Lang. Fischer, R., & Hauschildt, U. (2015). Internationaler Kompetenzvergleich und Schulentwicklung. Das Projekt COMCARE bietet neue Ansatzmöglichkeiten. PADUA Fachzeitschrift für Pflegepädagogik, Patientenedukation und -bildung, 10(4), 233–241. Forschungsgruppe Berufsbildungsforschung (I:BB). (2015). KOMET NRW – Ein ambitioniertes Projekt der Qualitätssicherung und -entwicklung in der dualen Berufsausbildung. Bericht der Wissenschaftlichen Begleitung. Bremen: Universität Bremen, I:BB. Hauschildt, U. (2015). Me siento bien en mi centro de formación – I feel good at my training institution: Results of an international competence assessment in nursing. In E. Smith, P. Gonon, & A. Foley (Eds.), Architectures for apprenticeship. Achieving economic and social goals (pp. 100–104). North Melbourne: Australian Scholarly Publishing. Hauschildt, U., Brown, H., & Zungu, Z. (2013). Competence measurement and development in TVET: Result of the first COMET test in South Africa. In S. Akoojee, P. Gonon, U. Hauschildt, & C. Hofmann (Eds.), Apprenticeship in a globalised world. Premises, promises and pitfalls (pp. 177–184). Münster: LIT. Hauschildt, U., & Heinemann, L. (2013). Occupational identity and motivation of apprentices in a system of integrated dual VET. In L. Deitmer, U. Hauschildt, F. Rauner, & H. Zelloth (Eds.), The architecture of innovative apprenticeship. Technical and vocational education and training: Issues, concerns and prospects 18 (pp. 177–192). Dordrecht: Springer. Hauschildt, U., & Piening, D. (2013). Why apprentices quit: A German case study. In S. Akoojee, P. Gonon, U. Hauschildt, & C. Hofmann (Eds.), Apprenticeship in a globalised world. Premises, promises and pitfalls (pp. 199–202). Münster: LIT. Hauschildt, U., & Schumacher, J. (2014). COMCARE: Measurement and teaching of vocational competence, occupational identity and organisational commitment in health care occupations in Spain, Norway, Poland and Germany. Test instruments and documentations of results. Bremen: Universität Bremen, I:BB. Heinemann, L., & Rauner, F. (2011). Measuring vocational competences in electronic engineering: Findings of a large scale competence measurement project in Germany. In Z. Zhao, F. Rauner, & U. Hauschildt (Eds.), Assuring the acquisition of expertise. Apprenticeship in the modern economy (pp. 221–224). Peking: Foreign Language Teaching and Research Press. Heinemann, L., Maurer, A., & Rauner, F. (2011). Modellversuchsergebnisse im Überblick. In F. Rauner, L. Heinemann, A. Maurer, L. Ji, & Z. Zhao (Eds.), Messen beruflicher Kompetenz. Bd. III. Drei Jahre KOMET Testerfahrung (pp. 150–209). Münster: LIT. Ji, L., Rauner, F., Heinemann, L., & Maurer, A. (2011). Competence development of apprentices and TVET students: A Chinese-German comparative study. In Z. Zhao, F. Rauner, & U. Hauschildt (Eds.), Assuring the acquisition of expertise. Apprenticeship in the modern economy (pp. 217–220). Peking: Foreign Language Teaching and Research Press. Kunter, M. u. a. (2002). Pisa 2000 – Dokumentation der Erhebungsinstrumente. Berlin: Max-Planck-Institut für Bildungsforschung. Piening, D., Frenzel, J., Heinemann, L., & Rauner, F. (2014b). Berufliche Kompetenzen messen – Das Modellversuchsprojekt KOMET NRW.
1. Zwischenbericht. Bremen: Universität Bremen, I:BB. Piening, D., Frenzel, J., Heinemann, L., & Rauner, F. (2014c). Berufliche Kompetenzen messen – Das Modellversuchsprojekt KOMET NRW. 2. Zwischenbericht. Bremen: Universität Bremen, I:BB. Piening, D., & Rauner, F. (2015a). Messen und Entwicklung von beruflicher Kompetenz in NRW (KOMET NRW). Teilprojekt Elektroniker/-in/Abschlussbericht. Bremen: Universität Bremen, I:BB. Piening, D., & Rauner, F. (2015b). Messen und Entwicklung von beruflicher Kompetenz in NRW (KOMET NRW). Teilprojekt Industriemechaniker/-in/Abschlussbericht. Bremen: Universität Bremen, I:BB.
Piening, D., & Rauner, F. (2015c). Messen und Entwicklung von beruflicher Kompetenz in NRW (KOMET NRW). Teilprojekt Kaufmann/-frau für Spedition und Logistikdienstleistung und Industriekaufmann/-frau/Abschlussbericht. Bremen: Universität Bremen, I:BB. Piening, D., & Rauner, F. (2015d). Messen und Entwicklung von beruflicher Kompetenz in NRW (KOMET NRW). Teilprojekt Kfz-Mechatroniker/-in/Abschlussbericht. Bremen: Universität Bremen, I:BB. Piening, D., & Rauner, F. (2015e). Messen und Entwicklung von beruflicher Kompetenz in NRW (KOMET NRW). Teilprojekt Medizinische/-r Fachangestellte/-r/Abschlussbericht. Bremen: Universität Bremen, I:BB. Piening, D., & Rauner, F. (2015f). Messen und Entwicklung von beruflicher Kompetenz in NRW (KOMET NRW). Teilprojekt Tischler/-in/Abschlussbericht. Bremen: Universität Bremen, I:BB. Piening, D., & Rauner, F. (2015g). Umgang mit Heterogenität. Eine Handreichung des Projektes KOMET. Bremen: Universität Bremen I:BB. Rauner, F. (1999). Entwicklungslogisch strukturierte berufliche Curricula: Vom Neuling zur reflektierten Meisterschaft. In: ZBW – Zeitschrift für Berufs- und Wirtschaftspädagogik, 3 (95), 424–446. Rauner, F. (2006). Qualifikations- und Ausbildungsordnungsforschung. In F. Rauner (Hg.), Handbuch Berufsbildungsforschung. 2. aktualisierte Auflage (pp. 240–247). Bielefeld: W. Bertelsmann. Rauner, F. (2007). Praktisches Wissen und berufliche Handlungskompetenz. In Europäische Zeitschrift für Berufsbildung. Nr. 40, 2007/1 (pp. 57–72). Thessaloniki: Cedefop – Europäisches Zentrum für die Förderung der Berufsbildung. Rauner, F. (2013). Applying the COMET competence measurement and development model for VET teachers and trainers. In S. Akoojee, P. Gonon, U. Hauschildt, & C. Hofmann (Eds.), Apprenticeship in a globalised world. Premises, promises and pitfalls (pp. 181–184). Münster: LIT. Rauner, F. (2014). Berufliche Kompetenzen von Fachschulstudierenden der Fachrichtung Metall-Technik – eine KOMET-Studie (Hessen). Abschlussbericht. Bremen: Universität Bremen, I:BB. Rauner, F., & Piening, D. (2014). Heterogenität der Kompetenzausprägung in der beruflichen Bildung. A+B-Forschungsberichte Nr. 14/2014. Bremen: Universität Bremen, I:BB. Rauner, F., Frenzel, J., & Piening, D. (2015a). Machbarkeitsstudie: Anwendung des KOMET-Testverfahrens für Prüfungen in der beruflichen Bildung. Bremen: Universität Bremen, I:BB. Rauner, F., Frenzel, J., Piening, D., & Bachmann, N. (2015b). Engagement und Ausbildungsorganisation. Einstellungen Sächsischer Auszubildender zu ihrem Beruf und ihrer Ausbildung. Eine Studie im Auftrage der Landesinitiative „Steigerung der Attraktivität, Qualität und Rentabilität der dualen Berufsausbildung in Sachsen“. Bremen: Universität Bremen I:BB. Rauner, F., Heinemann, L., & Hauschildt, U. (2013). Measuring occupational competences: Concept, method and findings of the COMET project. In L. Deitmer, U. Hauschildt, F. Rauner, & H. Zelloth (Eds.), The architecture of innovative apprenticeship. Technical and vocational education and training: Issues, concerns and prospects 18 (pp. 159–176). Dordrecht: Springer. Rauner, F., Piening, D., & Bachmann, N. (2015). Messen und Entwicklung von beruflicher Kompetenz in den Pflegeberufen der Schweiz (KOMET Pflegeausbildung Schweiz): Abschlussbericht. Bremen: Universität Bremen, I:BB. Rauner, F., Piening, D., Fischer, R., & Heinemann, L. (2014). Messen und Entwicklung von beruflicher Kompetenz in den Pflegeberufen der Schweiz (COMET Pflege Schweiz): Ergebnis der 1. Testphase 2013. Bremen: Universität Bremen, I:BB.
Rauner, F., Piening, D., Heinemann, L., Hauschildt, U., & Frenzel, J. (2015). KOMET NRW – Ein ambitioniertes Projekt der Qualitätssicherung und -entwicklung in der dualen Berufsausbildung. Abschlussbericht: Zentrale Ergebnisse. Bremen: Universität Bremen, I:BB. Scholz, T., & Heinemann, L. (2013). COMET learning tasks in practice – how to make use of learning tasks at vocational schools. In S. Akoojee, P. Gonon, U. Hauschildt, & C. Hofmann (Eds.), Apprenticeship in a globalised world. Premises, promises and pitfalls (pp. 107–110). Münster: LIT.

Index

A Abele, S., 332 Achtenhagen, F., 13 Adolph, G., 53, 342, 424, 431, 432, 447 Aebli, H., 70 Akaike, H., 175, 178, 179 Allen, N.J., 85 Anderson, L.W., 14 Asendorpf, J., 116, 152, 161

B Babie, F., 2, 332 Bachmann, N., 144, 313, 315, 339, 374, 390 Baethge, M., 2, 5 Baethge-Kinsky, V., 2, 332 Baltes, D., 312 Baruch, Y., 82, 84 Bauer, W., 299, 344, 391, 460 Baumert, J., 8, 225, 390, 401 Baye, A., 158 Beaton, D., 185 Beck, U., 16 Becker, M., 44, 48 Becker, U., 312 Becker-Lenz, R., 352 Benner, P., 43, 44, 50, 51, 425 Bergmann, J.R., 30, 34, 49, 387 Bieri, L., 81 Blankertz, H., 3, 5, 42, 70, 79, 329, 338, 339, 346, 355, 425 BLK, 8, 282, 397 Blömeke, S., 392
Blum, W., 261, 266 BMBF, 78, 199, 200, 202–205 BMWi, 208 Böhle, F., 346, 455 Bombardier, C., 185 Borch, H., 194, 195, 201 Boreham, N.C., 346, 373, 387 Bortz, J., 128, 134, 151 Bourdick, H., 482, 483 Bozdogan, H., 148, 149, 162, 163 Brand, W., 14 Brater, M., 12, 73 Braverman, H., 5 Bremer, R., 19, 43, 44, 70, 249, 268, 342, 455 Brödner, P., 6 Brosius, F., 219, 235, 236 Brown, A., 80, 339 Brown, H., 69 Brown, J.S., 70, 387 Bruner, J.S., 70 Brüning, L., 458, 459, 461 Bungard, W., 24 Butler, P., 267 Bybee, R.W., 65, 66, 298, 400

C Calvitto, P., 345 Campbell, D.T., 251 Carey, S., 331 Carstensen, C.H., 156 Caspar, F., 152, 153 Cohen, A., 82, 84, 355
Connell, M.W., 22, 54, 55, 387, 394, 395 Cooley, M., 20 Corbett, J.M., 20 Cramer, H., 70 Crawford, M.B., 351 Curcio, G.P., 393

D Daimler Chrysler, A.G., 267 Davier, M. von, 150, 156, 162, 163 Degen, U., 6, 48 Dehnbostel, P., 46, 471 Deitmer, L., 282, 344 Dengler, K., 345 Deutscher Bundestag, 40, 342, 344, 424 Dewey, J., 347 DFG, 14, 61, 335, 352 Dick, M., 50 Döbrich, P., 267 Döring, N., 128, 134, 151, 262 Dörner, D., 250 Drees, G., 268 Drexel, I., 428 Dreyfus, H.L., 32, 43, 44, 73, 387 Düggeli, A., 393 Dürrenberger, G., 81 Dybowski, G., 344

E
Efron, B., 150
Emery, F.E., 21
Emery, M., 21
Erdwien, B., 118, 120, 131, 148, 150–154, 161, 291, 349, 405, 422
Erpenbeck, J., 47
Eugster, B., 267
Euler, D., 13
Eyd, M., 156

F
Fischer, B., 387
Fischer, M., 10, 46, 49, 70, 83, 312, 332, 346, 387
Fischer, R., 1, 249, 257, 259, 333, 335, 344, 345, 382, 474
Fleiss, J.L., 115, 116, 153
Flick, U., 250
Frank, H., 8
Frei, F., 23
Frenzel, J., 257, 313, 315, 339, 374, 378, 390
Freund, R., 387
Frey, A., 392
Frieling, E., 23, 34

G
Gablenz-Kollakowicz, S., 23
Ganguin, D., 6, 72, 344
Gardner, H., 10, 22, 52, 54, 55, 140, 356, 362, 387, 394, 395, 424
Garfinkel, H., 18, 49–52
Gäumann-Felix, K., 104, 144, 257, 336, 470–485
Georg, W., 84
Gerds, P., 344
Gerecht, M., 267
Gerlach, H., 44
Gerstenberger, F., 5
Gerstenmaier, J., 11
Giddens, A., 16
Gille, M., 312
Gillis, S., 345
Girmes-Stein, R., 70, 387
Glaser, B., 249
Granville, G., 7
Gravert, H., 7
Griffin, P., 345
Grigorenko, E.L., 126
Grob, U., 11
Grogoll, T., 23
Grollmann, P., 7, 53, 63, 84, 148, 249, 265, 332, 411, 421
Gruber, H., 66
Grünewald, U., 6
Gruschka, A., 43, 54, 79, 355, 387, 425
Gschwendtner, T., 332
Guillemin, F., 185
Guldimann, T., 374

H
Haase, P., 344
Haasler, B., 54, 70, 150–154, 265, 332, 422
Hacker, W., 6, 47, 72, 346, 402
Hackman, J.R., 21
Hartig, J., 54
Hastedt, H., 24
Hattie, J.A., 208, 336, 374, 380, 390, 407
Hauschildt, U., 69, 257, 259, 336, 474
Havighurst, R.J., 43, 70, 249, 425
Hayes, A.F., 234
Heeg, F.J., 129
Heermeiner, R., 342
Heid, H., 5, 351
Heidegger, G., 17, 342, 424
Heinemann, L., 69, 86, 257, 259, 265, 309, 312, 336, 339, 378, 405, 474
Heinz, W.R., 79, 80, 83, 332, 339, 354
Heinze, T., 8
Hellpach, W., 72, 402
Heritage, J., 346
Hirtt, N., 5, 7
Hoey, D., 9, 20, 142
Hofer, D., 104, 144, 257, 336, 470–485
Hoff, E.-H., 354
Hofmeister, W., 14
Holzkamp, K., 53, 387
Howe, F., 342
Hubacek, G., 312
Hüster, W., 7

I
IHK Nordrhein-Westfalen (IHK NRW), 201, 203

J
Jäger, C., 81, 82, 85, 172, 355
Jagla, H.-H., 44, 342
Ji, L., 150, 185, 187
Johnson, D., 461
Johnson, R., 461
Jones, D.T., 197, 343
Jongebloed, H.-C., 353
Jungeblut, R., 70

K
Kalvelage, J., 86, 87, 171–178
Kanungo, R.N., 85
Karlinger, F.N., 251
Katzenmeyer, R., 14, 336, 403, 463
Kelle, U., 421
Kern, H., 5, 16, 48, 84, 197, 342
Kirpal, S., 80, 339
Kleiner, M., 44, 95, 414
Kliebard, H., 81
Klieme, E., 7, 13, 54, 61, 63, 267
Klotz, V.K., 135, 136
Kluge, S., 421
Klüver, J., 352
KMK, 17, 20, 31, 39, 40, 42–44, 344, 361, 383
Kohlberg, L., 298, 352
König, J., 392
Kordes, H., 387
Krathwohl, D.R., 14
Krick, H., 6, 48
Kruse, W., 6, 46, 48, 84
Kullmann, B., 312
Kunter, M., 226
Kurtz, T., 17, 83, 345

L
Lafontaine, D., 158
Lakies, T., 40
Lamnek, G., 23
Lappe, L., 354
Lash, S., 16
Laske, G., 342, 424
Laur-Ernst, U., 20
Lave, J., 20, 43, 44, 70, 387
Lechler, P., 249
Lehberger, J., 391, 405, 419–421, 440, 468
Lehmann, R., 66, 266
Lehmann, R.H., 14
Lempert, W., 17, 332, 339, 345
Lenger, A., 352
Lenk, H., 24
Lenk, W., 24
Lenzen, D., 70
Leutner, D., 61, 335
Lüdtke, G., 133
Lutz, B., 7

M
Maag Merki, K., 11
Martens, T., 2, 63, 120, 130, 131, 147, 148, 150, 154, 156, 161, 163, 171, 249, 257, 265, 291, 332, 335, 349, 405, 422
Matthes, B., 345
Maurer, A., 309, 312, 405
McCormick, E., 23
Meyer, K., 17
Meyer, P.J., 85
Meyer-Abich, K.N., 24
Meyer-Siever, K., 331
Minnameier, G., 66
Monseur, C., 158
Müller, W., 37, 267
Müller-Fohrbroth, G., 250
Müller-Hermann, S., 352

N
Nehls, H., 40
Neuweg, G.H., 10, 49, 66
Newman, S.E., 70, 387
Nickolaus, R., 332
Nida-Rümelin, J., 353

O
Oehlke, P., 6
Oesterreich, R., 23
Oldham, G.R., 21
Oser, F., 392–394
Ott, B., 5

P
Parchmann, I., 290, 400
Pätzold, G., 53, 268
Petersen, A.W., 194, 196, 197, 201
Peukert, U., 387
Piaget, J., 298
Piening, D., 144, 250, 257, 270, 271, 313, 315, 339, 344, 374, 378, 380, 384, 386, 390, 458, 461
Pies, I., 354
Polanyi, M., 10, 49
Posner, M., 331
Preacher, K.J., 234
Prein, G., 421
Prenzel, M., 66, 261, 266
Przygodda, K., 344, 391

Q
Quittre, V., 158

R
Rademacker, H., 16, 131, 134, 135, 332
Ramirez, D.E., 162
Randall, D.M., 84
Rasch, G., 149, 155, 159, 161–163, 171
Rasmussen, L.B., 20
Rauner, F., 6, 17, 63, 94, 148, 203, 309, 332, 390, 424
Reckwitz, A., 50
Reichborn, A.N., 267
Reinhold, M., 44
Renkl, A., 66
Resch, M., 23
Ripper, J., 267
Röben, P., 20, 103
Römmermann, E., 70
Roos, D., 84, 197, 343
Ropohl, G., 24
Rose, H., 346
Rosendahl, J., 331
Rost, J., 2, 130, 131, 147, 150, 154–156, 162, 171, 249, 332, 422
Roth, H., 13, 14, 40, 42

S
Sabel, C.F., 16, 84
Samurçay, R., 373, 387
Sattel, U., 84
Saum, T., 458, 459, 461
Schecker, H., 290, 400
Schein, E., 30, 346, 399
Schelten, A., 16, 134
Schiefele, U., 8, 390, 401
Schmidt, H., 20
Schneider, W., 8, 390, 401
Schoen, D.A., 44, 50, 52, 53, 126, 346, 387, 398
Scholz, T., 336, 377, 387
Schreier, N., 194
Schumacher, J., 257, 259, 336, 474
Schumann, M., 5, 48, 197, 342
Schwarz, H., 195
Seeber, S., 14
Sennett, R., 5, 10, 16, 17, 428
Sheridan, K., 22, 54, 55, 387, 394, 395
Shrout, P.E., 153
SK Arbeit und Technik, 40
Skowronek, H., 331
Skule, S., 267
Spöttl, G., 7, 17, 45, 48, 103, 414
Stanley, J.C., 251
Steffen, R., 70
Stegemann, Ch., 344, 384
Stein, H.W., 70
Steinert, B., 267
Sternberg, R.J., 126
Steyer, R., 156
Straka, A., 331
Strauss, A.L., 249
Suhl, U., 392
Suppes, P., 147, 156

T
Taylor, I.A., 81, 343
Tenorth, H.-E., 13
Terhart, E., 389
Thiele, H., 268
Tibshirani, R.J., 150
Tiemeyer, E., 384
Tomaszewski, T., 72, 402
Tramm, F., 14

U
Ulich, E., 20, 21, 23, 72, 402
Ulrich, O., 16, 23

V
VDI, 23, 24
Volkert, W., 23
Volpert, W., 6, 72, 402
von Eerde, K., 344
Vosniadou, S., 331

W
Walden, G., 268
Wallbott, H.G., 116, 152, 161
Weber, M., 17
Weber, S., 2, 332
Wedekind, V., 69
Wehmeyer, C., 194, 196, 197, 201
Wehner, T., 50
Weinert, F.E., 15, 62, 70, 247
Weiß, R., 136
Weisschuh, B., 267
Weißmann, H., 194, 201
Wenger, E., 20, 43, 44, 70, 387
Weniger, E., 391
Wienemann, E., 5
Wild, K.-P., 267
Winther, E., 13, 135, 136
Wirtz, M., 152, 153
Witzel, A., 83, 312, 332
Womack, J.P., 84, 197, 343
Wosnitza, M., 267
Wyman, N., 352

Y
Yates, C.R., 208, 336, 374, 380, 407
Young, M., 7, 8

Z
Zhang, Z., 78, 414, 415
Zhao, Z., 1, 78, 100, 187, 188, 249, 255, 259, 280, 335, 340, 344, 345, 362, 386, 412, 414, 415, 418, 422
Zhou, Y., 86, 87, 100, 171–178, 250, 255, 259, 280, 386, 418
Zhuang, R., 150, 185, 188, 255, 418
Zimmermann, M., 267
Zinnes, J.L., 147, 156
Zöller, A., 344
Zutavern, M., 374
ZVEI, 81

Subject Index

A
Ability: implicit, 32; professional, 10, 11, 18, 49, 65, 131, 134
Action: artistic, 12, 74; competence, 83, 193, 194, 372, 383, 406, 410, 420, 458; complete, 6, 55, 59, 72, 73, 396, 402, 406, 455; professional, 6, 16, 28, 51, 57, 59, 71–73, 96, 100, 102, 103, 105, 109, 128, 131, 137, 142, 203, 264, 352, 383, 402, 410, 412, 414, 425, 426, 431, 438, 458, 469; types, 73–74; vocational, 55, 57, 72, 73, 102, 213, 216, 224, 426, 430, 451
Applied Academics, 52
Apprenticeship, 18, 70, 100, 126, 250, 320, 321, 387, 433, 446, 471
Architecture of parallel educational paths/pathways, 18, 19, 352
Assignment: company, 136, 196, 197; work, 92, 195, 208
Attractiveness, 276, 320, 329, 338, 339, 341

B
BIBB, 134, 209, 282, 320
Bologna reform, 18, 352
Business process orientation, 66, 147, 161, 169, 268, 269, 327–329

C
Capability model, 332
Career aspirations, 80
Certification systems, 7, 15, 427
China, 100, 150, 185–187, 192, 244, 255, 260, 280, 281, 308, 309, 375, 376, 412–416, 520
Chinese teachers, 187, 189, 260, 376
Classification systems, 18, 260
Coefficient of variation, 216, 217, 272, 299, 300, 349, 354, 445
COMET, see Competence development and assessment in TVET (COMET)
Commitment: occupational, 85, 88, 248, 328, 360, 517; organisational, 85, 88, 184, 313, 319, 328, 358; professional, 311, 314, 319, 327, 328, 333, 358–360; research, 82, 84, 85, 312, 338, 339; vocational, 184, 355
Communicativity, 51
Community of practice, 12, 69, 70, 80, 83
Company project work, 193–195, 197
Comparative projects, 109, 127, 129, 144, 257, 258, 519
Competence: to act, 41, 53, 77, 78, 157; in action, 32; assessment, 8, 15; conceptual-planning, 136, 193, 221, 403, 410; criterion/criteria, 68, 75, 76, 120, 159, 161, 162, 232, 249, 255, 285–287, 290, 298, 420, 421, 472, 473, 475, 479, 482, 484, 485; development, 20, 31, 41–45, 48–50, 55, 61, 62, 65, 66, 70–72, 79, 130, 159, 193, 201, 208, 213, 229, 241, 242, 247, 250–256, 260, 261, 263, 264, 268, 269, 275, 277, 279, 287, 290, 297, 298, 301, 307–310, 325, 331, 335–338, 340–342, 349, 355, 357, 361, 363–365, 370, 372–374, 378, 380–384, 386, 389, 390, 394, 395, 400, 402, 407, 412, 418, 422, 425, 426, 432, 433, 437, 439–446, 450–454, 462, 472, 483; diagnostics, 1–4, 8–10, 19–20, 30, 31, 61, 62, 68–70, 74, 76, 86, 87, 97, 98, 100, 104, 126, 131, 141, 142, 146, 193–202, 208, 218–224, 243, 246–248, 258, 260, 262, 263, 266, 277, 278, 282, 332, 333, 335, 338, 340, 344, 348, 354, 361, 372, 373, 376, 377, 387, 389, 391, 392, 394, 403, 410–412, 421, 422, 428, 444, 519; functional, 19, 41, 44, 66, 206, 224, 287, 297, 298; holistic, 55, 63, 66, 67, 75, 129, 147, 166, 169, 171, 194, 206, 215, 219, 220, 244, 264, 280, 286, 287, 289, 298, 301, 337, 376, 386, 387, 389, 398, 401, 409, 483, 484; interdisciplinary, 187, 392; large scale, 2, 4, 8, 63, 69, 131, 258, 260, 386, 391, 404, 422; level, 9, 11, 14, 55, 62–66, 71, 74–76, 96–98, 117, 121, 122, 137, 140–144, 146, 148, 154, 159, 191–193, 201, 206, 216, 219–221, 228, 230, 241, 251, 253–256, 274, 279, 281, 282, 287–299, 302, 306–308, 310, 312, 337, 344, 347–349, 363–365, 367, 369, 371, 380, 384–386, 389, 390, 393, 400, 402, 405, 409, 410, 415, 416, 441, 442, 449, 451, 454, 480; measurement, 4, 66, 157, 158, 168–169, 186, 188–190, 192, 276, 356, 362, 485; and measurement model, 2, 10, 75, 78, 129, 131, 154, 389, 399, 402, 416, 423–485; methodological, 1–4, 13, 27, 33, 48, 126, 335, 344, 392–394, 421; model, 1, 13–15, 61–64, 68, 70–73, 75, 78, 86, 100, 122, 131, 137, 154, 157, 167–169, 171, 187, 190, 192, 194, 199, 200, 206, 209, 212, 215, 218–220, 224, 249, 264, 279, 285, 287, 288, 290, 297, 298, 333, 335–337, 340, 341, 349, 351, 387, 388, 391, 394–396, 399, 400, 402, 403, 410, 411, 414, 419, 421, 422, 426, 438, 442, 471, 473, 475, 479, 483, 485; model-based, 61, 63, 154, 294; multiple, 54, 55, 126, 249, 290, 387; nominal, 67, 281, 298, 302, 381, 400, 401; occupational, 2, 15, 20, 22, 27, 32, 70, 74, 75, 104, 130, 137, 150, 206, 294, 297, 331, 332, 389, 400, 402; practical, 1, 31, 46, 48–52, 54, 78, 136, 149, 203, 205, 212, 270, 281, 291, 368, 398, 484; procedural, 49, 74, 121, 122, 166, 169, 190, 191, 220–224, 281, 286, 287, 289–291, 293, 297, 298, 401, 405; processual, 66, 121, 206; professional, 1, 9, 19–20, 27, 30, 41, 43, 55, 58, 59, 64, 66, 67, 69, 70, 74, 75, 78, 97, 122, 127, 129, 136, 147, 187, 244, 256, 280, 281, 296, 299, 310, 332, 337, 348–349, 351, 373, 377, 397, 405, 416, 418, 422; profiles, 4, 9, 11, 55, 59, 62, 64, 78, 96, 98, 100, 117, 120, 121, 124, 129, 130, 140, 143, 145, 148, 149, 159, 163–169, 193, 194, 201, 206, 212, 216–218, 224, 251, 252, 254, 256, 262, 278, 280–282, 287, 290, 291, 296–301, 337, 338, 345, 347–349, 351–354, 363, 364, 372, 388, 390, 393–395, 412, 414, 416–418, 422, 442, 443, 445, 451, 461; research, 9, 13, 61–63, 249, 394, 442; social, 11, 13, 14, 23, 32, 40, 47, 51, 59, 67, 86, 126, 268, 345, 352, 392, 426; technical, 1, 14, 19, 23, 28, 30, 31, 47, 48, 50, 52, 74, 75, 120, 129, 137, 144, 194, 203, 228, 252, 253, 256, 278, 299, 301, 337, 347, 348, 352, 354, 363, 370, 371, 376, 380, 396, 401, 406, 412, 418, 442; vocational, 2–4, 6, 9, 11, 14, 16, 18, 20, 23, 27, 40, 42, 43, 46, 54, 55, 62, 67–69, 71, 76, 78, 86, 100, 104, 107, 121, 122, 126, 129, 137, 142, 143, 191, 203, 206, 218, 224, 249, 254, 255, 260, 265, 268, 277, 279, 287, 290, 291, 294, 296, 298, 299, 301, 309, 310, 331, 332, 336, 338, 340, 344, 347, 348, 353, 354, 360, 365, 371–373, 381, 386, 391, 402, 406, 419, 421, 422, 452
Competence development and assessment in TVET (COMET), 1, 3, 9, 15, 56, 61–89, 99, 100, 102, 107, 109, 110, 112–117, 127–129, 131, 137, 143, 145–192, 194, 198–202, 206–210, 212, 215, 216, 218–226, 232, 234–236, 241–249, 251, 254, 255, 257, 260, 264, 265, 267, 274, 276–279, 282, 287, 288, 290–293, 297–300, 302, 305, 308, 310, 314, 331–389, 396, 400, 402, 413, 416, 418, 421, 422, 426, 438, 442, 470–485, 490, 497, 519, 520; competence and measurement model, 4, 107, 131, 206, 207, 209, 218, 224, 244, 279, 282, 341, 375, 384, 405, 418, 519; competence diagnostics, 4, 103, 185–192, 194, 333, 334, 362; consortia/consortium, 248, 260, 288; dimensions, 477; examination, 209, 216, 221, 224; method, 4, 9, 193, 254, 257, 336, 376, 472, 483, 485; methodology, 257, 282; projects, 1–3, 10, 12, 63, 69, 71, 72, 75, 86, 100, 101, 103, 107, 111, 113, 117, 123, 124, 129, 137, 138, 142, 146, 150, 154, 158–160, 171, 185, 188, 218, 219, 224, 225, 232, 242, 244, 248–283, 300–302, 309, 310, 335–337, 339, 341, 349, 355, 362, 372–375, 377, 382, 384–387, 389, 422, 440, 462, 472, 473, 485; rating, 79, 159–160, 211, 212, 421, 453; testing, 219, 247; test procedure, 2, 3, 76, 111, 112, 117, 127, 131, 143, 154, 155, 194, 196, 201, 206, 207, 212, 219, 232, 234, 235, 242, 243, 247, 274, 279, 299, 354, 372, 373, 376, 377, 387, 402, 441, 482, 519
Competence-oriented practical examination, 77
Competence-oriented training, 480
Competence-oriented vocational education and training, 224
Competency model, 3, 13–15, 63, 70, 125, 137, 154, 171, 187, 194, 198, 216, 224
Concept of competence, 13, 42, 51, 52, 54, 62, 208, 290, 393, 394
Confirmatory factor analysis, 171–185
Consolidation, 17, 21, 53, 249, 439
Content validity, 128, 131, 160, 257, 259
Context analysis, 39, 144, 219, 220, 224, 229, 264, 265, 268, 271–272, 279, 362, 378–382
Context data, 238, 282, 320, 334, 340–342, 373, 390
Contextuality, 51
Contextual learning, 52
Core occupations, 396, 428
Core professions, 361
Core workforce, 84, 86
Covariance matrix, 148, 149, 175, 179
Creativity, 39, 58–59, 107, 121, 147, 161, 162, 169, 190, 192, 199, 214, 249, 345, 348, 401, 405, 409, 425, 442, 465, 479–481, 483, 485, 490, 492, 498, 500, 501, 504, 505, 508
Criterion validity, 128, 130–131, 136
Cross-over design, 71, 225, 274
Cross-over test arrangement, 65
Curriculum development, 7, 41, 43, 44, 70, 186, 257, 258, 342, 344, 424, 425

D
Degree of difficulty, 16, 64, 102, 110, 123, 124, 134–136, 138, 140, 143, 146, 195, 196, 451
Design a curriculum (DACUM), 27
Deskilling thesis, 5
Determinism, 6, 25
Development: of competence, 2, 15, 61, 62, 70, 126, 142, 279, 309, 311, 339, 412; of identity, 313, 374; organisational, 6, 17, 21, 25, 46, 129, 248, 258, 269, 297, 328, 333, 355, 397, 439; theory, 43–45, 70
Developmental logic, 70
Developmental process, 28
Developmental tasks, 26, 41, 387
Diagnosis of knowledge, 21
Didactic concept, 44, 49, 309, 310, 406, 433, 471, 483
Didactic reduction, 30, 52
Didactic research, 331
Difficulty level, 146
Dimension of competence, 325, 392, 393
Duality: alternating, 101; informal, 18; integrated, 18
Dual system, 323, 380
Dual vocational education, 252, 254, 267–268, 336, 362, 364, 382, 383
Dual vocational training, 18, 39, 69, 89, 104, 129, 132, 141, 198, 225, 251, 258, 266, 270, 329, 344, 362, 363, 372, 373, 380, 382, 384, 389, 396, 408, 428, 468

E
Economic action, 16, 353
Education: academic, 6, 18, 19, 40, 49, 301, 423; competence-oriented, 480
Educational research, 7, 28, 61, 66, 70, 126, 247, 249, 250, 262, 267, 374, 390
Electrical engineering, 71, 72, 138, 139, 187–189, 254, 265, 398
Electricians, 117, 135, 275, 501–504
Electronics engineers, 138, 225, 226, 240–242, 274, 280, 300, 375
Electronics technicians, 78, 96, 101, 131, 137–139, 141, 160, 202, 210, 226, 227, 229, 244, 251, 260, 263–265, 274–276, 299, 303, 304, 308, 349, 350, 356, 358, 363, 369, 378, 380, 389, 390, 418, 431, 433, 446, 463, 512, 513
Employees to their company, 354
Environmental compatibility, 2, 58, 68, 98, 99, 106, 121, 122, 147, 161, 162, 169, 190, 192, 207, 213, 214, 348, 465, 479, 482, 483, 490, 498, 500, 501, 504, 505
Environmental responsibility, 508
Ethnomethodology, 51
European Qualifications Framework (EQF), 7, 18, 428
Evaluation: criterion/criteria, 68, 76, 112, 188, 202, 213, 249, 372, 375, 453, 461, 466, 467; didactic, 14, 30, 110, 112, 213, 279, 309, 335, 355, 392, 410, 415, 418, 483; items, 121, 161, 163, 185, 186, 188, 190–192, 201; objectivity, 127; procedure(s), psychometric, 2, 131, 135, 147–192, 291, 422; psychometric, 2, 78, 86, 131, 135, 154, 287, 335, 340, 349, 405, 416, 422; of task solutions, 73, 110, 186, 337, 441, 466; of teaching, 279, 404, 407, 410, 418, 419, 485; of the test tasks, 110, 112, 124, 151
Examination: extended, 77, 198, 208–213, 256, 419; final, 127, 135, 194, 197–199, 208–213, 219–222, 225, 230, 242, 248, 250, 277, 372, 373, 402, 420, 480, 484; requirements, 200, 391; tasks, 16, 98, 132–135, 196, 208, 209, 212, 213, 216, 224, 247, 277, 377
Experimental research, 251, 282, 336
Expert assessments, 75, 157, 158
Expert discussions, 30, 34, 42, 77–79, 206, 211, 212, 215, 472, 479, 480, 484
Expertise research, 20, 42, 44, 54, 70
Expert knowledge, 21, 65, 214
Expert opinion, 342
Expert specialist workshops, 27, 44, 91–95, 103

F
Feedback discussions, 144
Feedback workshops, 238, 241, 279, 282
Field of work, 82, 85, 478, 480
Finn coefficient, 115–117, 151–153, 161, 384
Formative sciences, 20
Form of examination, 195, 196, 200–203
Functional competence, 65, 67, 121, 122, 166, 169, 190–192, 220–222, 244, 281, 286–292, 294, 297, 298, 401, 402, 405, 418
Functional literacy, 65
Functional understanding, 66

G
German Research Foundation (DFG), 14, 61, 335, 352, 353
Germany, 18, 44, 62, 69, 82, 150, 185, 187, 225, 244, 260, 261, 265, 281, 336, 373, 383, 384, 389, 412, 428
Group learning, 457

H
Health care, 471, 492
Health protection, 23, 58, 99, 214, 408, 464, 478, 482, 490, 492
Heterogeneity, 11, 62, 127, 143, 201, 202, 240, 261, 266, 270, 272, 276, 279, 301–311, 369, 389, 391, 407, 442, 449, 459–461, 485
Higher technical colleges in Switzerland, 257, 385
Homogeneity, 117, 145, 151, 159, 161–163, 216, 217, 271, 299, 347–349, 354, 363, 372
Humanisation of Working Life, 72

I
Identity: development, 1, 79–81, 332, 355; occupational, 82–87, 171, 174, 179, 185, 265, 269, 313, 315, 320, 338; organisational, 80–82, 84, 86, 87, 171, 174, 178, 181, 184, 265, 269, 279, 311, 312, 316, 319–321, 323–326, 329, 332–333, 335, 339, 351, 354, 357, 358; pre-professional, 80; professionals, 15, 26, 81, 85, 172, 315, 316, 321, 323, 324, 326, 329, 332–333, 339, 341, 351, 358, 488; vocational, 1, 27, 44, 79, 80, 83, 85, 89, 174, 179, 184, 279, 312, 325, 326, 329, 332, 338, 339, 346, 354, 355, 360
Industrial culture, 81, 84
Innovation: projects, 282; technological, 39, 41, 91
Input/output didactics, 8
Intelligence: multiple, 52, 55, 140, 394; practical, 52, 398; profile, 55, 395
Interdisciplinarity, 24
Intermediate examinations, 132, 198, 200, 207, 250, 254
International comparative competence diagnostics/projects/studies/surveys, 2, 15, 68, 71, 87, 102, 109, 127, 144, 150, 154, 258
Internationalisation, 19, 96
International World Skills (IWS), 9, 20, 96, 142, 264
Interrater reliability, 105, 113, 127, 150–154, 158–161, 185–192, 218, 224, 341, 348, 375–377, 387, 415, 419, 421
ISCET, 18
ISCO, 18

J
Job descriptions, 2, 9, 10, 18, 26, 69, 71, 80, 91, 95, 98, 109, 120, 138, 142, 194, 215, 345, 352, 391
Job profiles, 69, 71, 109, 138, 141, 258, 433, 449

K
Key competences, 11–13, 41
KMK (Conference of Ed. Ministers in Germany), 7, 17, 20, 31, 39, 40, 42–44, 143, 299, 344, 361, 383, 389, 391, 395–400, 422, 424–427
Know how, 47, 50, 65, 294, 372, 453
Knowledge: action-explaining, 47, 294, 372, 381, 405; action-leading, 294, 393, 401, 405, 441, 443; action-reflecting, 9, 47, 142, 293, 294, 351, 372, 405, 441; disciplinary, 52, 353, 397, 398, 423; implicit, 10, 11, 49, 50, 83, 311, 386; level, 143, 294, 296, 298; motivational-emotional, 50; practical, 46, 49–52, 387; professional, 9, 10, 19, 28, 30, 31, 46–49, 52–54, 56–58, 65, 74, 97, 111, 136, 142, 195, 353, 387, 392, 398, 401, 402, 406, 430, 468; specialist, 27, 28, 30, 49, 65, 91, 136, 143, 372, 386, 387, 392, 447, 471, 475; systemic, 491; tacit, 10, 11, 32, 49–50, 52; theoretical, 18, 27, 46, 49, 52–54, 66, 126, 326, 353, 430, 451, 488; work process, 9, 24, 30–32, 45, 48, 49, 58, 65, 326, 331, 373, 381, 387, 405, 406, 424, 430, 437, 443, 462, 467
Know that, 47, 65, 293, 294, 372, 407
Know why, 47, 294, 372

L
Large scale assessment, 8
Large scale competence diagnostic (LS-CD), 9, 12
Learning: action-oriented, 27, 53, 407, 438; area, 14, 44, 61–63, 68, 70, 71, 80, 93, 95, 471, 487; climate, 261–263, 269, 270, 369–371, 374, 380, 408, 420; competence, 456; contents, 25, 29, 267, 378, 406, 412; cooperative, 6, 21, 458, 459, 469; decentralised, 46; field concept, 39, 43, 69, 73, 144, 146, 257, 280, 282, 335, 344, 348, 382, 387, 391, 398, 403, 406, 425–428, 431, 438, 446; inductive, 53; location cooperation, 268–271, 362; methods, 465, 466; outcomes, 15, 61, 282, 335, 336, 407, 408, 433, 442, 453, 466–469, 473; processes, 4, 25, 43, 73, 262, 332, 339, 383, 386, 396, 405, 406, 420, 425, 436, 442, 447–449, 453, 455, 457, 459, 462–467, 469, 475, 485; school-based, 25, 224, 269, 271, 310, 333, 363, 371, 373, 374, 378, 380, 383, 437; situation, 104, 263, 344, 374, 378, 387, 392, 409, 420, 426, 427, 429, 433, 437, 438, 440, 446, 452, 457, 458, 466, 485, 497; at the workplace, 46
Learning-outcome taxonomies, 442
Learning tasks: design of, 386, 433
Learning time difference (LTD), 306–308
Learning venues, 310–311, 323–325, 327, 328, 336, 361–364, 367, 372–374, 378–384, 402, 408, 412, 419, 435, 436, 471, 484
Level of competence, 250, 281, 292, 380, 384, 400, 456
Level of competence development, 44, 65
Level of difficulty, 93, 120, 123, 124, 133, 135, 138, 141–144, 356, 415, 451
Level of knowledge, 47, 78, 129, 142, 207, 280, 293, 298, 348, 372, 443, 451, 454, 475
Level of work process knowledge, 47, 56, 102, 285, 293, 294, 298, 438
Longitudinal analyses, 159, 168–169
Longitudinal study/studies, 143, 144, 225, 250, 253, 254, 264, 274, 276, 340

M
Manual skills, 193
Measurement model, 3, 14, 68, 77, 78, 86–87, 131, 147–149, 154, 156–159, 163, 164, 171, 186, 187, 191, 192, 201, 249, 292, 293, 299, 332, 338, 395, 403, 404, 412
Mechatronics: car, 137, 138, 194, 358; motor vehicle, 17, 219, 242, 438
Mediator analysis, 233–236
Mixed distribution models, 149, 163, 171
Mixed Rasch Model, 149, 162, 171
Model validity, 150
Model verification, 86
Motivation: intrinsic, 83, 84, 312; primary, 230, 233, 235, 238, 242–244, 247; professionals, 84, 238, 244, 312, 327; secondary, 225, 230, 233, 235, 238, 243, 244, 247; test, 225, 226, 229, 230, 235, 238, 240, 242–244, 247, 277
Multiple-choice tasks, 102, 132–134, 201

Subject Index N National Qualifications Framework (NQF), 7 Novice-expert-paradigm, 26, 44, 70, 79, 249, 387, 425, 439 Novices, 27, 43, 80, 312, 487 Nursing training, 109, 238, 240, 257, 336, 382, 384–386, 471, 474

O
Objectivity of implementation, 127, 158, 274, 277
Occupational profile, 10, 19, 25, 26, 62, 80, 260, 324, 339, 341, 396, 488
Occupational research, 6, 329, 402, 403
Operational order, 77, 202–216
Operational projects, 77–78, 203, 206, 207
Operational project work, 208
Organisation development, 16

P
Pedagogical research, 421
Percentile bands, 294, 302, 305–307, 444
Performance dispositions, 9, 15, 55, 62, 70, 98, 402
Peripheral workforce, 84
Personality development, 9, 23, 197, 333, 460
Personality theory, 41
PISA, 7, 8, 65, 66, 69, 126, 226, 261, 266, 298, 302, 332, 389, 401
Polarisation thesis, 5
Post-SII test arrangement, 102, 103
Practical assignment, 471
Practical experience, 10, 50, 123, 259, 436, 454, 468, 471
Practical tasks, 202, 211, 212, 432
Practical terms, 53–54
Practice communities, 51, 53–54
Pretest/pre-test, 68, 101, 104–125, 144, 226, 229, 258, 259, 277, 278, 348, 349, 375, 384–386, 415
Problem solution, 57, 73, 206, 221, 483
Problem-solving pattern(s), 337, 385
Profession: academic, 98; commercial, 75, 145, 324, 328, 354, 426; industrial-technical, 313; technical, 43, 263, 399
Professional competences, 1–4, 10, 12, 13, 15, 16, 18–24, 28–32, 34, 40–45, 48–50, 52, 55, 56, 58, 59, 62, 63, 65, 67, 68, 70, 71, 74, 77–79, 83, 98, 100, 103, 120, 122, 126–131, 135, 136, 142, 143, 145, 147–149, 154, 156–158, 171, 187, 190, 193–196, 199, 202, 203, 205, 206, 208–210, 213–215, 250, 260, 261, 275, 277, 286–290, 298, 299, 307, 310–312, 320, 331–335, 338–342, 346, 351, 352, 354–360, 362, 363, 368, 372, 380, 382, 387, 389–422, 424–426, 428, 429, 433, 442, 444, 446, 455
Professional concept, 17, 468, 469
Professional development, 3, 15, 21, 92, 93, 137, 249, 328, 333, 425
Professional ethics, 17, 81–83, 353, 354
Professional expertise, 48, 93, 148, 169, 372, 489
Professional identity, 1, 3, 15, 16, 79–81, 83, 85, 174, 230, 269, 311–313, 315, 317, 319, 323–326, 332, 333, 338–342, 346, 351, 354, 355, 357–360, 382, 407, 467, 472, 488
Professionalism: modern, 360; open dynamic, 41
Professional knowledge, 10, 11, 18, 30, 32, 48, 49, 52, 53, 57, 194, 328, 331, 387, 393, 405, 406, 430–432, 454, 487, 488
Professional learning, 11, 430, 432, 446, 455, 456
Professional role, 80, 83, 311, 312, 355
Professional skills, 2, 10, 29, 40, 41, 49, 52, 63, 98, 99, 132, 134, 154, 193, 194, 196, 346, 372, 381, 431, 433
Professional typology, 316–319
Project design, 257–264, 266, 282
Project objectives, 257–258, 276
Project organisation, 258
Psychometric modelling, 158

Q
Qualification frameworks, 102
Qualification levels, 97, 102, 103, 107, 129, 138, 140, 144, 257, 306, 309, 310
Qualification requirements, 5, 6, 9, 10, 19, 20, 25–27, 37, 41, 48, 49, 59, 62, 98, 103, 110, 129, 138, 141, 194, 203, 206–208, 397, 407, 429
Qualification research, 30, 41, 42, 44, 48, 51, 54, 59, 71, 103, 342, 344
Quality assurance, 1, 4, 142, 218, 224, 254, 257, 260, 264, 267, 282, 329, 335, 342, 343, 361, 374, 407, 484
Quality competition, 72
Quality control, 41, 77, 129, 211, 215, 269, 457
Quality criteria, 4, 16, 70, 126, 128, 135, 200, 201, 218, 224, 259, 269, 271, 361–365, 375, 482
Quality diagram, 268–272, 363, 364, 370, 371
Quality profile, 272, 273, 339, 363
Questionnaires: context, 186, 232

R
Rasch model, 155, 159, 161, 162
Rater training, 79, 105, 110–115, 117, 118, 120, 127, 128, 144, 145, 150, 153, 160, 188–190, 192, 201, 218, 224, 256, 278, 337, 341, 348, 375–377, 384–388, 418
Rating procedure, 10, 78, 79, 112, 127, 150, 159–160, 162, 201, 206, 209, 212, 279, 292, 299, 338, 341, 393, 403, 421, 453
Rating results, 78, 79, 112–114, 117–122, 216, 341, 376
Rating scale, 68, 110, 112, 151, 161, 202, 211–213, 257, 279, 341, 404, 405, 410, 415, 419, 421, 489–496
Rating training, 79, 105, 111, 145, 162, 377, 421
Re-engineering, 17
Reflection on and in action, 53
Reliability, 78, 79, 105, 111, 120–122, 126, 127, 135, 136, 150–153, 156, 159–162, 178, 182–186, 188, 191, 192, 201, 202, 209, 212, 219, 277, 335, 416, 421, 482; analysis, 120–122, 178, 183, 191, 192; calculations, 151, 153, 376
Requirement dimension, 56, 63, 200, 285, 394, 404, 405
Research: designs, 249, 250, 336, 340; hypothesis-driven, 249, 250; hypothesis-led, 250; strategies, 248–256
Risk group, 67, 227, 281, 292, 298, 302, 303, 385, 401
Role distance, 80

S
Safety: occupational, 58, 99, 135, 345, 464, 482, 505; works, 345, 408, 490, 492, 498, 501, 504
Scale properties, 151
School climate, 248, 269, 324, 327, 374
Scope for design, 99, 102, 104, 138, 214, 410, 437, 439, 451, 465, 481
Scope of the task/test, 201, 225, 276, 411
Selectivity index, 132–135
Shaping competence, 7, 39, 65–67, 75, 97, 121, 122, 146, 166, 169, 190–192, 206, 220, 222–224, 244, 286, 287, 289–291, 293, 297, 298, 304, 346, 351, 384, 389, 401, 405, 418, 425, 426, 428, 446, 469
Shaping the working world, 7, 41, 143
Shapiro-Wilk test, 151
Situated learning, 261, 262, 387
Situativity, 27, 51
SK Arbeit und Technik, 40
Skills: implicit, 10, 49; practical, 9, 10, 49, 52, 187, 488; professionals, 9, 10, 468; social, 49; technical, 10, 33, 381; vocational, 109
Social compatibility, 28, 39, 58, 106, 120–122, 147, 161, 162, 169, 190, 192, 199, 299, 348, 351, 352, 404, 408, 428, 464, 490, 492, 500, 504
Social responsibility, 42, 505
Solution spaces, 15, 59, 102, 104, 105, 107, 109–113, 144, 201, 214, 215, 277, 278, 299, 332, 333, 347, 376, 399, 402, 440, 441, 454, 461, 462, 467, 482, 483, 490, 492, 497, 499, 502, 506, 508
South Africa, 118, 150, 244, 245, 490
Specialisation, 17, 19, 103, 339, 354, 361, 419
S-R (behavioural) theory, 41
Stagnation, 250–256, 261, 336, 341
Stagnation hypothesis, 251, 253
Standard: educational, 13, 61, 427, 442
Studies: hypothesis-led, 44; professionals, 25, 37, 44; of works, 52
Subject areas, 155, 392, 480
Subject didactics, 3, 61, 249
Sustainability, 39, 57, 59, 66, 68, 76, 106, 121, 122, 147, 161, 162, 169, 190, 192, 214, 283, 351, 404, 407, 428, 465, 478, 479, 481, 483, 484, 490, 491

T
TA, see Technical colleges (TA)
Tacit knowledge, 10, 11, 32, 49–50
Tacit skills, 10, 32
Tasks: holistic, 55, 59, 63, 99, 105, 138, 147, 149, 195, 201, 202, 212, 215, 249, 264, 401, 406, 438; solutions, 1, 28, 56, 57, 59, 66, 76, 97–99, 102, 104, 105, 107, 110, 111, 115, 123, 127, 128, 143–145, 147–149, 151, 159–166, 168, 169, 199, 201, 212–216, 229, 236, 249, 256, 264, 272, 277, 282, 288, 298, 341, 351, 375–377, 388, 401, 406, 409, 411, 412, 433, 438–439, 443, 444, 450, 451, 453, 461–462, 466–468, 478, 491, 492
Taylorisation, 6
Taylorism, 82, 343
Teacher evaluation, 326, 327, 374
Teachers assessment, 138, 269
Teaching-learning process, 427, 447–459
Teaching-learning research, 332–338, 340, 342
Teaching quality, 269, 368–370, 374, 380
Technical colleges (TA), 104, 144, 225, 226, 253, 264, 280, 291, 301, 376, 382, 385, 389, 415
Technicians: mechatronics, 17, 101, 219–224, 242, 251, 261, 262, 380, 396, 418
Technological and economic change, 197
Technology assessment, 23
Technology design, 23, 24
Technology genetics research, 23
Technology impact assessment, 23
Test arrangements, 3, 101–104, 107, 109, 257, 395
Test concept, 70, 99, 143, 144
Test design, 128, 261
Test format, 1, 96, 99, 102, 104, 123, 132, 224, 335
Test group: primary, 102, 110, 129, 257; secondary, 257
Test motivation, 225–248, 276, 277, 340
Test participants, 64, 98, 102, 111, 112, 117, 122–124, 127, 129, 130, 138, 142–144, 186, 202, 213, 225, 226, 229, 230, 235, 238–240, 244, 247, 248, 251, 258, 261, 264, 265, 268, 274–280, 283, 285–287, 290, 300, 306, 340, 347, 348, 354, 356, 357, 362, 363, 372, 374, 376, 384, 385, 390, 410, 412, 418
Test population, 104, 120, 123, 125, 143, 144, 261, 264, 266, 384
Test quality criteria, 10, 126, 128, 132
Test results: interpretation of, 64, 268, 276, 279, 340; representation of, 290, 297, 354
Test scope, 229, 276–277
Test tasks: authenticity/reality reference, 98; criteria-oriented, 99, 102, 136, 137; developing open, 91–146; difficulty of, 64, 98, 102, 110, 120, 123, 124, 135, 138, 142–144, 146; evaluation and choice of test tasks, 105–125; norm-based, 102; representativeness of, 72, 98; revision of, 110, 124, 125; selection and development, 72
Test theory: classical, 134, 156; probabilistic, 134, 156
Total point values, 117, 138, 139, 202
Total score (TS), 120, 130, 138, 139, 202, 216, 218–221, 226, 228, 229, 234–237, 262, 281, 286, 293, 294, 297, 299, 300, 306, 308, 310, 337, 348–350, 356, 357, 363, 365, 371, 445, 517
Training: objectives, 67, 109, 192, 200, 260, 392, 451; paradox, 346, 430–432, 447, 455; practical, 18, 30, 45, 49, 53, 57, 63, 76, 79, 100, 140, 151, 158, 187, 326, 373, 382, 383, 398, 408, 428, 433, 435; programmes, 9, 100, 101, 103, 104, 253, 263, 265–267, 282, 301, 306, 309, 310, 323, 335, 382, 396, 397, 416, 425, 428; qualities, 238, 248, 268–271, 279, 324–327, 333, 334, 341, 362, 363, 365, 366, 371, 378, 381; regulations, 9, 20, 25, 29, 71, 80, 101, 120, 142, 194, 199–202, 206, 207, 213, 224, 300, 339, 391, 396, 406, 433; support, 268, 269, 272, 325, 326, 328, 329, 367, 368
Typology of occupations, 86

U
Understanding of (the) context, 20, 22, 100, 299
Utility values, 57, 76, 77, 213, 214, 268, 345, 351, 352, 424, 453, 454, 461, 463

V
Validity: consensus, 91, 187; constructs, 128, 130, 131, 134, 154; contents, 69–72, 97, 98, 101, 102, 110, 112, 125, 128–130, 135, 136, 155, 157, 159, 160, 166, 168, 169, 187, 209, 258, 335, 406, 422; curricular, 9, 19, 169, 259, 260, 264, 406; occupational, 110, 259; professionals, 98, 112, 134, 142, 258, 259, 406; vocational, 101
VDI, 98, 99
VET, see Vocational and educational training (VET)
Vocational and educational training (VET), 1, 22, 45, 62, 63, 68, 69, 71, 144, 155, 157, 160, 169, 192, 198, 206, 224, 254, 258, 260, 264, 300, 301, 309, 333, 334, 337, 344, 352, 381, 385, 418, 442, 445, 446
Vocational development, 355, 425
Vocational education, 1, 3–7, 11, 13–19, 24–26, 28, 30, 40–42, 44, 45, 49, 50, 59, 61–67, 69, 70, 72, 73, 80, 82, 126, 128, 130, 131, 134, 136, 141–145, 154–157, 163, 169, 171, 184, 185, 187, 197, 206, 214, 216, 218, 224, 230, 248, 249, 257, 258, 260, 262, 263, 268, 275, 276, 278, 282, 283, 293, 296–302, 306, 309–311, 323, 331–333, 335, 336, 338, 342–346, 351, 352, 354, 355, 361–363, 371–373, 383, 389, 391, 397–399, 403, 406–408, 423–428, 438, 441, 442, 446, 455, 457, 462, 470, 482, 488
Vocational identity, 1, 62, 80, 172, 174, 181, 183, 184, 323, 333, 339, 354, 355
Vocationalisation of higher education, 18, 352
Vocational learning, 4, 16, 18, 25, 32, 43, 46, 48, 56, 63, 126, 262, 263, 266, 268, 298, 333, 339, 344, 346, 354, 374, 396, 405, 426–428, 466, 488
Vocational pedagogy, 29, 402, 431
Vocational research, 17, 80, 339, 360
Vocational schools, 7, 39, 40, 101, 102, 104, 123, 144, 151, 195, 208, 226, 247, 248, 251–253, 256, 258, 263, 264, 267–272, 274–276, 278, 286, 291, 297, 299, 301, 324, 339, 344, 356, 362, 364, 372–374, 376, 378, 380–384, 391, 393, 396–399, 402, 405, 408, 416, 433, 436, 445, 468, 471
Vocational school teachers, 138, 186, 389, 391, 393, 394, 396–398, 401, 402, 406, 411, 413, 414, 416, 417, 421, 422
Vocational skills, 8, 16, 55, 110, 132, 151, 187, 194, 207, 455
Vocational tasks, 55, 59, 128, 298, 398, 411, 414, 425, 438, 444
Vocational training practice, 49, 143, 260, 282, 336, 344, 401
Vocational training systems, 2, 19, 71, 88, 89, 141, 185, 309, 342, 380, 428

W
Work: culture, 84; design, 6, 16, 20, 59, 68, 72, 403, 408; design and organisation, 58, 59, 214; ethics, 17, 81–83, 85–89, 172, 184, 311, 313, 315, 319, 339, 346, 351–355, 357, 358, 361; experiences, 27, 28, 30, 45, 50, 51, 67, 80, 91, 343, 373, 430, 431, 434, 437, 447, 449, 451, 453, 464, 468, 471, 490; industrial technical, 5, 51, 74, 91, 193; morale, 81–82, 174; organisation, 6, 11, 20, 21, 129, 466, 487, 490; organised skilled, 16; process, 6, 9, 18, 25, 27, 30–37, 39, 41, 44, 46, 57, 58, 66, 72, 73, 106, 121, 122, 129, 142, 157, 162, 186, 187, 190, 193, 197, 199, 202, 203, 206, 214, 266–268, 343, 346, 347, 352, 360, 381, 433, 436, 437, 453, 457, 469, 471, 477, 478, 481, 483, 490–492, 507; process analyses, 39; process knowledge, 30, 32, 45–48, 50, 51, 56, 65, 66, 102, 136, 142, 207, 256, 285, 293, 294, 297, 298, 346, 381, 387, 398, 405, 406, 429, 438, 454, 455, 464, 468, 469; sample, 158; secondary technical, 12; situations, 10, 20, 31, 34–37, 39, 42, 44, 46, 48, 50, 51, 98, 346, 387, 406, 425–427, 431–435, 437–440, 488; systemic, 344; tasks, 9, 11, 20–24, 26–28, 31, 33, 40, 41, 43, 44, 48, 51, 56, 58, 59, 65, 71, 73, 80, 85, 91, 93–95, 98, 101, 128–130, 136, 198–200, 202, 207, 351, 361, 429, 432–433, 436, 437, 441–443, 487, 488; unpredictable, 488
Work activity: complete, 23; incomplete, 23
Work and Technology, 6, 20, 24, 40, 41, 72, 197, 312, 342, 429, 487
Working contexts, 22, 26, 28, 36, 55, 100, 438