Current Studies in Educational Measurement and Evaluation
Editors Prof. Dr. Salih ÇEPNİ Assoc. Prof. Dr. Yılmaz KARA
Paradigma Akademi – August 2019
Current Studies in Educational Measurement and Evaluation Editors: Salih ÇEPNİ, Yılmaz KARA ISBN: 978-605-7691-06-4 Certificate Number: 32427 Printing House Certificate Number: 43370 The responsibility of each chapter belongs to its author(s). Paradigma Akademi Basın Yayın Dağıtım Fetvane Sokak No: 29/A ÇANAKKALE Tel: 0531 988 97 66 Layout: Fahri GÖKER [email protected] Typesetting: Gürkan ULU [email protected] Cover design: Gürkan ULU Printing House Address Ofis2005 Fotokopi ve Büro Makineleri San. Tic. Ltd. Şti. Davutpaşa Merkez Mah. YTÜ Kampüsiçi Güngören / Esenler İSTANBUL
Apart from brief quotations for promotional purposes, this book may not be reproduced in any form without the written permission of the publisher. It is sold with the banderole and ISBN obtained from the Ministry of Culture of the Republic of Turkey. Do not buy copies without a banderole.
Paradigma Akademi – August 2019
Preface

In its most general definition, education is a system that makes the individual useful to society. Like every system, education needs to be considered in a multifaceted way. The outputs of educational processes are the individuals who grow and develop within them, and those individuals need to be evaluated with the needs of society in mind. Considering the long-term efficiency of education, the process also needs to be evaluated in terms of how well it functions. For this reason, countries carry out continuous evaluations to determine the needs of society and whether their education systems respond to those needs. Although educational expectations vary according to the needs and targets of societies, the growing need to compare countries' education systems and to meet the common expectations of nations has made comparative evaluation necessary, and international organizations conduct international assessments to meet this need. At the same time, approaches originating from various educational philosophies are applied to education to meet the changing needs of society, and these applications make it necessary to evaluate the effectiveness of new educational practices. Individuals should be well educated so that they can apply their skills and be useful to society in the best possible way. Being useful to society is possible when students choose the right school and profession; assessments of students' attitudes, interests and communication skills, as well as their content knowledge, are therefore needed to guide them to the schools that will enable them to achieve their goals. Finally, the high level of knowledge and skills expected of the individual during the education process has led to the emergence of different fields of education, and measuring the specific knowledge and skills of each field requires an evaluation process appropriate to that knowledge domain.

The need for multidimensional evaluation arising from the complexity of educational processes and the functioning of the system has attracted the attention of many researchers. Educational assessment, measurement and evaluation have become serious enough issues to shape and guide educational policy. Success-oriented evaluations that determine a student's career lead to an educational understanding that can be phrased as "education for the exam": individuals are asked to shape their lives by marking bubbles on a test paper. Yet studies in the field of education reveal that an individual is multi-faceted, that the right interests, attitudes and insights should be developed, and that evaluations should address these as well. This book compiles current studies in the field of educational evaluation. It focuses specifically on educational evaluation theories, the evaluation of educational implementations, field-specific evaluation approaches and examples of educational assessment.
Prof. Dr. Salih ÇEPNİ Assoc. Prof. Dr. Yılmaz KARA Editors
Contents
PREFACE ........................................................................................ V
Part I Problem Solving in Education

Chapter 1 Problem Solving Procedure in terms of Cognitive Theories
Salih ÇEPNİ & Yılmaz KARA
Introduction .......... 1
Cognitive Theories for Learning and Problem Solving .......... 1
Human Cognitive Architecture .......... 3
Cognitive Load Theory .......... 11
Cognitive Processes in Problem Solving .......... 16
Conclusion .......... 19

Chapter 2 Improving Item Validity through Modification in Terms of Test Accessibility
Yılmaz KARA
Introduction .......... 25
Conceptual Understanding of Test Accessibility .......... 26
Test Accessibility Model .......... 27
Item Modifications for Accessible Test Items: Theory to Practice .......... 31
Conclusion .......... 36

Chapter 3 Traditional Measurement and Evaluation Tools in Mathematics Education
Cemalettin YILDIZ
Introduction .......... 41
Verbal Exams .......... 43
Long-Answer Written Exams .......... 46
Short-Answer Written Examinations .......... 50
True-False Tests .......... 53
Multiple Choice Tests .......... 56
Conclusion .......... 61

Chapter 4 Reliability and Validity
Mustafa YADIGAROĞLU & Gökhan DEMIRCIOĞLU
Introduction .......... 67
Reliability .......... 67
Error in Measurement .......... 68
Theory of Measurement .......... 70
Standard Error of Measurement .......... 71
Methods of Reliability Estimation .......... 72
The Factors That Affect Reliability .......... 78
Validity .......... 80
Relationship Between Reliability and Validity .......... 85

Chapter 5 Features and Characteristics of Large-Scale Science Examinations: PISA, TIMSS, LGS, YKS
Ümmühan ORMANCI
Introduction .......... 89
Programme for International Student Assessment (PISA) .......... 89
Trends in International Mathematics and Science Study (TIMSS) .......... 94
National Examinations in Turkey (LGS and YKS) .......... 98
Part II Summative Evaluation

Chapter 6 Portfolio Assessment
Mine KIR
Introduction .......... 115
What is Portfolio? .......... 116
General Characteristics of Portfolios .......... 117
Types of Portfolio .......... 118
Necessary Contents within the Developmental and Learning Portfolio .......... 119
Important Points to Consider in Creating a Portfolio .......... 120
Reflective Writing .......... 121
Application Steps of Portfolio .......... 123
The Evaluation of Portfolios .......... 127
Limitations of a Portfolio .......... 130
Electronic Portfolios .......... 131
Conclusion .......... 132

Chapter 7 Project Assignments and Its Evaluation: Sample Presentation and Implementation
Ufuk TÖMAN
Introduction .......... 137
Project Assignments .......... 138
Steps of Project Implementation .......... 138
The Characteristics of the Project .......... 139
Types of Projects .......... 140
Benefits of Project Assignments .......... 140
Limitations of the Project Assignments .......... 141
Project Samples (Samples of Presentation and Implementation) .......... 142
Project Evaluation .......... 144

Chapter 8 Using Many-Facet Rasch Measurement Model in Rater-Mediated Performance Assessments
Beyza Aksu Dünya
Introduction .......... 149
Conceptual Explanation of Rasch Measurement Model and Many Facet Rasch Model (MFRM) .......... 150
Conclusion .......... 157
Part III Formative Evaluation

Chapter 9 Measuring and Assessing 21st Century Competencies in Science Education
İsa DEVECİ
Introduction .......... 163
21st Century Competencies .......... 163
Measurement and Assessment .......... 170
The Measurement and Assessment of Skills .......... 172
Measurement and Assessment of Literacies .......... 177
Conclusion .......... 181

Chapter 10 The Use of Concept Maps and Concept Cartoons as an Assessment Tool in Teaching and Learning Mathematics
Burçin GÖKKURT ÖZDEMİR
Introduction .......... 191
Concept Cartoons and Concept Maps as a Formative Assessment Tool in Mathematics Classrooms .......... 192
Conclusion .......... 201

Chapter 11 Web 2.0 Applications That Can Be Used in Assessment and Evaluation
Seyhan ERYILMAZ, İlknur REİSOĞLU
Introduction .......... 209
Interactive Video Lesson Applications .......... 210
Applications that Develop Assessment and Evaluation Tools .......... 237
Conclusion .......... 252

Chapter 12 How to Create and Use Rubrics?
Buket Özüm BÜLBÜL
Introduction .......... 257
The Rubrics and their Importance .......... 258
Why Do We Need Rubrics? .......... 259
Types of Rubrics .......... 261
How to Construct Rubrics? .......... 264
Conclusion .......... 268

Chapter 13 Assessment through the Scamper
Banuçiçek ÖZDEMİR
Introduction .......... 273
What is the Scamper? .......... 273
Scamper's Practice Steps .......... 274
Scamper Application Examples .......... 278
Conclusion .......... 280

Chapter 14 Self, Peer and Group Assessment
Ahmet Volkan YÜZÜAK
Introduction .......... 285
Self-Assessment .......... 286
Peer Assessment .......... 290
Group Assessment .......... 294
Conclusion .......... 295

Chapter 15 Diagnostic Branched Tree and Vee Diagram
Sevilay ALKAN, Ebru SAKA
Introduction .......... 301
What Is Diagnostic Branched Tree? .......... 301
What Is Vee Diagram? .......... 314
Conclusion .......... 327
Part IV Subject Specific Evaluation

Chapter 16 Alternative Assessment and Evaluation Practices in Primary School
Gizem SAYGILI
Introduction .......... 337
Alternative Assessment and Evaluation .......... 338
Conclusion .......... 344

Chapter 17 Measurement and Evaluation in Special Education
Sibel ER NAS, Şenay DELİMEHMET DADA & Hava İPEK AKBULUT
Introduction .......... 347
Mainstreaming/Inclusion .......... 349
Assessment Processes for Students with Special Needs .......... 350
Assessment Models in Special Education .......... 352
Assessment Methods in Special Education .......... 357
Example of Enriched Worksheet for Mainstreamed Students .......... 359
Evaluating Mainstreamed Students' Conceptual Understanding .......... 363

Chapter 18 A Complementary Assessment and Evaluation Technique in Biology Education (Word Association Tests)
Lütfiye ÖZALEMDAR
Introduction .......... 369
Word Association Tests (WAT) .......... 371
Literature Samples for the Aim of Word Association Tests in Biology Education .......... 372
Conclusion and Evaluation .......... 374

Chapter 19 Assessment and Evaluation Process in Art Education
Hüseyin UYSAL
Introduction .......... 383
Academic Assessment and Evaluation in Art Education .......... 385
Artistic Assessment and Evaluation in Art Education .......... 387
An Application Example of Assessment and Evaluation in Art Education .......... 390
Application Process of Assessment and Evaluation .......... 393
Conclusion .......... 395

Chapter 20 Assessing Student Learning Outcomes in Counselling
Bilge SULAK AKYÜZ
Background of Outcome-Based Education .......... 400
Dimensions of OBE .......... 400
OBE in Higher Education .......... 402
OBE in Mental Health Profession .......... 403
Counselling .......... 404
Program Level Assessment System .......... 407
Readiness Survey .......... 407
Comprehensive Assessment .......... 408
Electronic Delivery Map .......... 409
Self-Evaluation Form .......... 410
Conclusion .......... 411

Chapter 21 Elementary Mathematics and the Most Used Alternative Assessment Techniques
Sedat TURGUT
Introduction .......... 419
Elementary Mathematics and the Most Used Alternative Assessment Techniques .......... 420
Monitoring Students' Progress .......... 422
Making Instructional Decisions .......... 423
Evaluating Student Achievement .......... 424
Evaluating Programs .......... 424
Collecting Evidence in Assessment .......... 425
Performance-based Assessment .......... 425
Conclusion .......... 439
Part V Issue Specific Evaluation

Chapter 22 Analysis of Assessment and Evaluation Approach Based on the Common Knowledge Construction Model in Science Education Programmes
Hasan BAKIRCI
Introduction .......... 445
The Importance of Assessment and Evaluation in Education .......... 446
A General Overview of the Understanding of Assessment and Evaluation in the Science Education Curriculum .......... 446
Common Knowledge Construction Model .......... 448
Diagnostic Branched Tree .......... 455
Structured Grid .......... 455
Word Association Tests .......... 456
Concept Maps .......... 457
Conclusion .......... 458

Chapter 23 Assessment and Evaluation of Socioscientific Issues
Ümit DEMİRAL
Introduction .......... 465
Current Situation in Assessment and Evaluation .......... 466
SSI and PISA .......... 469
Assessment and Evaluation of Socio-scientific Issues .......... 476
Authentic Assessment .......... 478
Evaluation of SSI .......... 481
Pedagogical Components of Socio-scientific Issues and Assessment and Evaluation Instruments .......... 482
Socio-scientific Issues and Decision-making .......... 484
Socio-scientific Issues and Argumentation .......... 486
Socio-scientific Issues and Reasoning .......... 490
Socio-scientific Issues and Character and Reflective .......... 494
Conclusion .......... 494

Chapter 24 Outdoor Learning Environments and Evaluation
Mustafa ÜREY
Introduction .......... 501
Outdoor Learning .......... 501
Importance and Limitations of Outdoor Learning Environments .......... 503
Outdoor Learning Environments Used in Science Education .......... 505
Factors Affecting Learning in Outdoor Learning Environments .......... 508
Things to Pay Attention to in Outdoor Learning Environments .......... 509
Evaluation in Outdoor Learning Environments .......... 512

Chapter 25 Evaluation in Inquiry-Based Science Education
Özlem YURT & Esra ÖMEROĞLU
Introduction .......... 523
Inquiry-Based Learning .......... 524
Theoretical and Philosophical Foundations of the Inquiry-Based Learning Approach .......... 525
Research Skills .......... 526
Inquiry-Based Science Education .......... 529
Models Used in Inquiry-Based Science Education .......... 530
Evaluation in Inquiry-Based Science Education .......... 533
Explanatory Recording Techniques .......... 533
Conclusion .......... 539

Chapter 26 Problem-Solving Skills
Banuçiçek ÖZDEMİR
Introduction .......... 549
The Emergence of the Flipped Classroom Model Concept .......... 550
Theoretical Basis of the Flipped Classroom Model .......... 551
Use of the Flipped Classroom Model .......... 552
Components of the Flipped Classroom Model .......... 553
The Effect of the Flipped Classroom Model on Learning .......... 554
Advantages of Flipped Classroom Model Applications .......... 555
Possible Difficulties When the Flipped Classroom Model Is Applied .......... 556
Things to Consider When Implementing the Flipped Classroom Model .......... 557
Conclusion .......... 559

Chapter 27 Assessment of the STEM in the Classroom
Bestami Buğra ÜLGER
Introduction .......... 565
STEM-based Lessons .......... 565
Measuring STEM Skills .......... 567
Rubrics .......... 569
Part I Problem Solving in Education
Salih ÇEPNİ, Yılmaz KARA
Chapter 1 Problem Solving Procedure in terms of Cognitive Theories Salih ÇEPNİ & Yılmaz KARA
Introduction

People live in nature as social beings. In order to live in a better environment under better conditions, it has always been a goal of human beings to discover and learn the rules of nature. This requires both learning and research, and in both, people try to overcome the problems they face. In earlier times, people solved problems in order to survive: they made their lives easier with the simple tools they developed and carried out investigations to uncover the mysteries of nature. In today's complex, human-shaped world, people try to make sense of many-faceted problems, seek the information that leads to a solution, and work that solution out. While doing all this, they draw on human knowledge and cognition. In this chapter, the most widely accepted theories of human cognition are introduced first. Then, the structure and functioning of human cognition are described in the light of those theories. Finally, the mechanisms of human cognition are explained in the context of problem solving.

Cognitive Theories for Learning and Problem Solving

Realizing effective learning and overcoming encountered problems with ease depend on clarifying the processes that take place in human cognition. Many theories have been put forward to explain the structure and functioning of human cognition and its problem-solving processes (Kala, 2012). The cognitive theories most frequently discussed by researchers are presented in Table 1.1.
Table 1.1 Basic theories for human learning and problem solving

  Theory             Basic Approach
  Dual Coding        Audio and visual information are processed in two distinct channels.
  Limited Capacity   The amount of information that can be processed in cognitive processes is limited.
  Active Processing  Learners create meaningful mental representations and models by operating on received information in cognitive processes.
Dual Coding Theory

The dual coding theory has been considered a fundamental theoretical explanation for the representation of information in memory. The theory states that two coding systems exist, that is, there are two ways of representing information in memory: non-verbal mental imagery and verbal symbolic processing. Both visual and verbal coding can take place during the processing of information, but one of them is usually dominant. Information in the learning environment perceived by the learner is symbolized, encoded and stored in memory through conversion into visual or verbal symbols. That symbolization occurs in these two ways indicates that information is processed in two independent channels: one processes non-verbal information such as visual presentations, while the other processes verbal information such as spoken words and text. In this respect, information received through both the eye and the ear is easier to remember than information received through a single sense (Moreno, 2017).

Limited Capacity Theory

The limited capacity theory is founded on the assumption that human short-term memory can execute only a limited amount of information at a time. Short-term memory is limited in terms of both duration and storage. Information can be kept in short-term memory longer by thinking about it, grouping it, and repeating it continuously. How long data stay in short-term memory is determined by how necessary they are; for later use, the data must be transferred to long-term memory. In that case, the information is processed by repeating it, encoding it, and associating it with data already in long-term memory. It is therefore important to process information by dividing it into meaningful pieces, associating the pieces with each other or with existing knowledge, and increasing interaction (Brydges, Gignac & Ecker, 2018).
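The grouping strategy described above, often called "chunking", can be illustrated with a toy sketch. The chunk size, the capacity constant and the digit string below are illustrative assumptions, not values taken from the chapter:

```python
# Toy illustration of limited capacity theory: grouping ("chunking")
# digits reduces the number of units held in short-term memory.

def chunk(digits, size=3):
    """Group a digit string into chunks of the given size."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

CAPACITY = 7  # classic "magical number seven" estimate (illustrative)

number = "149162536496"
raw_units = list(number)       # 12 separate digits
chunked_units = chunk(number)  # 4 chunks of 3 digits each

print(len(raw_units) <= CAPACITY)      # False: 12 units exceed the limit
print(len(chunked_units) <= CAPACITY)  # True: 4 chunks fit comfortably
```

The same list of digits either exceeds or fits within the assumed capacity depending only on how it is grouped, which is the point the theory makes about dividing information into meaningful pieces.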
Active Processing Theory

The active processing theory focuses on the processes by which the learner constructs meaningful mental representations and models by working on received information in cognitive processes. The active processor is explained through cognitive processes such as attending, organizing the received information, and integrating new information with existing knowledge. Learners are active in the learning process if they are aware of their own cognition and learning characteristics. Learners who are active processors should be regarded as individuals who can use executive cognitive strategies with an awareness of their knowledge, rather than as passive receivers who take in and store as much information as possible. Five different processes have been defined that can be used to create consistent mental structures: processing, comparison, generalization, listing and classification. Learners can participate actively in the process by constructing their own knowledge (Acuna et al., 2011).

Human Cognitive Architecture

Both cognitivists and behaviourists tried to explain information processing through the regulation of human behaviour by the environment. Unlike behaviourists, however, cognitivists claim that there is a variable between environment and behaviour: the memory of the learner. At first, memory was thought to be a mental function in which information could be stored for a long time, but later studies have shown that memory is a more complex mental apparatus than a simple store. In general terms, memory can be defined as the place where external stimuli are processed and stored. The information processing account focuses on two basic elements. The first is a collection of information structures consisting of three stores: sensory memory, short-term (or working) memory, and long-term memory (Table 1.2). The second involves cognitive processes.
These are intrinsic, mental actions and enable the transfer of knowledge from one structure to another (Sun, 2011). Table 1.2 Memory types of human cognition system Memory Sensory ShortTerm
Function Receive stimulus and processes instantly. It imposes meaning to the stimulus and combines the information units. It enables the learner to make sense through visual and spatial logic operations.
Capacity 3-7 digit 7-9 digit
Recall Time 0.5-3 second 5-15 second
3
Problem Solving Procedure in terms of Cognitive Theories
LongTerm
Provides permanent storage for different types of information.
Unlimited
Constant
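As a toy illustration of the flow between the three stores in Table 1.2, the following sketch filters stimuli through the memories. The capacity value and the attention and rehearsal sets are illustrative assumptions, not part of the source model:

```python
# Toy sketch of the multi-store flow: many stimuli reach sensory memory,
# only attended items enter short-term memory (limited capacity), and
# only rehearsed items reach long-term memory. All values illustrative.
def process(stimuli, attended, rehearsed, stm_capacity=7):
    short_term = [s for s in stimuli if s in attended][:stm_capacity]
    long_term = [s for s in short_term if s in rehearsed]
    return short_term, long_term

stimuli = ["voice", "hum", "smell", "colour", "text", "music"]
stm, ltm = process(stimuli, attended={"voice", "text"}, rehearsed={"text"})
print(stm, ltm)  # ['voice', 'text'] ['text']
```

The slice models the capacity limit of short-term memory; everything not attended to simply never leaves sensory memory, as the next section describes.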
Sensory Memory

Individuals are exposed to dozens of stimuli at any moment of their daily lives. For example, in a laboratory environment the sense organs are constantly stimulated by the voices of students talking among themselves, the sound of equipment working in the lab, the smell of chemicals, the scent of students' fragrances, the colours of their clothes, and so on. These instantaneous messages received by our sense organs are transmitted to the brain through the nervous system.
Salih ÇEPNİ, Yılmaz KARA

Figure 1.1 Multi-Memory Store Model (Takır, 2011).

The capacity of sensory memory is unlimited, but received information is lost very quickly if it is not processed immediately. Thus, only a limited amount of the unlimited stimulation reaching sensory memory is transferred to short-term memory; the rest disappears from sensory memory (Widrow & Aragon, 2013).

Short-Term Memory

This part of the human cognition system holds several pieces of information in a related manner at the same time. It is the memory in which the mind operates on information, organizing what to store and discard and relating one item to another. It can also be described as the part of memory that transiently holds and manipulates information during the execution of various cognitive processes. Short-term memory operates on information through its components (Figure 1.2). The central executive directs attention to relevant information and coordinates the cognitive operations. Phonological information is held and stored in the phonological loop for a few seconds. Visual and spatial information is processed through the visuospatial sketchpad. The episodic buffer is the component that binds episodes together and provides the interface among entries with various features; it is the basis for the conscious awareness that connects the components of short-term memory with long-term memory (Addis, Barense, & Duarte, 2015).
Figure 1.2 Cognitive Memory Model (Chen et al., 2016).

Long-Term Memory

This type of memory is the store for more persistent information. Knowledge and skills can be stored here with unlimited capacity. Some theorists claim that long-term memory is composed of two basic parts, episodic memory and semantic memory, while others add procedural memory. Episodic memory is the part tied to personal life: it relates to specific times, places, and events. For example, the meal eaten at dinner, the clothes worn on a special day, and an enjoyable trip are all held in episodic memory. It is also referred to as autobiographical memory. All the events, jokes, and rumours of our lives are stored in episodic memory. These memories are learned without any effort, but they tend to intermingle, so the information is difficult to recall. However, important and traumatic events are remembered in detail. Ordinary and repetitive events are also difficult to recall, because new events may disrupt the memory of former ones. Semantic memory is the part where general information is stored, such as generalizations, concepts, and problem-solving skills. Information in semantic memory is coded both visually and verbally in interconnected networks; it is stored in propositional networks and schemas. Procedural memory is the basis for motor, cognitive, and visuospatial skills. It stores the information necessary to execute a procedure such as walking, riding a bicycle, or driving a car (Wixted, 2018).

Schema Formation

Schemas are cognitive structures that provide harmony between physical actions and the cognitive commands triggered by an encountered situation. Information elements are categorized according to the way they are used and stored in schemas in memory. Schema formation and use form the basis of cognitive expertise. One of the most important tasks of schemas is to provide a process for the organization and storage of information. Another is to reduce the load on short-term memory: although schemas are composed of numerous sub-elements, they impose less load than processing a large number of mutually independent information units, because a schema is operated on as a single unit in short-term memory (van Kesteren et al., 2012).

Automation

Information and abilities are saved in long-term memory and executed in response to an encountered situation through operations in short-term memory. If the cognitive operations for a similar situation are conducted repeatedly, the cognitive procedures gain automation. In other words, operations continuously repeated for familiar situations become automated and require relatively fewer memory resources. Automation enables familiar cognitive tasks to be executed completely; unfamiliar tasks can still be carried out but require more cognitive resources. Previously encountered cognitive tasks can be executed without automation, but the operation will probably be slow and take more time (Addis, Barense, & Duarte, 2015).
Capacity of the Short-Term Memory

The boundaries of memory cannot be determined simply from the amount of information returned after being sent to memory, because if the individual keeps information in memory by forming large groupings, the amount of returned information will be large. Memory is overloaded when more than seven new grouped units of information (chunks) are processed simultaneously in short-term memory. As a result of his work, Miller found that the number of elements (chunks) a young adult can hold, in other words the span of memory, is approximately 7 ± 2. These elements can be numbers, letters, words, or other units, and each element can be a single item or a group of related information. Miller made this measurement using a memory (digit-span) test, in which a list of numbers is read aloud and the individual is immediately asked to say what they remember (Kamiński, Brzezicka & Wróbel, 2011).

Although the number of items that can be operated on in short-term memory is limited, the complexity, level, and size of each unit are not. The human brain can group a set of related information and operate on it as a single unit, so individuals can increase capacity by increasing the size of each unit. In short, this process (chunking) extends the limits of short-term memory. For example, a seven-unit number set such as "5 7 2 8 9 1 0" can become four units if grouped as "57 28 91 0" (Ejones, 2012).

Short-term memory is also considered limited because information persists there only briefly. According to researchers, the duration is around 5 to 20 seconds, although it varies. Information remains longer in short-term memory when it is thought about and repeated, and the longer data remains in short-term memory, the more likely it is to be transmitted to long-term memory through repetition. Owing to the limited capacity of short-term memory, data that is neither operated on nor transmitted is lost very quickly under the pressure of incoming information. In summary, grouping related small pieces into larger parts and mental repetition are required to keep more information in short-term memory for longer (Brydges, Gignac & Ecker, 2018).

There are factors that draw the boundaries of cognitive capacity. The capacity of short-term memory differs from one individual to another with respect to the cognitive tasks being performed; individual differences become apparent when a task requires cognitive resources at the limit of cognitive capacity. In addition, the prior knowledge of individuals plays an important role in cognitive processes, since individual differences affect the availability of stored schemas in memory. Furthermore, cognitive organization abilities differ among individuals. Cognitive strategies can be learned to improve these abilities; thus, the boundaries of cognitive capacity can be broadened by learning how to learn (Brydges, Gignac & Ecker, 2018).
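The chunking idea above can be sketched in a few lines. The grouping scheme (pairs of digits) is one illustrative choice among many:

```python
# Sketch: grouping digits into chunks to stay within short-term memory
# limits (Miller's 7 +/- 2). The pairwise grouping is illustrative.
def chunk(digits, size):
    """Group a space-separated digit string into chunks of the given size."""
    items = digits.split()
    return [" ".join(items[i:i + size]) for i in range(0, len(items), size)]

raw = "5 7 2 8 9 1 0"        # seven separate units
grouped = chunk(raw, 2)      # ["5 7", "2 8", "9 1", "0"] -> four chunks
print(len(raw.split()), "units ->", len(grouped), "chunks")
```

Each chunk now occupies one slot in short-term memory, so the same digit string consumes four slots instead of seven.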
Cognitive Architecture Principles

Natural selection and the human learning process are similar. Living things fulfil their vital functions and survive through the genetic information they carry. Information that fulfils vital functions is stored in genes and transmitted over generations. Living in new or changing conditions requires differentiation of genetic information: genetic differences are used and transferred across generations if they allow the organism to survive in the new or changing environment, while differentiation that is not suitable cannot survive, is not passed on to new generations, and disappears. Information changes are stored and disseminated in this way (Pickering & Clark, 2014).

From the evolutionary perspective, knowledge can be divided into biologically primary and biologically secondary information. Biologically primary information has been acquired over generations through activities such as recognizing faces, establishing social relations, and speaking and listening to one's native language. This type of information can be gained unconsciously and without effort. By contrast, biologically secondary information is acquired consciously and often imposes a mental load. Almost everything taught in educational institutions is biologically secondary information; reading and writing are examples (Churchill & Fernando, 2014).

Human cognition is characterized by five main principles governing its operations and functions (Sun, 2011). These principles are also valid for the processes governing biological evolution and constitute a natural information processing system. The cognitive architecture principles are summarized in Table 1.3.
Table 1.3 Cognitive Architecture Principles

Principle                                        Function
Information store principle                      Stores data for a long time
Borrowing and reorganizing principle             Enables constructing the data store
Randomness as genesis principle                  Composes new data
Narrow limits of change principle                Controls entry of environmental data into the data store
Environmental organizing and linking principle   Uses the data in the data store
The Information Store Principle

According to the theory of evolution, the genetic material (DNA) that determines the organism's activity in its environment contains large amounts of information. This genetic store must be large, because the organism must survive in complex, information-rich environments. There is no consensus about the complexity or dimensions of the genetic material. The ability of living things to maintain their genetic function in such environments depends on their ability to keep large quantities of data. In the same way, humans need a structure with the capacity to keep large quantities of data for the learning process; otherwise, learned information is quickly forgotten. In the cognition system, long-term memory meets this need by saving learned information (Schweickert, Fisher & Sung, 2012).

The Borrowing and Reorganizing Principle

Long-term memory is the cognitive component that enables saving large amounts of data. A large quantity of data can be obtained both through the natural selection process of evolution and, for human learning, through the borrowing and reorganizing principle. The data saved in long-term memory is mostly taken from other people in different ways: it is acquired by listening to what others say, reading what they write, or studying the animations and visuals they produce. In this way, data is transferred from the long-term memory of others to our own long-term memory. This information is usually combined with the data already in long-term memory and is reorganized. The transformation may have negative, neutral, or positive effects. If it has a negative effect, the data is later transformed again or discarded for meaningful learning to take place; if it has a neutral or positive effect, it is saved in long-term memory.
In this way, borrowed information is reorganized by combining it with the existing data in long-term memory, and a new schema is created. Once the reorganization is finished, the validity of the extended schema must be tested; the validity of the information can only be determined after testing. However, this principle cannot explain the emergence of new knowledge (Phillips, 2014).

The Randomness as Genesis Principle

Under the borrowing and reorganizing principle, data is transmitted from the long-term memories of other people to our own long-term memory in different ways; on its own, this cannot create new information. For a newly encountered problem, most of the solution steps needed will be based on data saved in long-term memory obtained through borrowing and reorganizing. However, the student has no information about which of the possible solution steps will actually solve the problem. Under these conditions, one of the possible solution steps is applied randomly and its accuracy is tested against the solution of the problem. This is inevitable in the absence of information. Many students fail to reach the correct result many times when solving complex, new problems; each possible solution step that does not reach the correct result is understood to be unsuitable for the complete solution, and the validity of a possible solution step is proved when the correct result is reached. These randomly generated solution steps and their validation exemplify the principle of randomness as genesis: new information is produced. The borrowing and reorganizing principle and the randomness as genesis principle work together: new information is formed through randomness as genesis, and through borrowing and reorganizing it is transferred to other people and stored in their long-term memories (Kaya, 2015).

The Narrow Limits of Change Principle

According to the randomness as genesis principle, new data is obtained from the outer environment. Randomly produced data is not organized, and the information processing system has limitations in processing unorganized information. For this reason, the amount of information to be processed must be kept within limits. Short-term memory is the component of the human cognition system used to make changes in long-term memory, and it does not have the capacity to process large quantities of new data: it cannot hold more than about seven new units and cannot process more than about four units.
Knowledge and skills are derived from the large quantity of data held in long-term memory and are often borrowed from others. This principle requires that data be well structured, and that schemas be effectively created and transmitted to long-term memory, in order to prevent overload of short-term memory (Takır, 2011).

The Environmental Organizing and Linking Principle

This principle explains how information should be used in the learning environment. Since short-term memory is bounded, processing more than about four units becomes difficult. If the information comes from long-term memory, in other words, if it has already been organized in long-term memory and its effectiveness has been tested, there is no known limitation on processing it in short-term memory. The environmental organizing and linking principle emphasizes that large quantities of organized data can be transmitted from long-term memory to short-term memory without exceeding the limitations of short-term memory, in order to realize the complex actions required in the learning process. This principle is the last step of the cognitive data processing procedure and allows the system to work in an environment. The first four principles allow the environmental organizing and linking principle to function in a learning environment. Without it, there would be no purpose in creating new information through the randomness as genesis and narrow limits of change principles, in storing new data through the information store principle, or in transferring data from other stores through the borrowing and reorganizing principle (Sussman & Hollander, 2015).

Cognitive Load Theory

This is a teaching theory based on how people construct knowledge in their cognitive structure. The theory argues that learning is optimized as long as instruction is compatible with the human cognitive structure. It focuses primarily on the cognitive processes involved in learning intricate cognitive tasks, where large amounts of interacting information must be processed simultaneously before learning can begin (Sweller, Ayres, & Kalyuga, 2011). Cognitive load is described as a multifaceted construct that indicates the load a task places on the student's cognitive system. It has been defined as the pressure on the student's cognitive system when dealing with tasks such as problem solving, graph interpretation, and concept learning. Cognitive load refers to the resources used by short-term (working) memory in a given time period.
It can also be defined as the sum of the mental activities that must be performed at the same time in short-term memory. From these definitions, it is possible to conclude that cognitive load comprises all the mental processes that occur in short-term memory while trying to process information (Plass, Moreno & Brünken, 2010). The theory rests on assumptions closely related to cognitive structures and their functions. Its main focus is that short-term memory has limited capacity, and that learning, remembering, and transfer will be reduced if short-term memory is overloaded (Paas, Renkl, & Sweller, 2003).
Types of Cognitive Load

According to cognitive load theory, there are three types of cognitive load: intrinsic load, extraneous load (ineffective load), and germane load (effective load) (Unlu, 2015).

Intrinsic Cognitive Load

This load is imposed on short-term memory by the complexity of the subject content and is basically determined by the objectives of the teaching. The main source of intrinsic cognitive load is element interactivity: the arrangement of information in short-term memory that must be used for the learner to achieve a given task. Some learning tasks have low element interactivity. For example, learning what individual words mean in a foreign language involves low element interactivity, because each word can be memorized independently of the others. But when a sentence is to be constructed, element interactivity increases: it is not enough to know only the meanings of the words; grammar and spelling rules must also be considered, and all of them simultaneously, in order to form a correct sentence. This load is mainly determined by the knowledge and skills related to the teaching objectives, and it can be managed by the designer of the instruction. Although the intrinsic cognitive load arising from the teaching content cannot be directly reduced, complex tasks can be handled more easily when presented as a series of ordered tasks (Kalyuga, 2009).

Extraneous (Ineffective) Cognitive Load

This is the load imposed on the cognitive structure by content that is not associated with the learning objectives. Extraneous cognitive load impedes learning, as it unnecessarily consumes working memory capacity. Including unrelated data or materials adversely affects the information processing procedure and increases extraneous cognitive load.
For example, if a visual and the information needed to make it easier to understand are presented separately, extraneous cognitive load will increase. Instead, the information should be integrated into the relevant places in the image, or presented as voice narration, because short-term memory has verbal and visual sub-channels. In this way, the information is shared among the sub-channels of short-term memory and the cognitive load decreases. As the example shows, the designer of the educational environment can control extraneous cognitive load. Therefore, visuals, written text, voice narration, animations, and simulations should be prepared in line with the learning objectives and with cognitive load theory in mind during instructional design (Kalyuga, 2015).

Germane (Effective, Relevant) Cognitive Load

This is the load imposed on the cognitive structure by teaching activities that contribute to the learning objectives. Teaching has two main aims: first, that students create new schemas, and second, that students automate the newly acquired schemas. Operations such as interpretation, illustration, classification, deduction, differentiation, and regulation are carried out during schema formation and automation. These operations load short-term memory, but this load is germane cognitive load, since it derives from activities that increase learning. For example, providing students with examples of the same structure but with different contents increases the germane cognitive load (Plass, Moreno & Brünken, 2010).

Cognitive Overload

Total cognitive load is the combination of intrinsic, extraneous, and germane cognitive load. To ensure effective teaching, the sum of these three types of load should not exceed the limited capacity of short-term memory. If the teaching program has complex content, intrinsic cognitive load is high; if it also includes design components that add extraneous cognitive load, little capacity may remain for germane cognitive load. In this case, the teaching program will not be effective (Figure 1.3). As a result, acquiring the desired skills will take learners longer, or learning will not reach the expected level (Kaya, 2015).
Figure 1.3 The relationship between task demands, performance, and workload (Chen et al., 2016)
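The overload condition described above reduces to a simple arithmetic check. The capacity value and the load units below are illustrative placeholders, not quantities defined by the theory:

```python
# Sketch: total cognitive load vs. short-term memory capacity.
# Units and the capacity value are illustrative assumptions.
def is_overloaded(intrinsic, extraneous, germane, capacity=7.0):
    """Total load = intrinsic + extraneous + germane; overload if it exceeds capacity."""
    return (intrinsic + extraneous + germane) > capacity

# High intrinsic load plus poor design leaves no room for germane load.
print(is_overloaded(intrinsic=5.0, extraneous=3.0, germane=1.0))  # True
print(is_overloaded(intrinsic=5.0, extraneous=0.5, germane=1.0))  # False
```

The second call shows the design lever: intrinsic load is fixed by the content, so reducing extraneous load is what frees capacity for germane load.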
Cognitive overload describes the situation in which total cognitive load exceeds the limited capacity of short-term memory. In this case, effective learning does not take place. Here, effective learning means high performance without overloading short-term memory capacity. To create effective learning, it is necessary to minimize the sources of extraneous cognitive load and maximize the sources of germane cognitive load. Although the intrinsic cognitive load tied to the learning objectives cannot generally be controlled, content can be segmented and sequenced to optimize the amount of element interactivity that must be handled at any given time (Clark, Nguyen, & Sweller, 2006).

Effects of the Cognitive Load Theory

The explanations of cognitive load theorists on human cognitive structure and the learning process highlight a set of cognitive effects. Cognitive load theory has made it possible to study learning procedures and materials by defining these effects, determining their influence on learning, and applying cognitive principles. The cognitive effects that occur during the learning process are classified according to the type of cognitive load involved (Kalyuga, 2015; Uyulur, 2011).

Extraneous Effects

Worked-Example Effect: It may be useful to use worked examples instead of the traditional problem-solving strategy. A worked example includes a problem statement, the solution steps, and the solution. Worked examples focus attention on problem situations and the related processes (solution steps), helping students to create general solutions or schemas.

Completion Effect: When attempting to solve a problem, referring to worked examples requires processing the problem and the worked example in short-term memory at the same moment, which overloads working memory. Alternatively, it is recommended to use completion problems, in which part of the solution is given and the learner completes the rest.
Split-Attention Effect: Presenting multiple sources of data that address the same perceptual channel of short-term memory causes divided attention. This effect should be avoided, since it increases the cognitive load.

Modality Effect: If the information that must be handled through one channel exceeds the capacity of that channel, excessive cognitive load occurs. To prevent this, some of the data must be shifted to another, empty channel. This is described as the modality effect.
Redundancy Effect: Presenting data that does not contribute to schema formation or automation has a negative impact on learning. This is called the redundancy effect.

Expertise Reversal Effect: Teaching techniques that are effective for novice learners may impede expert learners. This situation is called the expertise reversal effect.

Guidance Fading Effect: Learners should be given guidance while gaining expertise and constructing schemas. This guidance should suit the learners and be specific to the objectives; unnecessary guidance has negative effects.

Goal-Free Effect: Goal-free problems are designed to decrease the irrelevant cognitive load caused by means-ends analysis and to support schema formation. They do not allow learners to compare the present problem state with a target situation, because no goal state is stated. Cognitive load therefore decreases, as there is no means-ends analysis, and learners must develop an alternative strategy while solving goal-free problems.

Transient Information Effect: This effect is the occurrence of learning losses when information disappears before it can be processed for long enough to be integrated with the information that comes before or after it. It underlines that under certain circumstances transient information may interrupt learning.

Intrinsic Effects

Element Interactivity Effect: If learning involves material with high element interactivity, it tends to exceed the limited capacity of short-term memory. The effects of cognitive load are harder to observe with low element interactivity material, because such material requires lower germane cognitive load.
Isolated/Interacting Elements Effect: When material is presented with all its interacting elements, it cannot be processed, because the capacity of short-term memory is exceeded. In such a case, the interacting elements need to be taught first as isolated, non-interacting elements so that learning can take place. In this way, schemas related to the items are formed first; as soon as sufficiently developed schemas exist, the interacting elements become understandable, because they can then be processed in short-term memory.

Germane Effects

Variable Examples Effect: Varying the variables across examples makes learning easier, as it influences how similar examples are processed in cognitive processes.

Imagination Effect: Imagination requires cognitively reviewing a procedure in short-term memory. For material with high cognitive load, processing the data in short-term memory is impossible before schemas are created; imagination techniques can be applied as soon as schemas exist. Applying imagination helps schema automation.

Cognitive Processes in Problem Solving

Humans by nature interact with their environment. The people, objects, and events we encounter activate our cognitive system: objects and changes are recognized, interpreted, and responded to through cognition. Such meaningful cognitive processing can be considered problem solving. A problem is solved by converting a given state into a target state; for this reason, a problem consists of a given state, a target state, and the cognitive processes that must be carried out to convert the one into the other. The problem-solving process can be considered a search of the problem space for the knowledge elements required for the solution. As a result, solving a problem means finding the cognitive operations required to convert the given state into the target state within the problem space (Robertson, 2017).

While solving a problem, our cognitive system initiates a set of operations. In other words, the problem solver searches the problem space for operators that lead to the solution. To do this, the problem solver chooses a state among the active states. Then an operator applicable to that state is chosen. Next, the chosen operator is applied to the selected active state to produce a new state. Finally, the new state is tested to see whether it is a target state for the solution.
If the new state matches the target state, the problem is solved. If there is a mismatch, the problem solver places the new state among the active states. After any failure, a subset of states is chosen from the active states and the procedure is repeated until a match between a new state and the target state is found (Kiesewetter et al., 2013).
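The cycle just described can be sketched as a generic state-space search. The state representation, the operator set, and the breadth-first ordering below are illustrative assumptions, not part of the source account:

```python
# Sketch of the described search cycle: pick an active state, apply an
# applicable operator, test the new state against the target, and keep
# unmatched states in the active set. Breadth-first order is assumed.
from collections import deque

def solve(initial, target, operators):
    """operators: functions mapping a state to a new state (or None)."""
    active = deque([initial])
    seen = {initial}
    while active:
        state = active.popleft()          # choose a state among active states
        for op in operators:              # choose an applicable operator
            new = op(state)               # apply it to produce a new state
            if new is None or new in seen:
                continue
            if new == target:             # test against the target state
                return new
            seen.add(new)
            active.append(new)            # mismatch: keep it as an active state
    return None                           # no match found

# Toy example: reach 10 from 1 with "add 3" and "double" operators.
ops = [lambda s: s + 3, lambda s: s * 2]
print(solve(1, 10, ops))  # 10
```

The `seen` set is a small addition that prevents revisiting states; without it, the described cycle could loop over the same states indefinitely.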
In the problem-solving procedure, the problem solver can also follow a set of strategies. Problem-solving strategies can be learned and transferred to other problem situations; transferring a strategy requires deducing solution rules. The problem solver can gain expertise through knowledge elements, cognitive operators, and deducible cognitive problem-solving strategies. Knowledge elements determine the cognitive task domain, and problems can be examined in two groups according to their task domain (Allard, Verhaeghen, & Hertzog, 2014).

Knowledge-Lean Task Domains

Problems that do not require specific training or prior knowledge are categorized in the knowledge-lean task domain. In this domain, the problem solver generally operates with three types of strategy. In the backup strategy, the problem solver keeps the set of old states and chooses one when necessary. In the proceed strategy, the problem solver chooses an operator applicable to the current state, applies it, and tests the resulting state. Both the backup and the proceed strategy are considered nondeterministic, since the procedure contains a number of choice points but does not specify any criteria for making a selection. A strategy is called heuristic when it sets criteria and narrows the set of choices (Liu et al., 2012).

Novice problem solvers generally use weak cognitive problem-solving methods. One of the most used weak methods is the forward chaining method, a proceed strategy: the procedure starts with the initial state; an operator is then selected among the possible ones through a heuristic strategy; finally, the operator is applied and the cognitive procedure repeats. In the backward chaining method, the procedure starts with the final state and continues with operator selection and application to the final state; the inversely applied operators produce a solution path from the final state back to the initial state.
In the operator subgoaling method, the problem solver selects the most promising operator without considering whether it suits the current state. If the operator fails because some of its preconditions are not met, a subgoal is formed to find a way to change the current state so that the operator can be applied. The analogy method enables the problem solver to reuse the solution procedure of one problem for another problem (Montague et al., 2014). More experienced problem solvers tend to use weak methods in combined forms. In means-ends analysis, the problem solver uses forward chaining and
Problem Solving Procedure in terms of Cognitive Theories
operator subgoaling together. First, an operator is sought to decrease the disparity between the target state and the given state. Then, subgoals are set up to provide a pathway to the target state. In this way, the given state is compared to the target state and operators are chosen to reduce the difference (Díaz et al., 2015).
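A minimal sketch of means-ends analysis, under the illustrative assumption that each operator carries an explicit precondition and a subgoal action (the operator names and rules here are invented for demonstration, not taken from the cited work): the operator that most reduces the difference is chosen, and an unmet precondition triggers a subgoal first.

```python
def means_ends(state, target, operators):
    """Means-ends analysis sketch: repeatedly pick the operator that most
    reduces the difference between the given state and the target state;
    if its precondition is unmet, first satisfy it as a subgoal."""
    trace = []
    while state != target:
        # choose the operator that most reduces the target/given disparity
        best = min(operators, key=lambda op: abs(target - op["apply"](state)))
        if not best["precond"](state):
            # precondition unmet: a subgoal changes the current state first
            state = best["subgoal"](state)
            trace.append(("subgoal", state))
        state = best["apply"](state)
        trace.append((best["name"], state))
    return trace

# Hypothetical operators: "double" only applies to even states
operators = [
    {"name": "add1", "apply": lambda s: s + 1,
     "precond": lambda s: True, "subgoal": lambda s: s},
    {"name": "double", "apply": lambda s: s * 2,
     "precond": lambda s: s % 2 == 0, "subgoal": lambda s: s + 1},
]
print(means_ends(3, 8, operators))  # [('subgoal', 4), ('double', 8)]
```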
Figure 1.4 Performance and Task Demand (Chen et al., 2016).
By using cognitive problem-solving methods, problem solvers are expected not only to come up with the solution but also to induce solution rules (Figure 1.4). Solution rules enable the problem solver to transfer successful algorithms to other problem-solving procedures. Even if problem solvers arrive at the solution and complete a couple of problems using cognitive problem-solving methods, they may not induce a solution rule. Rule induction requires extra information and the successful solution of corresponding problems in order to construct schemas. On the other hand, the weak cognitive problem-solving methods may fail to solve the problem, and the solution may demand combined methods. Combined methods, however, require more cognitive resources, since they demand completing cognitive steps such as focusing attention on features of the problem, choosing the right operator, and accomplishing subgoals. In other words, performing more cognitive operations while monitoring the progress of the method increases the cognitive load. A high cognitive load also occupies the capacity that could otherwise be used to learn important features of the problem. Thus, the problem solver can fail to solve the problem because of a lack of
schema acquisition and high cognitive load, even while executing the cognitive problem-solving methods (Kiesewetter et al., 2016; Díaz et al., 2015).

Knowledge-rich task domains

This domain demands specific knowledge as a prerequisite for the problem solution. Most of the knowledge peculiar to a domain can be categorized as conceptual or procedural. Conceptual knowledge is information about the concepts and principles of the related domain, such as genotype in biology, reaction in chemistry, or gravity in physics. Knowledge about the steps needed to successfully complete a task is described as procedural knowledge. To solve a problem from a knowledge-rich domain, a problem solver should have developed a schema that includes both conceptual and procedural knowledge; in other words, the problem solver should develop sufficient expertise in the specific domain. Once a schema including the prerequisite knowledge of the domain is formed, the problem solver can use it for similar problems (Kurniati & Annizar, 2017). In this domain, problem solving starts with the activation of prerequisite domain-specific knowledge. This requires short-term memory to search long-term memory for a schema of prerequisite conceptual and procedural knowledge that suits the features of the problem. Short-term memory must then also choose an operator among the domain-specific ones and apply it to the given state to produce a solution. This means extra cognitive load for memory and a potential failure to reach the solution (Schilling, 2017). Domain-specific knowledge brings extra complexity to problem solving, since it requires knowledge of, and experience with, the domain-specific principles, concepts, and the interactions among them, in addition to knowing how to use this information to produce a solution. The problem solver is expected to seek a proper schema in long-term memory and fill in the domain-specific parameters of the problem.
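The schema retrieval described here can be illustrated with a toy sketch. The two formula schemas, the feature-matching rule, and the dictionary standing in for long-term memory below are hypothetical, not part of the cited work; the point is only that the problem's features select a stored schema, whose parameters are then filled in from the problem.

```python
# Hypothetical "long-term memory": schemas keyed by problem features.
SCHEMAS = {
    ("force", "mass"): {
        "name": "Newton's second law",
        "procedure": lambda p: p["force"] / p["mass"],    # a = F / m
    },
    ("distance", "time"): {
        "name": "average speed",
        "procedure": lambda p: p["distance"] / p["time"],  # v = d / t
    },
}

def solve_with_schema(problem):
    """Knowledge-rich solving sketch: search memory for a schema whose
    features match the problem, then fill in its parameters."""
    features = tuple(sorted(problem))      # features given in the problem
    schema = SCHEMAS.get(features)         # retrieval from long-term memory
    if schema is None:
        return None  # no matching schema: the solver lacks domain expertise
    return schema["name"], schema["procedure"](problem)

print(solve_with_schema({"force": 10.0, "mass": 2.0}))
```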
The features of a domain-specific problem determine the domain-specific knowledge, the operator, and the schema to activate. Thus, the problem construction process becomes prominent because of the demand for domain-specific knowledge, operators, and schema construction as prerequisites (Decker & Roberts, 2015).

Conclusion

In today's education system, learners are asked to solve problems from an early age. Learners are evaluated as successful or unsuccessful according to their
problem-solving performance. In fact, the interests, attitudes, skills, and abilities of individuals are judged according to the results of such evaluations, and serious decisions are made on this basis, such as assigning students to the next level of education, defining the school type, or deciding on an occupation. To succeed in all these cases, it is necessary to develop cognition and put it to work. Many theories have been put forward to explain the complex nature of human cognition. When cognitive theories are examined, it is understood that problem solving is a complex process in which cognitive structures must operate effectively. Individuals who are aware of their cognitive structures, develop experience, and practice have the potential to be successful in problem-solving processes. Individuals who are not aware of their cognitive structures fail because of misunderstanding the problem, lacking the pre-structures necessary for the solution, being unable to choose cognitive algorithms, or lacking sufficient problem-solving experience. The cognitive approach to the problem-solving process requires the introduction of learning experiences that enable learners to become aware of cognitive structures and procedures. This does not mean that learners should simply meet routine problem-solving situations as often as possible. Studies have reported that learners did not develop the desired level of cognitive skills even after continuous problem-solving practice, and that they had difficulty in applying the acquired knowledge to similar problem situations. Therefore, learning should be designed by taking cognitive structures, operations, and principles into account. In addition to teaching activities, cognitive principles should also be considered when designing problems, especially those used in evaluation processes.
References

Acuna, S. R., Garcia Rodicio, H., & Sanchez, E. (2011). Fostering Active Processing of Instructional Explanations of Learners with High and Low Prior Knowledge. European Journal of Psychology of Education, 26(4), 435-452.

Addis, D. R., Barense, M., & Duarte, A. (2015). The Wiley handbook on the cognitive neuroscience of memory. Oxford, UK: Wiley Blackwell.

Allard, E., Verhaeghen, P., & Hertzog, C. (2014). The Oxford handbook of emotion, social cognition, and problem solving in adulthood. New York: Oxford University Press.

Brydges, C. R., Gignac, G. E., & Ecker, U. K. H. (2018). Working memory capacity, short-term memory capacity, and the continued influence effect: A latent-variable analysis. Intelligence, 69, 117-122.

Chen, F., Zhou, J., Wang, Y., Yu, K., Arshad, S. Z., Khawaji, A., & Conway, D. (2016). Robust Multimodal Cognitive Load Measurement (Human-Computer Interaction Series). Cham: Springer International Publishing.

Churchill, A., & Fernando, W. (2014). An evolutionary cognitive architecture made of a bag of networks. Evolutionary Intelligence, 7(3), 169-182.

Clark, R., Nguyen, F., & Sweller, J. (2006). Efficiency in learning: Evidence-based guidelines to manage cognitive load. San Francisco: Pfeiffer.

Decker, S., & Roberts, A. (2015). Specific cognitive predictors of early math problem solving. Psychology in the Schools, 52(5), 477-488.

Díaz, Córdova, Cañete, Palominos, Cifuentes, & Rivas. (2015). Inter-channel Correlation in the EEG Activity During a Cognitive Problem-Solving Task with an Increasing Difficulty Questions Progression. Procedia Computer Science, 55, 1420-1425.

Jones, G. (2012). Why chunking should be considered as an explanation for developmental change before short-term memory capacity and processing speed. Frontiers in Psychology, 3, Article 167.

Kala, N. (2012). The effect of instructional design prepared on thermodynamics unit by using Cognitive Load Theory on chemistry students' learning at
retention and transfer level. Unpublished Doctoral Dissertation. Trabzon: Karadeniz Technical University.

Kalyuga, S. (2009). Cognitive Load Factors in Instructional Design for Advanced Learners. New York: Nova Science.

Kalyuga, S. (2015). Instructional Guidance: A Cognitive Load Perspective. Charlotte: Information Age Publishing.

Kamiński, J., Brzezicka, A., & Wróbel, A. (2011). Short-term memory capacity (7 ± 2) predicted by theta to gamma cycle length ratio. Neurobiology of Learning and Memory, 95(1), 19-23.

Kaya, E. (2015). Determining the effectiveness of technology supported guided materials based on cognitive load theory principles related to "Solar system and beyond: Space Puzzle" unit. Unpublished Doctoral Dissertation. Trabzon: Karadeniz Technical University.

Kiesewetter, J., Ebersbach, R., Görlitz, A., Holzer, M., Fischer, M., & Schmidmaier, R. (2013). Cognitive Problem-Solving Patterns of Medical Students Correlate with Success in Diagnostic Case Solutions. PLoS One, 8(8), e71486.

Kiesewetter, J., Ebersbach, R., Tsalas, N., Holzer, M., Schmidmaier, R., & Fischer, M. (2016). Knowledge is not enough to solve the problems - The role of diagnostic knowledge in clinical reasoning activities. BMC Medical Education, 16(1), 1-8.

Kurniati, D., & Annizar, A. (2017). The Analysis of Students' Cognitive Problem-Solving Skill in Solving PISA Standard-Based Test Item. Advanced Science Letters, 23(2), 776-780.

Liu, C., Liu, J., Cole, M., Belkin, N. J., & Zhang, X. (2012). Task difficulty and domain knowledge effects on information search behaviors. Proceedings of the American Society for Information Science and Technology, 49(1), 1-10.

Montague, M., Krawec, J., Enders, C., & Dietz, S. (2014). The Effects of Cognitive Strategy Instruction on Math Problem Solving of Middle-School Students of Varying Ability. Journal of Educational Psychology, 106(2), 469-481.
Moreno, O. A. (2017). Attention and Dual Coding Theory: An Interaction Model Using Subtitles as a Paradigm. Doctoral Dissertation. Barcelona: Universitat Autònoma de Barcelona.

Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38(1), 1-4.

Phillips, S. (2014). Analogy, cognitive architecture and universal construction: A tale of two systematicities. PLoS One, 9(2), e89152.

Pickering, M. J., & Clark, A. (2014). Getting ahead: Forward models and their place in cognitive architecture. Trends in Cognitive Sciences, 18(9), 451-456.

Plass, J. L., Moreno, R., & Brünken, R. (2010). Cognitive Load Theory. New York: Cambridge University Press.

Robertson, S. I. (2017). Problem solving: Perspectives from cognition and neuroscience (2nd ed.). New York: Routledge.

Schilling, J. (2017). In Respect to the Cognitive Load Theory: Adjusting Instructional Guidance with Student Expertise. Journal of Allied Health, 46(1), e25-e30.

Schweickert, R., Fisher, D. L., & Sung, K. (2012). Discovering cognitive architecture by selectively influencing mental processes. New Jersey: World Scientific.

Sun, R. (2011). Memory systems within a cognitive architecture. New Ideas in Psychology, 30(2), 227-240.

Sussman, A., & Hollander, J. (2015). Cognitive architecture: Designing for how we respond to the built environment. New York: Routledge.

Sweller, J., Ayres, P., & Kalyuga, S. (2011). Cognitive Load Theory (Vol. 1, Explorations in the Learning Sciences, Instructional Systems and Performance Technologies). New York, NY: Springer New York.

Takir, A. (2011). The effect of an instruction designed by cognitive load theory principles on 7th grade students' achievement in algebra topics and cognitive load. Unpublished Doctoral Dissertation. Ankara: Middle East Technical University.
Unlu, M. (2015). The investigation of studying and learning strategy based online activities in terms of achievement, retention and cognitive load. Unpublished Doctoral Dissertation. Ankara: Gazi University.

Uyulur, A. (2011). Evaluation of the efficiency of learning environments: A cognitive load approach. Unpublished Master Dissertation. Istanbul: Bahcesehir University.

van Kesteren, M. T., Ruiter, D. J., Fernández, G., & Henson, R. N. (2012). How schema and novelty augment memory formation. Trends in Neurosciences, 35(4), 211-219.

Widrow, B., & Aragon, J. (2013). Cognitive memory. Neural Networks: The Official Journal of the International Neural Network Society, 41, 3-14.

Wixted, J. T. (2018). Stevens' Handbook of Experimental Psychology and Cognitive Neuroscience, Learning and Memory. Newark: John Wiley & Sons, Incorporated.
Chapter 2
Improving Item Validity through Modification in Terms of Test Accessibility
Yılmaz KARA
Introduction

High expectations about the scores of central examinations, a competitive atmosphere driven largely by national ambitions to achieve better international examination scores, and the adoption of performance-based, activity-oriented formative classroom assessment approaches have brought increasing attention to item and test development efforts (Wößmann, 2005). In recent years, the increasing and differentiated demands of society concerning assessment and measurement in education have focused largely on writing more reliable and valid questions. Efforts to manage the complexity of the learning objects embedded in questions have stimulated the application of cognitive load theory principles to question development techniques (Paas et al., 2003). At the same time, the concept of accessibility must be understood in order to grasp the approach to performance assessment suggested by cognitive load theory. Test accessibility is a measure of the degree to which all test takers are given the opportunity to display their proficiency on the targeted frameworks (Beddow et al., 2008). Succeeding in an examination requires giving correct answers to the test questions by using physical and cognitive resources to some extent. If there is no fit between students' cognitive capacities and the cognitive levels of the questions, low test accessibility will most likely affect the test results negatively, because of misunderstandings of what the questions really ask. Thus, increasing test accessibility essentially means reducing the question elements likely to accumulate excessive cognitive load, and thereby obtaining more valid test scores (Kettler et al., 2009).
Cognitive capacity levels of the structures used in questions need to be balanced at a level that all test takers can comprehend. Beddow et al. (2008) studied increasing test accessibility by focusing on the elements of test questions: including only the components necessary for giving an answer and removing unnecessary texts, visuals, graphs, or tables. This chapter first focuses on the term accessibility. Then, the adaptation of accessibility theory to test items for educational measurement and assessment is introduced. Finally, item accommodation for test accessibility is exemplified, on the assumption that all test takers, who have widely varying characteristics, can be evaluated directly.

Conceptual Understanding of Test Accessibility

Access is more than participation in general education with learning program standards and standards-based assessments; it is one of the basic principles of education. Access is the term used to underline the overcoming of impeding factors that limit learners' characteristics and skills in achieving the intended and tested objectives of an educational teaching program. In terms of teaching, access represents the whole range of opportunities for a learner to attain the gains associated with the intended teaching program. In the current educational system, this means that all learners have a proper opportunity to achieve the objectives of the relevant learning program and to demonstrate performance on objective-related achievement tests. In this process, teachers are encouraged to teach learning program objectives by designing learning that increases the likelihood of learning for each student, rather than by testing. Unfortunately, there are many obstacles to student learning (Akbulut & Çepni, 2013; Elliott et al., 2018).
Accessibility, which is essential for effective teaching and fair testing, is expressed as a measure of how well a system clears impeding factors and allows the fair use of elements and services by different individuals. Learning, learning materials, and tests should be accessible to all students participating in the learning process. Otherwise, it is highly probable that inferences from observations and test results will be wrong, in addition to learning being incomplete. Therefore, educators have important responsibilities in achieving the best possible accessibility (Solano-Flores et al., 2014). In terms of educational evaluation, accessibility is considered a measure of the student's ability to indicate what they have gained in relation to the tested standard.
If students are enabled to indicate their gains during the educational evaluation process, the evaluation is considered accessible. Test accessibility is a measure of how much a test event allows the participant to show information about the target construct (Carney et al., 2016). Therefore, an accessible test or test item does not include any structure that prevents the test participant from showing how fully the student possesses the qualifications measured by the test. The balance between the physical, material, or cognitive facilities required by a test and the structure it is designed to measure determines the soundness of the inferences from test scores, which reflects test accessibility. The implications of such test accessibility concerns are particularly salient for test takers for whom extraneous test or item demands preclude demonstrating what they know. Indeed, extraneous demand reduces a test's accuracy and precision as a measuring tool for students for whom it poses a hindrance, while test accessibility is not reflected in the inferences made from test scores for students for whom the extraneous demand does not reduce the accessibility of the test. Test accessibility is at its highest level when all students can exhibit their gains on the tested construct without any obstacles (Cawthon et al., 2013). Thus, item access minimizes bias and increases fairness in the learning and evaluation process.

Test Accessibility Model

A test item that permits access for a student is free from features that reduce the student's ability to represent their qualification on the target construct, mostly described in terms of the standards of the related learning program. Test accessibility should be considered a relationship between the student and the test item; more precisely, it is an interplay between the properties of the test item and the qualifications of the student.
Figure 2.1 presents the model that situates access to assessment within educational measurement and assessment. Viewing the figure from left to right, a student in school receives access to learning through the instruction he or she receives in the classroom, in the context of the standards defined in the learning program. The purpose of this instruction is to provide the student (or, in our terms, the test taker) with the knowledge, skills, and abilities needed to participate successfully in the test event, which involves a set of interactions between the student and the test. The outcome of this test is a score, from which an inference is made about the student's knowledge, skills, and abilities as they pertain to the tested content. Based on these inferences,
decisions are made that may influence the subsequent learning program and/or instruction the student receives (Elliott et al., 2018).
Figure 2.1 Unified model of educational access (Beddow, 2011).
The central part of this model is the notion of the test event (Figure 2.2). A test event consists of the student's engagement with the test materials, with the purpose of generating a result that accurately reflects their qualification on the tested content. An optimal test event, therefore, produces a score that represents only the interaction between the student's qualification on the tested content and the test itself. If a student's improper access to the test event influences their score on the test, then the test event consisted not only of the targeted interaction (i.e., the interaction intended for measurement) but also of one or more ancillary interactions. The accessibility of a test for an individual test taker is based on the impact of these interactions on the test score. Therefore, the accessibility of a test necessarily differs from one test taker to another, based on the individual differences between those test takers (Laughton, 2014).
Figure 2.2 Test Event
In the scope of assessment development and evaluation, test error refers to the discrepancy between a student's "true score" (i.e., his or her score if the test or test item represented a perfect measurement of the student's qualification on the tested content, yielding a score free of construct-irrelevant variance) and their actual score. In the theoretical model of test error resulting from accessibility, the test-taker characteristics (i.e., potential interactors with features of the test) are considered in five groups: perceptive, physical, receptive, emotive, and cognitive. Each of these sources of error can be linked with one or more categories of test or test item features, including the mode or means of response, mode of delivery, setting, consequences, and the demand for cognitive resources. Although physical access is an important dimension of accessible testing, most physical access needs can be addressed through typical accessibility or universal design methods (Kavanaugh, 2017). For example, the intended construct for a university entrance assessment may be to solve problems involving measurement and estimation. Such items may
contain substantial printed text that students must read. For a student to understand the problem presented and subsequently respond, he or she must be able to detect and decode this printed text. A test taker with a reading disability or visual impairment may be unable to do so (Harayama, 2013). Such students are prevented from indicating their qualifications by an inability to access item content. According to Beddow (2011), "the test-taker characteristics that interact with test or test item features and either promote or inhibit one's access to the test event are referred to as access skills" (pp. 381-382). As illustrated, often implicit in the design of many state tests is an assumption that test takers possess certain access skills (e.g., the ability to decode printed text, see a graph, hold a pencil and legibly handwrite their responses, and maintain attention and motivation throughout the test) that are necessary for meaningful participation. However, the extent to which individual students possess these skills can vary substantially. The influence of these categories on subsequent test scores is often discussed in terms of access skills, defined as the specific qualifications required to engage a test for the purpose of accurate measurement. The measurement purpose of most educational tests is to examine the degree of a test taker's mastery of qualifications, referred to as the target construct. Similarly, each item on the test is either implicitly or explicitly designed to measure part of this target construct, with the assumption that the sum of the test items represents a sufficient sample to measure the standards of the learning program. Although the target construct of a complete test is often relatively clear (e.g., Year 3 Grammar, Advanced University Biology), the specific construct targeted for measurement by an individual test item within the test may be less so.
It is therefore of critical importance to determine the target construct not only for the test but also for each of the test items. Ideally, these definitions are generated by the test developers. When the target construct is sufficiently delineated (i.e., defined in terms of the level of knowledge tested, cognitive demand, reading level, and context), it is easier to discern the various access skills that are necessary to engage the construct. However, many achievement tests delineate the target construct only to the level of clusters or strands of knowledge, and item writers are given great latitude in how the items measure them. As a result, the items may measure ancillary constructs that are not explicitly defined in the target construct specified by the developer (Perlman et al., 2016).
Item Modifications for Accessible Test Items: Theory to Practice

A group of students contains members who differ in their accessibility skills. For this reason, it is hard to write test items that are accessible to all students. Still, an item writer needs to consider accessibility to ensure that students can interact sufficiently with the item elements through their access skills. Otherwise, students will not be provided with an evaluation process in which they can demonstrate their knowledge of the target construct. To enable optimal accessibility, the item writer needs to identify the accessibility level of the item and modify its elements to increase accessibility for more students (Vanchu-Orosco, 2012). The elements of an item are presented in Figure 2.3.
Figure 2.3 Anatomy of an item
As seen in Figure 2.3, an item needs to be considered in five categories: paragraph, visuals, item stem, answer choices, and layout (Beddow, 2011). To modify an item, each category needs to be assessed in terms of item accessibility. First, the item should be reviewed to identify accessibility barriers. Then, the item should be modified by following the principles of accessibility theory and considering the characteristics of accessible test items (Elliott et al., 2018). The characteristics of accessible items are presented in Table 2.1.

Table 2.1 Elements and characteristics of accessible test items.

Passage: Includes only the required words. Clearly written in a minimum of words. Uses sentence structures appropriate to the test taker's grade. Contains clear instructions or pre-reading.

Stem: Clearly written in as few words as possible. Addresses the target construct or the related teaching program standard. Includes a distinct target construct. Set up with a positively worded verb in the active voice.

Visuals: Includes only the required visuals. Consists of clearly drawn, simple figures. Supported with the necessary text. Free of any components that could distract students.

Answer choices: Clearly written in as few words as possible. Includes a key and distractor choices of nearly the same length, order, and content. Contains equally plausible distractors. Consists of one correct choice.

Layout: Arranged to present the item with all of its elements. Includes visuals engaged with the other elements of the item. Designed to facilitate answering. Provides enough empty area to comprehend the item elements. Set up with large, readable item elements.
Practicing the item accessibility principles on a sample item will be useful for comprehending item modification toward a test item that is accessible to more students. Figure 2.4 displays an item included in a 9th grade biology test. The target construct of the item of interest was defined in terms of biology curriculum standards. The item was aligned with a targeted standard defined as "Student
explain the cellular constructs and their function" by the education council. The item begins with brief information about the discovery of the cell. The information then underlines that the cell has specialized parts with different functions. The item first requires the student to imagine a cell and its parts; then, the function of the nucleus is asked directly. Students must know the nucleus and its function to find the correct answer choice.
Figure 2.4 An item in a 9th grade biology test
The same item was then modified in terms of accessibility theory to ensure accessibility for students who have varying qualifications on the
related standard of the learning program. The modified item is presented in Figure 2.5. Modification toward more accessible items is best performed by a team. The team members should preferably be experienced in educational measurement and assessment, in addition to having expertise in educational research and practice, after being trained to modify test items in terms of accessibility theory. The team members are expected to follow the elements and characteristics of accessible test items (Table 2.1).
Figure 2.5 Revised item in a 9th grade biology test
While modifying the item stimulus, the team discussed whether each intended modification would support measuring the target construct more fully in terms of accessibility. The modified item has a shorter stimulus than the original form of the item. The team decided to remove the history of the discovery of the cell, since it is not directly related to the target construct. As defined in the standard, the target construct should include knowledge of cell structures and their functions; thus, the stimulus should include information highlighting the cell structures and their functions, and all other information not directly related to the target construct was removed (Figure 2.5). The item stem was integrated with the item stimulus in the initial form of the item. Considering the length of the text, the modification team decided to separate the
item stem. The visual was used as a separator. The item stem was shortened and presented in a larger font, and its main focus, "function of cell part" in this case, was highlighted in bold. Thus, the item was arranged to guide the cognitive load from the visual to the item stem and on to the answer choices. Before the modification, the item had no visual to support the measured target construct: students were expected to imagine a cell and its parts and then had to know their functions. The modification team decided to add a cell figure to increase item accessibility. The figure shows the structures of a cell and helps students visualize the cell and its parts. In addition, it provides contextual support for students who are unfamiliar with the parts of the cell but know their functions. The figure also separates the item stimulus from the item stem, facilitating the identification of the item construct. The answer choices were also reviewed by the modification team. All answer choices were minimized to reduce cognitive load and increase item accessibility. First, the cognitive load was distributed across the visual and the item stem; for this purpose, a mark was included in the visual to indicate the targeted cell structure. Second, all answer choices were shortened by removing cell structures other than the target construct. In the modified form, students are required to recognize the target construct from the visual and find its function among the answer choices, instead of trying to imagine the cell and comparing the given cell structure with a function. The modification team also decided to eliminate the most implausible or least preferred distractor, following a meta-analysis of answer choices (Rodriguez, 2005). The study revealed that three answer choices are optimal for reliability and item discrimination without reducing item difficulty. In this case, the distractor that did not include a cell part was eliminated from the answer choices.
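Although the chapter treats item difficulty and item discrimination conceptually, a brief sketch may clarify how these statistics are typically computed in classical item analysis. The response data and the simple upper-lower discrimination index below are invented for illustration and are not taken from the modified biology item.

```python
def item_statistics(responses):
    """Classical item analysis sketch for one item: difficulty is the
    proportion of correct answers; discrimination is the difference in
    that proportion between high- and low-scoring groups. Each response
    is a pair (item_correct in {0, 1}, total_test_score)."""
    n = len(responses)
    difficulty = sum(correct for correct, _ in responses) / n
    ranked = sorted(responses, key=lambda r: r[1], reverse=True)
    k = n // 3  # roughly the upper and lower thirds of the score ranking
    upper = sum(c for c, _ in ranked[:k]) / k    # proportion correct, top group
    lower = sum(c for c, _ in ranked[-k:]) / k   # proportion correct, bottom group
    return difficulty, upper - lower

# Hypothetical data: (answered this item correctly?, total test score)
data = [(1, 95), (1, 90), (1, 88), (1, 80), (0, 60),
        (1, 55), (0, 40), (0, 35), (0, 20)]
p, d = item_statistics(data)
print(round(p, 2), round(d, 2))  # 0.56 1.0
```

An item that remains easy (high p) while still separating high from low scorers (high d) is the pattern the modification team is aiming for.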
The modification team also arranged the page layout of the test. Instead of using a separate answer sheet, students could answer directly on the same page, which prevented them from misaligning questions and answers. In the original test format, eight questions were placed on a single page on average; in the modified format, each item was presented on its own page. Text and item elements were also made larger and more readable, and white space was increased to improve accessibility. By applying the principles of accessibility theory, the modification team thus produced a more accessible test item in terms of cognitive demand, item difficulty, and item discrimination.

Considering the arrangements described above, it can be concluded that the item writer and the modification team focused on the target construct mostly in terms of cognitive objectives. In recent years, however, most curricula have been built not only on cognitive content but also on additional dimensions such as attitudes, values, science process skills, science-technology-society, and 21st-century, engineering, and thinking skills. If the target construct includes such additional dimensions, the item must also be modified so that these attitudes, values, and skills are measured. In this case, the modification team needs to find ways to measure the additional dimensions without reducing item accessibility. Item modification is thus an ongoing effort to improve item quality: modification remains necessary even for high-quality items as the target construct changes, and there are many points to consider. Test items can be contextualized in real-world or real-life situations, visualized through pictures or figures, and made more accessible by attending to the characteristics of accessible test items.

Conclusion

Examinations with complex structures that are difficult for students to interact with are likely to affect test scores negatively. In such cases, students cannot exhibit the target learning outcomes even if they have already attained them during instruction. However, it is possible to write more cognitively accessible test questions simply by avoiding unnecessary explanations that impose excessive cognitive load, writing answer choices that are balanced with one another, and placing all question components in a way that suits the target learning outcome(s) (Beckmann, 2010).
Finally, questions asked in large-scale central examinations taken by hundreds of thousands of test takers also need to be developed with cognitive load theory in mind. The test accessibility afforded to test takers should be evaluated in all its dimensions from the very beginning of the question writing process. In this way, the assumption that a test is accessible to all test takers can be fulfilled in large-scale, nationwide, or central assessments. Including questions that maximize test accessibility for all test takers would positively affect test takers' results as well as the validity and reliability of the examination. High test accessibility would also support comparisons among different kinds of large-scale national and international examinations. Satisfaction could thus increase, in parallel with the positive opinions of teachers, parents, administrators, and the public about education, in addition to the students directly affected by the examination atmosphere.

Consequently, education researchers are expected to reveal whether questions with low test accessibility are truly understood. More studies on student achievement are needed concerning the effectiveness of test questions that are arranged according to cognitive load theory and accessible to all test takers. Existing research can be considered a forerunner of studies on the effects of such arrangements on different kinds of student groups, on their interactions, and on the outcomes of various arrangement strategies for student achievement and depth of understanding.
Improving Item Validity through Modification in Terms of Test Accessibility
References

Akbulut, H. İ., & Çepni, S. (2013). Bir üniteye yönelik başarı testi nasıl geliştirilir: İlköğretim 7. sınıf kuvvet ve hareket ünitesine yönelik bir çalışma. Amasya Üniversitesi Eğitim Fakültesi Dergisi, 2(1), 18-44.

Beckmann, J. F. (2010). Taming a beast of burden - On some issues with the conceptualisation and operationalisation of cognitive load. Learning and Instruction, 20, 250-264.

Beddow, P. A., Kettler, R. J., & Elliott, S. N. (2008). Test Accessibility and Modification Inventory. Nashville, TN: Vanderbilt University.

Beddow, P. A. (2011). Effects of testing accommodations and item modifications on students' performance: An experimental investigation of test accessibility strategies. ProQuest Dissertations and Theses.

Carney, M. B., Smith, E., Hughes, G. R., Brendefur, J. L., & Crawford, A. (2016). Influence of proportional number relationships on item accessibility and students' strategies. Mathematics Education Research Journal, 28(4), 503-522.

Cawthon, S., Leppo, R., Carr, T., & Kopriva, R. (2013). Toward accessible assessments: The promises and limitations of test item adaptations for students with disabilities and English language learners. Educational Assessment, 18(2), 73-98.

Elliott, S. N., Kettler, R. J., Beddow, P. A., & Kurz, A. (2018). Handbook of Accessible Instruction and Testing Practices: Issues, Innovations, and Applications (2nd ed.). Cham: Springer International Publishing.

Harayama, N. (2013). An analysis of the performance and accommodations for students who are non-verbal taking Pennsylvania's statewide alternate assessment (PASA). ProQuest Dissertations and Theses.

Kavanaugh, M. (2017). Examining the impact of accommodations and universal design on test accessibility and validity. ProQuest Dissertations and Theses.

Kettler, R. J., Elliott, S. N., & Beddow, P. A. (2009). Modifying achievement test items: A theory-guided and data-based approach for better measurement of what students with disabilities know. Peabody Journal of Education, 84(4), 529-551.

Laughton, S. (2014). Accessibility of tests in higher education online learning environments: Perspectives and practices of U.K. expert practitioners. ProQuest Dissertations and Theses.

Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38(1), 1-4.

Perlman, A., Hoffman, Y., Tzelgov, J., Pothos, E., & Edwards, D. (2016). The notion of contextual locking: Previously learnt items are not accessible as such when appearing in a less common context. Quarterly Journal of Experimental Psychology, 69(3), 410-431.

Rodriguez, M. C. (2005). Three options are optimal for multiple-choice items: A meta-analysis of 80 years of research. Educational Measurement: Issues and Practice, 24(2), 3-13.

Solano-Flores, G., Wang, C., Kachchaf, R., Soltero-Gonzalez, L., & Nguyen-Le, K. (2014). Developing testing accommodations for English language learners: Illustrations as visual supports for item accessibility. Educational Assessment, 19(4), 267-283.

Vanchu-Orosco, M. (2012). A meta-analysis of testing accommodations for students with disabilities: Implications for high-stakes testing. ProQuest Dissertations and Theses.

Wößmann, L. (2005). The effect heterogeneity of central examinations: Evidence from TIMSS, TIMSS-Repeat and PISA. Education Economics, 13(2), 143-169.
Chapter 3

Traditional Measurement and Evaluation Tools in Mathematics Education

Cemalettin YILDIZ
Introduction

One of the issues that mathematics educators frequently discuss is how students' knowledge of mathematics topics should be evaluated. In educational activities, the concepts of measurement and evaluation arise when determining students' knowledge of the subject at hand, the extent to which the targeted objectives have been achieved, and how effective the conducted activities were (Çepni & Çil, 2009). Measurement and evaluation are required to determine where students succeed or fail and to improve the quality of learning environments. The concepts of measurement and evaluation are generally used together and are often confused (Çepni & Ayvacı, 2007). Because of this confusion, students' interests and abilities are sometimes measured and evaluated incorrectly, and as a result students are directed to areas that do not match their interests and abilities (İşman, 2005).

Measurement is the observation of a specified attribute and the expression of the observation results in numbers or symbols (Demirel, 2005). In short, measurement can be defined as the process of determining the characteristics and quality of something (Oosterhof, 1994). Evaluation is a decision-making process in which the measurement results of an attribute are compared with a criterion (Ministry of National Education [MoNE], 2002). Evaluation can also be described as communicating the results clearly to students, teachers, parents, and others by measuring students' learning carefully and precisely (Harris, 1998). Evaluation differs from measurement in this respect (Demirel, 2014). In other words, the main characteristic of measurement is the numerical expression of results, while the main characteristic of evaluation is making decisions by interpreting the measurement results against certain criteria (Özgüven, 1994). For evaluation to yield the right decisions, measurement must be made as accurately and objectively as possible (Güler, 2011). Although measurement and evaluation are distinct concepts, they cannot be separated from each other and complement one another (Metin, 2010).

Traditional assessments based on data from measurement instruments can serve as feedback for teachers, students, and curricula (MoNE, 2010). Traditional measurement and evaluation are carried out to determine the extent to which students attain the objectives and to identify and eliminate deficiencies with the help of feedback (Şimşek, 2009). The positive aspects of traditional assessment can be listed as follows (Enger & Yager, 1998):

1. It is economical and can easily be applied to many students.
2. Students' overall rank can easily be calculated, which makes comparisons across settlements, cities, and countries possible.
3. It quickly reveals a student's knowledge and general standing.
4. It is very useful for research on different learning objectives.

In schools, true-false tests, multiple-choice tests, short-answer written exams, long-answer written exams, and oral exams are used to measure learning products or the cognitive characteristics of students (Tekin, 1996). When teachers devote time to these measurement-assessment tools, the quality of education can increase, and the indispensable role of such tools in education can gain even more importance (Şenel, 2008). If teachers use traditional measurement-assessment tools effectively in teaching mathematics, this hard-to-teach course can become easier to teach and more enjoyable, and students' success in it may increase. Teachers can also direct their students to the right areas by using traditional measurement-assessment tools.
For example, a student who earns good grades in mathematics lessons can be directed to a mathematics-based field. An evaluation made in this way allows students to become aware of their individual abilities and allows the teacher to know each student better (Şimşek, 2009). Apart from recognizing and guiding students, mathematics teachers have other tasks in learning environments, one of which is evaluating student achievement. In evaluating student achievement, mathematics teachers benefit from many traditional measurement-assessment tools. To carry out educational activities effectively, it is very important that teachers know these traditional tools well, understand their characteristics, and are informed about their positive and negative aspects. It should not be forgotten that every measurement-assessment tool has features that make it both deficient and superior, that each instrument is more appropriate in certain situations, and that certain issues must be taken into consideration when preparing measurement tools.

This chapter focuses on the traditional measurement-assessment tools used in mathematics courses and their properties. First, information is provided on the situations in which traditional measurement-assessment tools can be preferred. Then, the issues to be considered while preparing these tools are presented, followed by detailed information about the positive and negative aspects of these tools. Finally, example questions that can be used with each tool are given.

Verbal Exams

A verbal exam is a type of examination in which the questions are posed orally and the students think through and express their answers orally (Güler, 2011). In other words, in verbal exams students are asked questions and are required to express their answers verbally (Baki, 2008). Questioning and answering are usually done in front of the other students in the classroom (Özçelik, 2010). These exams are of great importance in the examination of verbal skills (Baykul, 2014).

When Should Verbal Exams Be Preferred?

The situations in which verbal exams can be preferred can be stated as follows (Baki, 2008; Doğan, 2016; Güler, 2011; Tan, 2009):
- Verbal questions can be used during a lesson to attract attention and ensure the active participation of students. They can be used to examine students' readiness, to identify points that were not understood or were misunderstood during the course, and to review at the end of a topic or unit.
- Since students have not yet developed written expression skills in the early years of preschool and primary school, questions should be asked verbally and answers taken in verbal form; verbal exams can be used in such cases.
- They can be used to measure the competence to use the native language correctly and to speak a foreign language effectively. Verbal exams are indispensable when verbal communication, word pronunciation, effective speaking, or reading skills are to be developed and measured.
- In diagnosis, they can be used to identify students' misunderstandings and deficiencies.
- They can be used to select the individuals whose features are most appropriate for a job or task.
What Should Be Considered in Verbal Examinations?

The points to be taken into consideration for more effective use of verbal exams are given below (Baki, 2008; Demirel, 2014; Doğan, 2016; Güler, 2011; İşman, 2005; Tan, 2009; Tekin, 1996):
- Verbal exams should not be conducted by calling students to the blackboard.
- The teacher should prepare an answer key for the questions.
- An examination plan should be prepared before the exam. This plan should specify the aim of the examination and the objectives to be tested, how many points will be given for which information, and the response time for each question.
- The teacher should arrange conditions such as the temperature, lighting, and acoustics of the exam environment in advance.
- If the ability to use language effectively is not being measured, many short questions should be asked.
- The teacher should create a psychological environment that relaxes the students, for example by starting the exam with a question the students know. In addition, the teacher should not shower students with questions.
- The effect of verbal exam scores on the course grade should be kept as small as possible.
- The teacher should prepare questions of a similar difficulty level in advance, considering the content and objectives of the subjects.
- The answers should not be too long.
- The questions should be clear, understandable, and objective. Expert opinion can be sought to ensure that the questions are appropriate, clear, and understandable.
- Enough questions should be prepared to cover all topics; a table of specifications can be used for this.
- Many pupils should not take a verbal exam together unless it is compulsory.
- Questions should be asked slowly and read aloud.
- Students should be informed in advance about the rules of the verbal examination.
Teachers who pay attention to the issues above can obtain more accurate measurements of student achievement through verbal examinations.

What Are the Advantages of Verbal Exams?

The positive aspects of verbal exams are given below (Doğan, 2016; Güler, 2011; Tan, 2009; Tan, Kayabaşı & Erdoğan, 2003):
- They make it possible to obtain in-depth answers from students.
- The chance of answering correctly by guessing and earning points is low.
- The questions are easy to prepare and take little time.
- There is no possibility of copying.
- Students are unlikely to answer incorrectly by misunderstanding the questions or to steer the questions in the direction they want.
- When the number of students is small, students' knowledge and academic confidence levels can be determined.
- They make it possible to get to know students better.
What Are the Disadvantages of Verbal Exams?

The negative aspects of verbal exams can be listed as follows (Baki, 2008; Doğan, 2016; Güler, 2011):
- Students often have no chance to think about and review their responses.
- When verbal exams are given individually, they take a great deal of time in crowded classes.
- The small number of questions decreases validity and reliability, which makes it difficult to compare students in terms of success.
- Since answers are not recorded, there is no opportunity to review answers and scoring.
- Objectivity is generally low, because rating is based on an overall impression.
- Students should be asked different but equivalent questions. The teacher may ask each student an equal number of questions, but the difficulty levels of the questions may differ; in this case, it is difficult to determine which students have learned more.
- Scoring can be contaminated by factors such as students' attire, current psychological state, personality traits (posture, tone of voice, facial expressions), verbal expression skills, and speaking skills.
- The teacher's tone of voice, gestures, and facial expressions, as well as students' attitudes toward the teacher, may negatively affect the scoring.
Examples of Verbal Exams

Example 1: What is a ratio?
Example 2: What are the types of triangles in terms of their sides?
Example 3: What are the differences between an equation and an identity?
Example 4: How do you define a triangle?

Long-Answer Written Exams

A long-answer written exam is an exam in which students are given questions in written form and are required to answer them in written form within a set period (Baykul, 2014). Long-answer written exams mostly require solutions, proofs, and answers that students think through and write out in order (Baki, 2008). It is the type of exam with which teachers and students are most familiar (Doğan, 2016). Long-answer written exams differ from short-answer exams in that they require a long composition, such as one or several paragraphs (Turgut & Baykul, 2010).
When Should Long-Answer Written Exams Be Preferred?

The situations in which long-answer written exams can be preferred are listed below (Baki, 2008; Doğan, 2016; Tan, 2009):
- Long-answer written exams are the most appropriate examinations for measuring the production of original and creative ideas, the analysis of ideas, the evaluation of opinions, problem solving, the application of knowledge in new situations, and interests and attitudes.
- They can be used to measure objectives at the application, analysis, synthesis, and evaluation levels.
- They can be used to measure composition skills, the ability to express oneself in writing, and the ability to apply grammar rules.
- They may be preferred when there is little time to prepare the exam questions.
- They may be preferred when the number of students in the class is low.
What Should Be Considered in Long-Answer Written Exams?

It may be useful to pay attention to the following points when preparing long-answer written exams (Baykul, 2014; Doğan, 2016; Özçelik, 2010; Şimşek, 2009; Tan, 2009; Turgut, 1984; Turgut & Baykul, 2010):
- Questions should be expressed clearly and comprehensively, without language, expression, or spelling mistakes.
- Questions should be prepared so that they can be answered independently of each other; the answer to one question should not affect the answer to another.
- Each question should measure a specific objective.
- Exam questions should not be taken from textbooks or other sources. For example, it is not appropriate to ask students problems solved in class or taken directly from the textbook.
- More questions should be prepared than will actually be asked in the exam, and the questions should be selected from among those that best assess the significant objectives expected to be learned in the course.
- Exam questions should be prepared and reproduced in advance, and given to students in a regular, legible form.
- A response key prepared before scoring should be used, and the scorer should not consider which student a paper belongs to.
- Unless variables such as composition skill, written expression, handwriting quality, conformity to grammar rules, and effective language use are being measured, these variables should not be rated.
- The teacher should not be under negative influences such as lack of attention, hunger, or fatigue when scoring.
- Students should be asked the types of questions that allow them to select, organize, and apply their knowledge; simple questions at the knowledge level should not be asked. In long-answer written exams, prompts such as "compare the similarities", "compare the differences", "show the reason", "discuss the opposite idea", and "explain how or why" can be used.
- Students should be given enough time to answer the questions.
- Exam papers must be scored objectively and read question by question.
- When preparing the exam questions, a plan should be made, and it should be checked with the help of this plan whether the objectives are well represented.
- Questions should be sufficiently limited. Since students cannot know exactly what is being asked in unrestricted questions, they tend to write whatever they want.
- The difficulty of the questions should suit the purpose of the exam.
- Exaggerated, padded answers should not be rewarded.
- Where and when the examination will be held should be announced to students in advance.
What Are the Advantages of Long-Answer Written Exams?

The positive aspects of long-answer written exams are given below (Aiken, 2000; Baki, 2008; Doğan, 2016; Güler, 2011; Özçelik, 2010; Tan, 2009):
- The questions are easy to prepare and take little time.
- The probability of answering correctly by chance is low.
- They encourage students to work harder and help them develop good study habits.
- The possibility of copying is low.
- They are more effective than multiple-choice and short-answer exams for diagnosis.
- They make it possible to measure students' problem-solving skills, their ability to apply knowledge to new situations, and their ability to produce answers or compositions of their own.
- They are effective in measuring writing skills.
- Because students think through their answers, long-answer written exams can be said to measure the objectives more accurately.
- They can be used at every level of education, starting from elementary school.
What Are the Disadvantages of Long-Answer Written Exams?

The negative aspects of long-answer written exams can be listed as follows (Baki, 2008; Baykul, 2014; Doğan, 2016; Güler, 2011; Turgut & Baykul, 2010):
- Because the answers are long, it is not possible to ask many questions. Therefore, the number of questions that can be asked in long-answer written exams, and accordingly the number of objectives that can be assessed, is low.
- It is not always possible to classify the answers as either completely wrong or completely correct; some parts of an answer may be correct while others are wrong.
- They invite exaggerated, padded responses.
- When the comprehensibility of the questions decreases, the possibility increases that students will answer the questions in the direction they themselves prefer; students can write quite different answers by misunderstanding the questions.
- The validity, reliability, and difficulty of the questions cannot be predicted.
- Since the scoring of long-answer written exams is subjective, different teachers may give different scores to the same answer sheet.
- Variables such as the legibility and beauty of the handwriting, the neatness of the paper, composition ability, expressive power, and writing speed can affect students' scores positively or negatively.
- Answering the questions and scoring them both take a lot of time. Because students need considerable time and energy to write their answers, the time and energy left for objectives such as thinking, organizing, analyzing, and evaluating remain limited.
Examples of Long-Answer Written Exams

Example 1: Prove that the product of two negative numbers is positive.
Example 2: Give information about the life of al-Khwarizmi and his contributions to mathematics.
Example 3: Explain Menelaus' theorem in visual and verbal form and prove the theorem.

Short-Answer Written Exams

Short-answer written tests are measurement tools consisting of questions that can be answered with a number, a word, or a sentence; the respondent thinks through the answer and presents it in written form (Baki, 2008). The main characteristic of short-answer written exams is that they consist of questions with short answers (Özçelik, 2010). Short-answer written items may take the form of a sentence to complete, a fill-in-the-blank sentence, or a question sentence (MoNE, 2010).

When Should Short-Answer Written Exams Be Preferred?

Short-answer written exams may be preferred in the following situations (Baykul, 2014; Demirel, 2014; Doğan, 2016):
- If the purpose of the exam is not to measure variables such as communication skills, effective speaking, and proper use of language, short-answer written exams can be used.
- Short-answer written exams can be used for subjects related to the properties of numbers and operations in mathematics, and they can also be used in geometry courses.
- They can be used to determine whether students have specific knowledge of specific phenomena.
What Should Be Considered in the Preparation of Short-Answer Written Exams?

For the effective use of short-answer written exams in the form of definitions, fill-in-the-blank items, or question sentences, it is recommended to observe the following points (Cunningham, 1986; Güler, 2011; Linn & Gronlund, 1995; Özçelik, 2010; Şimşek, 2009; Tan, 2009):
- Each question should examine a single important piece of information, objective, or skill.
- The question should be precise and have only one correct answer.
- Questions should be free of statements that give clues or contain unnecessary information.
- The information required for the answer should be given in full.
- No question should provide a clue to another question.
- Questions should be clear and easy to understand.
- The sentence that constitutes the question should not be taken from a source familiar to the respondent.
- Questions should be answerable independently of each other.
- An answer key must be prepared.
- The level of the question should match the level of the objective being examined.
- The question should be written in accordance with spelling rules.
- The questions should be appropriate to the relevant age or grade level.
- In fill-in-the-blank items, there should not be many blanks to fill; at most two words should be removed from a sentence.
- The blanks should be of equal length in every question.
- In fill-in-the-blank items, the wording around the blank should make clear what kind of answer belongs in the gap.
- In fill-in-the-blank items, the statement, sentence, or expression with the blank should be correct only when the expected answer is supplied, and wrong in all other cases.
- The blanks left for the answers should be arranged so as to facilitate answering and scoring.
- Placing the blank at the end of the sentence, rather than at the beginning, allows the student to understand the problem, or what is being asked, better.
What Are the Advantages of Short-Answer Written Exams?

Short-answer written exams have many advantages. The following points summarize the positive aspects of short-answer written exams in the form of definitions, fill-in-the-blank items, or question sentences (Baki, 2008; Baykul, 2014; Demirel, 2014; Doğan, 2016; Güler, 2011; İşman, 2005; Tan, 2009):
- There is no possibility of answering the questions with exaggeration or padding.
- They do not allow answering by guessing.
- Short, specific, unique answers increase reliability.
- Because the answers are short, subjectivity in scoring is significantly reduced.
- They are easy to prepare, administer, and score.
- They take little time to answer.
- They are easily applicable at every educational level.
- Unwanted variables such as handwriting quality, composition skill, paper layout, and expression skills are less likely to interfere with scoring.
- The shortness of the answers helps the scorer avoid mistakes.
- Many questions can be asked, since the answers are short.
- They can easily be applied to large groups.
- They allow respondent independence: respondents produce their answers themselves.
- Short-answer written exams consisting of definition or question-sentence items give students the opportunity to provide original answers.
- Students are less likely to twist the questions and write whatever they want, especially in short-answer written exams with fill-in-the-blank items.
What Are the Disadvantages of Short-Answer Written Exams?

The following points summarize the negative aspects of short-answer written exams in the form of definitions, fill-in-the-blank items, or question sentences (Baykul, 2014; Doğan, 2016; Güler, 2011; Linn & Gronlund, 1995; Tan, 2009):
- They are not suitable for measuring high-level cognitive objectives such as analysis, synthesis and evaluation.
- Since students write the answers themselves, they may interpret the question differently than intended.
- Objectivity, and hence scoring reliability, is not complete; there are scoring difficulties.
- Totally wrong or totally correct answers pose no problem, but scoring partially correct answers does.
Examples of Short-Answer Written Exams
Example 1: What is the number whose value does not change when it is multiplied by 7?
Example 2: What must be written in place of “?” in the operation “30 + ? = 39”?
Example 3: Write the factors of “x² + 5x + 6”.
Example 4: Only one ……… passes through two different points. (Answer: Line)
Example 5: The set of all points is called ………. (Answer: Space)
Example 6: Define the concept of bisector.
True-False Tests
It is the type of exam in which each item can only be answered as true or false (Güler, 2011). True-false questions consist of true or false propositions (Ozcelik, 2010). The responder is required to read the item and classify the idea it contains as true or false (Turgut & Baykul, 2010).
Traditional Measurement and Evaluation Tools in Mathematics Education
When Can True-False Tests Be Preferred? True-false tests may range from measuring the lowest level of simple recall to measuring more complex cognitive behaviour (Osterlind, 1989). In other words, these tests can be used to measure objectives in the knowledge, comprehension, application, analysis, synthesis and evaluation steps of the cognitive domain (Doğan, 2016). It is possible to examine advanced cognitive skills, such as comprehension and application to new situations, with true-false tests (Turgut & Baykul, 2010). What should be considered in the preparation of true-false tests? For true-false tests, the issues that should be considered in item writing can be expressed as follows (Doğan, 2016; Güler, 2011; Linn & Gronlund, 1995; Özçelik, 2010; Tan, 2009; Turgut & Baykul, 2010):
- In every true-false question, only one idea, proposition or objective should be given and measured.
- Every true-false question should examine an important intended learning product; test items should not address insignificant details.
- The idea, proposition or objective in a true-false question should be totally correct or totally wrong.
- Statements that leave the truth or falsity of the given idea, proposition or objective in doubt should be avoided.
- Giving a clue about the correct answer by making the item longer should be avoided.
- Items should be clear, easy to understand and written as short sentences.
- Double negative statements should not be used, because they are difficult to understand.
- The numbers of true and false sentences should be equal or close to equal.
- Care must be taken that the answers are not arranged in a systematic pattern within the test.
- Items should not be taken from sources that are familiar to students.
- In true-false tests, related test items should be grouped together and presented with a guideline.
- A correction formula (such as “two false delete one true”) can be used to eliminate chance success. The correction formula should be stated in the test guideline.
- To reduce chance success, a guideline statement such as “if the sentence is wrong, cross out the wrong part and write the correction” can be used.
- Care should be taken that the correct answer is not deceptive or unimportant.
- The lengths of the questions should be close to each other.
- When an opinion is attributed to a person or source, that person or source should be mentioned in the item.
- There should not be more than one wrong idea in an item.
- A “false” proposition should not be derived from a “true” proposition merely by making the verb of the sentence negative. Since negative statements can be overlooked under exam conditions, attention should be paid to this issue.
- In true-false items, definitive expressions should be used instead of words that do not mean the same thing to everyone.
- Ambiguous and vague expressions such as “often, in a number of cases, in many cases, sometimes, several times, largely” should be avoided.
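A correction formula of this kind can be sketched in a few lines of Python. This is an illustrative implementation, not taken from the chapter; the `divisor` parameter is an assumption that lets one function express both the usual true-false rule (one wrong cancels one right) and the “two false delete one true” rule mentioned above.

```python
def corrected_score(num_right: int, num_wrong: int, divisor: float = 1.0) -> float:
    """Correction-for-guessing score: right answers minus a fraction of wrong ones.

    divisor=1 gives the usual two-option (true-false) rule, where one wrong
    answer cancels one right answer; divisor=2 gives the "two false delete
    one true" rule. Blank (unanswered) items are simply not counted.
    """
    return num_right - num_wrong / divisor

# A student with 18 right and 6 wrong answers on a 30-item true-false test,
# scored with the "two false delete one true" rule:
print(corrected_score(18, 6, divisor=2))  # 15.0
```

Because guessed answers are right and wrong in roughly equal numbers on a true-false test, subtracting wrong answers in this way removes, on average, the points earned by guessing.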
What Are the Advantages of True-False Tests? The positive aspects of true-false tests are given below (Doğan, 2016; Güler, 2011; Tan, 2009; Turgut & Baykul, 2010):
- Item structures used in true-false tests are extremely simple; therefore, the test is easy to prepare and administer.
- Scoring is easy and objective.
- Answering is easy and does not take much time.
- Easy and objective scoring can be ensured by having students mark their answers on an answer sheet.
- Variables such as expression skill and handwriting do not affect scoring.
- They are suitable for testing advanced cognitive objectives.
- The test directive for the responder is short and simple. The directive may contain information on the purpose and subject of the test, the number of items in the test, the time allowed for answering, and how to respond to or mark the test items.
- A large area of knowledge can be tested by asking many questions in true-false tests.
- They can be used at almost every step of education.
What are the Disadvantages of True-False Tests? The negative aspects of true-false tests can be listed as follows (Güler, 2011; Şimşek, 2009):
- Chance success is high. If students are required to correct wrong answers, chance success can be reduced.
- If the test items are not carefully prepared, the test cannot go beyond measuring memorized, insignificant or simple information.
- Items may contain clues if they are not carefully prepared.
- True-false tests are not suitable as monitoring tests used to assess teaching and identify learning deficiencies.
- The validity and reliability of the questions in true-false tests are low.
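The claim that chance success is high can be quantified: a student who guesses every item on a true-false test has a 0.5 probability of getting each one right, so the number of correct guesses follows a binomial distribution. The sketch below is an illustration, not part of the chapter, and uses only the Python standard library.

```python
from math import comb

def prob_at_least(n_items: int, n_correct: int, p_guess: float = 0.5) -> float:
    """Probability of getting at least n_correct of n_items right by guessing,
    where each guess is correct with probability p_guess (binomial tail sum)."""
    return sum(comb(n_items, k) * p_guess**k * (1 - p_guess)**(n_items - k)
               for k in range(n_correct, n_items + 1))

# Chance of scoring 60% or better (6 of 10) on a true-false test by guessing:
print(round(prob_at_least(10, 6), 3))  # 0.377
```

A roughly one-in-three chance of reaching 60% by pure guessing illustrates why short true-false tests, in particular, need a correction for chance success.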
Examples of True-False Tests
Explanation: Put “F” at the beginning of the wrong sentences and “T” at the beginning of the correct ones. If there are wrong parts, underline them and write the correction underneath.
( ) 1. Each line segment has one and only one midpoint.
( ) 2. Two intersecting lines determine one and only one plane.
( ) 3. Two different lines intersect in at most one point.
( ) 4. If two different planes intersect, the intersection is a triangle. (False: If two different planes intersect, the intersection is a line)
( ) 5. The sum of the inner angles of a triangle in the plane is 270 degrees. (False: The sum of the inner angles of a triangle in the plane is 180 degrees)
Multiple-Choice Tests
It is the type of test in which the answer is not given by the responder; instead, the correct answer is provided among the options and must be found by the responder (Güler, 2011). Multiple-choice tests are the most commonly used measurement and assessment tools in education and training institutions, especially in student selection and placement exams (Şimşek, 2009). A multiple-choice question consists of an item root that presents a problem situation and three or more response options following the item root (Demirel, 2014). In multiple-choice tests, the students consider the answer to the question and mark the option that they believe to be correct. There are many types of multiple-choice questions (such as the most accurate item format, the common-root item format and the common-options item format), but the most common form is as follows:
Example: Which of the options below is closest to the value of the number e?
A) 2.5  B) 2.6  C) 2.7  D) 2.8  E) 2.9
The question above is an example of the most accurate item format. The first sentence in this question is a question sentence; this part of the item is called the “item root”. Each statement that follows the item root and could be taken as the answer to the question in the root is called an “option” (Baykul, 2014). The four options that do not contain the answer to the problem in the root are called “distractors”. The task of a distractor is to appear to be the right answer to those who have not learned what the root asks, preventing them from earning points for this question (Özçelik, 2010). When Can Multiple-Choice Tests Be Preferred? The situations in which multiple-choice tests can be preferred can be stated as follows (Ayatar, 1965; Güler, 2011; Şimşek, 2009):
- They are preferable if there is enough time to prepare the questions and the scores need to be announced within a short period of time.
- They can be used when the knowledge and expertise to prepare multiple-choice test items are available.
- They are preferable when high validity and reliability are required.
- They can be preferred in measuring student abilities such as recognition, remembering, understanding, application and interpretation. Such tests can be used to measure both simple and complex learning objectives.
- They can be used in situations that require selection and placement.
- They can be used when the number of questions is high.
What should be considered in the preparation of multiple-choice tests? Following the rules below when placing the questions in the test form may increase the validity and reliability of the test (Doğan, 2016; Güler, 2011; İşman, 2005; Linn & Gronlund, 1995; MEB, 2011; Şimşek, 2009; Tan, 2009; Turgut & Baykul, 2010):
- The item root should not give any clue about the correct answer.
- The item root and options should not be taken from written sources used by the student, such as the textbook.
- All the questions in the test should have the same number of options.
- Each question should have exactly one correct option.
- Students should be given, on average, one minute per question.
- The questions should be answerable independently of each other; the answer to one question should not appear in the item root or options of another question.
- The number of options should be determined according to the level of the students: three options up to the fourth grade, four options from the fourth grade to high school, and five options after high school can be preferred.
- The item roots and options should be short, clear and understandable.
- Misleading or tricky expressions should not be used in test items.
- The language of the test items should be appropriate to the age and psychological development of the individuals who will take the test.
- The options of each item should be similar and consistent in style of expression, length, subject, scope, meaning and grammar.
- Distractors should be compatible with the expression in the item root in terms of grammar, meaning and subject. Distractors should not be created through disordered expression, grammatical errors or spelling mistakes.
- Distractors should be built around the errors caused by common misconceptions.
- The correct answer should be given in its most accurate form.
- The options of multiple-choice items should be placed in a specific order (alphabetical, from largest to smallest, or from smallest to largest, etc.). The position of the correct answer should not follow a specific pattern, and the correct answers should be distributed roughly equally among the option positions (about as many A's as B's, and so on).
- Negative statements should not be used in the item root or options. If a negative expression must be used, it should be underlined or written in bold.
- Each test item should test an objective specified in the table of specifications, and high content validity should be sought for the test.
- When it is difficult to find distractors for a negatively stated feature, the item root can be converted into a negative sentence.
- More questions should be asked above the knowledge level, especially at the comprehension level, without making the test harder or decreasing its content validity.
- In general, the most accurate item format should be used; common-root and common-options item formats can also be used.
- Distractors should mislead those who have not attained the objective but should not mislead those who have. The proximity of the distractors to the correct answer should be adjusted according to the intended difficulty of the test; distractors should not be so close to the correct answer that even subject experts might answer incorrectly.
- Options should be independent of each other; one should not contain another, and they should not hint at each other.
- On the front page of the test booklet, there should be a directive containing the information that the students need.
- The page layout of the test should allow easy reading and responding.
- Punctuation and spelling rules should be followed in writing the questions.
- In the item root or options, vague or absolute quantifiers such as “mostly, most of the time, never, always, forever, as a whole, often, none of the time, sometimes, rarely” should not be used.
- Opposite expressions should not be used together in the options; for example, “none of the above” and “all of the above” should not both appear as options in the same question.
- The correct answer should not be placed repeatedly in the same option position.
- Appropriate questions should be selected by considering the difficulty and discrimination levels of the questions.
- When items are placed in the test, a certain gap should be left between items, as well as between the item root and the options.
- Items should be reviewed by at least one field specialist and one language specialist, and the necessary corrections should be made in line with their recommendations.
What are the Advantages of Multiple-Choice Tests? The positive aspects of multiple-choice tests are given below (Baykul, 2014; Güler, 2011; MoNE, 2011; Turgut & Baykul, 2010):
- Scoring objectivity is high.
- Multiple-choice tests are easy to administer.
- Scoring is easy and takes little time.
- Most of the exam time is spent reading and finding the right option, so students do not have to spend time writing answers.
- The ability to ask many questions provides an opportunity to prepare an exam with high validity.
- Since very little writing is required, variables such as writing speed, composition ability and handwriting do not interfere with the measurement results.
- They can be used successfully at different learning levels.
- They rest on strong statistical foundations.
- There are different question alternatives for testing the objectives.
- Much knowledge and many skills can be tested within a short period of time.
What are the Disadvantages of Multiple-Choice Tests? The negative aspects of multiple-choice tests can be listed as follows (Baykul, 2014; Eryılmaz Toksoy & Akdeniz, 2017; MoNE, 2010; Tan, 2009):
- There is a probability of finding the correct answer by chance.
- With multiple-choice tests, it is hard to measure skills such as creative thinking, problem solving and critical thinking.
- Writing questions and finding suitable distractors is time-consuming and demanding; writing a good question requires expertise and experience.
- In multiple-choice tests, students have little opportunity to produce original answers.
- Most of the exam time is spent reading the items and options, leaving students little time to think about the answers. As the number of questions increases, reading speed starts to affect the measurement.
- It is not possible to observe how students organize information during problem solving, nor the different solution paths they take, both of which are important for teaching mathematics and geometry.
Examples of Multiple-Choice Tests
Example 1: What is the result of the operation “2! + 1!”?
A) 2  B) 3  C) 4  D) 5  E) 6
Example 2: Which of the options below is closest to the value of “√2”?
A) 1.4  B) 1.5  C) 1.6  D) 1.7  E) 1.8
Example 3: In the operation “45 : 8”, what is the hundredths digit of the quotient?
A) 0  B) 1  C) 2  D) 3  E) 4
Conclusion
In this section, the concepts of measurement and evaluation were first discussed; then verbal exams, long-answer written exams, short-answer written exams, true-false tests and multiple-choice tests were discussed in light of the above explanations. Teachers should have sufficient knowledge of this subject in order to be able to fully apply the traditional measurement and evaluation approach expected by their teaching programs. In addition, teachers need to know by which methods this information should be collected and which measurement-assessment tools should be used in order to get to know their students and to gather
planned, programmatic information about them (İşman, 2005). Information obtained by trial and error in the workplace, or by hearing from other teachers, does not allow traditional measurement-assessment to be performed in the manner expected by the teaching programs and may sometimes lead to improper practices. Because of the rapid change in teaching programs, and the introduction of renewed or updated programs within very short periods, teachers may have to implement the programs without fully understanding what is expected of them, especially in terms of measurement-evaluation (Metin, 2010). Thus, it is inevitable that mathematics teachers will have problems with traditional measurement-assessment tools. In order to increase the efficiency obtained from learning-teaching activities, teachers should have knowledge about measurement and evaluation tools (Turgut & Gülşen Turgut, 2018). This makes it necessary to conduct in-service training activities, such as seminars and courses, on the development and implementation of traditional measurement-assessment tools for teachers. Measurement and assessment activities for the mathematics course should be directed towards the objectives and basic skills in the primary and secondary mathematics teaching programs. In mathematics courses, the aim is to use traditional measurement-assessment tools to measure the development and the level of success envisioned in the knowledge and skills dimensions of the mathematics teaching program. Activities should be prepared to follow the individual development of the students, to help them gain multidimensional skills and to help them solve the problems they face, and should be evaluated accordingly (Şimşek, 2009). Mathematics teachers should be aware of student levels so that they can design and evaluate activities appropriate to their students' levels.
Measurement and assessment are crucially important for revealing to what extent education and training practices in schools reach their goals (Demirel, 2014; Ormancı, Çepni & Ülger, 2018). Furthermore, it is of great importance that measurement be performed without error in order not to make an incorrect evaluation. Teachers devoting the required time and importance to measurement and assessment will have a positive effect on sound measurement-assessment (Şenel, 2008). Educators in colleges of education should provide teacher candidates with information about measurement tools in order to enable them to make correct measurements and assessments. In this way, measurement-assessment activities may become free of errors in traditional assessment processes.
References
Aiken, L. R. (2000). Psychological testing and assessment (Tenth Edition). Boston: Allyn and Bacon.
Ayatar, H. (1965). Sınavlar ve sınavlarla ilgili işlem ve uygulamalar. Ankara: MEB Talim ve Terbiye Dairesi Yayınları.
Baki, A. (2008). Kuramdan uygulamaya matematik eğitimi. Ankara: Harf Eğitim Yayıncılık.
Baykul, Y. (2014). Ortaokulda matematik öğretimi (5-8. sınıflar) (Geliştirilmiş 2. Baskı). Ankara: Pegem Akademi Yayıncılık.
Cunningham, G. K. (1986). Educational and psychological measurement. New York: Macmillan Publishing Company.
Çepni, S., & Ayvacı, H. Ş. (2007). Fen ve teknoloji eğitiminde ölçme ve değerlendirme (6. Baskı). İçinde S. Çepni (Ed.) Kuramdan uygulamaya fen ve teknoloji öğretimi (ss. 249-268). Ankara: Pegem Akademi Yayıncılık.
Çepni, S., & Çil, E. (2009). Fen ve teknoloji programı (Tanıma, planlama, uygulama ve SBS’yle ilişkilendirme) ilköğretim 1. ve 2. kademe öğretmen el kitabı. Ankara: Pegem Akademi Yayıncılık.
Demirel, Ö. (2005). Öğretimde planlama ve değerlendirme: Öğretme sanatı (8. Baskı). Ankara: Pegem Akademi Yayıncılık.
Demirel, Ö. (2014). Öğretim ilke ve yöntemleri: Öğretme sanatı (20. Baskı). Ankara: Pegem Akademi Yayıncılık.
Doğan, N. (2016). Yazılı yoklamalar. İçinde H. Atılgan, A. Kan, & N. Doğan (Eds). Eğitimde ölçme ve değerlendirme (ss. 145-168). Ankara: Anı Yayıncılık.
Enger, S. K., & Yager, R. E. (1998). The Iowa assessment handbook. Arlington, VA. ERIC Document Reproduction Service No: ED424286.
Eryılmaz Toksoy, S., & Akdeniz, A. R. (2017). Öğrencilerin problemleri çözüm süreçlerinin “ipucu destekli problem çözme aracı” ile belirlenmesi. Hacettepe Üniversitesi Eğitim Fakültesi Dergisi, 32(1), 185-208.
Güler, N. (2011). Eğitimde ölçme ve değerlendirme. Ankara: Pegem Akademi Yayıncılık.
Harris, D. (1998). Understanding assessment in Vermont’s schools. Arlington, VA. ERIC Document Reproduction Service No: ED435738.
İşman, A. (2005). Türk eğitim sisteminde ölçme ve değerlendirme: Genel kavramlar, uygulamalar, sorunlar, çözüm önerileri ve yeni bir model (3. Baskı). Ankara: Pegem Akademi Yayıncılık.
Linn, R. L., & Gronlund, N. E. (1995). Measurement and assessment in teaching (7th Edition). Englewood Cliffs, NJ: Prentice Hall.
Metin, M. (2010). Fen ve teknoloji öğretmenleri için hazırlanan performans değerlendirmeye yönelik hizmet içi eğitim kursunun etkililiği. Yayımlanmamış doktora tezi, Karadeniz Teknik Üniversitesi, Fen Bilimleri Enstitüsü, Trabzon.
Milli Eğitim Bakanlığı [MEB]. (2002). İlköğretim okulu ders programları: Matematik programı 6-7-8. İstanbul: Milli Eğitim Basımevi.
Milli Eğitim Bakanlığı [MEB]. (2010). Ortaöğretim geometri dersi 9-10. sınıflar öğretim programı. Ankara: Milli Eğitim Yayınları.
Milli Eğitim Bakanlığı [MEB]. (2011). Ortaöğretim geometri dersi 12. sınıf öğretim programı. Ankara: Milli Eğitim Yayınları.
Oosterhof, A. (1994). Classroom applications of educational measurement (2nd Edition). New York, NY: Macmillan College Publishing Company.
Ormancı, Ü., Çepni, S., & Ülger, B. B. (2018). Fen bilimleri öğretmenlerinin ortaöğretime geçiş ortak sınavları hakkındaki görüşleri. Academy Journal of Educational Sciences, 2(1), 1-15.
Osterlind, S. J. (1989). Constructing test items. Boston: Kluwer Academic Publishers.
Özçelik, D. A. (2010). Okullarda ölçme ve değerlendirme: Öğretmen el kitabı. Ankara: Pegem Akademi Yayıncılık.
Özgüven, İ. E. (1994). Psikolojik testler. Ankara: Yeni Doğuş Matbaası.
Şenel, T. (2008). Fen ve teknoloji öğretmenleri için alternatif ölçme ve değerlendirme tekniklerine yönelik bir hizmet içi eğitim programının etkililiğinin araştırılması. Yayımlanmamış yüksek lisans tezi, Karadeniz Teknik Üniversitesi, Fen Bilimleri Enstitüsü, Trabzon.
Şimşek, N. (2009). Sosyal bilgilerde ölçme ve değerlendirme. İçinde M. Safran (Ed.) Sosyal bilgiler öğretimi (ss. 571-624). Ankara: Pegem Akademi Yayıncılık.
Tan, Ş. (2009). Öğretimde ölçme ve değerlendirme: KPSS el kitabı (3. Baskı). Ankara: Pegem Akademi Yayıncılık.
Tan, Ş., Kayabaşı, Y., & Erdoğan, A. (2003). Öğretimi planlama ve değerlendirme (Geliştirilmiş 4. Baskı). Ankara: Anı Yayıncılık.
Tekin, H. (1996). Eğitimde ölçme ve değerlendirme. Ankara: Yargı Yayınları.
Turgut, M. F. (1984). Eğitimde ölçme ve değerlendirme metotları. Ankara: Saydam Matbaacılık.
Turgut, M. F., & Baykul, Y. (2010). Eğitimde ölçme ve değerlendirme. Ankara: Pegem Akademi Yayıncılık.
Turgut, S., & Gülşen Turgut, İ. (2018). The effects of cooperative learning on mathematics achievement in Turkey: A meta-analysis study. International Journal of Instruction, 11(3), 663-680.
Mustafa YADİGAROĞLU, Gökhan DEMİRCİOĞLU
Chapter 4
Reliability and Validity
Mustafa YADİGAROĞLU & Gökhan DEMİRCİOĞLU
Introduction
In general, tests are used as measurement tools in the fields of education and psychology. The results obtained from tests are used to make important decisions about students. Therefore, it is extremely important that the tests used in education have certain characteristics, such as validity and reliability. In this part, the definitions related to the concepts of validity and reliability, and the methods used to ensure the validity and reliability of test scores, will be explained.
Reliability
Reliability is one of the most important elements of test quality. In a general sense, reliability is defined as the degree to which an assessment tool produces stable and consistent results (Linn & Gronlund, 2000). In other words, it is a measure of the consistency of test scores from one administration to another. Reliability is not a feature of the test itself, but rather a characteristic of the scores or data obtained from it; it is not correct to say that “a test is reliable or unreliable” (Crocker & Algina, 1986). A test that gives reliable results for one group of students may not yield reliable scores for another group (Helms, 1999). For this reason, the phrase “the reliability of the test scores” will be used in this chapter. Reliability is indicated by the reliability coefficient, denoted by the letter “r”. Reliability coefficients theoretically range from zero (no reliability) to 1.00 (perfect reliability). The larger the reliability coefficient, the more reliable the test scores. When evaluating the reliability of test scores, the reliability coefficient alone is not enough; in addition to the reliability coefficient, the content of the test, the method of the reliability estimate and the type of test should be considered. Consistency and stability of a measure are important elements of the reliability of a test. Consistency can be expressed as a measure of internal consistency.
Reliability and Validity
Internal consistency is an indicator of the correlation of each item in a test with the total test score. A high correlation coefficient indicates that the items are homogeneous (consistent) in terms of the measured property; a low correlation shows that the items are not consistent with each other or with the total test score, and that the test items are heterogeneous. Stability is the consistency of the scores obtained from a measurement tool across time. While there is almost no change over time in the measured property in physical measurements (e.g., the length of a table, the weight of gold), individual behaviours can change over time in educational and psychological measurements. Therefore, the sources of measurement error in education and psychology are more numerous than in physical measurements. For example, suppose that the length of a table is measured with a tape measure at two different points in time. Since there is no change in the length of the table over time and the same measurement tool is used in each measurement, the measurement error will usually result from the person performing the measurement. On the other hand, suppose that the success of an individual is measured at different times with the same measurement tool. The differences (measurement errors) between the scores obtained from the two administrations can be caused by the measured individual, the measurement tool, the conditions of the measurement environment and the person performing the measurement. Measurements in education and psychology thus contain more errors. From this point of view, it can be said that it is much harder to achieve reliability of test scores, in terms of stability, than in physical measurements. In repeated measurements, the difference between test scores is expected to be as small as possible: the smaller the difference, the greater the reliability.
Error in Measurement
Measurement errors must be minimized in order to determine the true value of the measured property; however, it is not possible to eliminate them completely. Error types, sources of error and their effects on reliability are examined in this section. There are many sources of measurement error in education, such as a student's anxiety, fatigue or motivation during the test, and the testing conditions. Measurement errors are generally examined under two headings: systematic error and random error. Systematic errors are repeatable errors that are consistently in the same direction (negative or positive) and are not determined by chance. These errors usually come from the measurement tool and from the conditions under which the test is administered. For example, suppose students take a test in a cold classroom. The cold
Mustafa YADİGAROĞLU, Gökhan DEMİRCİOĞLU
environment negatively affects all students' scores, so the scores of all students will be lower in a systematic way. The effect of systematic error on an X distribution is given in Figure 4.1.
Figure 4.1 The effect of systematic error on an X distribution

As seen in Figure 4.1, systematic error increases the mean value in this distribution; in another distribution, a systematic error might decrease the mean. So, systematic errors shift the mean value up or down. Because they shift the central tendency, systematic errors reduce the validity of a test rather than its reliability. The direction and magnitude of a systematic error can be determined. A distribution with systematic error probably measures a property other than what is intended to be measured. Random errors come from unpredictable changes during measurement; they are not consistent and occur in both directions. The magnitude of the error varies from one measurement to another. Random errors cannot be eliminated, but they can be reduced by performing repeated measurements or by using a more precise measurement tool (such as using a 30-question test rather than a 10-question test to measure student success). The effect of random error on an X distribution is given in Figure 4.2.
Figure 4.2 The effect of random error on an X distribution

As seen from Figure 4.2, random error reduces the reliability of test scores because it increases the variability (e.g., the standard deviation) of the scores. Random errors can be reduced by increasing the number of measurements.
Theory of Measurement
In classical test theory, a person's observed score, or obtained score (i.e., a student's score on a test), equals the sum of a true score and an error related to the measurement. The error type meant here is the random error. The relationship between the observed score, the true score and the random error is shown in the following equation:

The observed score = The true score + The error
X = T + E
Here, X = observed score, T = true score, and E = measurement error. Interpreting this equation in terms of educational measurements, the observed score is the student's actual score on the exam and the true score is the student's actual ability. As can be seen, the measurement error is the difference between the observed score and the true score (E = X − T). It is assumed that the measurement error is random with a mean of zero, so that it sometimes increases the total score and sometimes decreases it, but does not bias the score in a systematic way. Because each measurement contains some degree of error (Bartlett & Frost, 2008), the true score cannot be determined directly; it is the average of all the scores that a person would receive if he or she took the test an infinite number of times (Allen & Yen, 1979).
Mustafa YADİGAROĞLU, Gökhan DEMİRCİOĞLU
If equation (1) is expressed in terms of variances, the variance of the observed scores (σ²X) is the sum of the true score variance (σ²T) and the error variance (σ²E):

σ²X = σ²T + σ²E

If a test is perfectly reliable, the error variance (σ²E) is zero and, as seen from the equation above, the observed score variance (σ²X) equals the true score variance (σ²T). In other words, the test reliability equals +1.00. According to classical test theory, the reliability of a test (rx) is defined as the ratio of the true score variance to the observed score variance:

rx = σ²T / σ²X

The test reliability can also be written as:

rx = 1 − σ²E / σ²X

If a test is completely unreliable, that is, if all observed score variance is a result of error, the observed score variance (σ²X) equals the error variance (σ²E) and the test reliability equals 0.00. Thus, reliability can be defined as the extent to which a measurement is free from error.
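These identities can be illustrated with a small sketch (the variance numbers here are invented for illustration and are not from the chapter):

```python
def reliability_from_variances(var_true, var_error):
    """Classical test theory identity: r = var_T / var_X, with var_X = var_T + var_E."""
    return var_true / (var_true + var_error)

# Illustrative: true-score variance 24, error variance 6 -> r = 24/30
r = reliability_from_variances(24, 6)
print(r)  # 0.8
```

When the error variance is zero the ratio is 1.0, and when the true-score variance is zero it is 0.0, matching the two limiting cases described above.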
Standard Error of Measurement

The standard error of measurement (SEM) is the standard deviation of the errors of measurement in a test and is used to determine the effect of measurement error on individual test scores. The SEM is a measure of the distribution of observed scores around a "true" score; the smaller the SEM, the more accurate the measurement results. The standard error of measurement is also related to the reliability of the test. This relationship can be formulated as follows:

Se = S √(1 − rx)
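A minimal sketch of this relationship (the function names are my own; the interval helper uses the usual z-multipliers):

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: Se = S * sqrt(1 - rx)."""
    return sd * math.sqrt(1 - reliability)

def true_score_interval(observed, se, z=1.0):
    """Interval around an observed score: observed +/- z * Se (z=1 ~ 68%, z=2 ~ 95%)."""
    return observed - z * se, observed + z * se

print(sem(10, 0.84))                    # ~4.0 for S = 10, rx = 0.84
print(true_score_interval(55, 3, z=2))  # (49, 61)
```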
Here, Se is the standard error of measurement, S is the standard deviation of the test scores and rx is the reliability coefficient (Cohen et al., 1988). As seen from the equation, the SEM is a function of the standard deviation and the reliability coefficient: as the reliability increases, the SEM decreases, and vice versa. With the help of this equation, the amount of error in the test scores can be expressed in terms of test points. Because the direction of the error involved in a test score is unknown, each individual's score is reported as an interval. The confidence interval (CI) for the true score can be found with the following equation:

CIα/2 = X ± z-score × Se

where X represents the observed score, Se is the standard error of measurement, and CI is the confidence interval.

Example 4: Assume that a chemistry test has a reliability coefficient of 0.84 and a standard deviation of 10. What is the standard error of measurement?

Se = S √(1 − rx) = 10 √(1 − 0.84) = 4

If an individual scored 40 on this test, then, because about 68% of observed scores are expected to fall within ±1 Se, her/his true score with 68% confidence lies between 36 and 44 (CI = 40 ± 1 × 4). As another example, if a student scored 55 on an achievement test with an SEM of 3, the student can be about 95% (±2 Se) confident that his/her true score falls between 49 and 61 (CI = 55 ± 2 × 3).

Methods of Reliability Estimate

There are several methods of reliability estimation, such as test-retest reliability, alternate or parallel form reliability, inter-rater reliability, and internal consistency reliability (Cronbach's alpha, split-half, and KR-20 and KR-21).

Reliability Estimation through a Single Administration

Split-Half Reliability Estimate

The split-half reliability, a measure of internal consistency, assesses the extent to which both halves of a test contribute equally to what is being measured; it, too, can be estimated from a single administration. In this method, the test is applied to a group of students and then divided into two equal halves. For an accurate estimate of reliability, the two halves must be equivalent in terms of the number of items and their content. For each student, a total score is calculated for each half of the test. Finally, the correlation between the scores obtained from the two halves is determined. This correlation reflects the consistency between the two halves of the test and can be
calculated by using Karl Pearson's coefficient of correlation or Spearman's rank-difference correlation method. This coefficient is an indicator of the equivalence of the halves and can be considered the reliability coefficient of each half of the test. Therefore, to calculate the reliability of the whole test, the Spearman-Brown formula is usually used:
rw = 2r / (1 + r)

Here, rw = reliability coefficient for the whole test, and r = reliability coefficient for the half-test.

Example: If the reliability of the half-test is 0.70, calculate the reliability of the whole test.

Solution: Put the given values into the formula:

rw = 2 × 0.70 / (1 + 0.70) = 1.4 / 1.7 ≈ 0.82
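This correction can be sketched in code. The general n-fold form used below is a standard extension of the doubling formula (it is not written out in the chapter) and reduces to the split-half case when n = 2:

```python
def spearman_brown(r, n=2.0):
    """Spearman-Brown: reliability of a test lengthened n-fold.
    n = 2 is the split-half doubling case: r_w = 2r / (1 + r)."""
    return n * r / (1 + (n - 1) * r)

print(round(spearman_brown(0.70), 2))       # 0.82 (the example above)
print(round(spearman_brown(0.50, n=3), 2))  # 0.75: tripling a test with r = 0.50
```

Solving the same expression for n also answers the later question of how much a test must be lengthened to reach a target reliability.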
Example: A 20-item test was applied to 10 students. The test was then divided into two halves (form X and form Y). The total scores obtained from each half were calculated and are given in Table 4.1. Calculate the reliability of the test by using the data in Table 4.1.

Table 4.1 The data from both halves of the test

Form X:  4   6   5  10   7   8   5   6   3   9   (ΣX = 63)
Form Y:  2   8   3  10   9   7   5   6   4   8   (ΣY = 62)
X²:     16  36  25 100  49  64  25  36   9  81   (ΣX² = 441)
Y²:      4  64   9 100  81  49  25  36  16  64   (ΣY² = 448)
X·Y:     8  48  15 100  63  56  25  36  12  72   (ΣXY = 435)

Put the values from the table into the following formula (the Pearson correlation coefficient):

rXY = [ΣXY − (ΣX)(ΣY)/n] / √{[ΣX² − (ΣX)²/n] × [ΣY² − (ΣY)²/n]}

rXY = [435 − (63 × 62)/10] / √{[441 − 63²/10] × [448 − 62²/10]} = 44.4 / √(44.1 × 63.6) ≈ 0.84
The reliability of the half-test, rXY, is 0.84. As previously mentioned, this value is the reliability coefficient of each half. The reliability for the whole test can be determined using the Spearman-Brown formula:

rw = 2 × 0.84 / (1 + 0.84) = 1.68 / 1.84 ≈ 0.91
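As a cross-check, the worked example can be reproduced from the raw half-test scores (a sketch; the variable names are my own):

```python
import math

x = [4, 6, 5, 10, 7, 8, 5, 6, 3, 9]   # Form X half-scores (Table 4.1)
y = [2, 8, 3, 10, 9, 7, 5, 6, 4, 8]   # Form Y half-scores

n = len(x)
sxy = sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y) / n
sxx = sum(a * a for a in x) - sum(x) ** 2 / n
syy = sum(b * b for b in y) - sum(y) ** 2 / n
r_half = sxy / math.sqrt(sxx * syy)   # Pearson correlation of the two halves
r_whole = 2 * r_half / (1 + r_half)   # Spearman-Brown step-up to full length

print(round(r_half, 2))   # 0.84
print(round(r_whole, 2))  # 0.91
```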
Kuder-Richardson Reliability Estimate (KR-20 and KR-21 Formulas)

The KR-20, developed by Kuder and Richardson, is a method of estimating the reliability of test scores from a single administration, as in split-half reliability. It can be used when the student responses are scored dichotomously (i.e., 1 "one" and 0 "zero") and assumes that each item in the test measures the same construct; in other words, the property measured by the test has a homogeneous structure. The KR-20 formula is given below:
KR-20 = [K / (K − 1)] × (1 − Σpᵢqᵢ / S²)
K: number of items in the test
pᵢ: difficulty index of item i
qᵢ: 1 − pᵢ
Σpᵢqᵢ: the sum of the item variances
S²: variance of the test scores

The KR-20 formula can be used if it is possible to determine the difficulty index (p) of each item in the test. The item difficulty can be computed by dividing the number of students who answer the item correctly by the total number of students answering the item. The item difficulty index ranges between 0 and 1.00; the higher the value of the difficulty index, the easier the item.
In cases where the item difficulty indexes cannot be calculated, the Kuder-Richardson 21 formula can be used to determine reliability. However, this formula assumes that the difficulty indexes of all items in the test are close or equal. The X̄ in the formula is the arithmetic mean of the test scores.

KR-21 = [K / (K − 1)] × [1 − X̄(K − X̄) / (K·S²)]
K: number of items in the test
X̄: arithmetic mean
S²: variance of the test scores

Let's examine an example of how to determine the reliability of a test using the KR-20 and KR-21 formulas.

Example 1: A test with 8 items is administered to 15 students. The results are listed in Table 4.2. Determine the reliability of the test by using the Kuder-Richardson Formula 20.

Table 4.2 The test scores for 15 students

No       1    2    3    4    5    6    7    8     Xi   (Xi − X̄)²
1        1    1    1    1    1    1    1    1      8       9
2        0    1    0    1    1    0    1    1      5       0
3        0    1    0    1    1    0    1    0      4       1
4        1    1    0    0    1    0    0    0      3       4
5        1    1    1    1    1    1    1    1      8       9
6        1    1    0    0    0    1    1    0      4       1
7        1    1    1    1    1    0    1    1      7       4
8        0    0    1    0    0    0    0    0      1      16
9        1    0    1    1    1    1    1    1      7       4
10       0    0    0    0    0    0    1    0      1      16
11       0    1    0    1    1    1    1    1      6       1
12       1    1    0    0    1    0    1    1      5       0
13       1    1    1    1    1    1    1    1      8       9
14       0    1    0    0    1    0    0    0      2       9
15       1    1    0    1    1    0    1    1      6       1
Sum      9   12    6    9   12    6   12    9     75      84
p     0.60 0.80 0.40 0.60 0.80 0.40 0.80 0.60
q=1−p 0.40 0.20 0.60 0.40 0.20 0.60 0.20 0.40
p·q   0.24 0.16 0.24 0.24 0.16 0.24 0.16 0.24

X̄ = 5, S² = 6, Σ(Xi − X̄)² = 84, Σp·q = 1.68
The values of p in Table 4.2 are the proportions of students who answered each item correctly. For example, the p value for item 1 is 0.60: 9 students answered it correctly and 15 students took the test, so 9/15 = 0.60. The values of q are the proportions of students who answered each item incorrectly. For example, the q value for item 1 is 0.40: 6 students answered it incorrectly, so 6/15 = 0.40, or q = 1 − p = 1 − 0.60 = 0.40. As shown in the last row of Table 4.2, the p and q values are multiplied to give a p·q value for each item. Adding the p·q values (0.24 + 0.16 + 0.24 + 0.24 + 0.16 + 0.24 + 0.16 + 0.24) gives Σp·q = 1.68. In order to calculate the variance (S²), the arithmetic mean (X̄) is needed. The arithmetic mean can be calculated in two ways. The first and most common method is to add up all the values and divide by the number of values in the set. Using the Xi (test score of each student) values in Table 4.2:
X̄ = (8 + 5 + 4 + 3 + 8 + 4 + 7 + 1 + 7 + 1 + 6 + 5 + 8 + 2 + 6) / 15 = 75/15 = 5

The second method is simply to add up the item difficulty indexes:

X̄ = 0.60 + 0.80 + 0.40 + 0.60 + 0.80 + 0.40 + 0.80 + 0.60 = 5

Now the variance can be calculated using the formula S² = Σ(Xi − X̄)² / (n − 1) = 84/14 = 6.

Then these values (Σp·q = 1.68 and S² = 6) are put into the KR-20 formula:

KR-20 = [8 / (8 − 1)] × (1 − 1.68/6) = (8/7) × 0.72 ≈ 0.82
Now, determine the reliability of the test by using the Kuder-Richardson Formula 21. Put the values from Table 4.2 into the KR-21 formula:

KR-21 = [8 / (8 − 1)] × [1 − 5 × (8 − 5) / (8 × 6)] = (8/7) × (1 − 15/48) ≈ 0.79
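As a sketch (not part of the chapter's text), both coefficients can be computed directly from the 15 × 8 response matrix of Table 4.2, following the chapter's conventions (p as the proportion correct, S² with n − 1 in the denominator):

```python
responses = [  # 15 students x 8 items, 1 = correct, 0 = wrong (Table 4.2)
    [1,1,1,1,1,1,1,1], [0,1,0,1,1,0,1,1], [0,1,0,1,1,0,1,0],
    [1,1,0,0,1,0,0,0], [1,1,1,1,1,1,1,1], [1,1,0,0,0,1,1,0],
    [1,1,1,1,1,0,1,1], [0,0,1,0,0,0,0,0], [1,0,1,1,1,1,1,1],
    [0,0,0,0,0,0,1,0], [0,1,0,1,1,1,1,1], [1,1,0,0,1,0,1,1],
    [1,1,1,1,1,1,1,1], [0,1,0,0,1,0,0,0], [1,1,0,1,1,0,1,1],
]
n = len(responses)
k = len(responses[0])
totals = [sum(row) for row in responses]          # each student's Xi
mean = sum(totals) / n                            # X-bar = 5
s2 = sum((t - mean) ** 2 for t in totals) / (n - 1)  # S^2 = 6

p = [sum(row[j] for row in responses) / n for j in range(k)]  # item difficulties
sum_pq = sum(pj * (1 - pj) for pj in p)           # sum of item variances = 1.68

kr20 = (k / (k - 1)) * (1 - sum_pq / s2)
kr21 = (k / (k - 1)) * (1 - mean * (k - mean) / (k * s2))

print(round(kr20, 2), round(kr21, 2))  # 0.82 0.79
```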
As seen, the reliability coefficient calculated using KR-21 is smaller than the one calculated using KR-20. The reason is that KR-21 assumes the difficulty indexes of the items in the test are equal. When the KR-20 formula is examined carefully, it can be seen that the reliability coefficient is easily calculated if the sum of the item variances (Σp·q) and the test variance (S²) are known. The following example illustrates this.

Example 2: The sum of the item variances of a 30-item test is 2.4 and the variance of the test is 8.1. According to these data, what is the KR-20 reliability coefficient of the test?

Solution: Put these values into the KR-20 formula:
KR-20 = [30 / (30 − 1)] × (1 − 2.4/8.1) = (30/29) × 0.704 ≈ 0.73

The KR-20 coefficient for the 30-item test is about 0.73, suggesting that the test scores have adequate reliability.

Cronbach's Alpha (α)

Alpha, one type of internal consistency reliability, was developed by Lee Cronbach (1951). Cronbach's alpha is widely used in educational research because, like the KR-20, KR-21 and split-half estimates, it requires only a single administration of the test. In other words, it does not require two administrations (as test-retest reliability does) or at least two raters (as inter-rater reliability does). A high alpha (0.90 and up) may mean that the items in the test are highly correlated; in other words, all items in the test have high covariances. A Cronbach's alpha value of more than 0.70 is usually considered acceptable, as seen in Table 4.3.
Table 4.3 Cronbach alpha values and their interpretations

Cronbach Alpha   Interpretation
0.90 and up      perfect
0.80–0.89        good
0.70–0.79        adequate
0.60–0.69        questionable
0.50–0.59        poor
The equation for Cronbach's alpha is (Crocker & Algina, 1986):

α = [K / (K − 1)] × (1 − ΣSᵢ² / Sx²)
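A minimal sketch of this formula applied to an item-score matrix (the data are invented for illustration):

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(items):
    """items: one list of scores per test item (columns of the score matrix)."""
    k = len(items)
    item_vars = sum(variance(col) for col in items)       # sum of S_i^2
    totals = [sum(scores) for scores in zip(*items)]      # per-person total score
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Illustrative 3-item, 5-person data (Likert-style scores)
items = [
    [2, 4, 3, 5, 4],
    [3, 4, 3, 5, 5],
    [2, 5, 4, 4, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.87
```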
In the equation, K is the number of items in the test. As can be seen from the (K − 1) term, the test or measurement tool must contain at least two items. The alpha value (α) is sensitive to the number of items in the test, and alpha tends to increase as the number of relevant items increases.

Factors That Affect Reliability

The measurement tools used in education are extremely important for correctly evaluating students' success, determining their deficiencies, taking the necessary precautions, and making correct decisions about their interests and abilities. Therefore, tests in education should be as free of error as possible, and their results should reflect the truth. Measurement errors can be caused by the measurement tool, the method of reliability estimation, features of the individual or group taking the test, the testing conditions, the spread of the test scores, and chance success. These error sources are examined in detail below.

Errors from the measuring instrument

The homogeneity or heterogeneity of the test in terms of the measured subject or behaviours affects reliability. Homogeneous tests give more reliable results than heterogeneous tests.
The length of the test, or the number of items, directly affects reliability. If the other factors affecting the reliability of the test are kept under control, increasing the number of items (questions) in the test increases the reliability of the test scores. The Spearman-Brown formula can be used to determine how many questions should be added to a test to raise the reliability of its scores to the desired level. Test items that are confusing or ambiguous may cause misinterpretation and confusion; such questions reduce the reliability of test scores.

Reliability determination methods

The methods used to determine reliability (test-retest, parallel forms, split-half, etc.) treat the true score and the error differently, and each method produces a different reliability coefficient for the same test scores. The coefficients determined by internal consistency estimates are generally higher than those obtained by test-retest and inter-rater estimates. The method selected to determine reliability should be compatible with the purpose of the test.

Features of the individuals or the group taking the test

For high reliability, an individual is expected to receive the same or similar scores from one measurement to another. Factors such as fatigue, illness, lack of motivation, excitement and reluctance affect an individual's score and increase the amount of measurement error; the reliability coefficient is therefore adversely affected.

The spread of scores

Reliability coefficients are influenced by the spread of the scores obtained from a group when other related variables are equal. As the distribution range of the scores increases, the reliability coefficient increases. A group that is more heterogeneous in terms of the measured content usually yields more variable test scores, and thus a heterogeneous group gives higher score reliability than a homogeneous group (Thompson, 1994).
In a heterogeneous group, an individual is more likely to keep the same relative position in the group from one measurement to another, because the differences between individuals' scores are large. The reliability coefficient will be higher when individuals stay in the same order within the group.
Test application conditions

The conditions in which the test is carried out should be equal and appropriate for all students. Otherwise, the attention of the students taking the test may be distracted. In other words, testing conditions such as insufficient light, extreme heat or cold, or loud noise adversely affect students' real performance. This causes measurement errors and a low reliability coefficient.

Success of chance

Although a student does not know the answer, (s)he may answer a question correctly by chance. If the student answers correctly by chance, her/his total test score increases, and the reliability of the test therefore decreases. Cheating is another important factor that affects test reliability, so teachers should take precautions to prevent students from cheating.

Validity

In the measurement processes conducted in studies in the fields of education and psychology, the validity of the assessment tool is generally emphasized. However, it is necessary to emphasize the validity of the interpretations of the results acquired from the assessment tool rather than the validity of the tool itself. The validity of the interpretations made on the basis of test results can vary according to the four characteristics given below (Figure 4.3):

- the aim of the test,
- the group being tested,
- the test method,
- the test application conditions.

Figure 4.3 Validity of the interpretations made based on the test results
As can be understood from the characteristics stated above, validity is not a characteristic of the measurement tool; rather, it concerns the interpretation of the results acquired from the measurement tool (Demircioğlu, 2012). When the related literature is analysed, different classifications of the types of validity are seen. However, the types of validity are mainly gathered under three main titles (Crocker & Algina, 1986; Ursavaş, Camadan & Reisoğlu, 2018): content validity, criterion validity and construct validity.

Content Validity

Measurement tools in education are used to evaluate the effectiveness of the education given and to reveal its deficiencies. The measurement tool must therefore be adequate for determining the extent to which the intended learning outcomes (acquisitions) have been attained. Ensuring content validity is a difficult process (Demircioğlu, 2012). To ensure content validity, it is first necessary to define the content of the test explicitly and clearly. After defining the test content, test items that cover the subject content must be prepared, and all the subject acquisitions must be taken into consideration while writing them. In this sense, content validity can be defined as the degree to which the test being prepared represents the intended acquisitions. Content validity seeks the answer to the question "do the test items sufficiently reflect the acquisitions that are intended to be assessed?" (Büyüköztürk et al., 2011). Here, it is checked whether each test item is sufficient and appropriate for assessing the pre-determined acquisitions. A test that does not examine all the acquisitions of a subject does not have content validity. Preparing a table of specifications to check the subject-acquisition match for an achievement test is helpful in determining whether the test items sufficiently assess the determined acquisitions.
A table of specifications is a two-dimensional table with the subjects or content of a course in one dimension and the related acquisitions in the other. Experts are consulted to ensure content validity; at this stage, the experts are asked to examine the appropriateness of the test items with respect to the acquisitions that are intended to be assessed. Another way of examining content validity is to compare the test with another test that was prepared beforehand for the same content and acquisitions and whose validity and reliability have been established. In this procedure, the two tests are applied to the same sample group and the correlation between the scores
are analysed (Büyüköztürk et al., 2011; Demircioğlu, 2012). A high correlation (close to +1.00) can be considered an indicator that the test has content validity.

Criterion Validity

The criterion validity of a test is calculated from the correlation between the scores obtained from the test and the scores obtained from a different test, measuring the same feature, whose validity and reliability have been established. The test scores used for comparison purposes are called criterion scores. The aim of criterion validity is to predict individuals' future or current performance. Criterion validity is divided into two types: concurrent validity and predictive validity.

Figure 4.4 Relationship between concurrent and predictive validity

Concurrent Validity

When the criterion scores are acquired at the same time as, or prior to, the scores acquired from the assessment tool, the validity based on the correlation between the two sets of scores is called concurrent validity (Crocker & Algina, 1986). The correlation between the scores from a test developed to determine students' chemistry performance and the scores from a scale developed to determine their attitude towards the chemistry course can be given as an example of concurrent validity.

Predictive Validity

When the criterion scores are acquired after the scores acquired from the assessment tool, the validity obtained by calculating the correlation between the two sets of scores is called predictive validity (Crocker & Algina, 1986). Prediction is an estimation of the future made through statistical techniques with the guidance of certain information (Tekin, 2007). In educational studies, prediction is used to estimate the future success of a student.
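As a sketch, a criterion validity coefficient is simply the Pearson correlation between the test scores and the criterion scores (the data below are invented for illustration):

```python
import math

# Illustrative data (my own): a new chemistry test and an established
# criterion measure taken by the same eight students.
test      = [55, 62, 70, 48, 80, 66, 73, 59]
criterion = [50, 60, 72, 45, 78, 70, 75, 55]

n = len(test)
sxy = sum(a * b for a, b in zip(test, criterion)) - sum(test) * sum(criterion) / n
sxx = sum(a * a for a in test) - sum(test) ** 2 / n
syy = sum(b * b for b in criterion) - sum(criterion) ** 2 / n
r_criterion = sxy / math.sqrt(sxx * syy)  # Pearson correlation = validity coefficient
print(round(r_criterion, 2))
```

Whether this counts as concurrent or predictive validity depends only on when the criterion scores were collected, not on the computation.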
Construct Validity

When the literature is analysed, it is seen that researchers have made different definitions of construct validity. The ability of an assessment tool to represent the theoretical construct that it aims to assess is called construct validity (Anastasi, 1997; Baykul, 2000). Construct validity concerns the extent to which the concept (construct) that a test aims to assess can actually be assessed from the scores acquired from the test (Büyüköztürk, 2002; Erkuş, 2003). For determining construct validity, Baykul (2000) suggested a method consisting of six stages, given in Figure 4.5:

1. Theoretical and operational definitions of the construct must be made.
2. Hypotheses must be created based on the operational definitions.
3. An instrument must be developed, or a ready-made one used, in order to test the hypotheses.
4. The instrument must be applied to a suitable group and data must be collected.
5. An analysis must be conducted to find out whether the data support the hypotheses.
6. The features of the construct must be presented if the data support the hypotheses.

Figure 4.5 The stages of construct validity

Various methods are used to determine construct validity. The results acquired through these methods provide information about how well the test or assessment tool reflects the construct it is intended to assess (Demircioğlu, 2012). Many methods have been proposed in the literature for ensuring construct validity (Baykul, 2000; Turgut, 1995). Factor analysis and hypothesis testing are two of the most frequently used. Factor analysis is a statistical method used to determine whether the questions in the test measure the construct that the test aims to measure. Hypothesis testing determines the significance of the difference between the scores obtained from similar scales. Another type of validity found in the literature, along with the types mentioned above, is face validity. Face validity is about what the test
assesses in appearance. Suppose there is a test prepared for a chemistry subject. The name of the test, the method to be used in answering it, and the fact that the test questions are about chemistry all reflect its face validity.

Factors that Impact Validity

The validity of the scores acquired from a test depends on many factors, for example:

- the test questions are not prepared in balance with the content,
- there is a possibility of cheating in the exam,
- the scoring is not objective,
- the test questions are too easy.

Factors such as those listed above can decrease the validity of the scores acquired from a test (Turgut, 1995). The factors that impact validity can be put under four categories, shown in Figure 4.6 and discussed in detail below: the number of items and the assessment method, reliability, scorer bias, and application conditions.

Figure 4.6 Factors that Impact Validity

Number of items and assessment method: As the number of items in a test increases, the test's coverage of the related content and its representation of the acquisitions increase as well, which increases the validity of the test. As both the number and the quality of the items increase, the coverage of the acquisitions that are intended to be assessed increases, and the validity of the test is positively affected (Büyüköztürk, 2006; Demircioğlu, 2012). Multiple-choice tests offer the opportunity to ask more questions than written exams, which may cause the validity of multiple-choice tests to be higher. In terms of assessment method, it is known that certain teachers prefer multiple-choice tests to written exams (Demircioğlu & Demircioğlu, 2009). In addition, there can be differences in how teachers apply similar assessment methods (Demircioğlu, 2012). These differences can dramatically affect the validity of the scores acquired from the assessment.

Reliability: Reliability is a prerequisite for validity (Demircioğlu, 2012). In other words, the validity of the scores acquired from a test requires the reliability of those scores (Büyüköztürk, 2006). However, ensuring the reliability of an assessment tool does not mean that its validity is ensured (Turgut, 1995). For example, if the scores received by a group of students on a chemistry exam have low reliability, the scores mostly reflect errors rather than the truth; variables other than the characteristic intended to be assessed have interfered with the students' scores. This decreases validity.

Scorer bias: While scoring students' answers to a written exam or test, the scorer's (teacher's, researcher's, etc.) inability to act objectively, or the consideration of other factors (a student's nice handwriting, in-class behaviour, etc.), will impact validity, because variables other than the characteristic being assessed interfere with the student's score.

Application conditions: If the environment in which the assessment takes place does not provide appropriate conditions, the validity of the results can be affected negatively. For example, external factors such as a noisy, insufficiently lit, hot or cold environment, or insufficient time for answering the questions, may interfere with the test scores and decrease the validity of the test.
Relationship Between Reliability and Validity

Reliability refers to the consistency of test scores, while validity refers to the truthfulness of the interpretations drawn from them. Reliability is required for the validity of a measurement tool, but it is not enough: a test that gives consistent results may not provide correct information about the measured property. That is, the test may not measure exactly what you want to measure even if it gives consistent results. A low reliability coefficient therefore limits the degree of validity, but a high reliability coefficient does not guarantee that the test is valid. In other words, if a test is not valid, there is little meaning in questioning the reliability of its scores. So,
it can be said that validity is more important than reliability. There are three possible relationships between reliability and validity: a measure can be reliable but not valid, neither reliable nor valid, or both reliable and valid.

REFERENCES

Aiken, L. R. (2000). Psychological testing and assessment (10th ed.). Massachusetts: Allyn and Bacon.

Anastasi, A. (1997). Psychological testing (7th ed.). New Jersey: Prentice-Hall Inc.

Baykul, Y. (2000). Eğitimde ve psikolojide ölçme: Klasik test teorisi ve uygulaması. Ankara: ÖSYM Yayınları.

Büyüköztürk, Ş. (2002). Faktör analizi: Temel kavramlar ve ölçek geliştirmede kullanımı. Eğitim Yönetimi Dergisi, 32, 470-483.

Büyüköztürk, Ş. (2006). Güvenirlik ve geçerlik. In A. Doğanay & E. Karip (Eds.), Öğretimde planlama ve değerlendirme (2nd ed., pp. 309-332). Ankara: Pegem Akademi.

Büyüköztürk, Ş., Kılıç Çakmak, E., Akgün, Ö. E., Karadeniz, Ş. & Demirel, F. (2011). Bilimsel araştırma yöntemleri (8th ed.). Ankara: Pegem Akademi.

Cohen, R. J., Montague, P., Nathanson, L. S. & Swerdlik, M. E. (1988). Psychological testing: An introduction to tests and measurement. Mountain View, CA: Mayfield Publishing.

Crocker, L. & Algina, J. (1986). Introduction to classical and modern test theory. New York: Holt, Rinehart and Winston.

Cronbach, L. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334.

Demircioğlu, G. & Demircioğlu, H. (2009). Kimya öğretmenlerinin sınavlarda sordukları soruların hedef davranışlar açısından değerlendirilmesi. Necatibey Eğitim Fakültesi Elektronik Fen ve Matematik Eğitimi Dergisi, 3(1), 80-98.

Demircioğlu, G. (2012). Geçerlik ve güvenirlik. In E. Karip (Ed.), Ölçme ve değerlendirme (5th ed., pp. 89-122). Ankara: Pegem Akademi.
Erkuş, A. (2003). Psikometre üzerine yazılar. Ankara: Türk Psikologlar Derneği Yayınları, No: 24.

Fleiss, J. L. (1981). Statistical methods for rates and proportions (2nd ed.). New York: John Wiley & Sons.

Helms, L. S. (1999). Basic concepts in classical test theory: Tests aren't reliable, the nature of alpha, and reliability generalization as meta-analytic method (ERIC Document Reproduction Service No. ED 427 083).

Kempa, R. (1986). Assessment in science. Cambridge science education series. Cambridge University Press.

Landis, J. R. & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159-174.

Linn, R. L. & Gronlund, N. E. (1995). Measurement and assessment in teaching (7th ed.). New Jersey: Prentice-Hall Inc.

Sim, J. & Wright, C. C. (2005). The kappa statistic in reliability studies: Use, interpretation, and sample size requirements. Physical Therapy, 85, 257-268.

Tekin, H. (2007). Eğitimde ölçme ve değerlendirme (18th ed.). Ankara: Yargı Yayınevi.

Thorndike, L. R. (1982). Applied psychometrics (2nd ed.). Washington: American Council on Education.

Turgut, M. F. (1995). Eğitimde ölçme ve değerlendirme metotları (10th ed.). Ankara: Yargıcı Matbaası.

Ursavaş, Ö. F., Camadan, F. & Reisoğlu, İ. (2018). Öğretmenler için teknostres ölçeği: Geçerlik ve güvenirlik çalışması. Paper presented at the 12th International Computer & Instructional Technologies Symposium, İzmir.
Chapter 5 Features and Characteristics of Large-Scale Science Examinations: PISA, TIMMS, LGS, YKS Ümmühan ORMANCI
Introduction

In Turkey and around the world, international and national examinations are performed to evaluate countries' educational systems, to determine students' levels of success, to follow students' progress, and to compare their skills. Anil (2010) states that participation in international projects such as PISA, TIMSS and PIRLS is not a competition between countries; the aim is to evaluate education systems and to follow, over the years, the development of students' knowledge and skills in mathematics, science and reading. With the rapid increase in globalization, the use of large-scale standardized international assessments has increased significantly in order to assess and compare the quality of the future workforce in different countries (Huang, Wilson & Wang, 2016). In addition, the High School Entrance Examination (LGS) and the Higher Education Institutions Examination (YKS) are applied in Turkey both to evaluate students and to place them in higher education institutions.

Programme for International Student Assessment (PISA)

In a globalizing world, countries need education indicators that show their positions at the international level, as well as their national evaluation studies in the field of education (Usta & Cikrikci-Demirtasli, 2014). PISA represents a new approach to assessing student progress compared with earlier national and international assessment initiatives (Sadler & Zeidler, 2009). The measurement of factors that directly determine economic productivity, such as science, mathematics and reading skills, together with basic skills and various skills such as critical thinking,
89
Features and Characteristic of Large-Scale Science Examinations: PISA, TIMMS, LGS, YKS
analysis, reasoning, synthesis, creativity are the main reasons why PISA is an international reference (Sirin & Vatanartiran, 2014). Organized by the Organization for Economic Cooperation and Development (OECD), PISA is one of the largest international education researches in which students' knowledge and skills in mathematics, science and reading skills are assessed (EARGED, 2010). In this context, it is an international assessment, which measures the performance of reading literacy, mathematic literacy and science literacy in 15-year-old children every three years (Baldi et al., 2007; Fleischman et al., 2010; Lam & Lau, 2014; Ülger & Güler, 2019). One of the most important features of PISA practice is that PISA measures a structure called “literacy” while other studies focus mostly on the curriculum and what is learned in the classroom (Yildirim et al., 2013). In this context, PISA's evaluation framework covers the concept of literacy related to apply knowledge of students to daily life, to make logical inferences, to interpret problems related to various situations and to make inferences from what they have learned to solve problems (Anil, 2011). In PISA practice, it is aimed to determine not only how much remember of what they learned, but also the competencies of using what they learn in school and outside of school life to the extent to which they can benefit from their knowledge and skills to understand the new situations they will encounter, to solve problems, to predict and to make judgment (MEB, 2010). In addition, PISA collects data about students' motivations, opinions about themselves, learning styles, school environments and their families (Iskenderoglu, Erkan & Serbest, 2013). To this end, PISA is the most comprehensive and rigorous international program to collect data on student, family and institutional factors that can help in assessing student performance and explain performance differences (OECD, 2010). 
In this context, the outcomes of PISA include (OECD, 2009):
A basic knowledge and skill profile among 15-year-old students.
Contextual indicators of student and school characteristics.
Trend indicators showing how the results change over time.
A valuable knowledge base for policy analysis and research.
The PISA application has a complex design, which allows each domain (reading, mathematical and scientific literacy) to be followed for specific changes over time while allowing a particular focus in each nine-year period (Olsen, Prenzel & Martin, 2011). PISA has been repeated every three years since its first administration in 2000 (Le, 2009). Reading literacy was the main domain in PISA 2000, mathematics literacy in PISA 2003 and science literacy in PISA 2006. In 2009, a new nine-year cycle started (EARGED, 2010): the main domain was reading skills in 2009, mathematics in 2012 and science in 2015. In 2018, the cycle repeated and the main domain was again reading skills. This is illustrated in Figure 5.1.
Figure 5.1 PISA Application Years and Areas

Acar (2012) stated that the project is implemented at three-year intervals and that Turkey participated for the first time in 2003. Turkey's participation in the PISA administrations is shown in Figure 5.2.
Figure 5.2 PISA Application Years and Turkey's Participation Status

Turkey did not participate in the PISA 2000 administration but has taken part in all administrations since 2003. The science literacy assessment framework was restructured for the PISA 2015 administration, and its final version is shown in Figure 5.3.
Figure 5.3 PISA 2015 Science Literacy Assessment Framework (OECD, 2017)

A science literacy question from the PISA 2015 test (PISA, 2015) is given in Figure 5.4 and interpreted below.
Figure 5.4 PISA Science Question
The question investigates the relationship between body temperature and the amount of sweat in relation to the humidity level. Part A combines a multiple-choice item with an open response (selecting data), while part B is an open-response item. In both parts the context is personal, and the problem concerns health and disease. In part A, the competency assessed is evaluating and designing scientific enquiry, and the knowledge system is procedural knowledge of living systems; the difficulty of the item was determined as level three. In part B, the competency is explaining phenomena scientifically, and the knowledge system is content knowledge of living systems; the difficulty was determined as level five.

Trends in International Mathematics and Science Study (TIMSS)

TIMSS is a comprehensive survey of trends in mathematics and science conducted every four years by the International Association for the Evaluation of Educational Achievement (IEA), based in the Netherlands (TIMSS, 2013). In other words, TIMSS is an international evaluation exam designed to measure students' mathematics and science achievement (Kücük, Sengul & Katranci, 2014). TIMSS tests the achievement and performance in science and mathematics of students from different countries. It is also an international project that collects information about the factors that can affect learning and teaching processes, such as student characteristics, teacher effects, school management and parental participation (Bilican, Demirtasli & Kilmen, 2011; Güler & Ülger, 2019; Wößmann, 2002). In this context, the data obtained in TIMSS provide politicians, curriculum experts and researchers with a valuable perspective for educational reform and improvement (Mullis et al., 2004). In addition to achievement and skills, TIMSS collects detailed information on the implementation of mathematics and science curricula (Dodeen et al., 2012).
In this context, the aim of TIMSS is to provide comparative data on countries' education systems in order to improve education and training in mathematics and science (TIMSS, 2013). TIMSS is also a tool that aims to investigate student achievement and school effectiveness in the light of variables such as home environment, teaching contexts and practices (Papanastasiou, 2008). TIMSS seeks to determine the achievement level of a country rather than of individual students. For this purpose, it does not calculate achievement scores for individual students but instead calculates achievement distributions for all students with similar performance on the tests (Yildirim et al., 2013).
One important feature of TIMSS is that its four-year cycle provides information at the 8th grade about the student cohort that was evaluated at the 4th grade (Thomson et al., 2012). In this way, it provides a long-term and longitudinal comparison within the same age group (Sisman et al., 2011). In addition, the TIMSS science and mathematics assessment framework includes a "content dimension" and a "cognitive dimension". While the content dimension refers to the subject matter and domains evaluated in science and mathematics, the cognitive dimension assesses thinking processes (Thomson et al., 2012; Thomson, Hillman & Wernert, 2012; Thomson et al., 2008). Under the auspices of the IEA, TIMSS brings together researchers from more than 50 countries to design and implement the study of science and mathematics learning and teaching in each country (Martin, 1996). The TIMSS administration, in which many countries around the world participate, is an important project affecting the education policies of the participating countries (Karamustafaoglu & Sontay, 2012). The IEA conducted the First International Science Study (FISS) in 1970-71 and the Second International Science Study (SISS) in 1983-84. The First and Second International Mathematics Studies (FIMS and SIMS) were conducted in 1964 and 1980-82, respectively. The Third International Mathematics and Science Study (TIMSS), conducted in 1995-1996, is the largest and most complex IEA study (Martin & Mullis, 2000). TIMSS assessments of mathematics and science at the 4th and 8th grades were subsequently administered in 1995, 1999, 2003, 2007, 2011 and 2015, as shown in Figure 5.5.
Figure 5.5 TIMSS Application Cycle

The Educational Research and Development Department (EARGED) of the Ministry of National Education carries out the TIMSS project in Turkey (Buluç, 2014). Turkish students participated in TIMSS for the first time in 1999. They did not take part in 2003 and participated only at the 8th grade level in 2007 (Atar & Atar, 2012). In 2011 and 2015, Turkey participated in TIMSS at both the 4th and 8th grade levels. Information about this situation is given in Figure 5.6.
Figure 5.6 Status of TIMSS Cycles and Turkey's Participation

Considering the questions in TIMSS, the exam contains a significant number of open-ended and multiple-choice items (Nyléhn, 2006). The TIMSS item pool contains three types of achievement items: multiple-choice questions, free-response items and performance tasks (Garden & Orpwood, 1996; Kjærnsli & Lie, 2008). Example science questions from the TIMSS 2015 administration (Mullis & Martin, 2013) are analysed below.
Figure 5.7 TIMSS Science Question 1
Question 1 in Figure 5.7 was asked at the 4th grade level in the TIMSS administration. A picture of a pond is provided, and students are asked to give examples of the living and non-living things in it. The question relates to the human and environment, or ecosystem, topics of the curriculum, and it appears to be at the remember/understand level.
Figure 5.8 TIMSS Science Question 2

Question 2 was asked at the 8th grade level in the TIMSS administration. It asks about the differences between elements, compounds and mixtures. Although the question parallels the science program, it can be considered to be at the remember/understand level. Question 3 below was also asked at the 8th grade level. It asks how the colours of snail shells help them survive. It is an open question, and it can be said to be a high-level question.
Figure 5.9 TIMSS Science Question 3
National Examinations in Turkey (LGS and YKS)

In Turkey, the Ministry of National Education (MONE) has carried out the selection and placement of students into secondary schools, while the Measurement, Selection and Placement Center (ÖSYM) has carried out the selection and placement of students into higher education institutions (Birinci, 2014; Bakırcı, Kara & Çepni, 2016; Kutlu & Karakaya, 2007). The main aim of the latter process is to place students in a higher education institution. The changes in the high school entrance examination in Turkey are as follows:
High School Entrance Examinations (LGS): 1999-2004
Secondary School Student Selection and Placement Examination (OKS): 2004-2008
Level Determination Exam (SBS): 2008-2013
Transition from Basic Education to Secondary Education (TEOG): 2013-2017
High School Entrance Exam (LGS): 2018-...

Thus, the high school entrance examinations used in Turkey have been the LGS, the OKS, the SBS and, since the 2013-2014 academic year, the TEOG (Cayhan & Akin, 2016). Finally, the High School Entrance Exam (LGS) has been implemented since 2018.
The OKS, administered by the Ministry of National Education to select students for high schools at the end of the eighth grade, was replaced in 2008 by the SBS, administered at the end of the 6th, 7th and 8th grades, so that students could be measured in more detail (Cecen, 2011). The SBS aimed to determine students' levels according to the achievements in the curriculum (Cepni, Kara & Cil, 2012; Yigittir & Caliskan, 2013). In this context, the SBS can be considered important in directing the future of the child, because academic achievement is accepted as the basic criterion in the transition to secondary education (Gunduver & Gokdas, 2011; Kara & Cepni, 2011). The SBS was held once a year, in a single session, for the last time in June 2013. Since that date, TEOG has been implemented, bringing many other innovations with it (Gultekin & Arhan, 2015). Under TEOG, common exams were held every semester in designated courses (Turkish, mathematics, science and technology, etc.), taking the place of the first of two or the second of three periodic course examinations (MEB, 2013; 2016). With the TEOG application, a student's success is measured according to performance over an extended process rather than instantaneous performance at a single point in time (Karadeniz, Eker & Ulusoy, 2015). Finally, the transition system to secondary education was amended for the 2017-2018 academic year and reintroduced as the LGS. With the new practice, schools are divided into two categories: secondary education institutions that admit students to high schools by examination, and secondary schools with address-based student admission (Biber et al., 2018). In this context, not all students are required to take the exam. The exam is based on the 8th grade curriculum and consists of two sections administered on the same day: a verbal section (Turkish language, religious culture and ethics, Turkish history and foreign language) and a numerical section (mathematics and science) (MEB, 2018). The old practices constituted placement examinations, since all students were placed in schools by exam; the LGS is a selection examination, since only a limited number of schools admit students by exam. These exams, conducted by the Ministry of National Education for selection and placement, consist of centrally prepared multiple-choice questions (Dinc, Dere & Koluman, 2014). Example questions from the 2018 LGS, with analysis, are given below (MEB, 2018):
Figure 5.10 LGS Science Question 1

Question 1 is about heredity and is a multiple-choice question. It includes a description of the subject, and in parallel with this explanation students are asked to add their own interpretation. The question establishes a relationship with daily life and can be described as a medium-level question. Question 2 is related to the subject of pressure. An experimental setup is provided, and students are asked about the setup that would support the hypothesis. This question, prepared to assess students' high-level skills, is also related to daily life.
Figure 5.11 LGS Science Question 2

The changes in the transition to higher education examination in Turkey are as follows:
Inter-University Student Selection and Placement Center (ÜSYM) (single stage): 1974-1980
Student Selection and Placement Exam (ÖSYS) (two stages: Student Selection Examination and Student Placement Exam): 1981-1998
Student Selection Examination (ÖSS) (single stage): 1999-2010
Student Selection and Placement System (ÖSYS) (two stages: Transition to Higher Education Examination (YGS) and Undergraduate Placement Exam (LYS)): 2010-2017
Higher Education Institutions Exam (YKS): 2018-...

The Measurement, Selection and Placement Center (ÖSYM), which is in charge of the selection and placement of students into universities in Turkey, organizes examinations every year and places candidates in universities depending on the scores they receive, their level of success in secondary education, the preferences they report and the quotas of university programs (Dogan, 2012). The aim of the tests used for selection and placement is to measure the differences between individuals as accurately as possible with respect to the ability being measured (Kalaycioglu & Kelecioglu, 2011). Starting from its first years, ÖSYM made arrangements to the tests and examination system used in student selection for higher education, in years such as 1981, 1985, 1987, 1994, 1997, 1999 and 2005, in areas such as the scope of the tests, the types of placement and the contribution of secondary education to placement (Karakaya, 2011). The exams were single-stage between 1974 and 1980, two-stage between 1981 and 1998, and then single-stage again from 1999 to 2010 (Azar, 2006; Bakırcı & Kırıcı, 2018; Karabey, 2011). A two-stage exam system was reintroduced in 2010. The general name of the system implemented from 2010 is the Student Selection and Placement System (ÖSYS), which comprises the Transition to Higher Education Examination (YGS) and the Undergraduate Placement Exams (LYS) (ÖSYM, 2016). In order to enter university, students were required to succeed first in the YGS and then in the LYS, which are based on the achievements of the Ministry of National Education curriculum for each course (Demirgunes, 2013). The YGS is an examination that aims to measure students' thinking skills (Degirmencioglu, 2012; Yıldız, 2016), while the LYS is a multi-session examination. The ÖSYS, which involved multiple exams and had been criticized for its many score types and complexity, was changed to the Higher Education Institutions Exam (YKS) in 2018 with a decision taken in 2017 (Atilgan, 2018). The YKS is an exam applied by ÖSYM for the selection of students to be admitted to university programs. As stated by ÖSYM (2018a), the YKS is a three-session examination held over a single weekend each year.
One session is the Basic Competency Test (TYT), in which all students are required to participate; the others are the Field Competency Tests (AYT) and the Foreign Language Test (YDT), which are optional. The TYT includes a Turkish Test, a Social Sciences Test, a Basic Mathematics Test and a Science Test, all based on the common curriculum (ÖSYM, 2018b). In the AYT, candidates are given a booklet consisting of Turkish Language and Literature-Social Sciences-1, Social Sciences-2, Mathematics and Science tests (Sozen & Turksever, 2018). Candidates can answer the AYT tests according to the type of score they want, while the YDT is conducted in German, Arabic, French, English and Russian (ÖSYM, 2018b). In addition, a significant change in the examination system is that the score from the first session remains valid for two years (Emil & Aksab, 2018). Two sample TYT questions are given below (ÖSYM, 2018c):
Figure 5.12 TYT Science Question 1
In question 1 above, an experimental setup and graphs of the variables are given, and students are expected to make inferences from the setup. This item can be described as a question that probes students' thinking skills. Question 2, on kinetic and mechanical energy, probes students' remember/understand levels.
Figure 5.13 TYT Science Question 2

Conclusion

When the national and international examinations are examined, most items are questions probing students' knowledge. In addition, in some exams, chiefly PISA, students' higher-level thinking skills are among the attributes assessed. It is especially important to improve individuals' literacy and to emphasize the place of knowledge in daily life. The number of questions prepared for this purpose in our country is still small, but in recent years the number of context-based questions probing such skills has begun to increase.
References

Acar, T. (2012). The position of Turkey among OECD member and candidate countries according to PISA 2009 results. Educational Sciences: Theory & Practice, 12(4), 2561-2572.
Anil, D. (2010). Factors effecting science achievement of science students in programme for international students' achievement (PISA) in Turkey. Education and Science, 34(152), 87-100.
Anil, D. (2011). Investigation of factors influencing Turkey's PISA 2006 science achievement with structural equation modelling. Educational Sciences: Theory & Practice, 11(3), 1253-1266.
Atar, H. Y., & Atar, B. (2012). Examining the effects of Turkish education reform on students' TIMSS 2007 science achievements. Educational Sciences: Theory & Practice, 12(4), 2621-2636.
Atilgan, H. (2018). Transition among education levels in Turkey: Past-present and a recommended model. Ege Journal of Education, 19(1), 1-18.
Azar, A. (2006). Relationship of multiple intelligences profiles with area of concentration in high school and university entrance exam scores. Educational Administration: Theory and Practice, 12(2), 157-174.
Bakırcı, H., & Kırıcı, M. G. (2018). Science teachers' views on the removal of the transition from primary to secondary education exam. Yüzüncü Yıl University Journal of Education Faculty, 15(1), 383-416.
Bakırcı, H., Kara, Y., & Çepni, S. (2016). The examination of views of parents about the web-based performance evaluation program in science teaching process. Bartin University Journal of Faculty of Education, 5(3), 893-907.
Baldi, S., Jin, Y., Skemer, M., Green, P. J., & Herget, D. (2007). Highlights from PISA 2006: Performance of U.S. 15-year-old students in science and mathematics literacy in an international context (NCES 2008-016). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.
Biber, A. C., Tuna, A., Uysal, R., & Kabuklu, U. N. (2018). Supporting and training course teachers' opinions on sample mathematics questions of the high school entrance exam. Asian Journal of Instruction, 6(2), 63-80.
Bilican, S., Demirtasli, R. N., & Kilmen, S. (2011). The attitudes and opinions of the students towards mathematics course: The comparison of TIMSS 1999 and TIMSS 2007. Educational Sciences: Theory & Practice, 11(3), 1277-1283.
Birinci, D. K. (2014). The first experience in central system common exams: Mathematics. Journal of Research in Education and Teaching, 3(2), 8-16.
Buluc, B. (2014). An analysis of students' mathematics achievements according to school climate in the frame of TIMSS 2011 results. Journal of Gazi University Faculty of Industrial Arts Education, 33, 105-121.
Cayhan, C., & Akin, E. (2016). The evaluation of Turkish lesson questions TEOG examination in terms of Turkish lesson education program objectives. Siirt University Journal of Social Sciences Institute, 4, 1-9.
Cecen, M. A. (2011). Turkish language teachers' views about level determination exam and Turkish lesson questions. Mustafa Kemal University Journal of Social Sciences Institute, 8(15), 201-211.
Cepni, S., Kara, Y., & Cil, E. (2012). Middle school science and items of high school entrance examination: Examining the gap in Turkey. Journal of Testing and Evaluation, 40(3), 501-511.
Degirmencioglu, L. (2012). The effects of the music department's students' high school types they have graduated and the student selection examination's (ÖSS) scores on achievement level in elementary music theory lesson. Journal of Erciyes İletişim Akademia, 2(3), 40-56.
Demirgunes, S. (2013). A linguistics content analysis on "language and speech" programme and the YGS-Turkish questions the year of 2010. The Journal of Academic Social Science Studies, 5, 557-570.
Dinc, E., Dere, I., & Koluman, S. (2014). Experiences and view-points on the practices of the transition between various schooling levels. Adiyaman University Journal of Social Sciences, 7(17), 397-423.
Dodeen, H., Abdelfattah, F., Shumrani, S., & Hilal, M. A. (2012). The effects of teachers' qualifications, practices, and perceptions on student achievement in TIMSS mathematics: A comparison of two countries. International Journal of Testing, 12(1), 61-77.
Dogan, M. K. (2012). Encryption claims in transition to higher education exam and proposing a method to determine whether the candidates used the encryption. Education and Science, 37(166), 206-218.
EARGED (2010). International student assessment program PISA 2009 national preliminary report. Ankara: Ministry of National Education.
Emil, S., & Aksab, Ş. (2018, May). Turkey to understand the concept of university research in higher education: A comparative analysis. 13th International Congress of Educational Administration, Sivas, Turkey.
Fleischman, H. L., Hopstock, P. J., Pelczar, M. P., & Shelley, B. E. (2010). Highlights from PISA 2009: Performance of US 15-year-old students in reading, mathematics, and science literacy in an international context (NCES 2011-004). National Center for Education Statistics.
Garden, R. A., & Orpwood, G. (1996). Development of the TIMSS achievement tests. In M. O. Martin & D. L. Kelly (Eds.), Third International Mathematics and Science Study (TIMSS) technical report, Volume I: Design and development. Chestnut Hill, MA: Boston College.
Güler, H. K., & Ülger, B. B. (2019). TIMSS ve TEOG sınavlarının temel aldığı öğrenme kuramları. In Salih Çepni (Ed.), PISA ve TIMSS mantığını ve sorularını anlama (2nd ed.). Ankara: Pegem Publishing.
Gultekin, I., & Arhan, S. (2015). The examination of content validity of Turkish questions asking in level determination exam (SBS). National Education, 44(206), 69-96.
Gunduver, A., & Gokdas, I. (2011). Exploring 8th grade placement test achievement of elementary school children according to certain variables. Adnan Menderes University Journal of Educational Sciences, 2(2), 30-47.
Huang, X., Wilson, M., & Wang, L. (2016). Exploring plausible causes of differential item functioning in the PISA science assessment: Language, curriculum or culture. Educational Psychology, 36(2), 378-390.
Iskenderoglu, T. A., Erkan, I., & Serbest, A. (2013). Classification of SBS mathematics questions between 2008-2013 years with respect to PISA competency levels. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 4(2), 147-168.
Kalaycioglu, D. B., & Kelecioglu, H. (2011). Item bias analysis of the university entrance examination. Education and Science, 36(161), 1-13.
Kara, Y., & Cepni, S. (2011). Investigation the alignment between school learning and entrance examinations through item analysis. Journal of Baltic Science Education, 10(2), 73-86.
Karabey, B. (2011). The effect of new student selection exam on prospective primary mathematics teacher's achievement in integral. Buca Faculty of Education Journal, 29, 192-202.
Karadeniz, O., Eker, C., & Ulusoy, M. (2015). Evaluation based on acquisition of the questions in the course of the revolution history and Kemalism in the passing from basic education to high education exam (TEOG). International Journal of Eurasia Social Sciences, 6(18), 115-134.
Karakaya, I. (2011). Examination of the relationship between the academic achievement and OSS scores of the students in teaching programs. Journal of Measurement and Evaluation in Education and Psychology, 2(1), 155-163.
Karamustafaoglu, O., & Sontay, G. (2012, June). After a TIMSS exam: The opinions of the students and practitioner teachers participating in TIMSS 2011. X. National Science and Mathematics Education Congress, Nigde, Turkey.
Kjærnsli, M., & Lie, S. (2008). Country profiles of scientific competence in TIMSS 2003. Educational Research and Evaluation: An International Journal on Theory and Practice, 14(1), 73-85.
Kücük, A., Sengul, S., & Katranci, A. G. Y. (2014). Views of prospective mathematics teachers about TIMSS: Case of Kocaeli university. Journal of Research in Education and Teaching, 3(1), 25-36.
Kutlu, O., & Karakaya, I. (2007). A research on defining the factor structures of tests used at secondary school's student selection and placement test. Elementary Education Online, 6(3), 397-410.
Lam, T. Y. P., & Lau, K. C. (2014). Examining factors affecting science achievement of Hong Kong in PISA 2006 using hierarchical linear modeling. International Journal of Science Education, 36(15), 2463-2480.
Le, L. T. (2009). Investigating gender differential item functioning across countries and test languages for PISA science items. International Journal of Testing, 9(2), 122-133.
Martin, M. O. (1996). Third international mathematics and science study: An overview. In M. O. Martin & D. L. Kelly (Eds.), Third International Mathematics and Science Study (TIMSS) technical report, Volume I: Design and development. Chestnut Hill, MA: Boston College.
Martin, M. O., & Mullis, I. V. (2000). TIMSS 1999: An overview. In M. O. Martin, K. D. Gregory, & S. E. Stemler (Eds.), TIMSS 1999 technical report. International Study Center.
Measurement, Selection and Placement Center (ÖSYM) (2018a). The 2018 higher education institutions examination (YKS) guide. Ankara. Retrieved on 05.01.2019 from https://dokuman.osym.gov.tr/pdfdokuman/2018/YKS/KILAVUZ_28062018.pdf.pdf
Measurement, Selection and Placement Center (ÖSYM) (2016). 2016 student selection and placement system (ÖSYS) guide. Retrieved on 08.05.2019 from https://dokuman.osym.gov.tr/pdfdokuman/2016/YGS/2016OSYSKILAVUZU06012016.pdf
Measurement, Selection and Placement Center (ÖSYM) (2018b). YKS evaluation report. Ankara. Retrieved on 05.01.2019 from https://dokuman.osym.gov.tr/pdfdokuman/2018/GENEL/YKSDegrapor06082018.pdf
Measurement, Selection and Placement Center (ÖSYM) (2018c). Higher education institutions exam first session (TYT) sample question booklet. Retrieved on 08.01.2019 from https://dokuman.osym.gov.tr/pdfdokuman/2017/OSYS/YKS/TYTOrnekSoruKitapcigi03122017.pdf
Ministry of National Education (MEB) (2010). National preliminary report of the PISA 2009 project. Ankara: Education Research and Development.
Ministry of National Education (MEB) (2013). E-guide for common exams for transition to secondary education in the 2013-2014 academic year. Retrieved on 08.05.2019 from http://www.meb.gov.tr/sinavlar/dokumanlar/2013/kilavuz/2013_OGES_Klvz.pdf
Ministry of National Education (MEB) (2016). 2015-2016 school year common exams e-guide. Retrieved on 08.05.2019 from http://www.meb.gov.tr/sinavlar/dokumanlar/2015/kilavuz/OrtakSinavlar_E_Klavuz2015_2016.pdf
Ministry of National Education (MEB) (2018). Central exam application and application guide for secondary education institutions. Retrieved on 05.01.2019 from http://www.meb.gov.tr/sinavlar/dokumanlar/2018/MERKEZI_SINAV_BASVURU_VE_UYGULAMA_KILAVUZU.pdf
Ministry of National Education (MEB) (2018). Sample questions about the central exam for secondary education institutions that will take students to the exam. Retrieved on 08.01.2019 from https://odsgm.meb.gov.tr
Mullis, I. V. S., & Martin, M. O. (Eds.) (2013). TIMSS 2015 assessment frameworks. Boston College, TIMSS & PIRLS International Study Center.
Mullis, I. V., Martin, M. O., Gonzalez, E. J., & Chrostowski, S. J. (2004). TIMSS 2003 international mathematics report: Findings from IEA's trends in international mathematics and science study at the fourth and eighth grades. Chestnut Hill, MA: TIMSS & PIRLS International Study Center.
Nyléhn, J. (2006, November). Grade 4 Norwegian students' understanding of reproduction and inheritance. In the Second IEA International Research Conference, 145-153, Washington, DC, United States.
OECD (2009). PISA 2009 assessment framework: Key competencies in reading, mathematics and science. Retrieved on 05.05.2019 from https://files.eric.ed.gov/fulltext/ED523050.pdf
OECD (2010). PISA computer-based assessment of student skills in science. Paris, France: OECD Publishing.
OECD (2017). PISA 2015 assessment and analytical framework: Science, reading, mathematic, financial literacy and collaborative problem solving (revised edition). Paris: OECD Publishing.
Olsen, R. V., Prenzel, M., & Martin, R. (2011). Interest in science: A many-faceted picture painted by data from the OECD PISA study. International Journal of Science Education, 33(1), 1-6.
Papanastasiou, C. (2008). A residual analysis of effective schools and effective teaching in mathematics. Studies in Educational Evaluation, 34(1), 24-30.
PISA (2015). PISA 2015 MS - released item descriptions final. Retrieved on 08.01.2019 from http://www.oecd.org/pisa/test/PISA%202015%20MS%20%20Released%20Item%20Descriptions%20Final_English.pdf
Sadler, T. D., & Zeidler, D. L. (2009). Scientific literacy, PISA, and socioscientific discourse: Assessment for progressive aims of science education. Journal of Research in Science Teaching, 46(8), 909-921.
Sirin, S. R., & Vatanartiran, S. (2014). PISA 2012 rating: Data-driven education reform suggestions for Turkey. Istanbul: TUSIAD.
Sisman, M., Acat, M. B., Aybay, A., & Karadag, E. (2011). TIMSS 2007 national math and science report 8th grade. Ankara: Ministry of National Education.
Sozen, E., & Turksever, O. (2018). Evaluation of the possible effects of the change in geography question number on secondary education geography program, which is important for regional leadership in university entrance exams. International Journal of Leadership Training, 3(3), 1-12.
Thomson, S., Hillman, K., & Wernert, N. (2012). Monitoring Australian year 8 student achievement internationally: TIMSS 2011. Australia: Australian Council for Educational Research.
Thomson, S., Hillman, K., Wernert, N., Schmid, M., Buckley, S., & Munene, A. (2012). Monitoring Australian year 4 student achievement internationally: TIMSS and PIRLS 2011. Australia: Australian Council for Educational Research.
Thomson, S., Wernert, N., Underwood, C., & Nicholas, M. (2008). TIMSS 2007: Taking a closer look at mathematics and science in Australia. Australia: Australian Council for Educational Research.
Features and Characteristic of Large-Scale Science Examinations: PISA, TIMMS, LGS, YKS
TIMSS (2013). TIMSS 2011 national preliminary assessment report. (Yucel, C., Montenegro, E. & Turan, S.) Policy Analysis in Education Reports Series I, Eskişehir. Ülger, B. B. & Güler, H. K. (2019). PISA ve TIMSS Sınavlarının Temel Aldığı Ölçme ve Değerlendirme Yaklaşımları (Ed. Salih Çepni). Ankara: Pegem Publishing. Usta, H. G., & Cikrikci-Demirtaslı, R. N. (2014). The factors that affect students’ scientific literacy according to PISA 2006 in Turkey. Journal of Educational Sciences Research, 4(2), 93-107. Wößmann, L. (2002). How central exams affect educational achievement: International evidence from TIMSS and TIMSS-Repeat. Harvard University Program on Education Policy and Governance Working Paper No. PEPG/02-10. Yigittir, S., & Caliskan, H. (2013). Content validity analysis of questions on the field of social studies in the level assessment exam (SBS). National Education Journal, 42(197), 145-157. Yildirim, H. H., Yildirim, S., Ceylan, E. & Yetisir, M. I. (2013). TIMSS 2011 results from the perspective of Turkey. Turkish Education Association Tedmem Analysis Series I, Ankara. Yildirim, H. H., Yildirim, S., Yetisir, M. I., & Ceylan, E. (2013). PISA 2012 national preliminary report. Ankara: Ministry of National Education General Directorate of Innovation and Educational Technologies. Yıldız, C. (2016). Prospective mathematics teachers’ views about the exams in higher education. International Conference on Education in Mathematics, Science & Technology, May 19-22, Muğla.
112
Part II Summative Evaluation
Chapter 6
Portfolio Assessment
Mine KIR
Introduction
Continuous change in science and technology is reflected in educational environments in different ways. One reflection of this change in Turkey was the adoption of the constructivist approach as of 2005. The education-training process, which had been information-based and placed the student in a passive, receiving role, has turned into a structure in which the student constructs knowledge through understanding, active participation and development, and in which evaluation has shifted from the product to the process. Although the duties and influence of educators change within this process, the educators themselves do not change much. The education process includes researching educational needs, designing and implementing education, and measuring its effectiveness. Depending on the contents and learning outcomes of the course, traditional and alternative approaches can be used together or separately in determining the effectiveness of education. The traditional approach includes written exams, objective exams (multiple-choice tests, true-false items, matching items, fill-in-the-gap items), oral exams, and practical exams aimed at application. The alternative approach includes many methods such as portfolio assessment, assignments and projects, concept maps, structured grids, word-association tests, observation and interview, presentations and posters, learning diaries, self-evaluation forms, and peer evaluation. While the traditional approach mostly aims to evaluate the learning outcome, alternative approaches focus more on evaluating the process. In educational environments, alternative assessment approaches are seen to gain importance alongside traditional methods. This is related to the paradigm change that has occurred in educational environments.
This part covers the definition of the portfolio as one of the alternative assessment and evaluation methods, the types of portfolio, what must be included in development and learning portfolios, tips for preparing portfolios, reflective writing, portfolio application steps, the roles of teacher and student in the portfolio process, portfolio assessment, and electronic portfolios.

What is a Portfolio?
A portfolio is a tool that has been used for many years by artists, photographers, writers and others to collect and display the best examples of their work. This approach was adapted to the field of education as one of the basic methods of evaluating student performance (Normen & Granlund, 1998). In education, portfolios are used both as a learning tool and as an assessment tool. Portfolio use is one of the alternative assessment and evaluation methods that emerged due to the shortcomings of traditional educational approaches in measuring students' expected behaviours (Birenbaum & Dochy, 1996; Şenel Çoruhlu, Er Nas & Çepni, 2009). Portfolios began to be used in education at the end of the 1980s and were developed during the 1990s (Belanoff & Dickson, 1991; Odabaşı Çimer, 2011). Many definitions of the portfolio have been made. Paulson et al. (1991) defined the portfolio as "a collection through which the student displays his development and success in one or more fields in accordance with a purpose". Hamp-Lyons (1996) defined the portfolio as something "that consists of proofs which show the texts written by the student during this process and the stages of writing, show the author's development as a writer, and reflect the identity and progress of the writer". The definition of the portfolio has changed and improved over time. Student participation in selecting the performance proofs and determining the content, as well as the student's self-evaluation, were added to the earlier definitions (Doğan, 2001).
Barrett (2005) defined the portfolio as "…a collection of studies which show the effort, development and successes of the student in one or more fields throughout the process, with a specific purpose". This collection consists of proofs that reflect the student's participation in the selection of content, the selection criteria, the criteria for evaluating ability, and self-evaluation. Considering all these definitions, it can be seen that they focus on similar characteristics. As a result, portfolios can be defined as a file compiled by students that brings together the works produced during their learning process and the products created, in the light of their self-analysis, for a specific purpose.
General Characteristics of Portfolios
Although portfolios differ in use, purpose and preparation, all portfolios share similar characteristics. These characteristics are listed by Chang (2001) as follows:

Individual: A portfolio is a work prepared and structured by a person, in accordance with certain aims, within his learning process. It is about providing self-reflection through the choices made by the person.

Innovative: Portfolios help students observe the development and change that occur during the learning process. Sufficient duration of the application is important for observing this change and development.

Selective: Portfolios give students the opportunity to make their own choices while directing their work.

It has double value: Portfolios both enable students to present and manage themselves and to do their work in the way they want, and provide teachers with the opportunity to evaluate the development and success of the students.

Unique: Portfolios give students the opportunity to bring together all the works and performances created during their learning. Traditional methods deal more with the learning product than with the learning process, while portfolios allow what was done during the process to be included in the evaluation. Since purely product-oriented evaluations do not fully reflect the characteristics of the student, they may lead to an insufficient evaluation. Moreover, since portfolios contain tangible products presented along with the changes that take place in the students and their works over time, the evaluations will be unique as well.

Reflective: While preparing the portfolio, students can review their previous works together with their current works throughout the process. This can contribute to reviewing their learning as well as to setting goals for further learning.
Interactive: During preparation, portfolios enable students to obtain each other's opinions about information they are missing or do not understand. Throughout the process, this creates continuous communication and interaction with the teacher, who acts as a counsellor.
Types of Portfolio
Individuals have different purposes when creating portfolios. In addition to the developmental process of what individuals could or could not learn and how they learn, thinking skills and problem-solving skills can also be considered in portfolios. Because of these and similar purposes, portfolios differ from each other. In general, the content of the portfolio, by whom the content is prepared, and the purpose of the portfolio should be taken into consideration when classifying portfolios (Kan, 2007). When these features are considered, it is impossible to speak of a single classification. An examination of the literature shows that different classifications have been made by different researchers. Tillema and Smith (2000) divided portfolios into three groups: file portfolios, learning portfolios and reflective portfolios. Lankes (1995) stated that there are six types of portfolio which can be used for different purposes: developmental portfolios, teacher portfolios, proficiency portfolios, showcase portfolios, working portfolios and college admission portfolios. Apart from these classifications, Columba and Dolgos (1995) developed a classification based on the content of the portfolios and by whom they are organized: activity portfolios, teacher-student portfolios and alternative teacher assessment portfolios. While prospective teachers in teacher training institutions deal with portfolios that can be used in their job applications, instructors mostly focus on portfolios aimed at learning (Kaptan & Korkmaz, 2005). The portfolios that can be used in teacher training programs can be classified as follows (Phi Delta Kappa International and Ball State University, 2001).

Developmental portfolio: The instructors certify the in-class success and development of the students.
Showcase portfolio: Prospective teachers display their hand-made products to a group, with the participation of different persons in class, to show the development of their skills.

Professional portfolio: It is created by refining the showcase portfolio. Prospective teachers display the skills necessary for teaching by acting as teachers.
Necessary Contents of the Developmental and Learning Portfolio
Since the developmental portfolio is used in this study, detailed information regarding the content of this portfolio is provided below.

1. The cover page should state who the writer is and how the portfolio displays the writer's process as a learner. It should include a summary showing the learning and development of the student. This part should be written last but placed at the front.

2. A table of contents stating the page numbers should be added.

3. In the content, the sections that should be in all student files, as well as the sections left to the initiative of the students, should be determined. Fundamentally, it is important that the required contents of the student file are predetermined, so that a common decision can be reached. The optional section is where students can display their difference from the others. In this section, students can show their best work, as well as their less successful works in order to determine the causes.

4. Each work included in the portfolio must be added with its completion date.

5. Audio, verbal and written products should be added to the file together with their drafts and revised versions.

6. Reflections may appear at different stages of the learning process. A short justification should be added for each selected element. This may concern the learner's performance, the learner's progress, or the learner's feelings about himself. Students may reflect on some or all of the items listed below:

What did I learn from this?
What was I good at?
Why did I choose this work?
What do I want to improve through this work?
What do I think about my own performance?
In which parts did I encounter difficulties? Why?
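For teachers who keep electronic records of many portfolios, the required elements above can be tracked as a simple checklist. The sketch below is only an illustration of the idea; the element names and the `check_portfolio` function are hypothetical, not part of any standard tool:

```python
# Hypothetical checklist of the required portfolio elements described above.
REQUIRED_ELEMENTS = [
    "cover page with summary",
    "table of contents with page numbers",
    "predetermined common sections",
    "optional student-selected section",
    "completion date on each work",
    "drafts and revised versions",
    "reflections with justifications",
]

def check_portfolio(present_elements):
    """Return the required elements still missing from a portfolio, in order."""
    present = set(present_elements)
    return [e for e in REQUIRED_ELEMENTS if e not in present]

# Example: a portfolio that has been filed without dated works and reflections.
missing = check_portfolio([
    "cover page with summary",
    "table of contents with page numbers",
    "predetermined common sections",
    "optional student-selected section",
    "drafts and revised versions",
])
print(missing)  # the elements not yet filed
```

In line with the chapter's emphasis on student participation, such a checklist would be agreed on with the class rather than imposed; the optional section, by design, is left open.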
Important Points to Consider in Creating a Portfolio
Paulson et al. (1991) explained in a list what a portfolio should contain in the work titled "What Makes a Portfolio a Portfolio". According to Paulson et al. (1991), the portfolio:

1. Enables students to obtain information about their learning. The resulting product should provide self-reflection for the student and supply information about the student.

2. Is something made by the student, not something that the student is instructed to make. Portfolio evaluation is therefore a tangible method for students to learn the value of their own work and to evaluate themselves as learners. Thus, the contents of the portfolio should be chosen by the student himself, making it unique and different from a compiled file.

3. Should openly or implicitly display the works of the students. Within the portfolio there should be the justification (the purpose of creating the portfolio), the aim, the content (what is intended to be displayed), standards (which performance is good and which is not so good) and judgment-decision (what the content tells us).

4. Can serve different purposes during the year, apart from the purposes it serves at the end of the semester. Since certain materials are important for learning, they can be in the portfolio, but at the end of the year there should also be materials put in the portfolio willingly by the student.

5. Can have multiple purposes, but these purposes should not cause confusion. Materials selected by the learner should reflect individual aims and interests, but the information within them can also reflect the interests of the teacher, family and environment. In their portfolios, students should show their development towards the goals set in the curriculum, which are universal.

6. Should include information displaying development. There are many ways to show development.
The best known is to understand how much students' skill has developed based on their in-school performance. The change can be observed through inventories on the topic, behaviours outside school (such as reading) can be recorded, or attitude can be determined in different ways to display development.

As a result, the creation of an effective portfolio, which does not come about by itself, requires many methods and skills. Students should be shown example portfolios and given support by explaining how different portfolios are developed. One of the most important factors in an individual's successful creation of a portfolio is gaining the skill of self-reflection and of expressing himself both intellectually and in writing. Therefore, it is important to give students applied experience of self-expression prior to the portfolio work.

Reflective Writing
The concept of reflective thinking entered the literature in 1933 through Dewey's works on education and sociology. Dewey defined reflective thinking as a way of thinking effectively, consistently and carefully about a structure of information that supports reaching a thought or piece of information and the results it aims at. Dewey stated that individuals with reflective thinking skill must be open-minded, eager and responsible. The dictionary definition of reflective thinking is: going back over our past actions or correcting our thoughts on certain topics; thinking deeply and seriously; remembering or recalling something; putting the information in our minds into a different shape, managing this information and associating it with our emotions and acceptances. Based on these two definitions, reflective thinking can be explained as the process of individuals thinking about their previous actions, correcting them if there are insufficiencies, and thinking about how these actions could be better if there are none. For students, having critical thinking skill, as well as being able to express themselves from a critical point of view, has a significant impact on the preparation of the portfolio and on ensuring its effectiveness. There are different classifications of reflective writing. Hatton and Smith (1995) stated that there are four levels:

i. Descriptive (depictive) writing
ii. Descriptive (depictive) reflection
iii. Dialogic reflection
iv. Critical reflection
In descriptive writing, the aim is to describe the situation the student is in, and the situation needs to be expressed as it is. There is no reflection at this level. Descriptive writing refers to information in the literature and in certain studies. No judgment or decision is made about the cases and situations. These writings mostly help to explain the situation and let readers see, feel and hear about the cases.

Descriptive reflective thinking: This is the level that explains cases and situations and supports the results and judgments, using depiction and explanation methods along the way. The explanations made arise from the judgments of the individual. Descriptive reflection helps us remember and understand our actions. Comparison, classification, summarization and illustration are used when performing descriptive reflection. Here, the teacher carries out his application, then steps aside like the others and follows the actions.

Dialogic reflective thinking: This is making decisions by thinking and talking out loud about previous cases, or solving the problem by trying other methods. The work or application in use can be dropped if there is a better method. Dialogic reflective thinking helps the teacher make a connection between previous knowledge and newly acquired knowledge. As an example, there can be two students taking the same courses: one course is understood very well by both students, while the other is understood by neither. They then ask each other questions, trying to work the situation out in depth and to reveal its different aspects.

Critical reflective thinking: This requires us to take a critical point of view while thinking over the actions taken. For this, the thought process cannot be random; on the contrary, it must be conducted for a specific purpose with the use of certain strategies. During this thinking process, analysis should be performed and an interpretive approach should be taken. Critical reflective thinking cannot be superficial; more effective results are acquired by thinking in depth and within a specific field.
The most distinct characteristic of critical reflective thinking, separating it from the others, is that new information is revealed as a result. The acquired knowledge determines the direction of the next step. The individual must have critical thinking skills (reasoning, evaluation, problem-solving, decision-making and analysis) to reach the level of critical reflective thinking. There are certain sample cases that can be used in teaching reflective writing. Reflective writing samples from university students can be found in a study conducted by Moon (2001); the study contains writing samples for each level, together with evaluation criteria.
Application Steps of Portfolio
There are many classifications, listings and flow charts concerning the path to follow in applying the portfolio. The following flow chart was prepared with a university-level application in mind. The steps below may change in accordance with the grade and age level of the students. The necessary steps are listed in order in the flow chart below:

1. Presentation of the portfolio
2. Definition and application of reflective thinking and reflective writing
3. Determining the goal
4. Determining the studies to be added to the portfolio
5. Creation of performance tasks and grading key
6. Filing the works
7. Conducting weekly student-teacher interviews
8. Evaluation of the portfolio
Figure 6.1 Application steps of portfolio

Presenting the portfolio to the class: In this step, students are informed about the importance of the portfolio, the benefits it will provide, and what they will be doing throughout the process. The importance of students' active participation during the portfolio application is emphasized, and their participation starts at this stage. During the presentation, the portfolio should be supported by as many tangible examples as possible. If available, the teacher brings in sample portfolios prepared at different levels, so the students have the opportunity to analyse them.

Definition and application of reflective writing: What reflective thinking is, what reflecting means, and how it is written at different levels are explained to students with reflective writing examples, in the manner described in the Reflective Writing part. The application is supported with examples. Students are told that the weekly prepared portfolios are requested to be written at a critical level. To make the portfolio more effective, questions can be added to it. These questions are:

What did I learn?
What could I not learn?
What did I do to compensate for what I could not learn?
When students answer these questions, they will first define their learning with the question "What did I learn?", and will be allowed to reflect on their learning by digging into it with the question "What could I not learn?". Here they are expected to recognize the obstacles to their learning (situations related to behavioural and emotional conditions, the classroom environment, the teacher, or themselves). They are encouraged to reflect on the reasons for and effects of these obstacles within the frame of cause-and-effect relations. Through the question "What did I do to compensate for what I could not learn?", they realize what they need to do to overcome the obstacles before them, and they are expected to explain, and present with evidence, how they overcame these situations. Since all this evidence and the other definitions will constitute the body of the portfolio, this level of the application is very important.

Goal setting: In the goal-setting part, students are first requested to determine short-term goals (their expectations from the lesson in which the application is made, what success in it will provide them, etc.). Subsequently, they are requested to write down the impact of these short-term goals on their long-term goals (professional expectations, career planning) and the contributions they can make in daily life. At the end of the application, they are requested to evaluate the extent to which they reached the defined short-term and long-term goals, and the cause-and-effect relations behind the subjects in which they were successful or lacking.

The works requested to be in the portfolio: Because the education system cannot be arbitrary, a plan should be made within the frame of the teaching program. The teacher should define, together with the students, the works to be placed in the portfolio and their types (performance tasks, sources used during the work, worksheets, examinations, etc.). Considering that the portfolio is a tool used to follow up the student's development, evidence (including sketches) that will show the teacher the way of learning should be added to the portfolio.
Creation of performance tasks and grading key: Performance tasks are a kind of work that needs to take place in portfolios. In particular, performance tasks require preliminary preparation, unlike other tasks. Apart from the task itself, the tools for assessment should also be prepared, such as a gradual grading key (rubric), self-evaluation forms and group evaluation forms. Information on preparing the gradual grading key and self-evaluation forms, together with examples, is included in the evaluation part.

Turning the works into a file: The works required in the portfolio should be determined over the course of student-teacher interviews and conveniently put into files. The filed works should also include the changes that were made in the light of feedback. Furthermore, additional works in which the students find themselves successful, or works that they think of improving, can be added to the portfolio at the student's request.

The student-teacher interview: In the weekly student-teacher interviews (the frequency can be determined by the teacher with regard to class size), students should be provided with feedback about their works. An important point to note when providing feedback is to emphasize first the parts in which the student is successful, and only then the deficient parts or the parts that need to be developed. If only the deficiencies are emphasized, this will create a sense of inadequacy in students and their motivation may decrease.

Evaluation of the portfolio: Among the works included in the portfolio, especially those that can be evaluated with a gradual grading key should be taken into the grading. The clearer and more understandable the levels in the grading key, the more reliable the grading will be. Detailed information on evaluation is given under the heading "The Evaluation of Portfolios". After the evaluation, the teacher meets with the students again and shares information about the evaluation of the portfolios and the parts that need attention in the next work. The most important point to keep in mind is that learning is a process, and earlier learning is the foundation for the next learning.

The Role of the Teacher and Student in the Portfolio Process
Whereas the continuous change taking place in education systems brings out many alternative approaches in addition to the traditional ones, the teacher retains his position and role in the education process. For this reason,
cooperation between the student and the teacher is needed to ensure the success of portfolio applications, which have a structure that overlaps with our understanding of contemporary education. For the portfolio to provide the desired success in students' learning and to give correct information about their development, it is important to define the roles of the stakeholders, and for the stakeholders to act in accordance with these roles, so that the process is completed successfully. Within this scope, the roles of the teacher in the portfolio process are listed below:
- The scope of the portfolio, its purpose and how the materials will be put into the file must be defined.
- Information must be provided on the contribution the portfolio makes to the student's learning and development.
- Students must be helped to determine their goals within the scope of the lesson, in accordance with its purpose.
- The activities and performance-based applications that will take place in the file must be prepared, and the criteria to be used in evaluation must be determined through self-evaluation, peer-review and group evaluation forms (the necessary ones are chosen in accordance with the context of the lesson).
- The teacher must meet with the students and provide feedback about their works, according to a plan determined by taking class size into account.
- The problems the student may face in the portfolio writing and preparation process must be identified, and precautions must be taken.
- The criteria to be used in the evaluation of the portfolio must be determined.
- The student must be provided with feedback that will support and guide his development. The feedback provided to the student is very important for the continuity of the process and for monitoring development. The path to be taken here should be cyclical and continuous, in the manner of evaluation-correction-planning-application-evaluation.

The roles of the student in the portfolio process are:
- Being eager and showing diligence in the preparation process, so that the portfolio provides correct information about the student's learning and development.
- Preparing performance-based applications and other activities according to the requested qualities.
- Paying attention to the points emphasized in the interviews with the teacher, then arranging the next work according to the feedback.
- Evaluating his own works and individual development according to the determined evaluation criteria.
- Finally, evaluating what the entire process provided for the student (the goals set at the beginning are important here) and writing this down (good sides, the parts the student thinks are deficient, the sides that were reinforced).
The Evaluation of Portfolios
Portfolio applications are a valid evaluation tool, as they include all the works requested from the students over the course of the process. However, researchers question the reliability of the portfolio: it is difficult to determine the student's qualification as a writer and how much support the student received over the course of the process. For this reason, some researchers state that reliability can be ensured to a certain extent by getting to know students over a specific period in the classroom environment and asking them to write about the works they have made. Apart from the issue of validity and reliability, how the portfolio will be evaluated depends on the portfolio's purpose of use. The elements required to be inside the portfolio (criteria such as organization, inclusion of the works without any deficiency, content, etc.) can be scored with a gradual grading key prepared for this evaluation (Kutlu, Doğan, & Karakaya, 2010). Knowing that the portfolios will be evaluated each week, students will be more motivated in their works. This will contribute both to obtaining the expected outputs from the students and to the grade, which is quite important to them, being at the level the students want. The motivation of students who knew they would not receive only a single score at the end of the process was seen to be higher (Kır, 2015). Thus, informing students at the beginning of the process about the evaluation will help both the good operation of the process and the achievement of the expected result. A gradual grading key which can be used in the evaluation of the portfolio is given below.
Portfolio Assessment
Exemplary
The cover letter shows who the writer is and how the portfolio displays the writer's learning process as a learner. A summary is presented as a report that shows the student's learning and development. A table displaying the page numbers and contents is added. Essentially, everything that should be included in the student file is complete. The file is prepared in a systematic and organized way. In the optional part, the student displays his or her best work, or the less successful works together with the reasons for including them. Each work is added to the portfolio with the date of its preparation. The audio, verbal, and written work drafts and their revised versions are included in the file. Reflections reveal the different stages of the learning process. A short justification is added along with each selected element. The learner's performance and progress as a learner are addressed, as well as the learner's feelings about himself or herself.
Average
The cover page shows who the writer is and how the portfolio displays the writer's learning process as a learner. The summary report displaying the student's learning and development does not fully reflect the situation. The table showing the page numbers and contents is missing. Essentially, everything that should be included in the student file is complete. The file is prepared in a systematic and organized way. In the optional part, the student displays his or her best work, or the less successful works together with the reasons for including them. Each work is added to the portfolio with the date of its preparation. The file lacks the audio, verbal, and written work drafts and their revised versions. Reflections reveal the different stages of the learning process. A short justification is added along with each selected element. The learner's performance, progress as a learner, or feelings about himself or herself are addressed.
Mine KIR
Needs to be Improved
The cover letter shows who the writer is and how the portfolio displays the writer's learning process as a learner. A summary is presented as a report that shows the student's learning and development. The table showing the page numbers and contents is not included. Essentially, everything that should be included in the student file is complete. The file is sloppy and does not have a specific organization. In the optional part, the student does not display his or her best works. Not every work was added to the portfolio with the date of its preparation. The file lacks the audio, verbal, and written work drafts and their revised versions. Reflections do not reveal the different stages of the learning process. No short justification is added along with each selected element. The learner's performance, progress as a learner, or feelings about himself or herself are not included.
Teacher's remarks:
Figure 6.2 The Gradual Grading Key for the Complete Portfolio
It is important for the students to analyse their own work in portfolio applications. Below is a self-assessment form prepared for the students to evaluate themselves.
Table 6.1 Student Self-Assessment Form for the Complete Portfolio
Dear students,
Below are some sentences about your portfolio. Consider each sentence with your portfolio in mind and mark (X) the grade (1-4) you deem appropriate.
- The cover page shows who I am and how I present my learning process as a learner.
- A summary report that shows my learning and development is presented.
- A table that shows the page numbers and contents is included.
- Essentially, everything that should be found in the student file is complete.
- The file is prepared in a systematic and organized way.
- In the part where the optional elements are found, I either showed my best works or presented my less successful works with the reasons.
- I stated the date each work was made when adding it to the portfolio.
- I included the audio, verbal, and written product drafts and their revised versions in the file.
- My reflections reveal the different steps of my learning process.
- I added a short justification for each element I chose.
- I added my feelings regarding my performance, my development as a learner, or myself.
Limitations of a Portfolio
The most important factor in the use of the portfolio as a tool of teaching and evaluation is that it is created by the honest work of the student. It is normal for students to get help and be affected by their environment while performing the work; it is not possible for them to rely only on their own minds. What matters is that the work is not copied from another work. If cheating goes unrecognized, students will be encouraged to cheat, which in the long run will negatively affect their achievement. Therefore, carefully tracking the process and giving timely and correct feedback will ensure that the student produces unique works and that the teacher evaluates them correctly. When the learning processes are taken into consideration, the portfolio provides the student with the opportunity to direct his or her own learning (Demirören et al., 2009). The teacher's attitude should aim to provide opportunities for the students to conduct appropriate works, and successful applications should be reinforced with positive and motivational feedback. If the students are evaluated only through the grade they receive from the portfolio at the end of the process, they may continue their work merely to meet the teacher's expectations. Students who merely try to fulfil the teacher's expectations will not have the opportunity to reflect themselves in the way and at the level they want, and thus the developmental dimension of the portfolio will be disregarded. Ensuring this balance is also important for the autonomy of the students. One of the most fundamental behaviours expected from the students when preparing portfolios is self-reflection. The students may refrain from and be
unwilling to write about what they feel is lacking or weak, and this may cause fear and anxiety. Careful and large-scale observations, interviews, and guidance by the teacher can minimize this condition. Therefore, the feedback mechanism should be effective and continuously active. Preparing and evaluating portfolios are time-consuming tasks. Considering that teachers do not have only one class or course but several different classes and courses, they will need even more time for portfolio work. In addition, an expert other than the teacher should analyse the files to ensure consistency and objectivity. Depending on the available resources, the opinion of another teacher may be needed at least for the preparation of the assessment criteria. In certain situations, storage space may be needed due to differences in applications; during the planning stage, the teacher may need to determine the storage space in advance based on the needs and conditions of the school. It is difficult to evaluate the cognitive and metacognitive gains expected from the students in portfolios. A grading key that is prepared explicitly and clearly can be used to prevent subjective grading. Since the resulting grades will be similar when the grading is done by at least two people, subjectivity in grading can be prevented. Preparing the grading key in this way will also eliminate problems that may otherwise be encountered.
Electronic Portfolios
Recently, computer-aided applications have become more common in educational environments due to the changes in computer technology. This increase in the use of computer technology can help eliminate certain problems encountered while preparing portfolios.
The use of computers provides great convenience for practitioners regarding the storage problem, preservation of works without damage, fast and easy access to the information in the file, and file sharing. Portfolios created by the students in an electronic environment in line with predetermined criteria are called electronic portfolios. If one compares the electronic portfolio with the file-type portfolio in terms of their preparation processes, there is not much difference between them; in short, it would not be wrong to call electronic portfolios file-type portfolios prepared in an electronic environment. The facts that student products can be kept in smaller spaces, that they can easily be copied to environments accessible online, and that teachers can evaluate them whenever
they want pave the way for portfolio use to become more common. E-portfolios, just like file-type portfolios, have benefits as well as limitations. Preparing them requires computer technology in the classroom environment as well as students who can actively use it. When the students who will prepare an e-portfolio have already prepared a classic portfolio (and thus know the tasks, necessities, and steps of the process), the process will run more smoothly. While it is important for the students to be able to use the computer, it is equally important for the teacher to be able to use it and for the school to meet the equipment requirements. Other limitations of the application are the time-consuming transfer of works into the electronic environment, costly development, the storage space required on the computer, and the possibility of works being damaged by malevolent people (Cooper, Edgett, & Kleinschmidt, 1998; Heath, 2005).
Conclusion
The portfolio, an alternative assessment and evaluation method, is a file that includes the evaluation of the process rather than the product itself. Although the ways of using portfolios differ, generally they must be individual, progressive, selective, double-valued, reflective, and interactive. When the ways of use and purposes of portfolios with different characteristics are taken into consideration, portfolios can be analysed under three categories: developmental portfolios, showcase portfolios, and professional portfolios. These portfolios may differ in their application steps and ways of application. The definition and features presented here describe the qualities of portfolios that can be prepared at the university level. Supportive questions (What did I learn? What did I not learn? What did I do to make up for the things I did not learn?)
regarding how reflection, as a general characteristic of portfolios, can be performed, together with the reflective writing activity, provide students with the opportunity to think over their learning. Individuals whose level of awareness increases assume responsibility for their later learning and can determine the characteristics they lack or need to improve. Students' learning and thinking over their actions support critical thinking and metacognitive thinking as part of the 21st-century skills.
References
Barrett, H. (2005). White paper: Researching electronic portfolios and learner engagement. Retrieved from http://google.electronicportfolios.com/reflect/whitepaper.pdf
Belanoff, P., & Dickson, M. (1991). Portfolios: Process and product. Portsmouth, NH: Boynton/Cook.
Birenbaum, M., & Dochy, F. R. C. (1996). Alternatives in assessment of achievement, learning processes and prior knowledge. London: Kluwer Academic Publishers.
Chang, C. C. (2001). A study on the evaluation and effectiveness analysis of web-based learning portfolio (WBLP). British Journal of Educational Technology, 32(4), 435-458.
Columba, L., & Dolgos, K. (1995). Portfolio assessment in mathematics. Reading Improvement, 32(2), 17-76.
Cooper, R. G., Edgett, S. J., & Kleinschmidt, E. J. (1998). Best practices for managing R&D portfolios. Research-Technology Management, 41(4), 20-33.
Demirören, M., Melek, A., Paloğlu, Ö., & Koşan, A. (2009). Bir öğrenme ve değerlendirme yöntemi olarak "portfolyo" [Portfolio as a learning and assessment method]. Ankara Üniversitesi Tıp Fakültesi Mecmuası, 62(1), 19-24.
Doğan, F. E. (2001). Suggested portfolio development model for ELT students at Gazi University (Unpublished master's thesis). Gazi Üniversitesi, Eğitim Bilimleri Enstitüsü, Ankara.
Hamp-Lyons, L. (1996). Applying ethical standards to portfolio assessment of writing in English as a second language. In M. Milanovich & N. Saville (Eds.), Performance testing and assessment: Selected papers from the 15th Language Testing Research Colloquium (pp. 151-164). Cambridge: Cambridge University Press.
Hatton, N., & Smith, D. (1995). Reflection in teacher education: Towards definition and implementation. Teaching and Teacher Education, 11(1), 33-49.
Heath, M. (2005). Are you ready to go digital? The pros and cons of electronic portfolio development. Library Media Connection, 23(7), 68-72.
Kan, A. (2007). Portfolyo değerlendirme [Portfolio assessment]. Hacettepe Üniversitesi Eğitim Fakültesi Dergisi, 32, 133-144.
Kaptan, F., & Korkmaz, H. (2000). Fen öğretiminde tümel (portfolio) değerlendirme [Portfolio assessment in science teaching]. Hacettepe Üniversitesi Eğitim Fakültesi Dergisi, 19, 212-220.
Kır, M. (2014). Biyoloji öğretmenlerinin yansıtıcı düşünme hakkındaki görüşlerinin incelenmesi [An examination of biology teachers' views on reflective thinking]. Journal of Research in Education and Teaching, 3(4), 225-233.
Kutlu, Ö., Doğan, C. D., & Karakaya, İ. (2010). Öğrenci başarısının belirlenmesi: Performansa ve portfolyoya dayalı durum belirleme [Determining student achievement: Performance- and portfolio-based assessment]. Ankara: Pegem Akademi.
Lankes, A. M. D. (1995). Electronic portfolios: A new idea in assessment. ERIC Clearinghouse on Information and Technology. ED 390377.
Moon, J. A. (2004). A handbook of reflective and experiential learning: Theory and practice. New York: Routledge.
Gronlund, N. E. (1998). Assessment of student achievement. Upper Saddle River, NJ: Pearson Education.
Odabaşı Çimer, S. (2011). The effect of portfolio on students' learning: Student teachers' views. European Journal of Teacher Education, 34(2), 161-176.
Paulson, L. F., Paulson, P. R., & Meyer, C. (1991). What makes a portfolio a portfolio? Educational Leadership, 48(5), 60-63.
Phi Delta Kappa International & Ball State University Teachers College. (2001). Student teacher's portfolio handbook and evaluation of student teacher's guidebook and implementation guide for evaluation of student teachers. Muncie, IN: Ball State University; Bloomington, IN: Phi Delta Kappa International. (ERIC Document Reproduction Service No. 474198).
Şenel Çoruhlu, T., Er Nas, S., & Çepni, S. (2009). Fen ve teknoloji öğretmenlerinin alternatif ölçme-değerlendirme tekniklerini kullanmada karşılaştıkları problemler: Trabzon örneği [Problems faced by science and technology teachers in using alternative assessment techniques: The case of Trabzon]. Yüzüncü Yıl Üniversitesi Eğitim Fakültesi Dergisi, 6(1), 122-141.
Tillema, H. H., & Smith, K. (2000). Learning from portfolios: Differential use of feedback in portfolio construction. Studies in Educational Evaluation, 26, 193-210.
Chapter 7
Project Assignments and Its Evaluation: Sample Presentation and Implementation
Ufuk TÖMAN
Introduction
In addition to developments in every field today, rapid development has been experienced in the field of information. New information is revealed all the time, and the current accumulation of knowledge is increasing rapidly (Blumenfeld et al., 1991; Çepni, 2014). It is therefore important to raise individuals who can adapt themselves to this growing knowledge and the developing technology (Peterson & Myer, 1995). For these reasons, the new curricula aim at training individuals who do not merely memorize, but who can establish links between the knowledge learned in school and everyday subjects, interpret them, and, like a scientist, generate solutions to the problems they encounter by using the steps of the scientific method. Moreover, the curricula include teaching methods and techniques that enable students to be more active and to learn by doing and inquiry (Boaler, 1998; Doppelt, 2003; Taşkın, 2012). One of these teaching methods and techniques is the project assignment, whose aim is to train autonomous individuals who can engage in lifelong learning and solve the problems they encounter (Demirel, 2011). In this context, instead of having a specific content transferred to them, students should be offered educational opportunities through project assignments that aim at a complete and detailed understanding of the content and at acquiring knowledge. Project assignments help students perceive the events occurring in their environment, generate solutions to problems through analytical thinking, and become individuals who can inquire, observe, and solve problems (Çepni, 2014).
Project Assignments
Project assignments are mental and physical activities carried out by the students, in groups or individually, through scientific methods (understanding the problem, recognizing and limiting the problem, identifying the problem, constructing hypotheses, collecting data, analysing data, drawing conclusions and generalizations) based on the concepts and principles they learn in different disciplines, under real-life or similar conditions (Aydın, Demir Atalay, & Goksu, 2017; Willard & Duffrin, 2003). Project assignments focus on the fundamental concepts and principles of one or more disciplines and may involve the learning goals of more than one course in a single scenario. They help students research the solution of a problem, acquire knowledge, and introduce a new product by integrating this knowledge into a meaningful whole. They also provide opportunities for students to work autonomously and construct their own knowledge, and enable them to reach the highest point with genuine products (Demirel, 2011; Walker & Leary, 2009). Project assignments are based on learning through experience; that is, learners use their own knowledge. The learners actualize this by identifying the problem, investigating ways of solving it, administering the research, analysing the data, and associating prior knowledge with new knowledge (Brown, 1992; Yurtluk, 2005).
Steps of Project Implementation
Choosing the Problem/Subject: This is the step in which the problem concerning the subject is determined. While choosing a topic, it should be considered whether the topic draws students' attention, can generate solutions to problems encountered in everyday life, and is observable, applicable, and rational (Helm & Katz, 2001).
Preparation of the Project: This is the step in which the conditions necessary for the project are provided.
This step consists of stages such as gathering information/literature review, choosing the method, and preparing the calendar (Karademir, 2016; Ozdener & Ozcoban, 2004).
Carrying Out the Project/Implementation: This is the step in which the conclusion is drawn by following the steps of a scientific study.
Presentation of the Project/Report: This is the step in which the studies carried out on the problem, the results obtained, and the generated solutions are reported and presented in class (Liu, 2016; Ozdemir, 2006).
Evaluation of the Project: This is the step in which the product is evaluated.
The Characteristics of the Project
- Project assignments are student centred, and active student participation is in the forefront. The teacher is the facilitator. Students carry out free studies (exhibitions, presentations, etc.) under the guidance of their teachers.
- Project assignments are activities that require scientific study and highly motivated and interested students.
- In project assignments, students produce a product over a period of time by using the steps of the scientific method and the other disciplines according to the chosen topic, and they present this product after it is reported.
- Project assignments evaluate both the process and the product.
- Project assignments require planning the available resources efficiently by considering the appropriateness of purpose, cost, and time conditions, and controlling the process well.
- Project assignments involve both in-class and out-of-class settings.
- Because the student identifies the problem and solves it himself during the project assignment, the hypotheses he constructs can be very different and genuine results can be obtained.
- Students choose a topic either in groups or individually by considering their interests and needs.
- Students who study in line with research strategies in particular acquire higher-order thinking skills with the help of these assignments.
- The problem question determined according to the topic must be open to different ways of solution, not only a one-way solution.
- Project assignments can be adapted to different types of intelligence. They offer students learning opportunities appropriate to their learning styles.
- While choosing a topic for a project assignment, it should be considered that the topic must enable students to think and do research; it must be fun and interesting; it must reveal individual differences and be supportive of them; it must provide opportunities for students to work collaboratively without damaging their autonomy; it must develop students' social, individual, and collaborative skills; and special attention should be paid to making the results or products compatible with the real world (Ozarslan & Cetin, 2018; Sylvester, 2007).
Types of Projects
Equipment and Tool Projects: Projects such as a car running on solar energy or generating electricity from wind energy, which require the use of the technological design cycle.
Project-Based Teaching-Learning Projects: These include projects such as determining the science subjects that elementary school students have difficulty learning, eliminating misunderstandings, developing worksheets, and providing conceptual change (Scott, 1994).
Intellectual, Problem, or Research Projects: These include projects such as the environmental damage of waste batteries, the effects of radiation on living things, and the water problem.
Projects Based on Aesthetic Issues: These include projects such as landscape planning, landscape projects, ancient water canals, and urban transformation (Kemaloglu, 2006).
Study (or Operational) Projects: These include projects such as how to make teachers' working space more comfortable, how to make people work more efficiently, and how to increase individuals' capacity utilization rate (Çepni, 2012).
Benefits of Project Assignments
- Students gain experience with real-life problems.
- Students acquire a scientific attitude.
- Students gain effective speaking skills.
- Students develop self-confidence.
- Students develop higher-order thinking skills (creative, divergent, reflective, critical, analytical, and scientific thinking).
- With the help of project assignments, students learn to do research, reach resources, establish effective communication with people, work together, share information, take responsibility, and fulfil responsibility.
- Students develop skills such as thinking, offering solutions, decision-making, benefiting from different sources (data collection skills), and making observations.
- Students can reach the highest level of the cognitive domain (the level of evaluation) and achieve permanent learning.
- Students actualize much more permanent learning through experience and doing.
- Students gain organized and regular study habits.
- Students gain individual or collaborative (team) work habits.
- Because project assignments provide opportunities to work in groups, they contribute to the students' socialization and help them come to agreement on a subject.
- As students work on a subject they choose freely, their motivation and interest increase.
- As students complete the projects with their own efforts in the process, the projects facilitate students' inner development.
Limitations of Project Assignments
- Project assignments are time-consuming.
- Digression and wandering away from the subject can be observed.
- Students' prior learning can be inadequate for the solution of a new problem.
- There can be difficulties in supplying the equipment and tools required for the project.
- Because the teacher cannot instantly intervene in the difficulties that students encounter outside the classroom, problems can occur.
- Problems may occur if students fail to report their work appropriately. In particular, problems may arise during the evaluation step.
- Problems may occur when the students' presentation skills are not good.
- Assigning projects that require psychomotor skills may lead to problems.
Project Samples (Samples of Presentation and Implementation)
Project Sample 1
Name of the Project: Earthquake Alarm
Research Question (Problem): Lives are lost because buildings are not built to be earthquake resistant and there is no alarm to warn people.
Hypothesis: During an earthquake, an alarm in the house can ring and warn the people.
Purpose of the Project: To construct earthquake-resistant buildings and to minimize the loss of life by warning the people at home.
Summary: The function of an alarm that rings during an earthquake is to warn the people. Very big tremors occur during an earthquake. With the help of the wheel placed under the house, the house moves back and forth, which prevents it from being destroyed. The salt water in the bowl shakes with the tremor, so the electrodes placed on the sides of the bowl get wet, conduct electricity, and make the alarm ring.
Figure 7.1 Earthquake Alarm Construction
In the mechanism prepared as in the figure, one of the conducting wires is immersed in salt water while the other remains just above the surface of the water without touching it. Because salt water conducts electric current, even a tiny tremor causes the wire above the surface to touch the water, closing the circuit and ringing the bell. This mechanism can be placed in a house model and used as an earthquake alarm.
Result: Damage to the house during the earthquake was prevented. Moreover, an alarm was made that warns people during the earthquake.
Suggestions: Our houses must be strengthened and made resistant to earthquakes. Possible losses of life can be minimized by installing an earthquake alarm in every house (URL-1, 2018).
Project Sample 2
Name of the Project: Non-Exploding Balloon
Research Question (Problem): How does pressure change with the surface area?
Hypothesis: I think that as the surface area increases, the pressure will decrease.
Purpose of the Project: To demonstrate how pressure changes with the surface area by setting up an experimental design.
Summary: Two identical balloons are blown up until they are the same size. The first balloon is placed on a single nail and the other is placed on the mechanism with a bed of nails shown in the picture. Equal force is exerted on the balloon on the single nail and on the balloon placed on the bed of nails, and it is observed whether the balloons pop.
Figure 7.2 Relationship Between Surface Area and Pressure
Design of the Experiment: A mechanism is prepared with boards as in the picture. A rigid foam floor is used so that nails can be inserted easily. The nails are inserted into the foam floor to make a bed of nails.
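The inverse relationship between contact area and pressure that this experiment demonstrates can also be checked numerically with P = F / A. The sketch below uses an illustrative force and per-nail tip area; both are assumed example values, not measurements from the project.

```python
# Illustrative check of the pressure relation P = F / A for the
# bed-of-nails experiment. FORCE and TIP_AREA are assumed example
# values, not measurements from the project described above.

def pressure_on_balloon(force_n: float, nail_count: int, tip_area_m2: float) -> float:
    """Pressure (in pascals) when a force is spread over nail_count nail tips."""
    total_contact_area = nail_count * tip_area_m2
    return force_n / total_contact_area

FORCE = 10.0      # newtons pressing the balloon down (assumed)
TIP_AREA = 1e-6   # roughly 1 mm^2 of contact per nail tip (assumed)

for nails in (1, 25, 100):
    print(f"{nails:>3} nail(s): {pressure_on_balloon(FORCE, nails, TIP_AREA):,.0f} Pa")
```

With 100 nails the same force is spread over 100 times the contact area, so the pressure is one hundredth of the single-nail value, which is why the balloon on the bed of nails survives.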
Result: It was observed that as the number of nails increased, the balloons did not pop: as the surface area increases, the applied pressure decreases, so the balloons do not pop. When a force was applied, the balloon on the single nail popped, but the balloon placed on the bed of nails did not (URL-2).
Project Evaluation
The evaluation of project assignments differs from evaluation with traditional methods (Gultekin, 2005). The process is as important as the product in a project. Thus, as with all contemporary approaches, both the product and the process, including the steps taken to obtain the product, must be evaluated. The completed project must be evaluated both by the teacher and by the student (Chen, 2006; Memişoğlu, 2011). Project evaluation is not only related to the students' levels of understanding of concepts and subjects; it also concerns the development and recording of the real-life experiences that students need outside their classes and school life (Knoll, 1997). The project evaluation stage is not intended only for the teacher's evaluation; it also provides an opportunity for students to evaluate themselves. Evaluation reflects what and how much learners learn outside their schoolwork, so the learners' development and improvement can be documented. The best evaluation is one that enables students to evaluate themselves with questions such as "What do I understand?" and "How do I do it?" (Abbott & Ryan, 1999; Yurtluk, 2003). Project evaluation is not done with a paper-and-pencil test; there must be different forms of evaluation. All students differ from one another, whatever their skills and experiences. Thus, the evaluation activities must be specific enough and provide appropriate and effective feedback. Assessors can be the students themselves, their peers, teachers, and domain experts (Glassman & Whaley, 2000).
The units of evaluation can be individual learners, student groups, and the whole class. The components of the evaluation are written works (formal school assignments or homework, and informal sources and journals), observations (of group activities and individual studies), presentations, informal discussions and questions, project designs and questions, and final assignments (Toolin, 2004). Preparing a project is a process; thus, process evaluation must be carried out. In particular, students should prepare portfolios, and these portfolios should be graded
according to the scoring criteria (or rubrics). A project evaluation scale prepared by Çepni (2012) is given below.
Table 7.1 Project Evaluation Scale
Name of the Project:
Student Name and Surname:
Criteria (Point)
Motivation
- Interest was raised in the project (5 p).
- Notes were taken about the explanations of the study (5 p).
- Questions were asked in case of incoherent situations (10 p).
- Work was shared (10 p).
Planning
- Time was planned appropriately (5 p).
Gathering information
- Information sources were reached (5 p).
- Research was carried out about the principles, concepts, and rules related to the topic (5 p).
- Necessary information was selected after gathering the information and supporting materials (10 p).
Report
- Language and punctuation were checked in the report (5 p).
- Written and visual components were compatible with each other (10 p).
- References were included (5 p).
Presentation
- Different activities were involved in the presentation to explain the subject (10 p).
- A summary was prepared to explain the subject (5 p).
- Besides the explanations, supporting visual materials were used in the presentation (5 p).
- Time was used effectively during the presentation (5 p).
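Totalling such a rubric is simple arithmetic. In the sketch below, the category maxima reflect one reading of how Table 7.1 groups its point values (summing to 100), and the awarded scores are invented sample values for illustration only.

```python
# Totalling a project evaluation rubric. The category maxima reflect one
# reading of Table 7.1's point groupings; the awarded scores are invented.

RUBRIC_MAX = {
    "Motivation": 30,
    "Planning": 5,
    "Gathering information": 20,
    "Report": 20,
    "Presentation": 25,
}

def total_score(awarded: dict) -> int:
    """Sum awarded points after checking each category stays within its maximum."""
    for category, points in awarded.items():
        if not 0 <= points <= RUBRIC_MAX[category]:
            raise ValueError(f"{category}: {points} is outside 0..{RUBRIC_MAX[category]}")
    return sum(awarded.values())

sample = {"Motivation": 25, "Planning": 5, "Gathering information": 15,
          "Report": 18, "Presentation": 20}
print(f"Total: {total_score(sample)} / {sum(RUBRIC_MAX.values())}")
```

Validating each category against its maximum before summing catches scoring mistakes early, which matters when, as the chapter recommends, at least two people grade the same project and their totals are compared.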
References
Abbott, J., & Ryan, T. (1999). Constructing knowledge, reconstructing schooling. Educational Leadership, 57(3), 66-69.
Aydın, S., Demir Atalay, T., & Göksu, V. (2017). The effects of project-based learning process on the academic self-efficacy and motivation of middle school students. Bartin University Journal of Faculty of Education, 6(2), 676-688.
Blumenfeld, P., Soloway, E., Marx, R., Krajcik, J., Guzdial, M., & Palincsar, A. (1991). Motivating project-based learning: Sustaining the doing, supporting the learning. Educational Psychologist, 26(3&4), 369-398.
Boaler, J. (1998). Alternative approaches to teaching, learning, and assessing mathematics. Paper presented at the European Conference for Research on Learning and Instruction, Athens, Greece.
Brown, A. L. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. Journal of the Learning Sciences, 2, 141-178.
Chen, H.-L. (2006). Projects to think with and projects to talk with: How adult learners experience project-based learning in an online course (Unpublished doctoral dissertation). Kansas State University College of Education, Manhattan, KS.
Çepni, S. (2012). Introduction to research and project work (6th ed.). Trabzon: Celepler Publishing.
Çepni, S. (2014). Teaching science and technology from theory to practice (11th ed.). Ankara: Pegem Academy Publishing.
Demirel, Ö. (2011). Curriculum development in theory (15th ed.). Ankara: Pegem Academy Publishing.
Doppelt, Y. (2003). Implementation and assessment of project-based learning in a flexible environment. International Journal of Technology and Design Education, 13, 255-272.
Glassman, M., & Whaley, K. (2000). Dynamic aims: The use of long-term projects in Dewey's educational philosophy. Early Childhood Research and Practice, 2(1), 24-32.
Ufuk TÖMAN
Gultekin, M. (2005). The effects of project-based learning on learning outcomes in the 5th grade social studies course in primary education. Educational Sciences: Theory and Practice, 5(2), 548-556.
Helm, J. H., & Katz, L. G. (2001). Young investigators: The project approach in the early years. New York: Teachers College Press.
Karademir, E. (2016). Investigation the scientific creativity of gifted students through project-based activities. International Journal of Research in Education and Science, 2(2), 416-427.
Kemaloglu, E. (2006). Project work: How well does it work? Assessment of students and teachers about main course project work at Yildiz Technical University School of Foreign Languages Basic English Department (Unpublished master's thesis). Bilkent University Institute of Educational Sciences, Ankara.
Knoll, M. (1997). The project method: Its vocational education origin and international development. Journal of Industrial Teacher Education, 34(3), 59-80.
Liu, X. (2016). Motivation management of project-based learning for business English adult learners. International Journal of Higher Education, 5(3), 137-145.
Memişoğlu, H. (2011). The effect of project based learning approach in social sciences class on the student success and memorability. International Journal of Humanities and Social Science, 1(21), 295-307.
Ozarslan, M., & Cetin, G. (2018). Effects of biology project studies on gifted and talented students' motivation toward learning biology. Gifted Education International, 2, 1-17.
Ozdemir, E. (2006). An investigation on the effects of project-based learning on students' achievement in and attitude towards geometry (Unpublished master's thesis). Middle East Technical University Graduate School of Natural and Applied Sciences, Ankara.
Ozdener, N., & Ozcoban, T. (2004). A project-based learning model's effectiveness on computer courses and multiple intelligence theory. Education Sciences: Theory & Practice, 4, 147-170.
Peterson, S. E., & Myer, R. A. (1995). The use of collaborative project-based learning in counselor education. Counselor Education and Supervision, 35(8), 150-158.
Scott, C. (1994). Project-based science: Reflections of a middle school teacher. Elementary School Journal, 57(1), 1-22.
Sylvester, A. (2007). An investigation of project-based learning and computer simulations to promote conceptual understanding in eighth grade mathematics (Unpublished PhD dissertation). Kansas State University College of Education, Manhattan, Kansas.
Taşkın, Ö. (2012). New approaches in science and technology teaching (2nd ed.). Ankara: Pegem Akademi Publishing.
Toolin, R. E. (2004). Striking a balance between innovation and standards: A study of teachers implementing project-based approaches to teaching science. Journal of Science Education and Technology, 13(2), 179-187.
URL-1, http://www.fenokulu.net/yeni/Fen-Konulari/Konu/Deprem-AlarmiYapimi_261.html. Accessed 25 December 2018.
URL-2, http://www.fenbilim.net/basinc_nedir/. Accessed 27 December 2018.
Walker, A., & Leary, H. (2009). A problem based learning meta-analysis: Differences across problem types, implementation types, disciplines, and assessment levels. Interdisciplinary Journal of Problem-based Learning, 3(1), 12-33.
Willard, K., & Duffrin, M. W. (2003). Utilizing project-based learning and competition to develop student skills and interest in producing quality food items. Journal of Food Science Education, 2(69), 69-73.
Yurtluk, M. (2003). The effect of project-based learning on mathematics lesson learning process and student attitudes (Master's thesis). Hacettepe University, Institute of Social Sciences, Ankara.
Yurtluk, M. (2005). New orientation in education (Project based learning) (Ö. Demirel, Ed.). Ankara: Pegem A Publishing.
Chapter 8
Using Many-Facet Rasch Measurement Model in Rater-Mediated Performance Assessments
Beyza AKSU DÜNYA
Introduction

Performance tasks are frequently used in science assessments to measure higher-order thinking skills. Suppose a performance assessment situation in which examinees are asked to build robots from safe materials such as Legos, plastic boxes, or electronic sensors (e.g. a competition held at Western Washington University in 2012), and each product is scored by a number of raters using a rating scale. In this type of assessment, it is inevitable that a student's score will depend not only on his/her proficiency in designing the robot, but also on rater characteristics, such as a rater's overall severity or his/her tendency to avoid the extreme ends of the rating scale (Eckes, 2012). This is a typical example of a rater-mediated assessment (Engelhard, 2002), also known as a performance task (McNamara, 2006). In rater-mediated assessments, there is one more source of variation, in addition to examinees and questions, that may impact scores: raters. Raters who evaluate students' performance on tasks such as designing and implementing a science project may introduce errors, known as rater effects, into the assessment process. In such cases, assessment outcomes depend on the specific raters who provide the ratings, and this threatens the validity and fairness of assessments. Constructing valid and fair science measures in rater-mediated assessments depends heavily on applying well-designed methods to deal with sources of unwanted variability in scores. Thus, this chapter aims to contribute to the field of rater-mediated performance assessments. The chapter is designed to introduce readers to the Many-Facet Rasch Model (MFRM), which is particularly well suited to analysing rater-mediated assessment data. A facet is defined as any factor in the measurement process that is assumed to affect scores in a systematic way (Wolfe & Dobria, 2008). In rater-mediated assessments, the rater facet often constitutes a substantial source of unwanted
variance in scores, such as severity or leniency of the raters (Eckes, 2011). In addition, the rater facet may interact with other facets in many ways. For example, a rater may apply the scoring criteria differently when evaluating different gender groups. In this chapter, basic concepts related to MFRM will be presented without technical details in order to:
• present to readers the benefits of using the MFRM approach in rater-mediated assessments;
• discuss the kinds of questions that can be addressed using MFRM, and which parts of the output generated by the FACETS software should be used to answer those questions.
Conceptual Explanation of Rasch Measurement Model and Many Facet Rasch Model (MFRM)

MFRM refers to a class of measurement models suitable for a simultaneous analysis of multiple variables (or facets) potentially having an impact on assessment outcomes (Eckes, 2011). MFRM is an additive linear model that is based on logistic transformations of observed ratings to a logit scale. The MFRM extends the basic Rasch model for dichotomous data to facilitate the study of other facets of interest in assessments that typically involve human judgment. The model makes it possible to investigate potential sources of measurement error, such as raters and rating scales, that introduce construct-irrelevant variance into the ratings. MFRM can also separate out each facet's contribution to the assessment setting and examine it independently of other facets, to determine to what extent each facet is functioning as intended. MFRM is considered a member of the family of Rasch models, all of which are grounded in the dichotomous Rasch model (Rasch, 1980). Thus, before detailing MFRM, the fundamentals of the dichotomous Rasch model should be presented.

Dichotomous Rasch Model

Assume a science exam where examinees are asked to respond to items and the items are scored dichotomously, which means that responses are either correct (with a score of 1) or incorrect (with a score of 0). In the Rasch Measurement Model, developed by Georg Rasch (1960/1980), the probability of a correct answer depends on examinee proficiency and item difficulty. Mathematically,

$$P(X_{ni} = 1) = \frac{\exp(\theta_n - \beta_i)}{1 + \exp(\theta_n - \beta_i)}$$
where $X_{ni}$ is the score of examinee n on item i, $\theta_n$ is the proficiency of examinee n, and $\beta_i$ is the difficulty of item i. In this basic Rasch model, examinee proficiency and item difficulty are the two parameters estimated from item response data. The estimated parameters are expressed on the same, ruler-like scale, called the "logit scale". A logit is the measurement unit of the scale for any parameter specified in the measurement model. According to the model, if an examinee's proficiency equals the item's difficulty, then $\theta_n - \beta_i = 0$ and, since exp(0) = 1, the examinee has a 0.5 probability of a correct response to the item. The further an examinee's proficiency exceeds an item's difficulty, the better the examinee's chance of a correct response. This summarizes the basics of the Rasch measurement model.

Many Facet Rasch Model (MFRM)

A facet is an aspect of any assessment situation that may have an influence on the measurement process. A facet can be the raters in rater-mediated assessments, the performance tasks, or examinee-related characteristics such as race, gender, etc. (Myford & Dobria, 2012). While MFRM allows researchers to check each potential facet's contribution to the observed ratings, special consideration is given to the rater facet. Why are raters a concern in rater-mediated assessments? Performance rating is sophisticated and complex mental work which includes observing, organizing, weighting, integrating, and drawing inferences (Myford & Wolfe, 2003). As stated by Hill et al. (1988), raters do not function as neutral and objective recorders of some physical reality (as cited in Dobria, 2011, p. 4). The act of rating is very likely to involve errors, called rater errors or rater biases, as suggested in the relevant literature (Myford & Wolfe, 2003). Therefore, the raters' judging process, which carries unavoidable traces of their subjectivity, should be monitored to identify potential sources of rater effects/errors.
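The Rasch probability above can be sketched in a few lines of code. This is an illustrative sketch only, not part of the chapter; the function and variable names are my own.

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """P(correct) under the dichotomous Rasch model, with examinee
    proficiency `theta` and item difficulty `b`, both in logits."""
    return math.exp(theta - b) / (1.0 + math.exp(theta - b))

# When proficiency equals difficulty, the probability is exactly 0.5.
print(rasch_probability(1.0, 1.0))                 # -> 0.5
# The further proficiency exceeds difficulty, the higher the probability.
print(round(rasch_probability(2.0, 1.0), 3))       # -> 0.731
```

Note that only the difference between proficiency and difficulty matters, which is why both parameters can be placed on the same logit ruler.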
Rater effect

When there is fit between the observed data and the MFRM model in a rater-mediated assessment, examinee proficiency is expected to be independent of the particular raters who rated the examinee. However, this ideal cannot be reached most of the
time. Various rater effects that introduce irrelevant and unwanted variation into the scores have been investigated in the literature (Myford & Wolfe, 2003). Among them, the leniency/severity effect, the central tendency effect, restriction of range, and bias have been given the greatest emphasis (Myford & Wolfe, 2003). Defined as the tendency of a rater to assign higher or lower ratings, on average, than those assigned by other raters, severity/leniency is commonly considered the most pervasive and detrimental rater effect in performance assessments (Dobria, 2011). Various factors contribute to a rater's severity or leniency, including professional experience. In some circumstances, the most experienced or senior rater may also be the most severe (Eckes, 2012). Research on rater severity in performance-based assessments has a fairly long history. Looney (1997) applied the many-facet Rasch model to analyse judges' ratings from an Olympic figure-skating competition. Kang and Ahn (1999) examined judges' severity in rating the quality of athletic performances in a gymnastics competition using three facets: place, game, and judge. Raters' overuse of the middle categories of a rating scale while avoiding the extreme ends is referred to as central tendency (Myford & Wolfe, 2003). A similar but slightly different rater effect, called restriction of range, means that ratings are clustered around some point on the scale: ratings clustered too low indicate severity, ratings clustered too high indicate leniency, and ratings clustered at the midpoint indicate central tendency. In addition to these rater effects, rater and examinee characteristics may impact rating behaviour. For example, raters may behave differentially severely or leniently depending on an examinee's gender, race, class, etc. Rater demographics such as age, education level, and experience may also impact their rating behaviour (Myford & Dobria, 2012).

Which questions can be answered with MFRM?
Using MFRM, researchers can answer some important group-level questions regarding raters, which include (Myford & Dobria, 2012):
• Do the raters differ in rating the examinees in terms of their level of severity?
• How consistently are the raters able to distinguish among the examinees in terms of their levels of proficiency?
• How consistently are the raters able to distinguish among the performance tasks in terms of their levels of difficulty?
Along with these group-level questions, some individual-level questions regarding raters can be answered using MFRM:
• If the raters differ in their severity levels, which rater is more severe/lenient than the others?
• Are there any raters whose ratings show a systematic pattern for a subgroup of examinees?

Lastly, MFRM can be used to analyse the interaction between rater characteristics (such as beliefs, personality, experience, etc.) and rating behaviours:
• Does every rater use the scoring criteria in a similar manner for every examinee, regardless of the examinee's and the rater's background, the type of task being rated, time, etc.?

The Mathematical Model

Assume a three-facet situation like the robot competition example above. The log-odds of the transition from one rating scale category to the next is represented by the examinee's proficiency parameter, the rater's severity parameter, and a difficulty parameter for the task(s). A suitable mathematical model can be expressed as follows (Linacre, 1989):

$$\ln\left[\frac{P_{njlk}}{P_{njl(k-1)}}\right] = \theta_n - \beta_l - \alpha_j - \tau_k$$

where
$P_{njlk}$ = probability of examinee n receiving a rating of k from rater j on task l,
$P_{njl(k-1)}$ = probability of examinee n receiving a rating of k-1 from rater j on task l,
$\theta_n$ = proficiency of examinee n,
$\beta_l$ = difficulty of task l,
$\alpha_j$ = severity of rater j,
$\tau_k$ = difficulty of scale category k relative to scale category k-1.

Similar to the basic Rasch model presented above, MFRM is an additive linear model that is based on a logistic transformation of observed ratings to a logit scale (a logarithmic transformation of the ratio of successive rating scale category
probabilities). More facets can be added to the model depending on the assessment situation and the potential sources of variance in the scores. In the model, severity refers to raters who are consistently and significantly harsher or more lenient than other raters. The severity parameter is negatively oriented, which means that the higher the severity parameter value, the lower the expected rating. The threshold parameter $\tau_k$ denotes the difficulty of being observed/scored in category k relative to category k-1. MFRM provides a joint estimate of the various facets on a single linear scale (the logit scale). Thus, researchers can compare facets and draw conclusions about examinees, performance tasks and raters simultaneously.

Interpreting MFRM output

The FACETS software program (Linacre, 2018) is frequently used to run MFRM analyses. The questions highlighted in the section "Which questions can be answered with MFRM?" can be answered by checking the appropriate FACETS output and finding the relevant statistics. For illustrative purposes, an example output was retrieved from the online tutorial document for the FACETS software, established to train interested researchers (Linacre, 2018). In the selected example, 3 senior scientists (the raters) rate 7 junior scientists (examinees) on 5 creativity tasks using a 9-point rating scale. A 3-facet model was run to analyse various questions. Usually, the first information checked in the output is Figure 8.1, which displays every facet on a ruler-like variable map.
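Before turning to the FACETS output, the adjacent-category log-odds model above can be made concrete with a short numeric sketch. This is my own illustrative implementation under a rating-scale formulation; the function names and example values are assumptions, not taken from the chapter or from FACETS.

```python
import math

def mfrm_category_probs(theta, task_difficulty, rater_severity, thresholds):
    """Category probabilities for one examinee-rater-task combination.

    `thresholds` holds tau_1..tau_K, where tau_k is the difficulty of
    category k relative to category k-1; categories run 0..K.  The
    adjacent-category log-odds then equal
    theta - task_difficulty - rater_severity - tau_k, as in the model above.
    """
    eta = theta - task_difficulty - rater_severity
    log_num = [0.0]                      # log-numerator for category 0
    for tau in thresholds:
        log_num.append(log_num[-1] + eta - tau)
    denom = sum(math.exp(v) for v in log_num)
    return [math.exp(v) / denom for v in log_num]

# A severe rater (positive severity) pushes probability mass toward lower
# categories than a lenient rater does, for the same examinee and task.
severe = mfrm_category_probs(0.5, 0.0, 1.0, [-1.0, 0.0, 1.0])
lenient = mfrm_category_probs(0.5, 0.0, -1.0, [-1.0, 0.0, 1.0])
```

The probabilities always sum to 1, and the log-ratio of any two adjacent categories reproduces the right-hand side of the model, so the sketch is term-by-term consistent with the equation.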
Figure 8.1 Variable map
The first column is the logit scale on which each facet element is calibrated. The second column shows examinee proficiency, ordered from the most proficient (Examinee 2) to the least proficient (Examinee 6). The third column shows the 5 creativity tasks, from the most difficult (Enthusiasm) to the least difficult (Attack and Daring). The fifth column shows the raters' severity, ordered from the most severe (Brahe) to the least severe/most lenient (Cavendish). The last column displays the 9-point rating scale category points and the thresholds (horizontal lines). The thresholds are the transition points where the likelihood of receiving the next higher rating begins to exceed the likelihood of receiving the lower rating (Myford & Wolfe, 2003). For example, examinees with proficiency measures between 0 logits and -0.2 logits are more likely to receive a rating of 5 than any other rating across these five tasks.

Question 1: Do the raters differ in rating the examinees in terms of their level of severity?

In a rating situation, raters are supposed to display similar levels of severity/leniency. However, this is not always the case. The indices shown in Figure 8.2, generated by FACETS, can be used to analyse whether the inconsistencies among raters' severity levels are significant or not.
Figure 8.2 Rater Measurement Report

The column titled "measure" in Table 7.1.1 provides each rater's severity level in log-odds units, along with the associated standard error. In the example, Avagadro is the most severe rater, with a severity measure of 115.6. The rater separation index reported under Table 7.1.1 shows the number of different strata of severity among the raters. Since the raters are expected to display a similar level of severity, the expected value of this statistic is 0. In the example, the separation index value
of 2.82 indicates that, among the 3 raters, there are about three statistically distinct strata of severity. The reliability of rater separation is another value used to check the difference among raters in their severity levels. It shows the degree to which the raters can be differentiated in terms of their severity. As with the rater separation index, a value of 0 is expected for this statistic. The last index to check for Question 1 is the fixed (all same) chi-square and its significance, which tests whether the raters differ significantly in their levels of severity. As seen in Table 7.1.1, the chi-square value of 16.4, with a significance value smaller than .01, indicates that the severity measures for the raters were not all the same, after allowing for measurement error. The senior scientists included in this example analysis are well differentiated in terms of their exercised levels of severity.

Question 2: Do raters use the rating scale consistently across all examinees and tasks? How consistent are the raters in distinguishing different examinees/tasks?

With this question, researchers aim to see whether any rater effect exists in the ratings for some examinees or tasks. Table 7.1.1 in FACETS provides several mean-square fit indices for each rater. Among them, the infit (information-weighted) mean-square fit index refers to the consistency of a rater's rating behaviour across all tasks and examinees. The mean-square infit has an expected value of 1 (Linacre, 1989). Values greater than 1 indicate inconsistent rating behaviour, which is called "misfit". The outfit (outlier-sensitive) fit index indicates unexpected ratings from a rater whose ratings are usually consistent. As stated by Myford and Dobria (2012), "a rater may be generally consistent in using the rating scale but occasionally gives an unexplainable rating, given his/her other ratings".
Usually, mean-square fit values greater than 1.4 imply inconsistencies in a rater's ratings and require attention. As seen in Figure 8.2, none of the three raters has fit values greater than 1.4, which means that they use the rating scale consistently across all examinees and tasks. Potential reasons for inconsistencies in ratings include a lack of understanding of the meaning of the scale categories, and thus a failure to distinguish between them, or fatigue toward the end of the performances, and thus a failure to pay attention to the performance (Myford & Dobria, 2012).
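The separation, reliability, and mean-square fit statistics discussed under Questions 1 and 2 can be approximated from a rater measurement report with a small sketch. This is my own illustrative implementation of the standard Rasch formulas, with invented example numbers; FACETS computes and reports these values directly.

```python
import math

def rater_separation(measures, standard_errors):
    """Separation index and separation reliability for rater severity
    measures (in logits) and their standard errors."""
    n = len(measures)
    mean = sum(measures) / n
    observed_var = sum((m - mean) ** 2 for m in measures) / (n - 1)
    error_var = sum(se ** 2 for se in standard_errors) / n
    true_var = max(observed_var - error_var, 0.0)
    separation = math.sqrt(true_var / error_var)   # "true" SD over error SD
    reliability = true_var / observed_var
    return separation, reliability

def fit_mean_squares(observed, expected, variances):
    """Outfit (unweighted) and infit (information-weighted) mean squares
    from observed ratings, model-expected ratings, and model variances."""
    z_sq = [(o - e) ** 2 / v for o, e, v in zip(observed, expected, variances)]
    outfit = sum(z_sq) / len(z_sq)                 # sensitive to outliers
    infit = sum((o - e) ** 2
                for o, e in zip(observed, expected)) / sum(variances)
    return infit, outfit

sep, rel = rater_separation([-0.8, 0.1, 0.7], [0.15, 0.16, 0.15])
strata = (4 * sep + 1) / 3        # statistically distinct severity levels
infit, outfit = fit_mean_squares([5, 4, 9, 3], [4.6, 4.1, 5.0, 3.4],
                                 [1.2, 1.1, 1.3, 1.0])
flagged = infit > 1.4 or outfit > 1.4   # values above ~1.4 warrant attention
```

In the invented fit example, the single unexpected rating of 9 (expected about 5) inflates both mean squares well past 1.4, illustrating why such raters are flagged for review.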
Question 3: Does every rater use the scoring criteria in a similar manner for every examinee, regardless of examinee background, the type of task being rated, time, etc.?

Bias indices are commonly used to detect differential rater effects due to examinee, task and rater characteristics (Engelhard, 2002). Specifically, bias indices are employed to check whether each rater employs a uniform level of severity when rating the performances of individual examinees, or whether some raters appear to exercise differential severity/leniency. To answer this, a bias statement should be added to the FACETS model statement, which allows possible interaction effects between raters and background characteristics to be investigated. FACETS then produces a bias size value in logit units that relates to each rater's overall severity measure. A t-statistic and p-value accompanying each bias size value indicate whether the difference between a rater's overall severity measure and his/her severity measure while rating a particular examinee is statistically significant.

Conclusion

Performance-based, rater-mediated assessments in areas such as science, language testing, and the performing arts pose challenges due to their dependence on subjective judgments. Yet existing research on rater effects in performance-based assessments has a limited scope. This introductory chapter summarized potential rater effects in performance assessments and a way to handle them using MFRM. In addition to examining rater severity, the chapter highlighted possible interactions between raters' severity and examinees that may occur due to raters' expectations and background characteristics. A gentle introduction to interpreting output was provided using example data from the FACETS tutorial. However, MFRM analysis is broader than the content presented here. Interested researchers should consult more technical works and the FACETS tutorial.
References

Dobria, L. (2011). Longitudinal rater modeling with splines. Retrieved from ProQuest Digital Dissertations. (UMI Number: 3472389).
Eckes, T. (2012). Introduction to many-facet Rasch measurement: Analyzing and evaluating rater-mediated assessments. New York: Peter Lang.
Engelhard, G., Jr. (2002). Monitoring raters in performance assessments. In G. Tindal & T. Haladyna (Eds.), Large-scale assessment programs for all students: Development, implementation and analysis (pp. 261-287). Mahwah, NJ: Lawrence Erlbaum Associates.
Kang, S. J., & Ahn, E. J. (1999). Objectivity of gymnastic performance judgment: Application of many-facet Rasch model. The Korean Journal of Physical Education, 38, 641-650.
Linacre, J. M. (1989). Many-facet Rasch measurement. Chicago: MESA Press.
Linacre, J. M. (2018). A user's guide to FACETS: Rasch measurement computer program. Retrieved from https://www.winsteps.com/a/ftutorial2.pdf
Looney, M. A. (1997). Objective measurement of figure skating performance. Journal of Outcome Measurement, 1(2), 143-163.
McNamara, T. F., & Roever, C. (2006). Language testing: The social dimension. Malden, MA: Blackwell.
Myford, C. M., & Wolfe, E. W. (2003). Detecting and measuring rater effects using many-facet Rasch measurement. Journal of Applied Measurement, 5(2), 189-223.
Myford, C. M., & Dobria, L. (2012). FACETS introductory workshop tutorial. Chicago, IL: University of Illinois.
Rasch, G. (1980). Probabilistic models for some intelligence and attainment tests. Chicago: University of Chicago Press.
Wolfe, E. W., & Dobria, L. (2008). Applications of the multifaceted Rasch model. In J. W. Osborne (Ed.), Best practices in quantitative methods (pp. 71-85). Thousand Oaks, CA: Sage.
Part III Formative Evaluation
Chapter 9
Measuring and Assessing 21st Century Competencies in Science Education
İsa DEVECİ
Introduction

The history of science education shows that researchers and educators have focused on teaching scientific concepts to students. More recently, it has been pointed out that skills should be taught as well as concepts. Consequently, the teaching of skills in science has begun to gain importance. Science process skills are one example of the skills being taught today; however, it is believed that these skills do not meet all the needs of individuals in the 21st century, so new skills have emerged. Therefore, the need to teach students 21st century competencies, in both science and general education, has been highlighted. The role of K-12 education in providing 21st century competencies to students has been the subject of discussion (National Research Council, 2010). Some researchers claim that there is nothing new, either pedagogically or academically, about 21st century competencies (Mathews, 2009; Brusic & Shearer, 2014). Although there are some opposing views, the importance of providing 21st century competencies to students is agreed on. Science encompasses information that has been accepted and the process of realising this information (National Research Council, 2010). Students discuss, question, experiment, make models, develop projects and produce new ideas in the process of attaining this information, all of which provides opportunities to improve their competencies. In these days of advanced technology with easy access to information, it is vital to teach students the competencies to get the best from it.

21st Century Competencies

This section includes information about the meaning of 21st century competencies, the categories mentioned in the literature, and how they can be measured and evaluated. Before discussing the subject, it is important to clarify
Measuring and Assessing 21st Century Competencies in Science Education
some concepts to ensure consistency throughout the text. A skill is something that is learned in order to accomplish one or more functions. A competency may include a skill, but it is more than that, as it may include ability and behaviour, as well as the information essential for using the skill (Sturgess, 2012). In this study, it was decided to use the term "21st century competencies" instead of "21st century skills" to describe a concept that includes knowledge, skills, and abilities. However, as the literature uses different classifications for 21st century competencies, it would be useful to examine these before deciding on a general classification. The United States (US) Department of Education (2003) collected a range of 21st century competencies and classified them into four themes for students, employees and other individuals. The themes are digital-age literacy, inventive thinking, effective communication and high productivity. i) Inventive thinking includes adaptability and managing complexity, self-direction, curiosity, creativity and risk taking, and higher-order thinking and sound reasoning. ii) Effective communication contains teamwork, collaboration and interpersonal skills; personal, social, and civic responsibility; and interactive communication. iii) Digital-age literacy refers to basic, scientific, economic, and technological literacies; visual and information literacies; and multicultural literacy and global awareness. iv) High productivity includes prioritising, planning, and managing for results; effective use of real-world tools; and the ability to produce relevant, high-quality products. The National Research Council (2010) stated that 21st century competencies have five dimensions in science education. These are: i) complex communication/social skills, ii) non-routine problem-solving skills, iii) adaptability, iv) systems thinking, and v) self-management/self-development.
In their document on the global cities education network, Soland, Hamilton and Stecher (2013) classified 21st century competencies into three themes, as follows: i) Interpersonal competencies, which cover leadership, communication and collaboration, and global awareness. ii) Intrapersonal competencies, which include growth mindset, intrinsic motivation, learning how to learn, and grit. iii) Cognitive competencies, composed of critical thinking, academic mastery, and creativity. Trilling and Fadel (2009) expressed 21st century competencies in terms of skills and literacies in four categories: i) 21st century core subjects and themes, which include
civic literacy, environmental literacy, global awareness, health literacy, financial literacy, and visual literacy. ii) Learning and innovation skills, which include communication and collaboration, collaborating with others, communicating clearly, creativity and innovation, and critical thinking and problem solving. iii) 21st century life and career skills, which include productivity and accountability, leadership and responsibility, and social and cross-cultural skills. iv) 21st century information, media, and technology skills, which include information literacy, media literacy, and technological literacy. It can be seen from the literature that there are various classifications of 21st century competencies, which include a range of skills, knowledge and abilities, some of which are related to each other. For the purpose of clarity, Figure 9.1 illustrates all the concepts that can be classified according to the related literature. The competencies are grouped into three themes (thinking skills, inter/intrapersonal skills and literacies) in the literature (Demiral & Cepni, 2018; Department of Education, 2003; National Research Council, 2010; Soland, Hamilton & Stecher, 2013; Trilling & Fadel, 2009).

Thinking Skills
• Metacognition
• Higher-order Thinking
• Inventive Thinking
• Critical Thinking
• Creative Thinking
• Systems Thinking
• Growth Mindset
• Non-routine Problem-solving Skills

Inter/Intrapersonal Skills
• Adaptability
• Risk Taking
• Self-direction
• Self-management
• Curiosity
• Intrinsic Motivation
• Grit
• Teamwork
• Responsibility
• Communication
• Leadership

Literacies
• Basic Literacy
• Scientific Literacy
• Economic Literacy
• Visual Literacy
• Information Literacy
• Multicultural Literacy
• Global Awareness
• Academic Mastery

Figure 9.1 21st Century Competencies

Thinking Skills

There are three themes in the 21st century competencies, one of which is thinking skills, and it is useful to examine what this means.

Metacognition/learning how to learn: According to Soland, Hamilton and Stecher (2013), learning how to learn and metacognition have a similar meaning. Learning how to learn (metacognition) is the ability to determine how to approach a problem, to follow the process of self-understanding and to evaluate progress
towards completion. Metacognition is the ability to be aware of one's own thinking processes and to control them (Hacker, 1998).

Higher-order thinking: Interpretation, evaluation, synthesis, inferring, estimating, analysis, problem solving, comparison, predicting, generalising and creative thinking are all considered to be higher-order thinking skills (Department of Education, 2003; Miri, David & Uri, 2007).

Inventive thinking: Inventive thinking is the ability to effectively solve non-typical problems (Sokol et al., 2008). It is a cognitive activity that enables individuals to approach the problem-solving process creatively and critically through innovative or specially designed activities (Rasul, Halim & Iksan, 2016). In this sense, it can be said that inventive thinking is associated with non-routine problem-solving skills.

Critical thinking: According to Cottrell (2005), critical thinking is a cognitive process intended for the use of the mind. It requires the use of mental processes such as attention, classification, selection and judgement. It also encourages one to be sceptical about a subject, and to examine what happened previously (Cottrell, 2005).

Creative thinking: Creativity is seen as the act of making something personal or cultural, new and original (Department of Education, 2003). It is seen as a measurable mental characteristic, with measurements mostly taken using quantitative tools (Bapna et al., 2017).

Systems thinking skills: Systems thinking considers the principles of complex systems for the cognitive analysis and cognitive representation of systems (Riess & Mischo, 2010). Thus, it is a way of thinking used to explain, understand and interpret complex and dynamic systems (Evagorou et al., 2009). In the literature, it is possible to see different terms such as "systems-oriented thinking", "ecological thinking", "complex problem solving", and "network thinking", all of which are similar to systems thinking (Riess & Mischo, 2010).
Growth mindset: Students in school display two different mindsets: fixed and growth. Students with a growth mindset see intelligence as malleable and approach it as a function of effort, whereas those with a fixed mindset consider intelligence an innate talent (Dweck, 2010).

Non-routine problem-solving skills: Non-routine problems are problems whose consequences or solutions are unpredictable (Saygılı, 2017). They arise when students
İsa DEVECİ
encounter an unfamiliar situation or problem and do not know a direct way to solve it (Murdiyani, 2018). Creative solutions, analysis, synthesis, different strategies, and trial and error are needed to solve non-routine problems (Carson, 2007; Tarım & Artut, 2009; Lester, 2013).

Inter/Intrapersonal Skills

Interpersonal skills are expressed as the ability of individuals to read and manage their own and others' emotions, behaviour and motivation during social interaction or in a socially interactive environment (Department of Education, 2003). According to Murni and Anggraini (2015), interpersonal skills include: (i) social awareness, which consists of (a) political awareness, (b) developing others, (c) leveraging diversity, (d) service orientation and (e) empathy; and (ii) social skills, which consist of (a) communication, (b) influence, (c) leadership, (d) conflict management, (e) teamwork and (f) synergy.

The ability to think things through, on the other hand, is defined as an intrapersonal skill (Kara & Aslan, 2018; Lang, Jones & Leonard, 2015). According to Murni and Anggraini (2015), intrapersonal skills include: (i) self-awareness, which consists of (a) self-confidence, (b) self-assessment, (c) traits and preferences and (d) emotional awareness; and (ii) self-skills, which include (a) improvement, (b) self-control, (c) trust, (d) worthiness, (e) time/source management, (f) proactivity and (g) conscience.

In this study, a skill set of 21st century competencies consisting of all the skills mentioned here is named "inter/intrapersonal skills". A short explanation of each is provided below.

Adaptability: This is the ability of individuals to change their thoughts, attitudes and behaviour in order to avoid adaptation problems now and in the future. It is expressed as the ability to understand and adhere to time, resource and system constraints (e.g. organisational, technological) and to cope with multiple objectives and tasks (Department of Education, 2003).
Risk taking: This means being willing to make mistakes. It is also expressed as being willing to defend non-traditional or unpopular positions (Department of Education, 2003).

Self-direction: This is expressed as the ability to identify learning-related objectives, plan for achieving those objectives, manage time and effort independently, and independently assess the quality of any product or learning that results from the learning experience (Department of Education, 2003). Self-direction refers to the desire and capacity of a person to conduct his or her own education (Candy, 1991).
Self-management: In the context of 21st century competencies, self-management skills are associated with self-direction (National Research Council, 2010). Self-management is the personal, systematic application of behaviour-modification techniques that result in the desired changes in the individual's behaviour (Heward, 1987).

Curiosity: This is the individual's desire to know, and can be expressed as a spark of interest that leads to questioning (Department of Education, 2003).

Intrinsic motivation: Soland, Hamilton and Stecher (2013) describe motivation as a process that moves people to take action to achieve specific goals. Psychologists often distinguish between two types of motivation: extrinsic and intrinsic. Extrinsic motivation refers to pressures from sources outside the individual, including incentives such as money, praise or success. Intrinsic motivation refers to forces within the individual that activate behaviours.

Grit: Duckworth and Gross (2014) stated that grit involves the completion of challenging goals despite obstacles, so grit is seen as a concept that is not yet fully settled. According to Duckworth et al. (2007), while motivation means an effort or an immediate interest, grit means perseverance and passion for long-term goals. For example, as Soland, Hamilton and Stecher (2013) explain, a person may be motivated to complete a project. The project may continue for a certain period, but after a while something may reduce the person's interest and they might not complete it, which could be due to a lack of grit.

Teamwork and collaboration: This ability is described as cooperative interaction between two or more individuals working together to solve problems, create a new product, acquire knowledge or specialise in a subject (Department of Education, 2003).
Personal responsibility: This is a person's ability to apply information and knowledge related to legal, ethical and technological issues so as to ensure quality, integrity and balance of life as a citizen, a family member, a community member, a student and a worker (Department of Education, 2003).

Social and civic responsibility: This is the ability to manage and govern technology, to protect society, the environment and democratic ideals, and to promote the public interest (Department of Education, 2003).
Communication: This is expressed as the creation of meaning through exchanges using a range of contemporary tools, modes of communication and processes (Department of Education, 2003).

Leadership: Leadership is difficult to define because of its communication and collaboration dimensions, but it does require a sense of vision for the future and an ability to work with other people (Soland, Hamilton & Stecher, 2013). Leadership skills are the tools, behaviours and capabilities required to succeed in motivating and guiding others (MTD Training, 2010).

Literacies

The term literacy is expressed as the ability to read and write (Demiral & Turkmenoglu, 2018; Moseley, 2000). Individuals in the 21st century need to be literate. In this research, a range of literacies, expressed as 21st century competencies, have been considered, so it is useful to include a brief explanation of them.

Basic literacy: In the digital age, individuals should have the language proficiency (English) and mathematical skills necessary to achieve their goals, improve their knowledge and potential, and function at work and in society (Department of Education, 2003).

Scientific literacy: This is the understanding and knowledge of scientific processes and concepts required for economic productivity, participation in civil and cultural events, and personal decision making (Department of Education, 2003).

Economic literacy: This is expressed as the ability to identify economic problems, alternatives, costs and benefits; analyse incentive measures at work and in economic situations; review the consequences of changes in economic conditions and public policies; collect and organise economic evidence; and weigh costs against benefits (Department of Education, 2003).
Visual literacy: This is expressed as the ability to create, use, appreciate and interpret pictures and videos, using both traditional and 21st century media, in ways that develop thinking, decision making, communication and learning (Department of Education, 2003).

Information literacy: Using technology, networks and electronic resources, this is the ability to know which information is needed, how to find it, and how to
synthesise and use it effectively in the media (Department of Education, 2003).

Multicultural literacy: This is the ability of individuals to understand and appreciate their own culture and those of others, including the similarities and differences in culture, customs, values and beliefs (Department of Education, 2003).

Global awareness: This is expressed as the recognition and understanding of relations among international organisations, nation states, public and private economic institutions, socio-cultural groups, and individuals around the world (Department of Education, 2003).

Academic mastery: Learning academic content is essential for education, and the mastery of such content serves as a driving force for advanced interpersonal and intrapersonal competences as well as higher-order thinking skills (Soland, Hamilton & Stecher, 2013). Academic content means education in topics such as mathematics, science, reading, global studies and foreign languages (Soland, Hamilton & Stecher, 2013).

Measurement and Assessment

Measurement and assessment play an important role in the education process. The results of assessments, reached with the help of measurement tools, provide important pointers for teachers, families, administrators, educators and educational policymakers. In the 21st century, traditional measurement tools fail to measure the basic skills of students (Salpeter, 2003). Therefore, it is useful to consider different measurement and assessment tools. For example, "project assessments" and "portfolio assessments" are important tools for monitoring student progress in the learning process (Salpeter, 2003).

There are said to be three types of assessment that can be carried out to monitor student progress: diagnostic, formative and summative. Each of these can be formal or informal; formal assessments are planned and recorded in advance, while informal assessments are not (Shepardson & Britsch, 2001).
Diagnostic assessments identify performance capabilities such as the conceptual understanding or questioning abilities of students in a specific science domain. In diagnostic assessments, the information obtained about the students is used to guide the educational process (Shepardson & Britsch, 2001). Cognitive diagnostic assessment is designed to measure students' cognitive strengths and weaknesses in order to measure specific knowledge structures and processing skills (Leighton & Gierl, 2007).
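As an illustration of how diagnostic item responses might be summarised by skill area to reveal strengths and weaknesses, the following sketch uses a hypothetical item-to-skill mapping and hypothetical response data (none of it drawn from the chapter):

```python
# A minimal sketch of summarising diagnostic pre-test results per skill area.
# The item-to-skill mapping and the responses are hypothetical illustrations.

ITEM_SKILLS = {                # which skill each test item probes
    "q1": "conceptual understanding",
    "q2": "conceptual understanding",
    "q3": "questioning",
    "q4": "questioning",
}

def diagnose(responses, threshold=0.5):
    """Return each skill with a strength/weakness flag based on proportion correct."""
    totals, correct = {}, {}
    for item, is_correct in responses.items():
        skill = ITEM_SKILLS[item]
        totals[skill] = totals.get(skill, 0) + 1
        correct[skill] = correct.get(skill, 0) + (1 if is_correct else 0)
    return {
        skill: ("strength" if correct[skill] / totals[skill] >= threshold else "weakness")
        for skill in totals
    }

report = diagnose({"q1": True, "q2": True, "q3": False, "q4": False})
print(report)  # {'conceptual understanding': 'strength', 'questioning': 'weakness'}
```

A summary of this kind could then guide the educational process, for example by prioritising the flagged weaknesses when planning differentiated instruction.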
The purpose of formative assessment is to provide constructive feedback to educators or teachers on their instructional performance (Paulsen, 2002). Formative assessments are applied regularly throughout courses of study and can also be used to direct instruction and/or assess performance (Shepardson & Britsch, 2001). The purpose of summative assessment is to determine what the students have learned at the end of the course or programme (Paulsen, 2002). Summative assessment allows teachers to summarise the performance of the students and may be applied at the end of a unit as a concluding assessment (Shepardson & Britsch, 2001).

Although diagnostic assessments are widely used, misconceptions persist, particularly in science subjects (Jang, 2012). A diagnostic assessment focuses on the student's existing knowledge before they start learning (Jang, 2012). Formative and summative assessments play an important role in programmes: by using them it is possible to obtain information about the effects of a programme, as well as contributing to research into professional development (Ostermeier, Prenzel & Duit, 2010). Table 9.1 lists the characteristics of the different types of assessment, drawn from various sources (Harlen & James, 1997; Moss & Brookhart, 2009; Jang, 2012).

Table 9.1 The Characteristics of Diagnostic, Formative and Summative Assessments

Diagnostic assessment:
- Purpose: to guide the educational process.
- Carried out before teaching begins.
- Focuses on students' strengths and weaknesses at the beginning of teaching.
- Tends to assess specific targets.
- Seen as an activity performed before the teaching-learning process.
- Guiding: it is instructive about whether students have achieved or not.
- Teachers adopt the role of auditors of students' prior knowledge.
- Teachers use it to determine students' strengths and weaknesses (knowledge, skill, attitude, behaviour).
- Attends to students' existing situation before they begin the learning process.
- Can be used to plan differentiated instruction.

Formative assessment:
- Purpose: to improve learning and achievement.
- Carried out while learning is in progress, day to day, minute by minute.
- Focused on the learning process and the learning progress.
- Seen as an integral part of the teaching-learning process.
- Collaborative: teachers and students know where they are headed, understand their learning needs, and use assessment information as feedback to guide and adapt what they do to meet those needs.
- Fluid: an ongoing process influenced by student need and teacher feedback.
- Teachers and students adopt the role of intentional learners.
- Teachers and students use the evidence they gather to adjust for continuous improvement.
- Attends to the progress of each student and the effort he or she puts in.
- Can be used to plan differentiated instruction.

Summative assessment:
- Purpose: to measure or audit educational attainment.
- Carried out from time to time to create snapshots of what has happened.
- Focused on the products of learning.
- Seen as an activity performed after the teaching-learning process.
- Teacher-directed: teachers assign what the students must do and then evaluate how well they complete the assignment.
- Rigid: an unchanging measure of what the student achieved.
- Teachers adopt the role of auditors and students assume the role of being audited.
- Teachers use the results to make final "success or failure" decisions about a relatively fixed set of instructional activities.
- Relates to progression in learning against public criteria.
- Can be used to determine the effectiveness of differentiated teaching.
The Measurement and Assessment of Skills

When the sub-dimensions of 21st century competencies are examined, it can be said that they consist largely of skills. Therefore, it is becoming increasingly important for students to acquire thinking, interpersonal and intrapersonal skills. It is useful to know which tools or techniques to use for measuring these skills and to decide when to measure them. Table 9.2 provides some indications regarding this issue.
Table 9.2 Assessment Practices for Skills

Assessment types: Diagnostic, Formative and Summative Assessment.

Instruments, techniques, methods and approaches:
- Categorising Grid
- One-sentence Summary
- Approximate Analogies
- Concept Maps
- Defining Features Matrix
- Pros and Cons Grid
- Content, Form, and Function Outlines
- Analytic Memos
- Word Journal
- Observations
- Argumentation Checklist
- Peer and Self-assessments
- Fishbone Analysis
- Portfolio Assessments
- Classroom Opinion Polls
- Project-based Learning
- Problem-based Learning
- Design-based Learning
- Everyday Ethical Dilemmas
- Survey Form
Considering the information in Table 9.2, it can be said that it is more appropriate to evaluate skills during and at the end of the teaching process. Of course, diagnostic assessments can be made at the beginning of the teaching process. However, where skills are in question, a diagnostic assessment may not produce a good result, because the instructor must spend more time with the students before being able to comment on their skills. Therefore, it can be said
that formative and summative assessments are more appropriate. It might be useful to provide some detailed information about what the assessment tools and techniques listed in Table 9.2 mean; those who wish to understand more about them can review the relevant citations.

Categorising grid: This technique improves students' analytical and critical thinking skills and their study skills, strategies and habits (Angelo & Cross, 1993; Wattles, 2011). It is also suitable for use in biological and life sciences courses (Enerson, Plank & Johnson, 2007). To use it, you need to define a key taxonomy and then draw a guide that shows the relationships between concepts (see in detail Enerson, Plank & Johnson, 2007).

One-sentence summary: This technique allows teachers to learn how creatively students summarise a large amount of information on a specific topic (Angelo & Cross, 1993). It provides an opportunity to improve students' creativity. At the end of the teaching process, the teacher may finish the lesson a few minutes early and ask students to summarise the subject (Byon, 2005).

Approximate analogies: In this technique, students complete the second half of a given analogy as a sentence (Angelo & Cross, 1993). This allows teachers to ascertain whether students understand the relationship between two concepts, or the terms given in the first part of the analogy. Moreover, it offers students the opportunity to exhibit their creativity (see in detail Angelo & Cross, 1993).

Concept maps: A concept map is a structural representation composed of annotated nodes and labels (Ruiz‐Primo & Shavelson, 1996). The lines show the relationship between two concepts (nodes), and the labels are short expressions of how the two concepts are related. Concept maps are drawings or diagrams showing the mental connections that students make between concepts (Dochy, 1994).
This technique enhances the ability of students to synthesise and integrate knowledge and ideas (Angelo & Cross, 1993). Therefore, it can be used both before and after the teaching process.

Defining features matrix: This is an easy technique for helping students to recognise and categorise important information (Clúa & Feldgen, 2010). It enables them to categorise concepts according to the presence (+) or absence (-) of important identifiable features, thus contributing to their analytical reading and thinking skills (Angelo & Cross, 1993). It is therefore more useful at the end of teaching.
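A defining features matrix of the kind described above can be sketched as a small data structure. The concepts and features here are hypothetical classroom examples, not taken from the chapter:

```python
# A toy defining features matrix: concepts are categorised by the
# presence (+) or absence (-) of identifiable features.
# The concepts and features are hypothetical illustrations.

MATRIX = {
    "plant cell":  {"has cell wall": "+", "photosynthesises": "+"},
    "animal cell": {"has cell wall": "-", "photosynthesises": "-"},
    "fungal cell": {"has cell wall": "+", "photosynthesises": "-"},
}

def having(feature):
    """List the concepts for which the given feature is marked present."""
    return [concept for concept, feats in MATRIX.items() if feats[feature] == "+"]

print(having("has cell wall"))  # ['plant cell', 'fungal cell']
```

In classroom use, the teacher would supply the concept and feature headings and leave the +/- cells empty for students to complete; comparing the completed cells against a key then becomes a simple lookup like the one above.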
Pros and cons grid: This technique allows students to quickly review the pros and cons, costs and benefits, or advantages and disadvantages of a topic (Angelo & Cross, 1993). It is highly effective for improving students' analytical thinking and decision-making skills, and is therefore more useful at the end of teaching.

Content, form, and function outlines: In this technique, students are asked to answer the questions "what, how and why" while reading a newspaper article, a critical article, a bulletin board, a magazine advertisement, a television commercial, etc. It is effective for developing students' analytical thinking, reading, writing and study skills, strategies and habits (Angelo & Cross, 1993), and is more useful at the end of teaching.

Analytic memos: Analytic memos evaluate the ability of students to analyse assigned problems using approaches, methods and techniques specific to the discipline being learned. In this process it is possible to develop students' analytical thinking, problem-solving, writing, management and leadership skills (Angelo & Cross, 1993). The problem situation should be made clear to the students, stating from which position and to whom they should write, and the memo should not exceed one or two pages.

Word journal: Students summarise a short text in one word and then write a paragraph explaining why they chose that word (Kafka, 2018). This makes it possible to develop students' analytical thinking, problem-solving, writing, management, leadership, memory, listening, reading and study skills, strategies and habits (Angelo & Cross, 1993).

Observations: It is possible for teachers to evaluate the skills of students during the teaching process by making observations in the form of field notes.
In addition, they can monitor students' performance by creating an observation form and so evaluate students' thinking skills in a more structured way. Observations conducted throughout the teaching process can produce better results.

Argumentation checklist: The process of argumentation is an educational process that leads students to think, which makes it important in teaching. In this process, students choose a claim and try to support it while other students try to refute it, which helps to develop students' decision-making, analytical thinking, risk-taking, communication and negotiation skills. It is effective for controversial scientific issues (socio-scientific issues) that have not been proven to be either
true or false. To conduct argumentation appropriately, it is more effective to implement it during the teaching process.

Peer and self-assessments: These can help students to understand the assessment process and develop their own assessment skills (Seifert & Feliks, 2018). Peer-assessment is a process in which students express or consider the performance of other students of equal status (Topping, 2009). Self-assessment is defined as a process in which students constantly review their own strengths and weaknesses in order to improve their learning (Kadri, 2018). Such assessments may be carried out during or after the teaching process.

Fishbone analysis: This technique is widely used in many areas (Li & Lee, 2011). It begins with the determination of a fundamental problem or result, which is represented by the head of a fish; the bones of the fish represent the variables that cause the problem or result. The technique is effective for developing students' critical and analytical thinking skills.

Portfolio assessments: A portfolio is described as a collection of products that students produce during the learning process (Evin-Gencel, 2017). Portfolio assessments can be considered both formative and summative assessment (Denney, Grier & Buchanan, 2012).

Classroom opinion polls: These allow teachers to explore the views of students during the course. Teachers gather students' ideas by taking a classroom opinion poll on a controversial subject; polls can be used as either a pre- or post-test tool to determine whether students' views have changed. The teacher presents multiple options on the controversial topic (yes/no, Likert-scale options) to the entire class using a projection device and asks the students to write down their opinions. The process is repeated after the teaching process, and it is effective in developing students' leadership skills (Angelo & Cross, 1993).
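Comparing the pre- and post-teaching poll described above amounts to tallying votes per option and taking the difference. A minimal sketch, with hypothetical vote data:

```python
# A sketch of tallying pre- and post-teaching classroom opinion polls
# on a controversial topic, to see whether students' views have shifted.
# The vote data are hypothetical illustrations.

from collections import Counter

def poll_shift(pre, post):
    """Return the change in vote counts per option between two polls."""
    before, after = Counter(pre), Counter(post)
    options = set(before) | set(after)   # every option that appeared in either poll
    return {opt: after[opt] - before[opt] for opt in sorted(options)}

pre_votes  = ["yes", "yes", "no", "no", "no", "undecided"]
post_votes = ["yes", "yes", "yes", "yes", "no", "undecided"]
print(poll_shift(pre_votes, post_votes))  # {'no': -2, 'undecided': 0, 'yes': 2}
```

A positive value indicates an option that gained support after teaching; a zero for every option would suggest that views did not change.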
Project-based learning: This is a student-centred, teacher-assisted and learning-oriented approach. Students try to obtain information in order to solve problems during the project development process (Bell, 2010). Students formulate a problem and begin to investigate it under teacher supervision, which contributes to their 21st century collaboration and communication skills (Bell, 2010).

Problem-based learning: This uses a case-study approach whereby students take on projects that solve complex, real-world problems. The students work
in small groups to investigate, research and produce solutions to the problems (Trilling & Fadel, 2009).

Design-based learning: Design-based learning approaches can be seen in many disciplines, such as science, art, technology, engineering and architecture (Trilling & Fadel, 2009), and many researchers and educators have benefited from this approach, using it to solve authentic problems (Angeli & Valanides, 2005; Koehler & Mishra, 2005). It can be used in chemistry education, for example, to make symbolic formalisms more meaningful to students and to provide them with context (Apedoe et al., 2008).

Everyday ethical dilemmas: This technique allows students to understand values and forces them to confront their own. First, the teacher determines an ethical issue or question and then writes a paragraph about it or finds a ready-made text about the specific dilemma. The teacher asks the students two or three questions and requires them to write short, honest answers (Angelo & Cross, 1993). The technique is effective for measuring attitudes and values (Shaeiwitz, 1998). It also improves students' decision-making and leadership skills and ensures they respect the views of others (Angelo & Cross, 1993).

Survey form: These are forms in which structured responses to a statement are presented to participants. The answers can be provided using a 5-point Likert-type scale or a different classification system. They are used to measure students' management and leadership skills, their self-esteem/self-confidence, and their attitudes and values (Angelo & Cross, 1993). Such forms can be created by the teacher, and it is also possible to find survey forms with established validity and reliability in the literature.

Measurement and Assessment of Literacies

There are many assessment tools and techniques for measuring and evaluating academic knowledge and literacies in relation to 21st century competencies.
When an assessment tool is used at the beginning of the course or teaching process, it is a diagnostic assessment; when used during the teaching process, it is a formative assessment; and when used at the end of the course or teaching process, it is a summative assessment. Researchers and educators will thus have an idea of which tools or techniques can be applied. Table 9.3 also provides some indications about the stages at which these tools and techniques could be used.
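The timing rule above can be expressed as a small lookup: the same tool counts as diagnostic, formative or summative depending on when in the teaching process it is used. The stage labels below are illustrative choices, not terms fixed by the chapter:

```python
# The timing rule for assessment types, expressed as a lookup:
# the stage of use determines whether an assessment is diagnostic,
# formative or summative. Stage labels are hypothetical.

STAGE_TO_ASSESSMENT = {
    "beginning": "diagnostic",
    "during":    "formative",
    "end":       "summative",
}

def assessment_type(stage):
    """Classify an assessment by the stage at which the tool is used."""
    return STAGE_TO_ASSESSMENT[stage]

print(assessment_type("beginning"))  # diagnostic
print(assessment_type("end"))        # summative
```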
Table 9.3 Assessment Practices for Literacies

Assessment types: Diagnostic, Formative and Summative Assessment.

Instruments, techniques, methods and approaches:
- Background Knowledge Probes
- Focused Listing
- Misconception/Preconception Check
- Empty Outlines
- Memory Matrix
- Minute Paper
- Mind Map
- Muddiest Point
- Observations
- Situational Judgement Tests
- Multiple-choice Tests
- Checklist
- Open-ended Questions
Table 9.3 lists various tools and techniques, so it is useful to provide some information about what they are. Anyone who wants to learn more about them can do so by referring to the relevant references.

Background knowledge probes: These help teachers to determine students' levels before teaching begins and provide an opportunity to determine what students have or have not learned. It is also possible to use this technique at the end of a course (Enerson, Plank & Johnson, 2007). For example, it could take the form of a simple questionnaire prepared by the instructor, with short-answer and multiple-choice questions (Angelo & Cross, 1993; Enerson, Plank & Johnson, 2007).

Focused listing: This technique has students focus on a single important term, name or concept from a specific lesson or class. It is a tool that helps
students to quickly identify and remember the most important points about a specific topic, and it can be used before, during or after any course or lesson (Angelo & Cross, 1993). For example, students are given a blank piece of paper and asked to write down five or more words or phrases about the task. After a few minutes, their answers are collected; two or three minutes and five to ten items are usually enough. The trainer examines what has been written, and if there is evidence of incorrect learning, the next lesson refers back to the topic and what was written down (Angelo & Cross, 1993).

Misconception/preconception check: The main aim of this technique is to determine and evaluate students' prior knowledge and beliefs and to identify anything that could prevent new learning (Enerson, Plank & Johnson, 2007). It can be used before, during or after any course or lesson. Trainers can use multiple-choice and short-answer questions and Likert-scale responses (Angelo & Cross, 1993; Enerson, Plank & Johnson, 2007).

Empty outlines: This largely self-explanatory technique can be used at the end of a course session or at the beginning of the forthcoming session. An outline summarising the lecture or presentation is prepared, with the concepts (dimensions and sub-dimensions) that focus students' attention determined in advance but left for students to complete, which also generates discussion (Angelo & Cross, 1993).

Memory matrix: This is a two-dimensional diagram, a rectangle divided into rows and columns, used to organise information and indicate relationships. It is useful for evaluating the phenomena and principles that students must remember and understand in courses with high information content. The row and column headings are provided but the cells are left empty for students to complete, making it a useful technique to use after teaching. Table 9.4 is an example of a matrix related to environmental measuring devices (Angelo & Cross, 1993).
Table 9.4 A Memory Matrix for Environmental Measuring Devices

Device Name    Ideal Range    Unit(s)
Sound
Light
Radiation
pH
Temperature
Minute paper: This versatile technique offers a quick and fairly simple way to get written feedback about what students are learning (Enerson, Plank & Johnson, 2007). Just before the end of the course, the trainer asks two questions, such as "What was the most important thing you learned in this class?" and "What important questions remain unanswered?". The technique can be used to assess what students have learned during a laboratory session, a working-group meeting, a field trip, homework or a video, and can be used at the beginning or end of a course (Angelo & Cross, 1993; Enerson, Plank & Johnson, 2007).

Muddiest point: At the end of the lesson, students communicate the points that were not clear or the issues they found confusing during the lesson or about a specific topic (King, 2011). These "muddiest points" are usually written on notes that are distributed and collected at the end of the lesson (King, 2011) and can relate to a conference, a discussion, homework, a game or a film (Angelo & Cross, 1993).

Mind map: This is a diagram used to represent ideas, tasks or linked concepts, centred on a keyword or idea (Arroyo, 2011). After writing down a word, between five and ten sub-words are derived from it; each of these words then serves as the centre for the next set of derivations (Buzan, 1993; Arroyo, 2011). It can be used as a note-taking technique (Buzan, 1993), to determine prior knowledge and concepts before teaching, to determine the concepts learned after teaching, and to take notes during the teaching process.

Observations: Observation is one of the data collection techniques used in the scientific research process. It is also useful for learning about students by observing them during their education, and observations can provide information about the literacy status of students.
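The mind map described above is, structurally, a tree: a central keyword with sub-words, each of which can centre its own set of derivations. A minimal sketch, with hypothetical example content:

```python
# A minimal representation of a mind map as a nested tree:
# a central keyword with sub-words, each of which can centre
# further derivations. The example content is hypothetical.

mind_map = {
    "energy": {
        "kinetic":   {"motion": {}, "speed": {}},
        "potential": {"height": {}, "springs": {}},
        "thermal":   {"heat": {}, "temperature": {}},
    }
}

def count_nodes(tree):
    """Count every keyword and sub-word in the map."""
    return sum(1 + count_nodes(children) for children in tree.values())

print(count_nodes(mind_map))  # 10
```

Counting nodes, or comparing the trees a student draws before and after teaching, gives a rough indication of how their conceptual network has grown.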
Situational judgement tests: This tool asks participants to provide their best responses to presented situations. Their answers are mostly collected through multiple-choice options or Likert-type ratings (Kyllonen, 2012). These tests present participants with a set of work-related scenarios from which they select the most appropriate responses among multiple options. They include instruction types such as the self-estimate instruction: "What would you do?" (Broadfoot, 2006). Such tools can be used at the beginning of teaching to identify the needs of students and at the end of teaching to summarise their situation. They can also be used to measure and evaluate attitudes, beliefs, thoughts, knowledge and literacy levels.
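One simple way to score a situational judgement test of the multiple-choice kind described above is against a key of "best" responses. The scenarios, key and answers below are hypothetical illustrations, not an established scoring scheme:

```python
# A sketch of scoring a situational judgement test in which each
# scenario has a keyed best response among multiple options.
# The scenarios, key and answers are hypothetical illustrations.

KEY = {"scenario1": "B", "scenario2": "D", "scenario3": "A"}

def sjt_score(answers):
    """Proportion of scenarios for which the keyed best response was chosen."""
    hits = sum(1 for scenario, choice in answers.items() if KEY.get(scenario) == choice)
    return hits / len(KEY)

score = sjt_score({"scenario1": "B", "scenario2": "C", "scenario3": "A"})
print(score)  # two of the three keyed responses were chosen
```

Likert-rated variants would instead compare each rating against a keyed effectiveness value, but the keyed-response form shown here is the simplest case.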
Multiple-choice tests: These provide diagnostic information about students' understanding and can be scored quickly, so the results can be incorporated into teachers' decisions (Briggs et al., 2006). They are also widely used to evaluate student achievement at the end of the teaching process.

Checklists: Today's students often use text messages, read magazines, surf the internet and watch TV, so they need a reading strategy both inside and outside the classroom, and checklists are an effective tool for assessing their reading (Afflerbach, Ruetschlin & Russell, 2007). They help to assess and evaluate literacy and to build a critical reading strategy.

Open-ended questions: Open-ended questions can be used at the beginning of the teaching process to poll students and to stimulate their thinking skills throughout the course. They can also be used at the end of the teaching process to determine what students have learned or have not fully understood. It is important to direct questions (why, what, how) to the students; asking open-ended questions also helps to develop students' reading skills.

Conclusion

In the 21st century, students should have a range of competencies; however, it is not possible to acquire them if only a limited number of learning techniques are used. It can also be difficult to measure and evaluate these new competencies using traditional tools and techniques. Therefore, it is important to increase the awareness of educators and researchers about the wide range of measurement and assessment tools and techniques, which is one of the most important reasons for this research. It has been noted that some competencies cannot be measured using traditional tools (Silva, 2008). Multiple-choice, gap-filling, true-false and matching questions are all examples of traditional tools and techniques with which these competencies are difficult to measure.
Therefore, it is important that measurement tools can assess all the different skill areas that students need. For example, the argumentation checklist technique, which can help to develop students' decision-making skills, may be a useful measurement tool for those skills. Similarly, the pros and cons grid technique may be useful for developing, measuring, and evaluating students' analytical thinking skills. Hence, it is possible that techniques specific to each skill may emerge in the near future. In this study, the 21st century competencies, which are classified in different ways in the literature, were categorised into three themes: thinking skills, inter/intrapersonal skills, and literacies.
Measuring and Assessing 21st Century Competencies in Science Education
Thinking skills comprise metacognition, higher-order thinking, inventive thinking, critical thinking, creative thinking, systems thinking, a growth mindset, and non-routine problem-solving skills. Inter/intrapersonal skills include adaptability, risk taking, self-direction, self-management, curiosity, intrinsic motivation, grit, teamwork, responsibility, communication, and leadership. There are many tools to assess these skills, including categorising grids, one-sentence summaries, approximate analogies, concept maps, defining features matrices, pros and cons grids, content forms, function outlines, analytic memos, word journals, observations, argumentation checklists, peer and self-assessments, fishbone analyses, portfolio assessments, classroom opinion polls, project-based learning, problem-based learning, design-based learning, everyday ethical dilemmas, and survey forms, all of which can be used as measuring instruments, techniques, methods, or approaches. The literacies comprise basic, scientific, economic, visual, information, and multicultural literacies, together with global awareness and academic mastery. There are many tools to assess these literacies, including background knowledge probes, focused listings, misconception/preconception checks, empty outlines, memory matrices, minute papers, mind maps, muddiest points, observations, situational judgement tests, multiple-choice tests, checklists, and open-ended questions, all of which can be used as measuring instruments, techniques, methods, or approaches. It is important to use these measurement tools, techniques, methods, or approaches at specific stages of a course, and there are three types of assessment for this purpose: diagnostic, formative, and summative. Formative and summative assessments are more appropriate for measuring skills, whereas diagnostic and summative assessments are more appropriate for assessing knowledge and literacy.
References

Afflerbach, P., Ruetschlin, H., & Russell, S. (2007). Assessing strategic reading. In C. C. Block (Ed.), Classroom literacy assessment (pp. 177-194). New York: The Guilford Press. Angeli, C., & Valanides, N. (2005). Preservice elementary teachers as information and communication technology designers: An instructional systems design model based on an expanded view of pedagogical content knowledge. Journal of Computer Assisted Learning, 21, 292-302. Angelo, T.A., & Cross, K.P. (1993). Classroom assessment techniques: A handbook for college teachers. San Francisco: Jossey-Bass Inc. Apedoe, X.S., Reynolds, B., Ellefson, M.R., & Schunn, C.D. (2008). Bringing engineering design into high school science classrooms: The heating/cooling unit. Journal of Science Education and Technology, 17(5), 454-465. Arroyo, C.G. (2011). On-line social networks: Innovative ways towards the boost of collaborative language learning. ICT for Language Learning, 20-21 October, Florence, Italy. Bapna, A., Sharma, N., Kaushik, A., & Kumar, A. (2017). Measuring 21st century skills. doi: 10.13140/RG.2.2.10020.99203 Bell, S. (2010). Project-based learning for the 21st century: Skills for the future. The Clearing House, 83(2), 39-43. Briggs, D.C., Alonzo, A.C., Schwab, C., & Wilson, M. (2006). Diagnostic assessment with ordered multiple-choice items. Educational Assessment, 11(1), 33-63. Broadfoot, A.A. (2006). Response instructions and faking on situational judgment tests (Master's thesis). Bowling Green State University, Ohio. Brusic, S. A., & Shearer, K. L. (2014). The ABCs of 21st century skills. Children's Technology & Engineering, 18(4), 6-10. Buzan, T. (1993). The Mind Map. London: BBC Books. Byon, A.S. (2005). Classroom assessment tools and students' affective stances: KFL classroom settings. Language and Education, 19(3), 173-193.
Candy, P.C. (1991). Self-direction for lifelong learning: A comprehensive guide to theory and practice. San Francisco: Jossey-Bass. Carson, J. (2007). A problem with problem solving: Teaching thinking without teaching knowledge. The Mathematics Educator, 17(2), 7-14. Clúa, O., & Feldgen, M. (2010, October). Work in progress—Image Processing (and CATs) as an introduction to algorithmic thinking. In Frontiers in Education Conference (FIE), 2010 IEEE (pp. T1D-1). IEEE. October 27-30, Washington, U.S. Cottrell, S. (2005). Critical thinking skills: Developing effective analysis and argument. New York: Palgrave Macmillan. Demiral, U., & Cepni, S. (2018). Examining preservice science teachers' argumentation skills in terms of their critical thinking and content knowledge levels: An example using GMOs. Turkish Journal of Science Education, 15(3), 128-151. Demiral, U., & Turkmenoglu, H. (2018). Examining the relationship between preservice science teachers' risk perceptions and decision-making mechanisms about GMOs. Van Yuzuncu Yil University Journal of Education, 15(1), 1025-1053. Denney, M.K., Grier, J.M., & Buchanan, M. (2012). Establishing a portfolio assessment framework for pre-service teachers: A multiple perspectives approach. Teaching in Higher Education, 17(4), 425–437. Department of Education (2003). enGauge® 21st Century Skills: Literacy in the Digital Age. Los Angeles: North Central Regional Educational Laboratory and the Metiri Group. Retrieved from https://pict.sdsu.edu/engauge21st.pdf Dochy, F.J.R.C. (1994). Assessment of domain-specific and domain-transcending prior knowledge: Entry assessment and the use of profile analysis. In M. Birenbaum & F.J.R.C. Dochy (Eds.), Alternatives in assessment of achievements, learning process and prior knowledge (pp. 93-129). Boston: Kluwer Academic. Duckworth, A., & Gross, J. J. (2014). Self-control and grit: Related but separable determinants of success. Current Directions in Psychological Science, 23(5), 319-325.
Duckworth, A.L., Peterson, C., Matthews, M.D., & Kelly, D.R. (2007). Grit: Perseverance and passion for long-term goals. Journal of Personality and Social Psychology, 92(6), 1087-1101. Dweck, C.S. (2010). Even geniuses work hard. Educational Leadership, 68(1), 16-20. Enerson, D.M., Plank, K.M., & Johnson, R.N. (2007). An introduction to classroom assessment techniques. Center for Excellence in Learning & Teaching, Pennsylvania State University. Retrieved from http://www.uc.edu/content/dam/uc/cetl/docs/classroom_assessment_tc Evagorou, M., Korfiatis, K., Nicolaou, C., & Constantinou, C. (2009). An investigation of the potential of interactive simulations for developing system thinking skills in elementary school: A case study with fifth-graders and sixth-graders. International Journal of Science Education, 31, 655–674. Evin-Gencel, I. (2017). The effect of portfolio assessments on metacognitive skills and on attitudes toward a course. Educational Sciences: Theory & Practice, 17, 293–319. Hacker, D. J. (1998). Metacognition: Definitions and empirical foundations. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 1–24). Mahwah, N.J.: Erlbaum. Harlen, W., & James, M. (1997). Assessment and learning: Differences and relationships between formative and summative assessment. Assessment in Education: Principles, Policy & Practice, 4(3), 365-379. Heward, W. L. (1987). Self-management. In J. O. Cooper, T. E. Heron, & W. L. Heward (Eds.), Applied Behavior Analysis. New Jersey: Prentice Hall/Merrill. Jang, E.E. (2012). Diagnostic assessment in language classrooms. In Glenn Fulcher & Fred Davidson (Eds.), Diagnostic assessment in language classrooms (pp. 120-133). Canada: Routledge. Kadri, N. (2018). Student self-assessment vs teacher assessment: The issue of accuracy in EFL classrooms. In J. Mackay, M. Birello, & D. Xerri (Eds.), Bridging the Gap between Research and Classroom Practice (pp.
93-98). Faversham: IATEFL.
Kafka, T. (2018). Students assessment. In Rona F. Flippo & Thomas W. Bean (Eds.), Handbook of College Reading and Study Strategy Research (pp. 326-340). New York: Routledge. Kara, Y., & Aslan, B. (2018). Drama temelli fen etkinliklerinin okul öncesi öğrencilerinin sosyal beceriler üzerine etkisinin incelenmesi: Besinler konusu örneği. Yüzüncü Yıl Üniversitesi Eğitim Fakültesi Dergisi, 15(1), 698-722. King, D. B. (2011). Using clickers to identify the muddiest points in large chemistry classes. Journal of Chemical Education, 88(11), 1485-1488. Koehler, M.J., & Mishra, P. (2005). What happens when teachers design educational technology? The development of technological pedagogical content knowledge. Journal of Educational Computing Research, 32(2), 131-152. Kyllonen, P.C. (2012). Measurement of 21st century skills within the Common Core State Standards. Retrieved from https://pdfs.semanticscholar.org/2cbb/a09b7eb490f502749d85fc43ceeef87191c2.pdf Lang, G., Jones, K., & Leonard, L.N. (2015). In the know: Desired skills for entry-level systems analyst positions. Issues in Information Systems, 16(1), 142-148. Leighton, J.P., & Gierl, M.J. (2007). Why cognitive diagnostic assessment? Cognitive Diagnostic Assessment for Education (pp. 3-18). Cambridge: Cambridge University Press. Lester, F.K. (2013). Thoughts about research on mathematical problem-solving instruction. The Mathematics Enthusiast, 10(1/2), 245-278. Li, S.S., & Lee, L.C. (2011). Using fishbone analysis to improve the quality of proposals for science and technology programs. Research Evaluation, 20(4), 275-282. Mathews, J. (2009, Oct 23). 21st century skills: Another disappointment. The Washington Post. Retrieved from http://voices.washingtonpost.com/classstruggle/2009/10/21st_century_skills_a_suicide.html
Miri, B., David, B., & Uri, Z. (2007). Purposely teaching for the promotion of higher order thinking skills: A case of critical thinking. Research in Science Education, 37(4), 353-369. Moseley, C. (2000). Teaching for environmental literacy. The Clearing House, 74(1), 23-24. Moss, C.M., & Brookhart, S.M. (2009). Advancing formative assessment in every classroom: A guide for instructional leaders. Alexandria: ASCD Publications. MTD Training (2010). Leadership skills. Retrieved from http://promeng.eu/downloads/training-materials/ebooks/soft-skills/leadership-skills.pdf
Murdiyani, N.M. (2018). Developing non-routine problems for assessing students' mathematical literacy. Journal of Physics: Conference Series, 983(1), 012115. Murni, A., & Anggraini, R.D. (2015, November). The influence of applying problem-based learning based on soft skill to increase students' creativity in the subject development of high school mathematics curriculum. In Proceedings of the 7th International Seminar on Regional Education, 3, 1203-1212. National Research Council (2010). Exploring the Intersection of Science Education and 21st Century Skills: A Workshop Summary. Margaret Hilton, Rapporteur. Board on Science Education, Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press. Retrieved from http://k12accountability.org/resources/STEMEducation/Intersection_of_Science_and_21st_C_Skills.pdf Ostermeier, C., Prenzel, M., & Duit, R. (2010). Improving science and mathematics instruction: The SINUS project as an example for reform as teacher professional development. International Journal of Science Education, 32(3), 303-327. Paulsen, M.B. (2002). Evaluating teaching performance. New Directions for Institutional Research, 114, 5-18.
Rasul, M. S., Halim, L., & Iksan, Z. (2016). Using STEM integrated approach to nurture students' interest and 21st century skills. The Eurasia Proceedings of Educational & Social Sciences, 4, 313-319. Riess, W., & Mischo, C. (2010). Promoting systems thinking through biology lessons. International Journal of Science Education, 32(6), 705-725. Ruiz-Primo, M.A., & Shavelson, R.J. (1996). Problems and issues in the use of concept maps in science assessment. Journal of Research in Science Teaching, 33(6), 569-600. Salpeter, J. (2003, October). 21st century skills: Will our students be prepared? Technology and Learning-Dayton. Retrieved from http://techlearning.com/article/13832 Saygılı, S. (2017). Examining the problem-solving skills and the strategies used by high school students in solving non-routine problems. E-International Journal of Educational Research, 8(2), 91-114. Seifert, T., & Feliks, O. (2018). Online self-assessment and peer-assessment as a tool to enhance student-teachers' assessment skills. Assessment & Evaluation in Higher Education, 44(2), 169-185. doi: 10.1080/02602938.2018.1487023 Shaeiwitz, J. A. (1998). Classroom assessment. Journal of Engineering Education, 87(2), 179-183. Shepardson, D.P., & Britsch, S.J. (2001). Tools for assessing and teaching science in elementary and middle school. In Daniel P. Shepardson (Ed.), Assessment in science: A guide to professional development and classroom practice (pp. 119-147). USA: Kluwer Academic Publishers. doi: 10.1007/978-94-010-0802-0 Silva, E. (2008, November). Measuring skills for the 21st century. Education Sector Reports. Education Sector. Retrieved from http://elenamsilva.com/wp-content/uploads/2013/05/MeasuringSkills.pdf Sokol, A., Oget, D., Sonntag, M., & Khomenko, N. (2008). The development of inventive thinking skills in the upper secondary language classroom. Thinking Skills and Creativity, 3(1), 34-46.
Soland, J., Hamilton, L.S., & Stecher, B.M. (2013). Measuring 21st century competencies: Guidance for educators. Retrieved from https://asiasociety.org/files/gcen-measuring21cskills.pdf Sturgess, G. (2012, December). Skills vs competencies. What's the difference? Retrieved from http://www.talentalign.com/skills-vs-competencies-whats-the-difference/ on 28.11.2018. Tarım, K., & Artut, D.P. (2009). Investigation of the prospective teachers' problem-solving process in the non-routine word problems. Journal of Uludag University of Faculty of Education (JUUFE), 12, 53-70. Topping, K.J. (2009). Peer assessment. Theory into Practice, 48(1), 20-27. Trilling, B., & Fadel, C. (2009). 21st century skills: Learning for life in our times. San Francisco: John Wiley & Sons. Wattles, I. (2011, March). Effective course related assessment techniques. The First International Conference on English Studies, 19 March, Novi Sad, Serbia.
Burçin GÖKKURT ÖZDEMİR
Chapter 10 The Use of Concept Maps and Concept Cartoons as an Assessment Tool in Teaching and Learning Mathematics Burçin GÖKKURT ÖZDEMİR
Introduction

This chapter focuses on the use of concept maps and concept cartoons in mathematics instruction as an alternative evaluation method. There is a vast number of studies in the literature on the use of concept maps and cartoons in mathematics instruction (Cardemone, 1975; Minemier, 1983). However, studies on their use as an alternative evaluation method are considerably limited. In fact, both concept maps and concept cartoons can be used in the evaluation phase of the instruction process as well as in its introduction, research, and development phases. For instance, students' learning deficiencies can be determined by performing formative assessment in classrooms. Concept cartoons can be used to obtain feedback from students and determine their misconceptions, and concept maps can be used to measure how well a student has structured the knowledge of a topic. For many educators, designing concept maps or cartoons with computer programs or with pencil and paper might be time-consuming, and in some cases computer programs may be inadequate for visualising materials and cartoons that include graphics. In this context, several online and desktop programs (Inspiration, Edraw Max, SmartDraw, Toodoo, Pixton, etc.) have been developed to prepare both kinds of material. Designing computer-aided concept maps and cartoons can provide many benefits, such as ease of adaptation, digital communication, and digital recording. It is therefore thought that preparing digital concept maps and cartoons will be advantageous for both teachers and students in mathematics instruction, since they are designed as an instruction medium and can also be used in the evaluation phase (Aydın, Baki, Köğce & Yıldız, 2009; Sesli & Kara, 2012). When features of digital concept maps and cartoons such as recording,
printing, and editing are considered, the sample materials provided in this chapter might give teachers an opportunity, by raising awareness, to design digital concept maps and cartoons as assessment tools and thereby build better learning environments.

Concept Cartoons and Concept Maps as a Formative Assessment Tool in Mathematics Classrooms

Students in mathematics classes hold many misconceptions in various subjects, and several techniques are used to reveal them. While questionnaires and interviews are mostly used by researchers, teachers generally use practical methods such as students' written work, drawings, and discussion (Chin & Teou, 2009). One of the alternative techniques used by teachers is the concept cartoon (Keogh & Naylor, 1997, 1998; Naylor & Keogh, 2000). Concept cartoons are materials consisting of dialogues between cartoon characters. Only one of the ideas presented in these cartoons is scientifically correct; all the others represent scientifically wrong ideas (Keogh, Naylor, & Wilson, 1998). Many studies have been carried out with various aims using such cartoons, including studies that aim to structure ideas (Keogh & Naylor, 1996), clarify concepts (Keogh & Naylor, 1999; Kabapınar, 2005; Sexton, 2010), eliminate misconceptions (Prescott & Mitchelmore, 2005; Swedosh & Clark, 1997), and use cartoons as an assessment tool (Chin & Teou, 2009; Doğan Temur, Özsoy, & Turgut, 2019; Ingec, 2008; Keogh, Naylor, de Boo, & Rosemary, 2002). Teachers use concept cartoons for assessment in many ways. Encouraging a student to react individually to a concept cartoon, in writing or verbally, is one assessment method.
Teachers might ask students to comment on each character's statement in the cartoon separately, or ask students whether they agree with any of the characters and invite them to explain why they do or do not (Palaz, Kılcan, & Koroglu-Cetin, 2015). Teachers sometimes want to see whether students' ideas have improved and, if so, to what degree. Therefore, before and after teaching a subject, teachers can observe and assess students' reactions to the concept cartoon (Keogh & Naylor, 1999; Naylor & Keogh, 2012). The use of concept cartoons not only enables teachers to obtain feedback on students' ideas but also reveals any misunderstandings the students hold. In
this sense, concept cartoons can, among their different uses, be employed in formative assessment. In the literature, researchers have indicated that concept cartoons allow formative assessment (Chin & Teou, 2009; Naylor & Keogh, 2009, 2013). The aim of this kind of assessment is to eliminate students' learning mistakes and deficiencies; formative assessment does not aim to grade (Usun, 2008). Clark (2012) claims that formative assessment is not an exam or a tool but a process that has the potential to support learning by developing learning strategies. Pellegrino (2012) defines formative assessment as a tool designed for observing students' behaviours and producing data that can be used to make reasonable inferences about student knowledge. Especially in classes like mathematics, where prerequisite knowledge must be learned, not employing formative assessment makes it harder for students at lower levels, who have not yet acquired that knowledge and those skills, to attain higher-level behaviours (Tekin, 2010). Considering the potential of concept cartoons as a learning tool in assessment, a teacher can use them to detect students' misconceptions and thereby give students a basis for reflecting on and discussing abstract mathematical concepts. In the process of learning and teaching mathematics, formative assessment, which aims to find out how students are learning and improving during the process, gives teachers the chance to prepare rich learning environments around the concepts and subjects that students have difficulty learning. Formative assessment, usually called assessment for learning (Bennett, 2011), is intended to provide a basis for activities that will help students progress in mathematics classes (Baki, 2008). According to Ginsburg (2009), formative assessment helps to realise individual and specific goals during the learning and teaching of mathematics.
Employing formative assessment in teaching mathematics has a positive effect on students' success (Tempelaar et al., 2012). Bell (2000) defined formative assessment as assessment that is informative about student learning and aims to support learning while it is taking place. Formative assessment therefore lets students constantly improve their performance, and teachers constantly improve their teaching (Sadler, 1998). According to Black and Wiliam (1998), this assessment includes feedback to students and teachers, action taken by students or teachers to improve learning while it is happening, and student self-assessment. A basic element of formative assessment is feedback of two kinds: feedback from student to teacher and from teacher to student (Black & Harrison,
2004). On the other hand, students can participate in self-assessment and peer-assessment, where they make judgements about their own and their peers' understanding (Black & Harrison, 2001). For this reason, formative assessment includes not only diagnostic assessment but also self- and peer-assessment. Another tool used in assessment to reveal student knowledge, and changes in understanding over time, is the concept map (Novak, Gowin, & Johansen, 1983). The concept mapping technique was developed by Novak and his colleagues at Cornell University in 1972, building on Ausubel's assimilation theory of learning. A concept map is a graphical representation that consists of concepts and the links that express the relationships among them. The concepts and links are marked on the map; links can be one-way (with an arrow), two-way, or undirected. Concepts and links may be divided into sections, and such a map shows the transactional or causal relationships among the concepts (Plotnick, 2001). Concept maps are tools that allow information to be restructured and help in investigating conceptual change (Sahin, 2002). In their research, Ruiz-Primo and Shavelson (1996) stated that, when used as an assessment tool, a concept map becomes an educational task that reveals a student's knowledge structure in a certain subject area. Many studies have analysed the use of concept maps in assessing students' conceptual understanding in science (Brown, 2000; Cañas & Novak, 2006; Mintzes, Wandersee, & Novak, 2000; Novak & Cañas, 2004) and, over time, in mathematics (Afamasaga-Fuata'i, 2004, 2007a, 2007b; Hannson, 2005; Liyanage & Thomas, 2002; Williams, 1998). Although concept maps are more often used in science teaching, Cardemone (1975) used them to represent the key ideas in mathematics classes.
Minemier (1983) stated that, when students prepare concept maps, they not only perform better on tests but also improve their self-sufficiency. Researchers have also stated that concept maps are a pedagogical tool for providing an overview of a subject (Afamasaga-Fuata'i, 2006; Brahier, 2005). Concept maps are accepted as a valuable tool for teachers because they improve students' understanding when used in revision situations (Vanides et al., 2005). Concept maps are used as an assessment tool to follow conceptual change in various contexts (Edmondson, 2000; Mintzes, Wandersee & Novak, 2001; Ruiz-Primo & Shavelson, 1996). For example, concept maps can be assessed numerically or not, depending on the teachers' goals. If the assessment aims to encourage learning, grading is not likely to be
helpful. Grading is more efficient in an assessment that aims to measure learning outcomes at the end of a subject. When the studies in which concept maps are assessed are analysed with this purpose in mind, differences in assessment methods stand out, since several designated methods exist (Doğan Temur & Turgut, 2017; McClure, Sonak, & Suen, 1999; McClure & Bell, 1990; Novak & Gowin, 1984). According to Croasdell, Freeman and Urbaczewski (2003), concept maps can be evaluated in many forms. The evaluation may consist of:
- counting the total number of concepts,
- counting the total number of relationships,
- measuring the map complexity (the number of indicated relationships beyond the minimum needed to connect all concepts linearly, i.e., the number of concepts minus one),
- comparing the maps from the beginning or middle of the semester to the maps created at the end of the semester, or
- comparing the maps to that of an expert or an instructor.
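For illustration only, the counting-based criteria above can be sketched in a few lines of code. Everything here — the representation of a map as sets of concepts and links, the example data, and the function names — is a hypothetical sketch, not something given by Croasdell, Freeman and Urbaczewski:

```python
# A concept map sketched as a set of concepts plus a set of directed,
# labelled links (concept, linking phrase, concept). This encoding is
# an assumption made for illustration.

def map_complexity(concepts, links):
    """Relationships beyond the minimum needed to connect all
    concepts linearly: number of links minus (concepts - 1)."""
    return len(links) - (len(concepts) - 1)

def expert_agreement(student_links, expert_links):
    """Share of an expert's links that also appear in a student map."""
    return len(student_links & expert_links) / len(expert_links)

# A hypothetical student map on prisms:
concepts = {"prism", "cube", "face", "square", "rectangle"}
links = {
    ("cube", "is a", "prism"),
    ("prism", "has", "face"),
    ("cube", "has faces shaped like", "square"),
    ("square", "is a", "rectangle"),
    ("prism", "has faces shaped like", "rectangle"),
}

print(len(concepts))                    # total concepts: 5
print(len(links))                       # total relationships: 5
print(map_complexity(concepts, links))  # 5 - (5 - 1) = 1
```

Comparing a student's map with an instructor's, or a start-of-semester map with an end-of-semester one, then reduces to set operations on such link sets, as in `expert_agreement`.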
McClure, Sonak and Suen (1999) used concept maps as an in-class assessment tool and assessed the maps that students drew with six different methods: relational, relational with the main map, integral, integral with the main map, structural, and structural with the main map. Correlations among the concept map scores, assessed against the similarity to a map drawn by experts (the main map), supported the validity of five of the six methods. Based on this result, they argued that concept maps can be used as an in-class assessment method. Buldu and Buldu (2010) analysed teacher candidates' views on the use of concept maps in formative assessment; most of the teacher candidates found the technique beneficial and stated that the concept map method can be an alternative to traditional techniques. This chapter is important because it shows that concept maps and concept cartoons, used as assessment tools, are helpful in improving student learning. Concept maps need to be rationally integrated into teaching as an effective method for formative assessment, and they need to be practical for teachers to use (Hartmeyer, Stevenson, & Bentsen, 2018). A similar situation also holds for concept cartoons. For teaching purposes, concept maps can be drawn on paper with a pencil or produced with computer software. According to Royer and Royer (2004), students prefer to use a computer rather than paper and pencil, and even
they have created sophisticated concept maps on the computer. This high sophistication of the maps is not only an expression of meaningful learning (Novak & Gowin, 1984) but also makes a more detailed formative assessment possible. For many teachers, designing concept maps in computer programs or drawing them by hand can take a long time. Sometimes, computer programs may not be sufficient for creating cartoon illustrations, including those that contain graphics. At this point, some online and desktop software programs (Inspiration, cmap tools, mind manager, brain storming, star think, Edraw Max, SmartDraw, Toodoo, Pixton, etc.) have been developed to create both of these materials. Teachers designing concept maps and cartoons digitally might gain many advantages, such as easy adaptation, digital communication, and digital recording. Preparing digital concept maps and cartoons is therefore considered to be very beneficial for teachers and students in mathematics teaching, where they can be used at the formative assessment stage. When characteristics of digital concept maps and cartoons such as being recordable, printable, and changeable are considered, the sample materials given in this chapter might be beneficial for teachers. Teachers might use digital concept maps and cartoons as assessment tools to detect students' deficiencies and misconceptions, and they might have these materials designed by themselves or by their students. Some examples of digital concept maps and cartoons that can be used in assessment in mathematics teaching are as follows:
Figure 10.1 Digital concept map created to detect students' misconceptions or mistakes in the subject of prisms
When Figure 10.1 is analysed, it is seen that in the first map the students were asked to guess the shapes of the surfaces of a cube, and in the second map they were asked to choose, from the given geometric solids, the shapes related to a prism. The aim here is to assess whether the students have understood the concept of a prism. In addition to these materials, students might create their own concept maps, and those maps can be assessed. Tuluk (2015) used concept maps as an assessment tool in his research and had teacher candidates create digital concept maps about angles, in order to reveal their knowledge of angles. Figure 10.2 shows an example of a concept map created by a teacher candidate.
Figure 10.2 Concept map created by a teacher candidate on the concept of angle (Tuluk, 2015)

Some samples of digital concept cartoons created to detect students' deficiencies or mistakes in different subjects of mathematics are shown in Figure 10.3.
Figure 10.3 Digital concept cartoons created for detecting students' mistakes or misconceptions about natural numbers and powers

While the first concept cartoon in Figure 10.3, about writing numbers in words, is designed to detect the mistakes students make when writing natural numbers, the second is designed to detect the mistakes students make while dividing powers. For example, some students divide the base and the exponent separately when dividing powers, taking the result of the base division as the base and the result of the exponent division as the exponent (for instance, computing 4^6 ÷ 2^3 as (4 ÷ 2)^(6 ÷ 3) = 2^2 = 4, instead of the correct 4096 ÷ 8 = 512). Presenting these concept cartoons to the students makes it possible to remedy such mistakes and deficiencies. Here, encouraging students to react individually to a concept cartoon, in writing or through discussion, is itself a form of assessment, and it provides a good mechanism for systematic assessment and testing. Teachers might ask students to comment on each character's statement in the cartoon, or ask whether they agree with any of the characters and to explain their reasons. It is very important to question the reasoning of students who agree with a certain statement in a cartoon (Demir, 2008). In this way, teachers can discover students' internalised ideas and give them the opportunity to test the correctness of those ideas, which can be an important part of the assessment (Naylor & Keogh, 2009). While some teachers grade the answers of the
Burçin GÖKKURT ÖZDEMİR
students to concept cartoons in numbers, the others personally give verbal feedbacks. The grading criteria of Ormanci and Durmaz-Ören (2010) who assess the concept cartoons by giving points are given in Table 10.1. Table 10.1 The scoring key used in the analysis of concept cartoons Assessment Criteria Correct Answer – Correct Explanation Correct Answer – Partially Correct Explanation Wrong Answer - Correct Explanation Correct Answer – Wrong Explanation Wrong Answer – Partially Correct Explanation Wrong Answer – Wrong Explanation
Score 3 2 2 1 1 0
Scoring Criteria *Correct Explanation: Explanation where the answer is implied with all scientific aspects * Partially Correct Explanation: Explanation where the answer is not implied with all scientific aspects or which involves some misconceptions *Wrong Explanation: (1) The answer is scientifically wrong in total, (2) is irrelevant, (3) is repeated as a whole, (4) is completely composed of misconception, (5) left as blank
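The scoring key in Table 10.1 is effectively a small lookup from (answer, explanation) categories to points. As a minimal sketch (the data layout and function names here are illustrative, not from the chapter), it can be written as:

```python
# Illustrative sketch of the Ormanci and Sasmaz-Oren (2010) scoring key
# from Table 10.1. Categories are encoded as plain strings for clarity.

SCORING_KEY = {
    ("correct", "correct"): 3,
    ("correct", "partially correct"): 2,
    ("wrong", "correct"): 2,
    ("correct", "wrong"): 1,
    ("wrong", "partially correct"): 1,
    ("wrong", "wrong"): 0,
}

def score_response(answer: str, explanation: str) -> int:
    """Return the rubric score for one (answer, explanation) pair."""
    return SCORING_KEY[(answer, explanation)]

# A hypothetical student's responses to three concept cartoons:
responses = [
    ("correct", "correct"),
    ("wrong", "partially correct"),
    ("correct", "wrong"),
]
total = sum(score_response(a, e) for a, e in responses)
print(total)  # 3 + 1 + 1 = 5
```

A teacher's own rubric could be substituted simply by editing the dictionary, which mirrors the chapter's point that teachers may also assess through rubrics they create themselves.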
According to the criteria in Table 10.1, teachers may assess concept cartoons in mathematics classes, or they may assess them through rubrics they create themselves. In summary, in mathematics classes, where abstract concepts are densely employed, it is necessary to use alternative assessment approaches rather than techniques based only on traditional assessment. Traditional assessment approaches might not always be sufficient, especially for skills such as problem solving, modelling, reasoning, and proportional reasoning, which the teaching programs approved by the Ministry of National Education (MoNE, 2018) expect teachers to help students acquire. Thus, assessment needs to be carried out not only through traditional approaches but also by creating environments where students can share and discuss their ideas; concept cartoons are one such approach. For example, the concept of proportion, which serves as a bridge for mathematical thinking, plays an important role in mathematics (Lesh, Post, & Behr, 1988).
Therefore, it is necessary to assess whether students conceptually understand the subject of ratio and proportion. Students may have difficulty understanding the concept of proportion, which is influential in proportional reasoning, and they often confuse the concepts of inverse proportion and direct proportion. Dogan and Cetin (2009) observed that students cannot determine the proportion type in proportion problems and have difficulty solving inverse proportion problems. The concept cartoon given in Figure 10.4 can be used to determine whether students have difficulty with inverse proportion problems.
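The distinction that students confuse can be stated numerically: in a direct proportion the ratio y/x stays constant, while in an inverse proportion the product x·y stays constant. A hypothetical worked sketch (the example problems are invented for illustration, not taken from the chapter):

```python
# Direct proportion keeps the ratio y/x constant;
# inverse proportion keeps the product x*y constant.

def solve_direct(x1, y1, x2):
    """If y is directly proportional to x, find y2 given (x1, y1) and x2."""
    return y1 * x2 / x1          # y/x is constant

def solve_inverse(x1, y1, x2):
    """If y is inversely proportional to x, find y2 given (x1, y1) and x2."""
    return x1 * y1 / x2          # x*y is constant

# 3 kg of apples cost 12 liras; 5 kg cost:
print(solve_direct(3, 12, 5))    # 20.0

# 3 workers finish a job in 12 days; 4 workers need:
print(solve_inverse(3, 12, 4))   # 9.0
# A student who wrongly treats the worker problem as a direct proportion
# would compute 12 * 4 / 3 = 16 days, the kind of error the cartoon targets.
```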
Figure 10.4 Digital concept cartoon prepared for detecting the mistakes and misconceptions of students in inverse proportion problems
Burçin GÖKKURT ÖZDEMİR
Conclusion
As a result, it is very important to detect students’ mistakes concerning mathematical concepts and to eliminate them. As mathematics is a structurally cumulative discipline, each concept learned in mathematics is a step toward the following concept or concepts. Therefore, a difficulty in learning any concept, or any wrong information learned about a concept, will cause difficulties and misconceptions in learning many subsequent concepts. Concept maps and concept cartoons make it possible to see whether mathematical concepts and operations, and the relationships among them, have been understood. Thus, it is suggested that mathematics teachers use these materials as alternative assessment tools in learning-teaching activities.
References
Afamasaga-Fuata’i, K. (2004). Concept maps and vee diagrams as tools for learning new mathematics topics. In A. J. Cañas, J. D. Novak, & F. M. González (Eds.), Concept maps: Theory, methodology, technology. Proceedings of the first international conference on concept mapping (pp. 13–20). Spain: Dirección de Publicaciones de la Universidad Pública de Navarra. Afamasaga-Fuata’i, K. (2006). Innovatively developing a teaching sequence using concept maps. In A. Cañas & J. Novak (Eds.), Concept maps: Theory, methodology, technology. Proceedings of the second international conference on concept mapping (Vol. 1, pp. 272–279). San Jose, Costa Rica: Universidad de Costa Rica. Afamasaga-Fuata’i, K. (2007a). Communicating students’ understanding of undergraduate mathematics using concept maps. In J. Watson & K. Beswick (Eds.), Mathematics: Essential research, essential practice. Proceedings of the 30th annual conference of the Mathematics Education Research Group of Australasia (Vol. 1, pp. 73–82). University of Tasmania, Australia: MERGA. Afamasaga-Fuata’i, K. (2007b). Using concept maps and vee diagrams to interpret “area” syllabus outcomes and problems. In K. Milton, H. Reeves, & T. Spencer (Eds.), Mathematics essential for learning, essential for life. Proceedings of the 21st biennial conference of the Australian Association of Mathematics Teachers, Inc. (pp. 102–111). University of Tasmania, Australia: AAMT. Aydın, M., Baki, A., Köğce, D., & Yıldız, C. (2009). Mathematics teacher educators’ beliefs about assessment. World Conference on Educational Sciences 2009: New Trends in Educational Sciences, 1(1), 2126-2130. Baki, A. (2008). Kuramdan uygulamaya matematik eğitimi [From theory to practice mathematics education] (4th ed.). Ankara, Turkey: Harf Education Publisher. Bell, B. (2000). Formative assessment and science education: Modelling and theorising. In R. Millar, J. Leach, & J. Osborne (Eds.), Improving science education: The contribution of research.
Buckingham: Open University Press.
Bennett, R. E. (2011). Formative assessment: a critical review. Assessment in Education: Principles, Policy & Practice, 18(1), 5-25. Black, P. & Harrison, C. (2001). Self- and peer-assessment and taking responsibility: The science students’ role in formative assessment. School Science Review, 83(302), 43–49. Black, P. & Harrison, C. (2004). Science inside the black box. London: NferNelson. Black, P. & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7-74. Brahier, D. J. (2005). Teaching secondary and middle school mathematics (2nd ed.). New York: Pearson Education, Inc. Brown, D. S. (2000). The effect of individual and group concept mapping on students’ conceptual understanding of photosynthesis and cellular respiration in three different levels of biology classes. Dissertation Abstracts International AADAA-I9970734, University of Missouri, Kansas City. Buldu, M. & Buldu, N. (2010). Concept mapping as a formative assessment in college classrooms: Measuring usefulness and student satisfaction. Procedia-Social and Behavioral Sciences, 2, 2099-2104. Cañas, A. J. & Novak, J. D. (2006). Re-examining the foundations for effective use of concept maps. In A. J. Cañas, J. D. Novak, & F. M. Gonzalez (Eds.), Concept maps: Theory, methodology, technology. Proceedings of the Second International Conference on Concept Mapping (pp. 494-502). San Jose, Costa Rica: Universidad de Costa Rica. Cardemone, P. F. (1975). Concept maps: A technique of analyzing a discipline and its use in the curriculum and instruction in a portion of a college level mathematics skills course (Unpublished Master’s thesis). Cornell University, Ithaca, New York. Chin, C. & Teou, L. Y. (2009). Using concept cartoons in formative assessment: scaffolding students’ argumentation. International Journal of Science Education, 31(10), 1307-1332. Clark, I. (2012). Formative assessment: assessment is for self-regulated learning. 
Educational Psychology Review, 24(2), 205-249.
Croasdell, D. T., Freeman, L. A., & Urbaczewski, A. (2003). Concept maps for teaching and assessment. Communications of the Association for Information Systems, 13, 396-405. Demir, Y. (2008). Kavram yanılgılarının belirlenmesinde kavram karikatürlerinin kullanılması [The use of concept cartoons in determining misconceptions] (Unpublished Master’s thesis). Atatürk University, Erzurum, Turkey. Doğan Temur, Ö., & Turgut, S. (2017). Analysis of pre-school teachers’ knowledge and opinions about play in mathematics educational process. Journal of Education and Practice, 8(30), 14-24. Doğan Temur, Ö., Özsoy, G., & Turgut, S. (2019). Metacognitive instructional behaviours of preschool teachers in mathematical activities. ZDM Mathematics Education, 51(4), 655-666. Dogan, A. & Cetin, I. (2009). Seventh and ninth grade students’ misconceptions about ratio and proportion. Uşak University Social Science Institute, 2(2), 118-128. Edmondson, K. M. (2000). Assessing science understanding through concept maps. In J. J. Mintzes, J. H. Wandersee, & J. D. Novak (Eds.), Assessing science understanding: A human constructivist view (pp. 15–40). Cambridge, Massachusetts, US: Elsevier Academic Press. Ginsburg, H. P. (2009). The challenge of formative assessment in mathematics education: Children’s minds, teachers’ minds. Human Development, 52(2), 109-128. Hansson, O. (2005). Preservice teachers’ views on y = x + 5 and y = πx² expressed through the utilization of concept maps: A study of the concept of function. Retrieved December, 2018, from http://www.emis.de/proceedings/PME29/PME29RRPapers/PME29Vol3Hansson.pdf Hartmeyer, R., Stevenson, M. T., & Bentsen, P. (2018). A systematic review of concept mapping-based formative assessment processes in primary and secondary science education. Assessment in Education: Principles, Policy & Practice, 25(6), 598-619.
Ingec, S. K. (2008). Kavram haritalarının değerlendirme aracı olarak fizik eğitiminde kullanılması. Hacettepe Üniversitesi Eğitim Fakültesi Dergisi, 35, 195-206. Kabapinar, F. (2005). The effectiveness of teaching via concept cartoons from the point of view of constructivist approach. Educational Sciences: Theory & Practice, 5(1), 135–146. Keogh, B. & Naylor, S. (1997). Starting points for science. Sandbach, UK: Millgate House. Keogh, B. & Naylor, S. (1998). Teaching and learning in science using concept cartoons. Primary Science Review, 51, 14–16. Keogh, B. & Naylor, S. (1999). Concept cartoons, teaching and learning in science: an evaluation. International Journal of Science Education, 21(4), 431-446. Keogh, B., Naylor, S., & Wilson, C. (1998). Concept cartoons: A new perspective on physics education. Physics Education, 33(4), 219-224. Keogh, B., Naylor, S., de Boo, M., & Rosemary, F. (2002). Formative assessment using concept cartoons: Initial teacher training in the UK. In H. Behrendt, H. Dahncke, R. Duit, W. Gräber, M. Komorek, A. Kross, & P. Reiska (Eds.), Research in science education—Past, present, and future (pp. 137– 142). New York: Kluwer Academic Publishers. Lesh, R., Post, T., & Behr, M. (1988). Proportional reasoning. In J. Hiebert & M. Behr (Eds.), Number concepts and operations in the middle grades (pp. 93–118). Reston, VA: National Council of Teachers of Mathematics. Liyanage, S. & Thomas, M. (2002). Characterising secondary school mathematics lessons using teachers’ pedagogical concept maps. Proceedings of the 25th annual conference of the Mathematics Education Research Group of Australasia (MERGA-25) (pp. 425–432). New Zealand: University of Auckland. McClure, R. J. & Bell, P.E. (1990). Effects of an environmental education related STS approach instruction on cognitive structures of pre-service science teachers. University Park, PA: Pennsylvania State University.
McClure, R. J., Sonak, B., & Suen, K. H. (1999). Concept map assessment of classroom learning. Journal of Research in Science Teaching, 36(4), 475-492. Minemier, L. (1983). Concept mapping as an educational tool and its use in a college level mathematics skills course (Unpublished Master’s thesis). Cornell University, Ithaca, New York. Ministry of National Education [MoNE] (2018). Primary and middle school grades 1-8 mathematics course curriculum. Ankara: Board of Education and Discipline. Mintzes, J. J., Wandersee, J. H., & Novak, J. D. (2000). Assessing science understanding. San Diego: Academic Press. Mintzes, J. J., Wandersee, J. H., & Novak, J. D. (2001). Assessing understanding in biology. Journal of Biological Education, 35(1), 118–124. Naylor, S. & Keogh, B. (2000). Concept cartoons in science education. Sandbach, UK: Millgate House. Naylor, S. & Keogh, B. (2009). Active assessment. Mathematics Teaching, 215, 35-37. Naylor, S., & Keogh, B. (2013). Concept cartoons: What have we learnt? Journal of Turkish Science Education, 10(1), 3-11. Novak, J. D. & Cañas, A. J. (2004). Building on new constructivist ideas and CmapTools to create a new model for education. In A. J. Cañas, J. D. Novak, & F. M. González (Eds.), Concept maps: Theory, methodology, technology. Proceedings of the 1st International Conference on Concept Mapping. Pamplona, Spain: Universidad Pública de Navarra. Novak, J. D. & Gowin, D. R. (1984). Learning how to learn. New York: Cambridge University Press. Novak, J. D., Gowin, D. B., & Johansen, G. T. (1983). The use of concept mapping and knowledge vee mapping with junior high school science students. Science Education, 67(5), 625-645. Ormanci, U. & Sasmaz-Oren, F. (2010). A scoring study on the use of concept cartoons, drawings, word association tests and concept maps for assessment-evaluation purposes. International Conference on New Horizons in Education, Famagusta, North Cyprus. Palaz, T., Kılcan, B., & Koroglu-Cetin, Z. (2015). Karikatüre dayalı öğrenme-öğretme yaklaşımı [The caricature-based learning-teaching approach]. In G. Ekici (Ed.), Etkinlik örnekleriyle güncel öğrenme-öğretme yaklaşımları II. Ankara: Pegem Yayıncılık. Pellegrino, J. W. (2012). Assessment of science learning: Living in interesting times. Journal of Research in Science Teaching, 49(6), 831–841. Plotnick, E. (2001). Concept mapping: A graphical system for understanding the relationship between concepts. Teacher Librarian, 28(4), 42-45. Prescott, A. & Mitchelmore, M. (2005). Teaching projectile motion to eliminate misconceptions. In H. L. Chick & J. L. Vincent (Eds.), Proceedings of the 29th Conference of the International Group for the Psychology of Mathematics Education, 4, 97-104. Royer, R. & Royer, J. (2004). Comparing hand drawn and computer-generated concept mapping. Journal of Computers in Mathematics and Science Teaching, 23(1), 67-81. Ruiz-Primo, M. A. & Shavelson, R. J. (1996). Problems and issues in the use of concept maps in science assessment. Journal of Research in Science Teaching, 33(6), 569-600. Sadler, R. (1998). Formative assessment: Revisiting the territory. Assessment in Education, 5(1), 77–84. Sahin, F. (2002). Kavram haritalarının değerlendirme aracı olarak kullanılması ile ilgili bir araştırma [A research related to using concept maps as an assessment tool]. Pamukkale University Journal of Education, 11, 18-33. Sesli, E. & Kara, Y. (2012). Development and application of a two-tier multiple-choice diagnostic test for high school students’ understanding of cell division and reproduction. Journal of Biological Education, 46(4), 214-225. Sexton, M. (2010). Using concept cartoons to access student beliefs about preferred approaches to mathematics learning and teaching. In L. Sparrow, B. Kissane, & C.
Hurst (Eds.), Shaping the future of mathematics education: Proceedings of the 33rd annual conference of the Mathematics Education Research Group of Australasia. Fremantle: MERGA.
Swedosh, P. & Clark, J. (1997). Mathematical misconceptions: Can we eliminate them? In F. Biddulf & K. Carr (Eds.), People in Mathematics Education (pp. 492-499). Waikato: Mathematics Education Research Group of Australasia. Tekin, E. G. (2010). Matematik eğitiminde biçimlendirici değerlendirmenin etkisi [The effect of formative assessment in mathematics education] (Unpublished Master’s thesis). Marmara University, İstanbul, Turkey. Tempelaar, D. T., Kuperus, B., Cuypers, H., van der Kooij, H., van de Vrie, E., & Heck, A. (2012). The role of digital, formative testing in e-learning for mathematics: A case study in the Netherlands. RUSC, Universities and Knowledge Society Journal, 9(1), 284-305. Tuluk, G. (2015). The evaluation of the concept maps created by future middle school mathematics teachers in regard to the concept of angle. Turkish Journal of Computer and Mathematics Education, 6(2), 323-337. Usun, S. (2008). Değerlendirme [Assessment]. In G. Başol (Ed.), Eğitimde ölçme ve değerlendirme [Measurement and evaluation in education]. İstanbul, Turkey: Lisans Publisher. Vanides, J., Yin, Y., Tomita, M., & Ruiz-Primo, M. A. (2005). Using concept maps in the science classroom. Science Scope, 28, 27–31.
Seyhan ERYILMAZ, İlknur REİSOĞLU
Chapter 11
Web 2.0 Applications That Can Be Used in Assessment and Evaluation
Seyhan ERYILMAZ, İlknur REİSOĞLU
Introduction
Assessment and evaluation play a major role in improving the quality of teaching. However, the traditional methods and techniques of assessment and evaluation are no longer attractive or meaningful for 21st-century students, who grow up with digital technologies. This situation requires teachers to develop different and more effective activities. Web 2.0 applications, which have become a central topic with the improvement of web technologies, offer various tools that can make the assessment and evaluation process more effective, fun, and active. These applications enable teachers to archive assessment and evaluation tools, easily administer them to students, give instant feedback, summarize students’ answers with graphs and charts, track student development closely, and so on. The evaluation of group work and of the learning process is much easier with Web 2.0 than with traditional methods. Proper support and supplementary resources can be provided by easily determining the subjects that students did not understand. In addition, students come to perceive the assessment and evaluation process as part of their daily life, and thus regularly practice everything they learn. As these applications are free and user-friendly, they have become widespread and popular. Accordingly, 13 different Web 2.0 applications that enable teachers to develop assessment and evaluation activities at different stages of the learning-teaching process are presented in this chapter. These 13 applications are categorized under three headings: interactive video lessons, assessment and evaluation through gamification, and development of assessment and evaluation tools (Figure 11.1). In each category, the key features of the applications are presented in detail.
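What these applications automate can be made concrete with a few lines of code. The sketch below (the quiz data and variable names are invented for illustration; no specific application’s API is used) shows the two core services mentioned above: instant grading of multiple-choice answers and the per-question tallies behind the graphs and charts teachers see:

```python
# Illustrative sketch of automated grading and per-question summaries,
# the kind of work Web 2.0 assessment tools perform behind the scenes.
from collections import Counter

answer_key = {"q1": "b", "q2": "a", "q3": "c"}

submissions = {
    "student1": {"q1": "b", "q2": "a", "q3": "a"},
    "student2": {"q1": "b", "q2": "c", "q3": "c"},
}

# Instant per-student feedback: number of correct answers.
scores = {
    name: sum(ans.get(q) == correct for q, correct in answer_key.items())
    for name, ans in submissions.items()
}

# Per-question answer distribution, i.e. the data behind the charts.
summary = {q: Counter(ans[q] for ans in submissions.values()) for q in answer_key}

print(scores)          # {'student1': 2, 'student2': 2}
print(summary["q2"])   # one student chose 'a', the other 'c'
```

The same tallies also reveal which subjects students did not understand, which is how such tools support targeted supplementary resources.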
Figure 11.1 Web 2.0 applications that can be used in assessment and evaluation. The figure groups 13 applications by basic purpose: interactive video lessons (EdPuzzle, PlayPosit, TED-Ed), assessment through gamification, and development of assessment tools; the latter two categories cover Kahoot, Socrative, Plickers, Quizlet, PollEverywhere, Rubistar, Google Form, GoToQuiz, BookWidgets, and QuickKey.

Interactive Video Lesson Applications
Videos provide greater engagement than text or images for the simple reason that they combine audio and text (Brame, 2015). Making videos interactive gives learners the opportunity to be more active in the learning process (Zhang et al., 2006) and improves the quality of the content and the effectiveness of learning (Allen & Smith, 2012; Çepni et al., 2014; Reisoğlu, Gedik, & Göktaş, 2013; Lloyd & Robertson, 2012). In addition, interactivity makes information easier to integrate and more permanent by encoding it both verbally and visually. Through interactive videos, students carry out actions such as listening, reading, writing, and thinking. In this way, students engage more with the topic, evaluate what they watch from a critical point of view, and understand the content better (Brame, 2015; Reisoğlu & Göktaş, 2016). Szpunar et al. (2013) argue that interactive videos increase students’ note-taking habits and decrease test anxiety. Interactive videos are appropriate for use at every stage of the learning-teaching process, as well as for assessment and evaluation purposes. In interactive videos used for assessment and evaluation, interaction is possible between student and content, student and teacher, or student and student. With the help of these interactions, the assessment and evaluation process becomes more fun, the teacher can complete it faster and more effectively, and the content can be reinforced through the questions in the videos.
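The core data model behind such interactive videos is simple: a list of questions keyed to playback timestamps at which the player pauses. A minimal sketch with an assumed structure (this models the general idea, not any specific application’s internal format):

```python
# Minimal sketch of an interactive video lesson: questions embedded at
# playback timestamps, each with choices and instant feedback.
from dataclasses import dataclass

@dataclass
class EmbeddedQuestion:
    at_seconds: int          # pause point in the video
    prompt: str
    choices: list
    answer_index: int
    feedback: str

lesson = [
    EmbeddedQuestion(45, "What is 3/4 as a percentage?", ["34%", "75%"], 1,
                     "Divide 3 by 4 and multiply by 100."),
    EmbeddedQuestion(120, "Is this a direct proportion?", ["yes", "no"], 0,
                     "The ratio stays constant, so yes."),
]

def questions_due(position_seconds, lesson):
    """Return the questions the player should have paused for by now."""
    return [q for q in lesson if q.at_seconds <= position_seconds]

print(len(questions_due(60, lesson)))  # 1
```

The applications described below layer editing tools, classroom management, and reporting on top of essentially this structure.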
The interfaces of Web 2.0 applications that help create and publish interactive video lessons are presented under this heading, together with an explanation of how they can be used for assessment and evaluation purposes in education. In this way, teachers can compare the applications’ features and choose the most appropriate one for their needs.

EdPuzzle
EdPuzzle is a Web 2.0 application that enables its users to cut videos, add sound, and add questions to ready-made videos or videos that they make themselves (I Love Edu, 2018). Users can reach EdPuzzle for free at https://edpuzzle.com/ . There is an automatic sign-up option for users who have a Gmail or Edmodo account. In this application, two different profiles can be created, one as a teacher and the other as a student. Teachers can edit videos and organize lessons with videos via EdPuzzle. Users who register as teachers can create virtual classes by following the steps “My Classes → Add New Class”. Students and even parents can reach these virtual classes through the “Invite Students” option or via the class code and link. The “Connect with Google Classroom” option integrates EdPuzzle with Google Classroom. Teachers can create new videos through “Content → Add Content → Create a Video”. EdPuzzle can edit a chosen video or share a pre-edited video from the YouTube, Khan Academy, National Geographic, TED Talks, Veritasium, Numberphile, and Crash Course video channels. From “Content → Add Content → Upload a Video”, teachers can make their own videos interactive. A video can be edited, previewed, or copied with the corresponding buttons on the video. Chosen parts of the video can be deleted by clicking the “Crop Video” button that appears after clicking the edit button on the video. Synchronized audio can be added with the “Voiceover” button. The video can be paused at a certain point and informative audio notes can be added with the “Audio Notes” button. With the “Quizzes” button, open-ended questions, multiple-choice questions, or discussion questions can be added at the desired moments of the video, and feedback can be provided for the open-ended and multiple-choice questions. Once the question, the answer choices, and the feedback are entered, the user should press the “Save” button. The edited video can be shared with a virtual class by following the steps “Finish → Assign to Class → Virtual Class Name → Assign”. With the help of the “Prevent Skipping” option, students are prevented from skipping certain parts of the video. A link and an embed code for the edited video can be created through the steps “Finish → Public Links”. All the assignments given to the students, and the completed and graded tasks of the virtual classroom, can be reached via “My Classes → Due Assignments”. If the teacher wants to set a due date for an assignment, “My Classes → No Due Date” should be chosen, then an assignment should be picked and a due date entered. In this section, if a student has completed the task, the teacher can see his or her answers to the questions on the video, and can also enter the score of the student who completed the task. Via the “My Classes → Students” link, students can be invited to the virtual classroom or removed from it. On the “Gradebook” menu at the top of the screen, the time each student spends watching each video and the students’ grades are listed in a chart. On the student interface, students can log in to the virtual classroom using the code given by the teacher. The assignments given by the teacher are listed under the “My Classes” menu; from the same link, the student can watch the videos and view his or her grade and the teacher’s feedback.

EdPuzzle Practice
1. Access the application at https://edpuzzle.com/ .
2. Enter the relevant keywords in the search field.
3. Select the desired video channel from the “Popular Channels” section.
4. On the screen that appears, press the edit button on the video that is appropriate to your needs.
5. Select the part that you think is not suitable for your purpose with the selection tools and delete it with the “Crop Video” button.
6. Use the “Voiceover” button to add audio elements such as speech, music, and questions.
7. Click the “Audio Notes” button to add voice annotations that give explanatory information.
8. Click the “Quizzes” button to add an open-ended question, a multiple-choice question, and a comment question to the parts of the video you want.
9. Enter your feedback for the questions by clicking on the feedback text.
10. Click the “Save” button to save the final version of your video.
11. Click the “Finish” button and create a new class with the “Add new class” tool.
12. Share your video in the virtual classroom.
13. Invite 10 students to the virtual classroom through the “My Classes → Students” steps.
14. View the results on the “Gradebook” menu.

PlayPosit
PlayPosit is a free Web 2.0 application that makes lessons more active and fun by enabling its users to add interactive questions to ready-made videos or videos that they make themselves (Lorch, 2018). Users can access the application at https://learn.playposit.com/learn/ and watch its introductory video at https://knowledge.playposit.com/ . Users who already have a Gmail, Edmodo, Clever, or Office 365 account can register automatically. The system allows an account to be created as a teacher or as a student. The teacher interface contains several menus where the teacher can create interactive videos: “My Bulbs”, “Classes”, “Units”, and “Premade Bulbs”. Virtual classrooms can be created through the “Class” menu, and chapters can be created through the “New → Units” steps; videos can then be added to these virtual classrooms and chapters. Students can be added to a virtual classroom by following the link “Class” → “Students”; a link, a classroom code, or a file with the “csv” extension can be used to add them. To create or edit new videos, “Bulb” should be followed. The desired video can be transferred to the application by entering its link, by searching for a specific video on the YouTube channel, or by uploading a video that the teacher made. After the video is uploaded, editing options appear. At a specific moment of the video, the user can click the “Add Question” button and enter the question and the answer choices in the appropriate spaces. Questions can be added in the formats of multiple choice, check all, fill in the blanks, reflective pause, web embed, polling survey, and discussion forum. Besides text, images, links, source code, special characters, equations, and tables can be added to questions. The multiple-choice, check-all, and fill-in-the-blank questions can be graded automatically, and in multiple-choice questions feedback can be given for each answer option. Created questions can be rearranged at any time. Parts that need to be cut out of the video can be selected by clicking the “Trim Video” button, and the selected parts can be deleted by clicking the “Delete Video” button. Lessons with multiple videos can also be created. When the “Preview” button is clicked, the user can watch a preview of the video. By clicking the “Share” button, the created video can be shared with colleagues, its URL can be shared with registered or unregistered learners, and it can be embedded anywhere the user wants. The user can manage settings such as rewinding or forwarding the video, skipping the questions on the video, or watching the video repeatedly. Viewing reports of videos can be printed, and the created video can be saved. These created videos can be reached from the “My Bulbs” menu, and additional alterations can be made to a video from the “Edit” link. When a deadline is entered from the “Assign” menu, videos can be assigned to the virtual classrooms. The user can categorize the videos into different units by using the “Manage Units” steps. From the “Monitor” link, the user can get the list of students who watched the video, view their answers to the questions, and pull a chart or graph of the feedback the students received and their grades. By picking one or more virtual classrooms under the “Classes” menu and then clicking the “Analytics” button, a detailed list of the students who watched the video, how often they watched it, the answers they gave for each question, and their notes can be reached. A list of the students registered for a specific lesson can be viewed from “Classes → Students”. Videos previously created by others can be viewed from the “Premade Bulbs” menu.
Students can log in to the student interface of the application by using the link given by the teacher or the classroom code. After signing in, students can see their assignments on the screen, answer questions while watching the video, receive feedback based on their answers, take notes on the videos, chat with their friends, and view their completed tasks.
PlayPosit 1. Access the application https://learn.playposit.com/learn/. 2. Follow
from
this
address
→ “Bulb” steps.
3. Click the “Youtube” button on the screen. 4. Enter the keywords you want in the “search field”. 5. In the incoming screen, select the appropriate video and click “Add” button. 6. After watching the video, determine the sections that you think are not appropriate for your purpose. 7. When you click the
button in the
icon, use the
tools that appear on the screen to select the sections should be removed from the video. 8. Click the “Add Internal Crop” button to display the sections that should be removed. 9. Click the section.
button in the
icon in the
icon to delete the relevant
10. Click “Exit crop mode” to exit from “Delete” mode. 11. Click the
button in the
icon.
215
Web 2.0 Applications Can be used in Assessment and Evaluation
12. Select the desired question type (multiple choice, fill in the blank, free response, reflective pause, polling survey, check all, web embed, discussion forum). 13. Enter the requested information and press
.
14. Repeat steps 11-13 to create five different types of questions. 15. Set the sharing options via the “Share button”. 16. Save the generated video following the
→
steps.
Seyhan ERYILMAZ, İlknur REİSOĞLU

TED-Ed

TED-Ed is another Web 2.0 application that enables its users to edit ready-made videos and create video lessons. Users can reach the application at https://ed.ted.com/. In this application, users can create not only teacher and student profiles but also an "other" profile, which distinguishes it from the previous two applications. Users who have a Facebook account can sign up automatically. After logging into the teacher interface, the user can reach previously created videos by following the "Discover" → "Lessons" steps. A fee must be paid in order to edit these videos. When the user clicks the "Create" → "A Lesson" button, a new video lesson can be created. The video to be used in this part can be found with keywords or with the YouTube link of a video. Clicking on the video and pressing the "Continue" button on the screen that comes up transfers the video into the application for editing. The user will then see the following buttons on the screen: "Let's Begin", "Think", "Dig Deeper", "Discuss" and "And Finally" (Allan, 2018). Under the "Let's Begin" button there is a short introduction to the video. The "Think" button enables the user to add multiple choice or open-ended questions about the video; for multiple choice questions, the correct answer is shown to the students after they complete their answers. With the "Dig Deeper" button, more detailed explanations and links to supporting sources can be offered to users who viewed the video. A discussion about the information the video offers can be started with the "Discuss" button, and the user can give extra information about the important parts of the video with the "And Finally" button, then complete the lesson. The existing video can be replaced with a different one via the "Change Video" link, and parts of the video can be removed via the "Crop Video" link. In order to publish the created video, the user just needs to click on the "Publish" button.

While publishing a video, there is an option "Require students to use TED-Ed accounts", which requires students to open an account, and another option "Don't require students to use TED-Ed accounts", which does not. When the user clicks on the profile sign, a submenu with "Lessons", "Discussions", "Notifications", "Settings" and "Logout" appears. When the "Lessons" menu is chosen, video lessons that were viewed or completed and draft video lessons can be reached, or a new video lesson can be created. Discussions and the teacher's comments can be reached from the "Discussions" menu. From the "Notifications" menu, the user can reach the activities the students did and the notifications about their comments on the discussions. Options such as the teacher receiving notifications through email can be chosen from the "Settings" menu. Unlike the other applications, the student and "other" user interfaces are similar to the teacher interface, which means all users can exploit all the features the application offers. While watching a video, the "Watch", "Think", "Dig Deeper" and "Discuss" options are shown; as one watches the video, any of these options can be picked and activated.
TED-Ed Practice
1. Access the application from https://ed.ted.com/educator.
2. Follow the "Discover" → "Lessons" steps.
3. Click the "Create" → "A Lesson" button.
4. Enter the desired keywords or the URL of a video you know in the search field.
5. Click on a video you want to edit and click "Continue" to transfer the video to the application.
6. Click on the "Crop Video" link to cut out the parts of the video that you specify and remove them from the video.
7. Click on the "Let's Begin" button to enter short information about the video lesson.
8. Click on the "Think" button to enter the open-ended and multiple-choice questions you would like to ask the users who watch the video.
Web 2.0 Applications Can be used in Assessment and Evaluation
9. With the "Dig Deeper" button, provide additional resources and information to users who watched the video.
10. Create a discussion question about the lesson by clicking on the "Discuss" button.
11. Click the "And Finally" button to enter the points that you want to highlight about your video lesson.
12. Click the "Publish" button to publish your video lesson.
Comparison of Interactive Video Applications and Their Use in Education

The comparison of the EdPuzzle, PlayPosit and TED-Ed applications, which enable creating interactive videos, is presented in Table 11.1 based on their main features.

Table 11.1 Comparison of the features of EdPuzzle, PlayPosit and TED-Ed

| Feature | EdPuzzle | PlayPosit | TED-Ed |
| --- | --- | --- | --- |
| Applications that provide automatic log-in | Gmail, Edmodo | Gmail, Edmodo, Clever and Office 365 | Facebook |
| Integrated applications | Google Classroom | - | - |
| Available videos | EdPuzzle, YouTube, Khan Academy, National Geographic, TED Talks, Veritasium, Numberphile and Crash Course video channels, or the user's own videos | YouTube video channel or the user's own videos | YouTube video channel |
| Integrating videos | Not available | Available | Not available |
| What can be done with videos | Removing a section, adding questions and voice notes, video voiceover | Removing a section, adding questions | Removing a section, adding questions, adding descriptive and summative information |
| Types of questions that can be added | Multiple choice, open-ended and reflective | Multiple choice, check all, fill in the blank, reflective pause, web embed, polling survey and discussion forum | Multiple choice, open-ended and discussion |
| Devices available to use | Tablet/computer, mobile | Tablet/computer | Tablet/computer, mobile |
| Feedback type presented to the student | Constant | Variable | Constant |
| Create a virtual classroom | Available | Available | Not available |
| Information in the results report | Tables and graphs of the students' answers to the questions in the videos and the total score they received | Summarized information on how many people watched the video and how many people completed the questions | Visual status of students watching the videos, scoring and monitoring time |
The distinctive features of the three applications can be seen when Table 11.1 is analyzed. EdPuzzle supports more video channels and sources and offers more options while editing a video, whereas PlayPosit gives the opportunity to create more different types of questions. Teachers can choose the most appropriate application by considering this table.

Interactive videos, which can be used at any stage of the learning process, are nowadays frequently used to create flipped learning environments. Teachers can create video lessons with videos that are suitable for a flipped learning environment using any of the applications listed above, and students can then work on lessons and subjects regardless of time and place. Interactive videos can be used not only throughout a whole lesson but also effectively at a specific point of a lesson. Students' attention and motivation can be increased by using catchy videos and adding stimuli such as sounds, questions and signs at the beginning of a lesson. The prior knowledge and readiness of students can be determined by adding questions to a video. During the teaching process, students can be asked to watch the videos, answer the questions, and hold discussions as a group. With settings that prevent students from skipping questions or parts of a video, students can be motivated to put personal effort into learning the content and, if it is enabled, to exchange ideas with peers. Finally, at the end of a learning process, interactive videos can be used as a means of reinforcement or level determination. Videos can make the assessment and evaluation process more fun, assessment and scoring can be expedited with the help of report sheets, and teachers can follow student development and progress more closely. Interactive videos are suitable for various disciplines such as science, social studies, language teaching and art. For example, students' ability to understand what they watched can be determined by using a video of a short incident, and empirical subjects in science lessons can be taught in more detail using explanations added to the videos.

Gamification Applications

Gamification is the use of game components to engage the user or enhance the user experience in applications that are not real games (Domínguez et al., 2013). The use of gamification in educational applications can attract students' attention, provide motivation, streamline the learning process, improve student engagement, and make assessment and evaluation fun.
In addition, gamification supports students in learning a topic, repeating what they have learned, and studying as a group or at their own pace. Through gamification, the students or groups that answer the questions fastest and correctly are rewarded with visuals, and their success is reinforced. The answers and results are also displayed on a big screen where every student can see them. Thus, gamification makes the assessment and evaluation process more effective, and a respectful but competitive environment is created among the students, which pushes them to be more careful and faster. In this part, Web 2.0 applications that enable combining assessment and evaluation activities with gamification elements are introduced, and it is explained how they can be used for educational purposes. In addition, the features of these Web 2.0 applications are compared to help teachers choose the most appropriate application based on their needs.

Kahoot

Kahoot is a free and easy-to-use application that gamifies the assessment and evaluation process and allows the use of ready-made games. It has two separate pages: one for creating games and one for joining a game. There are four user types in this application: teacher, student, social, and at work. It can be reached at https://kahoot.com/welcomeback/, and registration can be completed through the "Sign Up" button. Users who have a Google or Microsoft account can register automatically. When the user enters the teacher interface of Kahoot, previously created games can be reached from the menu at the top of the screen, where the games the user created are listed. A quiz, discussion, poll or jumble can be created by clicking the create button. When "Quiz" is clicked, the user can enter a title and a description for the quiz and choose who can view it, its language, its target audience, supplementary resources, and an introductory video if there is one; the information is then saved with the save button. On the screen that comes up, questions can be formed with the "Add Question" icon. In this part the question, the answer choices and information about the sources that were used are entered, the correct answer can be marked, a time limit can be set, and an image or a video about the question can be uploaded. A quiz can only consist of multiple choice questions with a minimum of two and a maximum of four answer options. A list of the created questions can be displayed, and the icons next to each question allow it to be edited, deleted or copied. The quiz can be saved with the save button. On the next screen, changes can be made to the quiz, a preview is possible, the game can be played, and a link can be generated or sent by email. When the game is started, a number called a "game pin" appears on the screen. The question order and answer order can be changed, and automatic switching between questions is possible with the buttons that appear afterwards. Students join the game using the game pin, and the start button should be pressed when everything is completed. Questions with only two answer options can be formed in the discussion part, questions with at least two answer options can be formed in the poll part, and ranking questions can be formed in the jumble part. In the reports menu, detailed Excel reports supported with graphs can be generated for each student and each question.

In the student interface, once the students enter the game pin, they can start playing the game under a nickname. When in use, the questions and answers can be projected on the smart board, and the students see the symbols representing the answer options on their smart phones or tablets. Students should try to answer the questions as quickly as possible. After each question, a list is shown of the students who answered quickest and correctly, together with a graph of how many students answered that question correctly. The student with the quickest correct answer gets the highest score. The same procedure is followed until the last question is answered, and the winner is announced at the end of the game. This way, students' knowledge and skills can be assessed in a fun and competitive environment. At the end of a quiz, students can rate the game. The game can be played individually or in teams.

Kahoot Exercises
1. Access the application from the https://kahoot.com/welcomeback/ or https://create.kahoot.it/ web sites.
2. Follow the "Create" → "Quiz" steps.
3. Enter the descriptive information, such as the name and language, and click on the save button.
4. Click the add question symbol.
5. Enter your multiple-choice question, the options, a video or pictures if available, and the reply time in the corresponding fields on the incoming screen, and specify the correct option or options.
6. Click the save button.
7. Repeat steps 4-6 to create 10 questions.
8. Click on the save button after completing the question entry.
9. Click the play symbol.
10. Adjust the game settings with the buttons provided.
11. Once you have made the adjustments, click on Classic or Team to create your game pin.
12. Share the game pin with the students and ensure they access the game at https://kahoot.it/.
13. Return to the homepage and follow the "Create" steps again, this time choosing the jumble option.
14. Follow steps 3-12 to create and share your ranking questions with your students (take care to enter the options of the problem in the correct order).
15. Access reports about the games you created from the reports menu.
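The speed-based scoring described above, where the quickest correct answer earns the most points, can be illustrated with a small function. The formula below is a simplified assumption for illustration only; it is not Kahoot's published scoring algorithm.

```python
def score(correct: bool, answer_seconds: float, time_limit: float,
          max_points: int = 1000) -> int:
    """Illustrative speed-based scoring: a correct answer earns more
    points the faster it is given; wrong answers earn nothing.
    (Assumed linear-decay formula, not Kahoot's actual algorithm.)"""
    if not correct or answer_seconds > time_limit:
        return 0
    # Full points at t = 0, half of the points at the time limit.
    return round(max_points * (1 - answer_seconds / (2 * time_limit)))

# Two students answer correctly; the faster one scores higher.
fast = score(True, 2.0, 20.0)
slow = score(True, 15.0, 20.0)
print(fast, slow)
```

The design choice to make mirrors the behaviour described in the text: correctness gates the reward, while speed determines its size, which is what creates the competitive pressure in the classroom.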
Socrative

Socrative is an application that enables teachers to make a quick individual or group assessment of a certain topic, and it presents the results in reports and graphs (Warwick, 2017). The application can be reached at https://www.socrative.com/, and it can be used on smart phones as well. Users can sign up as a teacher and/or student. In the free version, the teacher can form one virtual classroom with a maximum of 50 students. Students sign in by using the code of the virtual classroom and do not need to sign up. Once the students fill in their names, they can answer the questions and receive feedback for every single one. Teachers can complete the requested information and sign up, or they can use their Google account to sign in automatically if they have one. On the screen that comes up, the following menus exist: "Launch", "Quizzes", "Rooms", and "Reports and Results". New quizzes can be formed and alterations to existing quizzes can be made via the "Quizzes" menu. When "Quizzes" is clicked, pre-existing quizzes appear on the screen, and there is a search option that helps to find a specific quiz. New folders can be created and quizzes can be categorized by using the "Create Folder" icon. The teacher can "Delete" a quiz, "Merge" two quizzes, or "Move" quizzes to a specific folder. A new quiz can be created with the "Add Quiz" → "Create New" option. On the next screen, a title can be given to the quiz, and by ticking the "Align quiz to standard" option, the field, topic, student level and learning outcomes can be entered. From the "Questions" menu, teachers can click on one of the "Multiple Choice", "True/False" and "Short Answer" options and write the questions. It is possible to add images to questions, increase the number of answer options, and add explanations about questions. In order to score a question, an answer should be picked and the "Save" button clicked; the answer will then be saved. Once all the questions are entered, the "Save & Exit" button should be clicked.

When the "Launch" menu is clicked, the following shortcuts appear: "Quiz", "Space Race", "Exit Ticket", "Multiple Choice", "True/False" and "Short Answer". The user can click on "Quiz" to start a previously prepared quiz, or search for a specific quiz by name, select it, and then click the "Next" button. From the "Choose Delivery Methods and Settings" part, the user should choose one of the options "Instant Feedback", "Open Navigation" or "Teacher Paced". If the "Instant Feedback" option is selected, students are required to answer the questions in a specific order; they cannot change their answers afterwards, and they receive instant feedback after each question. If the "Open Navigation" option is selected, students can answer the questions in any order they prefer and can change their answers. When the "Teacher Paced" option is selected, the teacher controls the question flow and sees the answers. With the "Required Names", "Shuffle Questions", "Shuffle Answers", "Show Question Feedback" and "Final Score" options, the names of the students who complete the quiz can be recorded, the questions and the answer options can be shuffled, feedback can be given to students, and the final scores can be viewed by the students. The students' answers to each question can be seen after clicking on the "Start" button. Once all the students complete a quiz, a report can be pulled by clicking on "Finish" and the report icon, and a graph can be generated by clicking on the graph icon.

The "Space Race" option should be selected in order to create a fun and competitive environment in the classroom and make students answer the questions in groups. One of the previously prepared quizzes should be chosen and the "Next" button clicked; then the number of groups that will participate in the game should be entered. The "Auto assign" option should be selected to give automatic names to groups or, if students are going to pick their group names, the "Student selection" option should be selected. An icon that will show progress should also be selected. The same settings that were explained for the "Quiz" part should be completed and then the "Start" button clicked. The progress of the groups that answer the questions correctly and quickly is shown on a graph. With the "Finish" button, the same reports as in the "Quiz" part can be reached. The "Exit Ticket" part is used to get students' feedback on the lessons. In this part, questions such as "How well did you understand today's material?" and "What did you learn in today's class?" are directed to the students; however, the language options are limited in this part. By using a quick question, the teacher can write the question and answers on the board or state them verbally, and the students answer these quick questions through Socrative. The number of students who choose each answer option is shown on a graph. The teacher can create a virtual classroom from the "Rooms" menu; a fee must be paid in order to create more than one classroom. In the "Reports" menu, the results of previous quizzes and space races can be reached via the corresponding icons. The "Whole Class Excel", "Individual Students" and "Question Specific" options come up once these icons are clicked on. The "Whole Class Excel" option pulls a report including each student's final score, number of correct answers, and answers for each question. The "Individual Students PDF" option creates a PDF file which shows each student's correct answers and mistakes on the digital test paper. The "Question Specific PDF" option, on the other hand, shows the number of students who chose each answer option for multiple choice and true/false questions, and shows the students' answers for open-ended questions. The teacher can email these reports, download them as PDF files, or save them on Google Drive with the corresponding icons.
Socrative Exercises
1. Access the application from https://www.socrative.com/.
2. Log in via "Teacher Login".
3. Follow the "Quizzes" → "Add Quiz" steps to create a quiz.
4. Give the quiz a name on the incoming screen.
5. Select the "Align quiz to standard" checkbox to enter the quiz's field, subject, student level and learning outcomes.
6. Click on the "Multiple Choice" button to enter the question, the option(s), and the correct option.
7. Create 2 multiple choice, 2 true/false, and 2 short answer questions.
8. After all question entries are completed, press the "Save & Exit" button.
9. Enter the "Rooms" menu and give your students the virtual class code.
10. Select the quiz you created by following the "Launch" → "Quiz" steps and press the "Next" button.
11. Give instant feedback via "Instant Feedback".
12. Activate the settings to record the names of the students who complete the quiz, shuffle the questions and answers, give feedback to the students, and let the students see the total score they received.
13. Click the "Start" button.
14. Wait for the students to complete the quiz.
15. Click on the "Finish" button and click the report icon to get the results in a report.
16. Enter the "Reports" menu, click on the quiz you have completed, and click the report icon.
17. Click "Individual Students PDF".
18. Click on the download icon to download the report as a PDF.
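The whole-class report described above (each student's total of correct answers plus per-question results) can be sketched as a simple aggregation over an answer log. The data layout here is an assumption for illustration; Socrative itself exports this as an Excel or PDF report.

```python
from collections import Counter

# Hypothetical answer log: (student, question number, answered correctly?)
answers = [
    ("Ali", 1, True), ("Ali", 2, False), ("Ali", 3, True),
    ("Ayşe", 1, True), ("Ayşe", 2, True), ("Ayşe", 3, False),
]

def whole_class_report(answers):
    """Correct answers per student and per question (illustrative sketch)."""
    per_student = Counter(s for s, _, ok in answers if ok)
    per_question = Counter(q for _, q, ok in answers if ok)
    return per_student, per_question

students, questions = whole_class_report(answers)
print(dict(students))   # how many questions each student got right
print(dict(questions))  # how many students got each question right
```

The two counters correspond to the two report views in the text: the per-student totals feed an "Individual Students" style report, while the per-question counts feed a "Question Specific" style report.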
Plickers

Plickers is an application that turns assessment and evaluation into a fun process by using gamification. It can be reached at https://get.plickers.com/. Once all the requested information is filled in via the "Get Started" button, the user can log in. After logging in, information about the application and its use can be reached from the "Getting Started Guide", "What's New" and "Help" links at the top of the screen. The teacher needs to open the application both on his/her smart phone and on the web. A question created through the application is projected onto the smart board, and the students are asked to choose and display the correct answer on the cards that are distributed to them. While the application is running on the phone, the teacher can scan the cards that the students hold up by using the camera icon on the question, and can instantly check how many students (and which students) answered the question right or wrong. Other than the Plickers cards that the students need to have, it does not require any equipment, which makes the application easy to use (Mitchell, 2016). Questions can be prepared beforehand, or new questions can be created during the lesson. The Plickers cards that are distributed to the students can be reached by following the "Help" → "Get Plickers Cards" steps. The Plickers card packages are created based on the number of students in the classroom and the size of the cards; when the appropriate option is selected, the cards can easily be downloaded and printed. The card sets go up to 63 cards, which means there can be a maximum of 63 students in the classroom. There is a number on each card, and the students and the cards are matched using these numbers. Each card is different, and the cards are distributed to the students so that the number on the card matches the student's number. All four corners of a card are used for a different answer option; when answering a question, the students should make sure they display the correct corner/answer option on top. New card games can be created using the "New Set" button. On the screen that comes up, the title of the game is entered in the "Untitled Set" section, and then multiple choice (with a maximum of 4 answer options) or true/false questions are entered. If there is only one correct answer to a question, the "Graded" option should be selected; otherwise, the "Survey" option should be clicked. Images can be added to the questions, and the "Set as True/False" button converts the question type to true/false. New questions can be created, a question can be copied with the "Duplicate" button, and a question can be deleted with the "Delete" button. The created questions can be transferred to virtual classrooms using the "Add to Queue" button. The games that were recently played are listed on the "Recent" menu, and a search word can be written in the search tab to find a specific game. On the "Your Library" menu, there is the search tab, and there are buttons to create a new game, a new folder or a new question, and a trash button.
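The idea that one card can encode four different answers, depending on which corner is held on top, can be illustrated with a small lookup. The mapping below is purely illustrative; real Plickers cards encode the card number and answer in a proprietary visual pattern that the phone camera recognizes.

```python
# Purely illustrative: map a card's rotation (which corner is up) to an answer.
# The actual Plickers encoding is a proprietary visual pattern; this mapping
# is an assumption used only to show the principle.
ROTATION_TO_ANSWER = {0: "A", 90: "B", 180: "C", 270: "D"}

def decode(card_number: int, rotation_degrees: int) -> tuple[int, str]:
    """Return (student's card number, chosen answer) for a scanned card."""
    answer = ROTATION_TO_ANSWER[rotation_degrees % 360]
    return card_number, answer

print(decode(12, 90))  # card 12 held with the "B" corner on top
```

This is why each student can answer any multiple-choice question with a single printed card: the card identifies the student, and its orientation identifies the answer.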
By right-clicking on a game and choosing "Move to Folder", the user can move the game into any folder. Saved reports and specific analysis results can be found in the "Reports" menu, and daily, weekly, monthly, or 3-month reports can be pulled from the "Scoresheet" menu. With the settings button, the teacher can choose whether the students' names, card numbers, percentages, and the distribution of true and false answers are shown on the screen or not. In the "Classes" section, the teacher can click on "New Class", enter a class name, and then click on "Create Class" to create a new classroom. When the "Add Students" button is clicked inside a classroom, student name and surname information can be entered, and by clicking on the "Next" button the students can be added to that classroom. The students can be added one by one or copied and pasted from a document, but only one student name should be written on each line. A sorting option for the student list should then be selected (as entered, sort by first name, or sort by last name), and a list of students in the chosen order is created with the "Done" button. One card number is given to each student. These card numbers can be reached by clicking on the "Class Roster" button, and the cards should be distributed to the students according to these numbers. The teacher can edit the class name, color, subject and year of the class with the edit icon. The game can be started with the "Play Now" button. The "Student List" button at the top of the screen allows the teacher to adjust when the students' list will be shown (display while scanning, never display, always display), how the students will be listed (by name or card number), whether to display card numbers (Hide/Show), and whether to show individual reports (Hide/Show). Graph display options (manually/always), the color of a wrong answer (Green/Red), the method of showing answer options (show check and cross/show choice letter) and displaying the number of answer options (Hide/Show) can be adjusted with the "Display Options" button.

Plickers Exercises
1. Access the application from https://get.plickers.com/.
2. Create a class by following the "Classes" → "New Class" → "Enter the class name" → "Create Class" → "Add Students" → "Enter the name and surname of the student" → "Next" → "Done" steps.
3. Print the Plickers cards via the "Help" → "Get Plickers Cards" steps.
4. Distribute the cards according to the pairing in the "Class Roster" of the class you created.
5. Use the "New Set" button to create two multiple choice and four true/false questions.
6. With the "Add to Queue" button, transfer the questions to the class you created.
7. Click on the "Play" or "Play Now" button in the class whose students will solve the questions to ensure that the students can reach them.
8. Click on the "Student List" button on the screen to make the relevant adjustments.
9. Click the "Display Options" button on the screen to make the relevant adjustments.
10. Download the Plickers app to your smartphone and use the scan button to scan the answers of the students.
11. View the reports in the "Reports" menu.

Quizlet

Quizlet is an application that allows creating online study sets and playing several online games. It is essentially used for teaching languages, concepts, and terms. With the help of this application, assessment and evaluation can be done through flashcards, multiple choice, open-ended and matching questions, and games (Powers, 2019). The application gives feedback to the users, offers ready-made materials, and gives the teacher the chance to create his or her own materials. Quizlet can be reached at https://quizlet.com/tr. By clicking the "Sign Up" button, users can enter the requested information and register; they can also sign in automatically using a Google or Facebook account. Help and additional information can be found via the "Help Center" link on the left side of the screen, instructions for teachers can be reached by clicking on "Teachers", and a mobile version of the application can be downloaded by clicking on "Mobile". Using the "Search" button at the top of the screen, a keyword can be entered to find a previously prepared study set. When the "Create" button is clicked, the user can start creating a study set and enter a title for it on the screen that comes up. Under the statement "Visible to everyone" there is a "Change" button with which the teacher can choose who can view the study set (everybody, certain classes, users with a password, or just me). Under the statement "Only editable by me" there is another "Change" button with which the teacher can choose who can edit the study set (certain classes, users with a password, or just me). An image can be uploaded for the matching game by using the "Pick an image" button; reference points for concepts or terms can be created on the image, and additional information can be entered in this part. For example, a cell image can be uploaded, reference points can be created on all the organelles, and the function of each organelle can then be explained. A term or a concept can be written, and audio can be added, in the "Term" line. The equivalent of a concept or term in other languages, its definition, a related image, or an audio explanation can be added in the "Definition" line; however, the audio recording option is not available in the free-of-charge version. When adding a term or a definition, the teacher can use the application's suggestions, add images, or create audio recordings with the corresponding icons. In order to add more terms or definitions, the teacher needs to click on the "Add Card" link; selected terms can be deleted with the delete icon. By clicking on the "Create" button, study materials are created under the titles flashcards, learn, write, spell, test, and games (match, gravity, and live). In all these materials, instant feedback is provided on student answers, and for each material the language, question type and number of questions can be changed with the "Options" button.

Under the title "Learn", learning is tested by letting the student move to the next level as he or she progresses, and during this process the student faces and answers harder questions. For instance, there are multiple choice questions in the first round and short answer questions in the second round: while the students are asked to pick an answer in the first part, in the second part they have to write the answer themselves. In the "Flashcards" menu, the term is on one side of the card and its definition is on the other; users have the option to shuffle the cards, move to the next card automatically, or click manually to move to the next card. Under the title "Write", users are asked to enter the term that matches the definition shown on the screen. In the "Spell" section, the audio of a term is played and the students are asked to type what they hear in the related space. In the "Test" section, multiple choice or open-ended questions are directed to the students. In the "Match" game, a term is matched with its definition, and the length of the game and the quickest time are displayed. In the "Gravity" game, the aim is to protect the planets from incoming asteroids, which are the terms; the students need to enter the definitions before the asteroid/term hits the surface of a planet, or vice versa. In the "Live" section, teams of at least 4 students are formed, and the team that answers the questions correctly in the quickest time wins the game; however, in order to play, the teacher needs to use a computer and each student or team needs a tablet or a mobile phone. With the edit icon under the title of the study set, users can edit the set, and with the share icon the study set can be shared on Google Classroom or Remind. Following the "Create a New Class" step, the study set can be shared with a virtual classroom created on Quizlet. The teacher can also view the number of students in a virtual classroom and the number of classrooms the set is shared with. The progress of a class can be viewed as well, but this requires an additional fee. Through the more-options icon, the study set can be copied, the student scores can be listed, the study set can be printed in different formats or combined with other previously created study sets, the texts in the study set can be exported, and the study set can be deleted. On the main page, when the "Create a Class" link is clicked, a new class can be created. On the next screen, the created study sets can be added to the virtual classroom, students who have an email address can be added to the class, and Google Classroom can be connected, all with the corresponding icons.
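The term-definition study sets described above can be modeled as simple pairs, and the "Learn" progression (multiple choice first, then written answers) as two passes over the set. This is an illustrative sketch only; the names and structures below are assumptions, not Quizlet's API.

```python
import random

# A study set as a list of (term, definition) pairs (illustrative model).
study_set = [
    ("mitochondrion", "organelle that produces energy"),
    ("ribosome", "organelle that synthesizes proteins"),
    ("nucleus", "organelle that stores genetic material"),
]

def multiple_choice(term, study_set, k=3, rng=random):
    """Round 1 of a Learn-style session: pick the right definition among k."""
    correct = dict(study_set)[term]
    others = [d for t, d in study_set if t != term]
    options = rng.sample(others, min(k - 1, len(others))) + [correct]
    rng.shuffle(options)
    return options, options.index(correct)

def check_written(term, typed):
    """Round 2: the student must type the term for a shown definition."""
    return typed.strip().lower() == term.lower()

options, correct_idx = multiple_choice("ribosome", study_set)
print(options[correct_idx])  # always the correct definition
print(check_written("nucleus", " Nucleus "))
```

The two functions mirror the increase in difficulty the text describes: recognizing an answer among options in the first round, then producing it from memory in the second.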
Quizlet
1. Access the application from this address: https://quizlet.com/tr.
2. Create a new virtual class by clicking the “Create class” menu on the main page.
3. Add the e-mail addresses of the students to the virtual class with the corresponding icon.
4. Click the “Create” button and enter the name of your study set.
5. Upload an image of your subject that is suitable for use in the matching game.
6. Create reference points on the image and enter descriptions for the reference points.
7. Enter your concepts or terms in the “Term” field, together with your voice annotations.
Web 2.0 Applications Can be used in Assessment and Evaluation
8. Add descriptions, images, and audio descriptions of your concepts or terms in the “Description” field.
9. Enter at least six terms by repeating steps 7-8.
10. Click the “Create” button.
11. Share your study set in your virtual class with the share icon.
12. After using your study set, click on the icon to display the scores of the students.
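The scoring idea behind a Match-style round, where a fully correct round in the quickest time wins, can be sketched as follows; the study set and the timing logic here are a hypothetical illustration, not Quizlet's actual implementation:

```python
import time

# A hypothetical study set: term -> definition (not an actual Quizlet set).
study_set = {
    "mitosis": "cell division producing two identical daughter cells",
    "osmosis": "diffusion of water across a semipermeable membrane",
    "enzyme": "protein that catalyzes a biochemical reaction",
}

def play_match(study_set, answers):
    """Check a student's term->definition pairings and time the round,
    as in a Match-style game where the quickest correct round wins."""
    start = time.monotonic()
    correct = sum(1 for term, definition in answers.items()
                  if study_set.get(term) == definition)
    elapsed = time.monotonic() - start
    return correct, len(study_set), elapsed

# A fully correct round:
score, total, elapsed = play_match(study_set, dict(study_set))
print(f"{score}/{total} matched")  # → 3/3 matched
```

Keeping the elapsed time alongside the score is what lets such games rank otherwise equal rounds.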
BookWidgets

BookWidgets is an application that allows you to create interactive quizzes, worksheets, and games easily using a computer, tablet, or mobile phone (Burns, 2016). BookWidgets differs from traditional methods and transforms classroom activities into fun and interactive experiences. Many of the activities developed with this application are scored automatically, and feedback is sent to the students instantly. In addition, teacher-developed activities can easily be shared on Google Classroom, Moodle, Canvas, and Schoology.

The application can be reached from the https://www.bookwidgets.com/ address. Once the required information is filled in under the “Sign In” menu, registration is completed; alternatively, users with a Google or Smartschool account can sign in automatically. On the screen that comes up, there is a “Home” menu. From this menu, the user can access previously developed widgets, student work, information about new features of the application, suggestions for activities that can be done with BookWidgets, and help resources. From the “Widgets” menu, the user can reach previously created widgets, virtual classrooms, and sample widgets. Using the steps “Widgets” → “My Widgets” → “Create New Widgets” → “Test & Review”, activities such as exit slip, split whiteboard, webquest, flash cards, split worksheet, whiteboard, quiz, timeline, and worksheet can be reached. If the steps “Widgets” → “My Widgets” → “Create New Widgets” → “Games” are followed, games such as Bingo Card, Jigsaw Puzzle, Pair Matching, Word Search, CrossWord, Memory Game, Randomness, HangMan, MindMap, and Spot the Difference can be reached. If you click on the “Widgets” → “My Widgets” → “Create New Widgets” → “Pictures & Videos” section, new activities with videos and visuals can be created. The names of these activities are “Before/After”, “Image Carousel”, “Random Images”, “Frame Sequence”, “Image Viewer”, “3D”, “YouTube Player”, “HotSpot Image”, “Piano”, and “TipTiles”. From the “Widgets” → “My Widgets” → “Create New Widgets” → “Math” section, the “Active Plot”, “Spreadsheet”, and “Arithmetic and Chart” activities can be obtained. The activities can be enriched with visuals and shared; the warnings or feedback in the activities can be manually translated into any language, and the student answers can be saved. The created widgets can be previewed with the “Preview” button; ready widgets can be shared on Google Classroom with “Get Shareable Link”, emailed to specific people, or downloaded with the corresponding icon. From the “Widgets” → “My Groups” section, virtual classes can be created, and students can be added to these classrooms.

The students can be asked about their feelings and opinions on a lesson with the “Exit Slip” activity in the “Test & Review” section. Three emojis are used to show feelings while answering these questions. As for student opinions, open-ended questions can be asked, or visuals can be added to the questions. The “Split Whiteboard” and “Whiteboard” activities can be used when the students are required to answer with a drawing; the students can be asked to draw parts or pieces of a given visual. In the “Split Whiteboard” game, the students are given directions and asked to draw based on those instructions. “WebQuest” helps to create the steps of inquiry-based learning activities. Two-sided cards can be created with “Flash Cards”. Activities consisting of question types such as open-ended, multiple choice, matching, fill in the blanks, and putting in order, which can also be supported with visuals, can be created from the “Worksheet” and “Quiz” sections. Questions about a given paragraph can be created with “Split Worksheet”. And “Timeline” allows students to put given events in order on a visual. The “Bingo” game in the “Games” section is very similar to the Turkish game Tombala.
The teacher pronounces a word and, if that word or the corresponding visual is on the students’ screens, they click on it. “Crossword” is a game consisting of horizontal and vertical columns in which some letters are shared. “Hangman” is a word-guessing game. “Jigsaw” is a game in which the student tries to complete a whole image by putting pieces of a visual together. In “Memory Game”, related or identical pieces are matched. “Mind Map” enables the students to visualize a statement, object, or term that is related to other terms the teacher gives. In “Pair Matching”, the students match symbols, visuals, and statements that mean the same thing. In “Randomness”, the student chooses the odd text or visual out. “Spot the Difference” is based on spotting the differences between two given pictures. “Word Search” is based on finding a list of words in a jumble of letters. And the arithmetic activity in the “Math” section enables students to practice arithmetic operations.

The “Grades & Reportings” → “Students Work” link enables the teacher to obtain the student scores, their answers, and the time and date they completed the task once a widget has been applied. Following the “Grades & Reportings” → “Exit Slips” steps, the students’ feelings and opinions about a lesson can be viewed.

BookWidgets Practice

1. Access the application from this address: https://www.bookwidgets.com/.
2. Create a webquest, a quiz, and a split worksheet with the “Widgets” → “My Widgets” → “Create New Widgets” → “Test & Review” steps.
3. Create a CrossWord, a Memory Game, and a HangMan with the “Widgets” → “My Widgets” → “Create New Widgets” → “Games” steps.
4. Create an Arithmetic activity with the “Widgets” → “My Widgets” → “Create New Widgets” → “Math” steps.
5. Create a virtual class with the “Widgets” → “My Groups” steps.
6. Share the widgets you created in steps 2-4 in your virtual classroom.
7. Share each widget with two friends via email with “Get Shareable Link”.
8. Monitor the results belonging to the people using your widgets with the “Grades & Reportings” → “Students Work” steps.
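Automatic scoring with instant feedback, as the activities above provide, can be modeled roughly like this; the question IDs, answers, and feedback messages are invented for illustration and do not reflect BookWidgets internals:

```python
# Hypothetical answer key for a three-item widget: question id -> correct answer.
answer_key = {
    "q1": "B",
    "q2": "True",
    "q3": "photosynthesis",
}

def grade(responses, answer_key):
    """Score a submission automatically and build per-question feedback,
    mimicking a widget that reports results to the student instantly."""
    feedback = {}
    score = 0
    for qid, correct in answer_key.items():
        given = responses.get(qid, "").strip()
        if given.lower() == correct.lower():
            score += 1
            feedback[qid] = "Correct"
        else:
            feedback[qid] = f"Incorrect, the expected answer was: {correct}"
    return score, feedback

score, feedback = grade({"q1": "B", "q2": "False", "q3": "Photosynthesis"}, answer_key)
print(score)  # → 2
```

Case-insensitive comparison is one simple way such tools tolerate harmless variation in typed answers.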
Comparison of Gamification Applications and Their Use in Education

The Kahoot, Socrative, Plickers, Quizlet, and BookWidgets applications, which are used to gamify the assessment and evaluation process, are compared in Table 11.2 in terms of some of their main features.
Table 11.2 Comparison of the Kahoot, Socrative, Plickers, Quizlet, and BookWidgets applications in terms of some basic features

Basic use - Kahoot: gamification; Socrative: gamification, creating an assessment tool; Plickers: gamification; Quizlet: gamification, creating study sets; BookWidgets: gamification, creating teaching-learning activities.

Applications that provide automatic login - Kahoot: Microsoft, Google; Socrative: Google; Plickers: Google; Quizlet: Facebook, Google; BookWidgets: Smartschool, Google.

Integrated applications - Kahoot: not available; Socrative: Google Drive; Plickers: not available; Quizlet: Google Classroom, Remind; BookWidgets: Google Classroom, Moodle, Canvas, Schoology.

Available question types - Kahoot: multiple choice, survey, ranking (jumble); Socrative: multiple choice, true/false, open-ended; Plickers: multiple choice, true/false; Quizlet: flash cards, writing, listening-writing, multiple choice, open-ended, matching; BookWidgets: open-ended, multiple choice, matching, fill in the blank, sorting.

Developable game types - Kahoot: correct answer as soon as possible; Socrative: correct answer as soon as possible; Plickers: card game; Quizlet: gravity game, matching, live; BookWidgets: crossword, hangman, jigsaw, memory game, mind map, pair matching, randomness, spot the difference, word search, math.

Game play options - Kahoot: individual, team; Socrative: individual, team; Plickers: individual; Quizlet: individual, team; BookWidgets: individual.

Archiving - Kahoot: available; Socrative: available; Plickers: not available; Quizlet: available; BookWidgets: not available.

Equipment needed for use - Kahoot: computer, smart board, tablet, smartphone; Socrative: computer, smart board, tablet, smartphone; Plickers: tablet, smartphone, computer, Plickers cards; Quizlet: tablet, computer, smartphone; BookWidgets: tablet, computer, smartphone.

Feedback type - Kahoot: correct answer, success ranking; Socrative: correct answer, success ranking; Plickers: correct answer; Quizlet: correct answer, success ranking, progress according to students' achievements; BookWidgets: correct answer.

Creating a virtual classroom - Kahoot: not available; Socrative: available; Plickers: available; Quizlet: available; BookWidgets: available.

Participation of students in the game - Kahoot: game PIN; Socrative: room code; Plickers: adding students to the created virtual classroom; Quizlet: adding students to the created virtual classroom, Google Classroom, Remind; BookWidgets: link, Google Classroom, e-mail.

Information in the result report - Kahoot: student- and question-based graphics and visuals; Socrative: student- and question-based graphics and visuals; Plickers: daily, weekly, monthly, quarterly reports; Quizlet: progress according to students' achievements; BookWidgets: students' scores, answers, and the date when they completed the activity.
Based on the information in Table 11.2, which shows the differences among the gamification applications, teachers can choose the appropriate application depending on the students’ interests, their level, and the topic of the lesson. Quizlet and BookWidgets, as explained above, can be used at any stage of the learning-teaching process. Kahoot, Socrative, and Plickers, on the other hand, are more appropriate for examining students’ prior knowledge and preparedness on a topic and for evaluating student success. Moreover, with the use of Kahoot, Plickers, and Socrative, the students or groups with the quickest answers can be rewarded with visuals on the big screen in front of the whole class, which creates a competitive environment and motivates the others to be more careful and quicker. Kahoot, Socrative, Plickers, and BookWidgets can be used when teaching many subjects; Quizlet, on the other hand, can be used when teaching languages, notions, and terms. Quizlet not only gives feedback to the students but also supplies data regarding student progress. The prominent feature of Quizlet is that, once the flashcards are created, the application automatically creates several different games and questions based on the same content. The application that has the most games is BookWidgets. However, unlike Quizlet, which generates different games from the entered information, in BookWidgets each game needs to be created separately. Quizlet can generally be used when the notions and terms of a topic are being taught, and sets including different kinds of activities (question, test, and game) can be created; the activities in the rest of the gamification applications are independent of each other. Kahoot, Socrative, Quizlet, and BookWidgets allow the use of previously created games, whereas Plickers only allows users to create their own games and quizzes. In terms of the equipment the applications require, Plickers is the most practical: once the Plickers cards are printed, they can be used in more than one lesson and in more than one classroom, and it does not require the students to sign up or have any equipment.

Applications that Develop Assessment and Evaluation Tools

Some applications that can be used to develop rubrics, which are commonly used in assessment and evaluation, and assessment tools that include different types of questions are introduced under this title. In addition, the features of these Web 2.0 applications are compared so that teachers can choose the most appropriate one based on their needs.

Rubistar

A rubric is a scale, divided into varying dimensions and levels, that shows the performance expected from the students. As rubrics are based on performance, they help both to improve student performance and to monitor it.
What we mean by performance here are actions that consist of many sub-skills to be monitored. Each sub-skill related to the performance expectation forms a dimension of the rubric. The categories that show the expected performance level can be indicated with numbers or descriptive statements, and the performance levels for each dimension are written as indicators. Rubistar is an easy-to-use and free application that offers ready-made rubrics and templates for specific fields. It can be reached from the address http://rubistar.4teachers.org/index.php. Registration is simply completed via an email address. The following menus appear on the home page after signing in: “Find rubric”, “Create rubric”, “Teacher home”, and “Tutorial”.

Under the “Find rubric” section, there is a search box and the option to choose one of the following search options: “Search Rubric Titles”, “Search Author Name”, “Search Author Email Address”, “AND” search type (all words must match), and “OR” search type (any of these words). A maximum of three keywords, each with at least three letters, can be entered on the search line, with a space between the keywords. By choosing one of the search options, the intended list of rubrics can be displayed. New rubrics can be created from the “Create rubric” section. When you sign in to create a rubric, an entry has to be made within 40 minutes. A rubric can be edited after it is saved. Ready-made templates for fields such as science, music, maths, verbal presentation, and art are offered in the application. If one of these templates is to be used, the user needs to click on a template, enter a title, and then decide whether the rubric is going to be temporary or permanent in the “Demonstration Rubric?” part. The user can choose which of the offered performance dimensions to use. When the dimensions on the list do not fit, new ones can be created by entering the intended text in the corresponding space. Necessary adjustments can be made with the edit button. The rubric can be printed or downloaded to a computer in Excel format, and it can also be shared online with the corresponding button.

If a template is not going to be used, the “Click Here to Create a Brand New Rubric” link at the bottom of the page should be clicked. On the screen that comes up, a title for the rubric, the number of columns, whether the rubric will stay on the system temporarily or permanently, and, if temporary, a due date should be entered. The level of performance should be entered in the “Your New Column” area, and a new performance level can be created by clicking on the “Add New Column” button. The intended performance dimension for each row should be entered, and a new dimension can be created using the “Add New Rows” button. The rows can be switched. Performance level comments can be entered, and the changes can be saved with the “Submit” button. The rubric can be printed by clicking on “Click Here to View or Print Your Completed Rubric”. When the prepared rubric can
be used online, an ID number is generated for that rubric. It becomes available in the Rubistar database, and Rubistar can be used for analysis. In the “Teacher home” section, the rubrics that can be used online and information about them (ID number, title, creation date, and last edit date) can be viewed. The teacher can preview, edit, analyze, and copy these rubrics. Once the rubric has been applied, the analysis steps should be followed, and under each indicator that Rubistar creates, the student numbers should be entered. By clicking on the “Submit” button, the percentage results of the analysis can be viewed. In the “Tutorial” section, information on how to use the application appears.

Rubistar Practice

1. Enter the application by following the http://rubistar.4teachers.org/index.php → “Login” steps.
2. Choose “Lab report”, which is under the “Create rubric” → “Science” category.
3. Enter the descriptive information of the rubric (Rubric Project Name, Demonstration Rubric).
4. Choose the rubric category you think appropriate for your rubric, or create a new category using the corresponding field.
5. Edit the indicators belonging to the categories you specified.
6. Carry out steps 4-5 for five different categories.
7. Follow the “Submit” → “Make available online” steps.
8. After using the rubric, follow the analysis steps under “Teacher home”.
9. Enter the number of students exhibiting the indicators you specified in the empty spaces on the screen.
10. Click on the “Submit” button to reach the analysis results.
11. Find a rubric that has similar features to the rubric you created through the “Find Rubric” button.
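The percentage analysis that Rubistar performs once student counts are entered under each indicator can be sketched roughly as follows; the category names, levels, and counts below are made up for illustration:

```python
# Hypothetical counts of students at each performance level per rubric category.
counts = {
    "Hypothesis": {"4": 5, "3": 10, "2": 7, "1": 3},
    "Procedure":  {"4": 8, "3": 9,  "2": 6, "1": 2},
}

def percentages(counts):
    """Convert raw student counts per performance level into percentages,
    as in the analysis of an applied rubric."""
    result = {}
    for category, levels in counts.items():
        total = sum(levels.values())
        result[category] = {level: round(100 * n / total, 1)
                            for level, n in levels.items()}
    return result

print(percentages(counts)["Hypothesis"]["4"])  # → 20.0
```

Such a per-category breakdown shows the teacher at a glance which dimensions of the performance most students have or have not reached.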
Google Forms

Google Forms allows users to create online questionnaires or tests. In order to use this application, a Google account must be created. Once a Gmail address has been created and signed in, a new form is created by following the new form creation steps. A title can be added to the form in the “Untitled Form” section. First of all, from the “Questions” tab, an explanation about the form is written, the questions that will be added to the form are written, and, using the image icon, images related to the questions can be uploaded from the computer or the Google search engine. With the “Multiple Choice” button, the following question types can be selected: multiple choice, short answer, paragraph, linear scale, check boxes, uploading a document, date, and time. In multiple choice and check box questions, any number of answer options can be added, and a visual can be added for each answer. Many options can be listed in the drop-down menu; for example, primary school, middle school, high school, university, and other answer options can be listed for an educational background question. In these types of questions, it is enough to enter the answer options. In the document upload option, the students are requested to do their homework or activity in a file and upload it to the system; the file type (PowerPoint, document, PDF, etc.), the number of files, and the size of the files can be adjusted. In linear scale questions, the answers can be rated on a scale of 1 to 10. The multiple choice and check box table questions are used in questionnaires in order to create more than one question of the same type; in these tables, questions are entered in the rows and answers are written in the columns. Only one option per row can be selected in multiple choice tables, but more than one can be selected in check box tables. Date and time can only be used for questions that should be answered with a date and time.

The copy icon under each question allows the user to copy the question, another icon deletes it, the “Necessary” icon does not let the students skip the question, and a further icon allows the user to pick a shuffle mode for the answer options. New questions can be added to the test by clicking on the add icon on the right side of the screen; the section icon categorizes similar questions and allows the user to create parts in the test, and another icon allows the user to add a title and explanation for the created parts. The user can add visuals to the questions from the computer or the internet with the image icon, and the video icon allows the user to add videos to the questions. With the help of the
“Customize Theme” icon, an image can be added to the banner of a created test, and the background color and font can be changed. With the preview icon, the user can preview the test. Following the settings → “General” steps, the email addresses of the students who fill in the form can be collected automatically. In addition, settings such as choosing who can view the test, limiting the students to answering the test only once, deciding whether the students can change their answers once they finish the test, and selecting how to view the test results (in a graph or a table) can be adjusted. With the settings → “Presentation” step, it can be changed whether the progress bar will be visible, whether the questions will be shuffled, and whether reconnecting in order to change an answer will be optional. To turn a form into a test in Google Forms, the settings → “Test” → “Turn this into a Test” steps should be followed. After that, the “Answer Key” icon appears under each question. Using this icon, the points for each question and the feedback that will be given to the students after they answer the question are entered. For multiple choice questions, the correct answer is chosen; for open-ended questions, a text is entered. From the “Answers” tab, the student answers and related graphs can be reached. With the export icon, the students’ answers can be transferred to an Excel chart. Following the settings → “Test” → “Test Options” steps, the students can be given the chance to view their grades via e-mail or after manual review. For the questions that the students skipped or answered incorrectly, they have the chance to view the correct answer and its points. With the “Send” button, the test can be emailed to people, or its link can be shared in places such as Google+, Facebook, and Twitter. The created test can be deleted, copied, printed, or edited, and a co-creator who can view it can be added with the corresponding icon. When the settings → “Preferences” steps are followed, the students’ email addresses can be gathered, the questions can be made mandatory, and the total scores for the test can be entered.

Google Form

1. If you don't have a Gmail account, create one and sign in to your account.
2. Create a new form by following the new form creation steps.
3. Enter a title for your test in the “Untitled form” field.
4. Enter an explanation related to the test using the “Questions” tab.
5. Create a multiple-choice question.
6. Use the “Necessary” icon to make answering your question mandatory.
7. Create 10 different questions (multiple choice, paragraph, check boxes, multiple choice table, check box table, and file upload) by carrying out steps 4-6.
8. Select a banner image or color for your test and arrange the font type by clicking the “Customize Theme” icon.
9. Ensure receiving the students' e-mails automatically and displaying the students' test results with the settings → “General” steps.
10. With the settings → “Presentation” steps, ensure displaying the progress bar related to completion of the test and shuffling the order of the questions.
11. Follow the settings → “Test” → “Turn this into a Test” steps.
12. Enter the points and the feedback that will be given to students for each question by clicking the “Answer Key” icon under each question.
13. Add collaborators who can modify the test.
14. Click on the “Send” button to create a link for the test you created and share it on different platforms.
15. Monitor the student responses from the “Answers” tab.
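The quiz-grading model described above, where each question carries its own points and feedback and the answers can be transferred to a spreadsheet, can be sketched as follows; the questions, points, feedback texts, and CSV layout are hypothetical, not Google Forms internals:

```python
import csv
import io

# Hypothetical answer key: question -> (correct answer, points, feedback on error).
key = {
    "Capital of France?": ("Paris", 5, "See the unit on European capitals."),
    "2 + 2 = ?": ("4", 2, "Revise basic addition."),
}

def grade_form(responses, key):
    """Score one submission; wrong answers collect the stored feedback,
    mirroring a form where points and feedback are set per question."""
    total, notes = 0, {}
    for question, (correct, points, feedback) in key.items():
        if responses.get(question) == correct:
            total += points
        else:
            notes[question] = feedback
    return total, notes

def to_csv(rows):
    """Dump graded results to CSV text, like exporting answers to a spreadsheet."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["student", "score"])
    writer.writerows(rows)
    return out.getvalue()

score, notes = grade_form({"Capital of France?": "Paris", "2 + 2 = ?": "5"}, key)
print(score)  # → 5
```

Storing feedback next to each answer is what makes it possible to show students the explanation only for the questions they missed.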
GoToQuiz

GoToQuiz is an application with which teachers can create quizzes or obtain ready-made ones. The application can be reached from the https://www.gotoquiz.com/ address. There are menus such as “Make a Quiz”, “Make a Poll”, “Quiz Directory”, and “Popular Editor Picks”. When the “Make a Quiz” menu is clicked, the teacher can access “The Test”, “The Classic Quiz”, and “The Multi-Result Quiz” options. Via “The Test”, tests whose questions have only one correct answer can be created; these are graded out of 100, divided evenly by the total number of questions. In “The Classic Quiz”, questions that have more than one answer can be created; tests are graded out of 100 points, but each question can be weighted as intended. With “The Multi-Result Quiz”, the results are entered manually when the quiz is applied. Once “The Test” option has been clicked, the “I agree to the GoToQuiz.com terms of service” box must be approved. If the “Start Creating Your Quiz” → “Add your questions” steps are followed, multiple choice questions and answers can be entered, the correct answer can be indicated, and the “Save this Question” button should be clicked in order to save the question. Ten questions can be written in the same way. In this same section, the teacher can also enter a title for the quiz and write varying feedback based on score intervals. Information about the quiz
for students and an explanation to guide the students after the test is over can be written. Images can be uploaded for the test, and the name, website, and email address of the person who created the test can be entered. The link for the quiz can be reached by clicking on the “Publish This Quiz” button. When “The Classic Quiz” is selected, similar steps are followed, with one difference: instead of choosing one correct answer, each answer option’s impact is assessed (positive or negative). When “The Multi-Result Quiz” is selected, information about the questions is entered and visuals for the results are uploaded. The teacher can reach the created quizzes by entering the quiz name and password in the space on the upper left side of the screen, and then make changes to them.

“Make a Poll” allows teachers to prepare polls. The poll questions, an explanation about the poll, the poll display, and the period of availability are selected on the screen that comes up. Once the answer options for the poll questions have been entered, the “Add It” button should be clicked. The name, website (if there is one), and email address of the person who created the poll need to be entered. The link to the poll can be reached with the “Publish It Now” button. From the “Quiz Directory” menu, ready-made quizzes categorized under the following titles can be found: “All about You”, “Games”, “Recreation”, “Animals”, “Health”, “Science”, “Arts”, “Lifestyle”, “Society”, “Computers”, “Offbeat”, “Sports”, and “Countries”. In the “Popular” menu, the top 40 quizzes per day can be viewed. And from the “Editor Picks” menu, the quizzes that the editors like are listed.

GoToQuiz Practice

1. Access the application from the https://www.gotoquiz.com/ address.
2. Follow the “Make a Quiz” → “Test” steps.
3. Click on “The Test” choice.
4. Confirm the “I agree to the GoToQuiz.com terms of service” box.
5. Enter a multiple-choice question and its answers, and select the correct choice in the “Start Creating Your Quiz” → “Add your questions” section.
6. Click on the “Save this Question” button.
7. Enter 10 questions following steps 5-6.
8. Enter the quiz title and the feedback that will be given to the students according to the score ranges determined by the application.
9. Enter a two-paragraph explanation informing students about the quiz and a one-paragraph explanation guiding students when the quiz is completed.
10. Upload an image for the test.
11. Enter your name, your website address (if you have one), and your email address.
12. Press the “Publish This Quiz” button.
13. Create “The Classic Quiz” and “The Multi-Result Quiz” by following steps 4-12.
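The grading rule of “The Test”, where the quiz is scored out of 100 divided evenly over the questions and the feedback shown depends on the score interval, can be sketched like this; the interval boundaries and feedback texts are invented for illustration:

```python
def score_test(num_correct, num_questions):
    """Grade out of 100 with each question worth an equal share,
    as in a test graded by dividing 100 by the number of questions."""
    return round(100 * num_correct / num_questions)

def feedback(score):
    """Return feedback for a score interval, as entered by the quiz author
    (these intervals and messages are hypothetical)."""
    if score >= 80:
        return "Excellent, you have mastered the topic."
    if score >= 50:
        return "Good, but review the sections you missed."
    return "Please go over the topic again."

print(score_test(7, 10), feedback(score_test(7, 10)))  # → 70 Good, but review the sections you missed.
```

With ten questions each answer is worth 10 points, which is why a 10-question test divides so naturally into score ranges.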
Poll Everywhere

Poll Everywhere is an application that allows the user to create different types of questions that aim to assess listeners' knowledge, feelings, and thoughts after a presentation. With the free version, it can be applied to groups of up to 25 people. The application can be reached from the https://www.polleverywhere.com/ address. After “Sign Up”, there are two options: “You're participating” or “You're presenting”. After picking one of these options as appropriate and filling in the necessary information for registration, the user name and password should be entered in the “Login” section, and then the “My Polls” button should be clicked. The “Polls”, “Participants”, and “Reports” menus will be seen on the next screen. When the “Polls” menu is selected, previously created polls can be obtained, or new polls can be created. To create a new poll, the user needs to click on the “Create” button or icon. In order to create multiple choice questions, the multiple choice icon should be clicked, and then the question and its answer options should be entered. If there are any related images, they can be uploaded. The answers can be in image, text, link, or LaTeX format. By checking the icon next to an answer option, the correct answer can be picked. With further icons, the questions can be shuffled, images can be added to the questions, questions can be deleted, and a different answer option can be added. By clicking on the word cloud icon, word clouds can be created; in this part, it is enough to enter terms related to the topic or the question text. The listeners can be asked to express their opinions or questions by clicking on the Q&A icon.
Later, the answers are sorted, and the best answer is selected. The clickable image icon helps to create questions that need to be answered by clicking on a picture. Here, the user can use the ready-made images that the application offers or upload images from the computer. After the question has been written, the area that should be picked as an answer by the listener should be selected; it is enough to pick a spot on the image. By clicking on the survey icon, a poll can be created. In this menu, a title is given to the poll, and multiple choice, word cloud, Question & Answer, clickable image, open-ended text, and ranking questions can be formed. After selecting the question type, the actual questions and answer options are written. More than one question can be formed in this part. By clicking on the corresponding icon, open-ended questions can be formed, and a test that consists of multiple-choice questions can also be created. The ready-made question templates (Icebreaker, Upvote, Emotion Scale, and Ranking) can be put into use with the templates icon. After forming a question, another one can be created by clicking on the “Add Another Activity” button. When the “Create” button is clicked, the question can be applied to the listeners.

When the “Polls” menu is clicked once again, a list of all the created questions can be viewed. The created questions can be sorted by creation date and name when first the selection icon and then the “Sort” button are clicked. The selected questions can be grouped by clicking on the “Group” button, and these groups can be dissolved when the “UnGroup” button is clicked. The questions can be downloaded as PowerPoint presentations or screenshots when the “Download” button is clicked. Reports about the questions can be viewed with the “Report” button. When the “Clear” button is clicked, the question is transferred to the archive. The “Delete” button deletes the question, and the “Move” button moves it. The “Edit” button can be used to make changes to a question. If the lock icon is selected, the question cannot be edited or used. When the play icon on a question is clicked, the question can be applied to the listeners. The web link of the question is generated from “Share link”. A question is duplicated with the “Duplicate” button. When the results icon is clicked, the results page is displayed. When the “Configure” menu on the right side of the screen is clicked, the user can choose whether the listeners will answer the questions using the website or text messages. The user can click on “How
people can respond”, then select “Website” if the listeners are going to use a website link, “Presenter session” for a session code, or the “Keyword” option for a keyword; however, the keyword option requires a fee. Adjustments such as choosing who can answer the questions, whether an answer can be changed later, and how many times a user can answer a question can also be made. The user interface can be previewed with the “Test” menu. From the “Present” menu, you can choose whether the results will be displayed on a web page or as simultaneous presentations. When the “Activate” button on the page that shows the results is clicked, the question is asked to the listeners. The results are shown with the “Show Results” button, the questions are locked with the “Lock” button, the results are deleted with the “Clear Results” button, the viewing mode is changed with the “Full Screen” button, and the user can move from one question to another with the “Next” and “Previous” buttons. In the results section, the users' answers are shown on a graph based on the question type. Inside the “Polls” section, there is a button on the left side of the screen called “My Polls”; the user can reach the created questions from this menu. The “Account Poll” menu can be used when a fee is paid; it allows the user to add people to collaborate with. From the “Examples” menu, different types of questions can be obtained. A fee must be paid to use the “Participants” menu, where information about the users can be entered; this allows the user to sort the results by name, email address, etc. The “Reports” menu also requires a fee and generates reports based on the results. Depending on the settings, listeners can answer the questions via the created link, by visiting the PollEv.com/your-username response page, or via SMS messages.

Poll Everywhere Practice

1. Access the application from the https://www.polleverywhere.com/ address.
2. Enter your user name and password in the “Login” section.
3. Follow the “My Polls” → “Polls” → “Create” steps.
4. Create a multiple-choice question by clicking on the multiple choice icon.
5. Press the “Add Another Activity” button.
6. Create a word cloud with the word cloud icon.
7. Create a question with which you can get the opinions of the audience, using the Q&A icon.
Seyhan ERYILMAZ, İlknur REİSOĞLU
8. Create a clickable image question with the corresponding icon.
9. Create an open-ended question with the corresponding icon.
10. Click on the “Poll” menu, then click on the first question you created.
11. From the “Configure” menu, ensure that users enter their answers through the web link.
12. Press the “Activate” button.
13. After monitoring the results, pass to the next question via the “Next” button.
14. Create a survey with 5 questions by following the “My Polls” → “Polls” → “Create” steps and clicking on the survey icon.
15. Follow steps 11-13.
16. Create a test with 10 questions by following the “My Polls” → “Polls” → “Create” steps and clicking on the test icon.
17. Follow steps 11-13.
QuickKey
This is a mobile grading application that provides quick and easy grading and evaluation of quizzes, tests, and polls that are either on paper or on a device (Granata, 2014). If a device is used, the results can be reached from the teacher’s device without signing in or setting up the application. If a mobile phone or tablet is used, the student’s answer sheet can be graded using the camera; this way, the phone or tablet replaces an optical reader without the need for an internet connection. The application can be reached at https://get.quickkeyapp.com/. Registration is done by clicking the “Sign Up” button and filling in the requested information. The profile icon shows the user’s profile details. On the next page, when the “Dashboard” menu is selected, several icons appear. Clicking “Create a Quiz” and then the corresponding icon creates a new quiz. Here the quiz title, to whom the quiz will be applied, and a label related to the quiz need to be filled in; then the “Next” button is pressed. In the next step, the total number of questions, the correct answer to each question, and the score of each correct answer should be entered using the “Simple Quiz Builder” button. With the “Advanced Quiz Builder” button, one of the following question types can be selected: “Multiple Choice”, “Open Response”, “Match Answers Numerical”, or “Match Answer Text”. In multiple-choice questions, the question itself, the correct answer option, and its score need to be filled in. In “Open Response” questions, grading is done the same way it is done in rubrics. In the “Match Answers Numerical” and “Match Answer Text” question types, the possible answers and their respective scores are entered. The grading of all question types except open-ended questions is done automatically by the application. The quiz can be saved with the “Save and Preview” button. In the last phase, final touches such as leaving a space under each question for the student to do his/her calculations, adding related images under the questions, and creating an answer key can be completed; to choose one of these options, the user just needs to check the box. The quiz can be printed using the print button, and finally the “Save & Finish” button needs to be clicked. Reports can be pulled from “Create a Quiz” and the reports icon. If “Legacy Itemization Report” is selected, each student’s answer to each question, the numbers of correct and false answers, the total score, and the class-wide percentages of correct answers can be reported. When “Score Sheet” is selected, the total number of answered questions, the numbers of correct and false answers, and the total score per student can be reported. When “Quiz Itemization Report” is picked, the report shows student names, the dates the quiz was performed and completed, the score per question, the numbers of correct and false answers, each student’s score out of the highest possible score, and the same information for the whole class. Print options for student answers can be selected with the corresponding buttons; in the free version, each student’s markings on the optical form can be viewed. From the “Dashboard” → “Create a Course” link, the course name, information about the course, and the course code can be added, and students who take the course can be added using the “Search Students” field.
A lesson can be created, the quizzes to be carried out can be determined, and then the “Save Course” button clicked. From the “Dashboard” → “Create a Student” link, the student’s name, surname, student ID number, and e-mail address can be filled in; then, from the “Create a Course” menu, the courses that he/she selected can be entered and the “Add” button clicked. Any of the reports explained above can be pulled via the “Dashboard” → “Run Report” link. A new course can be added by clicking the add icon in the “Courses” menu; in this section, previously added lessons are listed. Changes to a lesson can be made with the edit button, a list of students enrolled in the course can be reached with the list button, and a lesson can be deleted with the delete button. Optical form samples marked with student ID numbers can be printed with the print button. In the “Quiz” section, all the functions listed under the quiz icon can be performed; the only difference here is that the user can reach a list of quizzes, edit them, and delete them. A quiz can be copied with the copy button, and an access code can be generated with the code button and forwarded to the students. In the “Student” section, all the functions listed under the student icon can be performed; the only difference is that each student’s detailed information and a list of the courses they took are listed as well. The user can delete or edit a student’s information if needed. A student can be added with the add button, and student lists created in XLS, XLSX, or CSV format can be uploaded to the application with the upload button. The information in the application can be integrated with Google Classroom or PowerSchool using the integration button. The list of students can be downloaded with the download button. From the “Reports” section, all the functions listed under the reports icon can be performed. A school information system can be integrated via “Reports” → “Data Management”. An account for a school can be created using one of the buttons listed under the “New School Account” menu. Help and guidance documents can be found under the help menu.
QuickKey Practice
1. Access the application at https://get.quickkeyapp.com/.
2. Register by clicking the “Sign Up” button and entering the requested information.
3. Access your profile by clicking on the profile button.
4. Add 10 students using the add button under the “Student” menu.
5. Add a new course using the add button under the “Courses” menu.
6. Click on the “Quizzes” menu located on the left side of the screen.
7. Click on the create icon.
8. Enter the quiz title, a label related to the quiz, and the classes it will be applied to, then click the “Next” button.
9. Enter the related information for 10 questions of the types “Multiple Choice”, “Open Response”, “Match Answers Numerical”, and “Match Answer Text”.
10. Save the quiz by clicking the “Save and Preview” button.
11. Check the options for leaving space under each question for students to work in, placing visuals related to each question under the question text, and creating the answer key.
12. Print the quiz by clicking the print button.
13. Click the “Save & Finish” button.
14. Add 10 new students using the add button under the “Student” menu.
15. Choose the course you created using the button under the “Reports” menu.
16. Access the reports by choosing the “Legacy Itemization Report” option.
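The arithmetic behind the reports described above (correct and wrong counts, total scores, class-wide percentages) is simple to sketch. The following Python example is illustrative only: it is not QuickKey code, and the answer key and student responses are hypothetical.

```python
# Hypothetical sketch of the arithmetic behind QuickKey-style score reports:
# correct/wrong counts and total score per student, plus the class-wide
# percentage of correct answers per question. Illustrative only.

answer_key = ["A", "C", "B", "D"]   # one correct option per question
points_per_question = 1

responses = {                       # invented student answers
    "student1": ["A", "C", "B", "A"],
    "student2": ["B", "C", "B", "D"],
}

# Per-student part of the report (as in a score sheet).
for name, answers in responses.items():
    correct = sum(1 for given, key in zip(answers, answer_key) if given == key)
    wrong = len(answer_key) - correct
    total = correct * points_per_question
    print(f"{name}: {correct} correct, {wrong} wrong, score {total}")

# Class-wide part: percentage of students answering each question correctly.
for i, key in enumerate(answer_key):
    hits = sum(1 for answers in responses.values() if answers[i] == key)
    print(f"Q{i + 1}: {100 * hits / len(responses):.0f}% correct")
```

Open-ended questions, which the application leaves to rubric-style grading, are deliberately omitted from this sketch.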
The Comparison of Applications that Develop Assessment and Evaluation Tools and Their Use in Education
The comparison of the Rubistar, Google Form, GotoQuiz, PollEverywhere, and QuickKey applications, which are used to develop assessment and evaluation tools, is presented in Table 11.3 in terms of some of their main features.
Table 11.3 The comparison of applications that develop assessment and evaluation tools

Applications that provide automatic log-in. Rubistar: N/A; Google Form: Google; GotoQuiz: N/A; PollEverywhere: N/A; QuickKey: Google.
Integrated applications. Rubistar: N/A; Google Form: Google Doc, Google Drive, Google Classroom; GotoQuiz: N/A; PollEverywhere: N/A; QuickKey: Google Classroom.
Types of questions that can be added. Rubistar: N/A; Google Form: multiple choice, short answer, paragraph, linear scale, check boxes, pull-down menu, table of multiple choice, table of check boxes, uploading, date, clock; GotoQuiz: multiple choice; PollEverywhere: multiple choice, point-and-click on picture, open ended, Q&A, survey question; QuickKey: multiple choice, open ended, matching.
Devices available to use. Rubistar: computer; Google Form: mobile, computer, tablet; GotoQuiz: computer; PollEverywhere: mobile, computer, smart board, tablet; QuickKey: mobile, computer, tablet.
Feedback type presented to the student. Rubistar: N/A; Google Form: correct answer; GotoQuiz: different feedback according to answer choice; PollEverywhere: N/A; QuickKey: correct answer.
Creating a virtual classroom. Rubistar: N/A; Google Form: via Google Classroom; GotoQuiz: N/A; PollEverywhere: N/A; QuickKey: available.
Archive. Rubistar: not available; Google Form: available; GotoQuiz: N/A; PollEverywhere: available; QuickKey: available.
Information in the results report. Rubistar: N/A; Google Form: total scores and graphs related to the answers; GotoQuiz: percentage values for the overall class; PollEverywhere: creation of word clouds according to the question type, graphs showing the numbers of correct and wrong answers; QuickKey: numbers of each student’s correct and wrong answers to each question, total scores, percentages of correct answers across the class.
When Table 11.3 is analyzed, the advantages of each application compared to the others can be seen. For example, in Google Forms, many different types of questions can be formed, while the PollEverywhere application offers more visual reports. Based on their distinctive features, Google Forms is the most flexible for developing assessment and evaluation tools, and its integration with Google Classroom and other Google products makes it effective in reaching a large population. QuickKey, on the other hand, has the advantage of quickly evaluating assessment tools without the need for an optical reader. As for PollEverywhere, it can especially be used to promote student participation and to determine whether students understood the topic. Finally, Rubistar helps create rubrics that allow objective evaluation of performances; analytical rubrics can be obtained using its ready-made templates, categories, and performance indicators as appropriate.

Conclusion
In this chapter, 13 different Web 2.0 applications that enable teachers to carry out assessment and evaluation activities at different stages of the learning-teaching process were introduced under three categories. EdPuzzle, PlayPosit, and TedEd aim to determine whether students understood what they watched and help them discuss what they learn by adding multiple-choice, open-ended, and discussion questions to interactive videos; EdPuzzle and PlayPosit also offer virtual classrooms and more detailed reports. KaHoot, Socrative, Plickers, Quizlet, and BookWidgets use gamification elements and can be used to identify students’ pre-knowledge and preparedness by creating a competitive environment; KaHoot and Socrative generate detailed reports through tables and graphs, and Quizlet creates study sets, based on the entered information, that give students quick practice opportunities. Rubistar, Google Form, GotoQuiz, PollEverywhere, and QuickKey help create tests, quizzes, and polls through different types of questions. PollEverywhere differs from the others because it gathers user opinions during a presentation and determines their knowledge level, which makes the process more interesting and exciting. QuickKey allows the user to evaluate students’ answer sheets just like an optical reader. Rubistar, on the other hand, gives the opportunity to create rubrics for performance-based activities.
References
Allan, J. (2018). Attributes of online video learning objects. Retrieved 04.02.2019 from https://www.elearningworld.org/turn-online-videos-into-engaginglearning-objects/
Allen, W. A., & Smith, A. R. (2012). Effects of video podcasting on psychomotor and cognitive performance, attitudes and study behavior of student physical therapists. Innovations in Education and Teaching International, 49, 401-414.
Brame, C. J. (2015). Effective educational videos. Retrieved 13.11.2018 from http://cft.vanderbilt.edu/guides-sub-pages/effective-educational-videos/
Brame, C. J. (2016). Effective educational videos: Principles and guidelines for maximizing student learning from video content. CBE—Life Sciences Education, 15(4), es6.
Burns, M. (2016). Class tech tips: Use BookWidgets to create interactive activities for iPad, Chromebooks and more! Retrieved 04.02.2019 from https://www.techlearning.com/tl-advisor-blog/11471
Çepni, S., Ayvacı, H. Ş., Bakırcı, H., & Kara, Y. (2014). The lecturers are a reflection on the web-based performance evaluation program of parents’ computer literacy levels. Journal of Instructional Technologies & Teacher Education, 3(1), 1-9.
Domínguez, A., Saenz-de-Navarrete, J., de-Marcos, L., Fernández-Sanz, L., Pagés, C., & Martínez-Herráiz, J. J. (2013). Gamifying learning experiences: Practical implications and outcomes. Computers & Education, 63, 380-392.
Granata, K. (2014). ‘Quick Key’ app reduces teacher grading time. Retrieved 04.02.2019 from https://www.educationworld.com/a_tech/quick-keygrading-app.shtml
I Love Edu (2018). Instant flipped lessons with the EdPuzzle extension. Retrieved 04.02.2019 from https://www.i-heart-edu.com/instant-flipped-lessonswith-the-edpuzzle-extension/
Lloyd, S. A., & Robertson, C. L. (2012). Screencast tutorials enhance student learning of statistics. Teaching of Psychology, 39, 67-71.
Lorch, C. (2018). 3 useful PlayPosit features for interactive course videos. Retrieved 04.02.2019 from https://learninginnovation.duke.edu/blog/2018/11/3-useful-playposit-featuresfor-interactive-course-videos/
Mitchell, K. (2016). Plickers — An engaging way to assess. Retrieved 04.02.2019 from https://www.thewalkingclassroom.org/plickers-fun-freeengaging-way-to-assess/
Powers, M. (2019). Quizlet: Flexible study aid supports learning at home, at school, and on the go. Retrieved 04.02.2019 from https://www.commonsense.org/education/website/quizlet
Reisoğlu, İ., & Göktaş, Y. (2016). 3B sanal öğrenme ortamları için sorgulama toplulukları ölçeğinin geliştirilmesi [Development of a community of inquiry scale for 3D virtual learning environments]. Pegem Eğitim ve Öğretim Dergisi, 6, 347-370.
Reisoğlu, İ., Gedik, N., & Göktaş, Y. (2013). Öğretmen adaylarının özsaygı ve duygusal zekâ düzeylerinin problemli internet kullanımıyla ilişkisi [The relationship between preservice teachers’ self-esteem and emotional intelligence levels and problematic internet use]. Eğitim ve Bilim-Education and Science, 170, 150-165.
Szpunar, K. K., Khan, N. Y., & Schacter, D. L. (2013). Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences of the USA, 110, 6313–6317.
Warwick, L. (2017). Socrative: Flexible, free, easy-to-use assessment tool. Retrieved 04.02.2019 from https://thedigitalteacher.com/reviews/socrative
Zhang, D., Zhou, L., Briggs, R. O., & Nunamaker, J. F., Jr. (2006). Instructional video in e-learning: Assessing the impact of interactive video on learning effectiveness. Information & Management, 43, 15-27.
Chapter 12
How to Create and Use Rubrics?
Buket Özüm BÜLBÜL
Introduction
The main purpose of performance evaluation is to evaluate both the behaviours of students and the quality of their products (Bastian, Lys & Pan, 2018; Danielson & Hansen, 2016; Danielson & Marquez, 2016). One of these evaluation approaches is the rubric. Rubrics may be described as scoring tools that help to set certain goals and expectations for assigning and evaluating either oral or written assignments (Savaş, 2013). In other words, a rubric is a scoring scale that evaluates students’ tasks. Rubrics have an important place in assessment and evaluation, since they make it possible to observe how far the goals have been achieved, which issues are understood by students, and how progress has proceeded. For this reason, rubrics were chosen for this part of the book. One of the problems in education is educating individuals who can overcome problems. For this to happen, it is not enough for learning environments to adopt the constructivist approach; at the same time, individuals need to evaluate their own solutions and get appropriate feedback. At this stage, assessment and evaluation take an important place. One dimension of effective assessment and evaluation is to evaluate the performance of students and to use rubrics as a tool for making that assessment. This chapter focuses on rubrics and consists of five parts in total. The first part is about the meaning of rubrics: “what are rubrics?”, “why are they important?”, and “why do we need rubrics?”. The categorization of rubrics is then discussed in light of the related literature. The second part gives examples of rubrics for a better understanding of their meaning. The last part is about how to construct or develop rubrics, as well as their benefits and limitations.
The Rubrics and their Importance
Learning and teaching approaches work as a system with inputs and outputs. In other words, as educators improve the quality of their teaching, a higher level of student understanding is expected, and formative assessment approaches have been used to measure that understanding. At this stage, rubrics are one of the alternative assessment techniques and evaluation approaches used to measure student performance. The literature contains many definitions of rubrics. In general, a rubric is defined as a scoring scale that lists criteria to evaluate students, or as a detailed scoring guide to measure students’ performance (Andrade & Heritage, 2018; Brookhart, 2013; Menendez-Varela & Gregori-Giralt, 2018; Stevens & Levi, 2013; Şaşmaz-Ören, Ormancı & Evrekli, 2014). In other words, the performance of students can be measured via the specific criteria of rubrics. At this stage, rubrics should not be considered merely a feedback tool for students: rubrics are not just a checklist or rating scale, since they contain both the numerical determination of score categories and the content of those categories. Consequently, rubrics are an assessment tool that shows the level of a student’s missing performance through the content of the tasks. In other words, rubrics have criteria and scoring rules for how those criteria are applied.

Let us imagine that a research assignment has been given to our students. They completed their tasks and turned them back in. How do we evaluate this assignment? How do we tell our students the mistakes they made and what grade we give in return? The answer to these questions is given by the rubric in Table 12.1.

Table 12.1 The example of a research assignment rubric
Content. Good (three points): the research topic is reflected in the content; Average (two points): the research topic is reflected in some of the content; Low (one point): the research topic is not reflected in the content.
Effect of the Widespread Research. Good (three points): at least five different sources were used during the research process; Average (two points): fewer than five sources were used; Low (one point): only one source was used.
Table 12.1 shows an example of a rubric made to evaluate the research assignment given to the students. When this example is examined, it is seen that the aspects of the student’s research assignment (content, effect of the widespread research, etc.) are listed in the first column, and grades are given according to the evaluation in the other columns (good, average, low). Each of these grades represents a different level of skill. For example, if a well-prepared assignment takes three points in content, it means that the subject matter is reflected well; if a student takes three points in the criterion of the effect of widespread research, it means that this student used at least five sources. Information about rubric preparation methods is given in the following sections. As a result, rubrics guide the evaluation of students’ performance. After students’ performance is evaluated through rubrics, appropriate feedback should be given to them. While giving feedback, students should be informed about which mistakes they made at which stage; this is where rubrics come into play. In other words, the feature that differentiates rubrics from other scoring scales is that the evaluation is tied to explicit criteria, so rubrics have an important role in effective feedback. There remains a need to answer the question “why do rubrics need to be used?”; the answer is given below.

Why Do We Need Rubrics?
Rubrics have the potential to make the educational environment more effective, for example by saving time and giving effective feedback to students. Nevertheless, rubrics are not used much in educational environments. In addition to helping students know what they need to do to achieve a certain grade, an important part of this section is the answer to the question “why do we need rubrics?”.
Therefore, the necessity for rubrics is explained with a schema based on the related literature (Andrade & Heritage, 2018; Stevens & Levi, 2013; Tenam-Zemach & Flynn, 2015).
Figure 12.1 Model for “why do we need rubrics?”
Figure 12.1 shows the reasons for using rubrics: rubrics provide timely feedback, prepare students to use detailed feedback, level the playing field, help us refine our teaching skills, facilitate communication with others, provide self-assessment, and encourage critical thinking. Brief explanations of these reasons are provided below.

Rubrics Provide Timely Feedback
Timely feedback has been the subject of many studies (Danielson & Hansen, 2016; Stevens & Levi, 2013). Timely feedback on assigned tasks supports students’ conceptual learning: when students immediately see the points they got wrong or missed, they have an opportunity to correct them. This is where rubrics come into play. Rubrics contain both the grades given to students’ tasks and the criteria behind them, so students see their grade and, at the same time, where they made mistakes. Finally, feedback is most effective when it is given as soon as possible after a task is completed, and it helps students make positive changes in their subsequent work.

Rubrics Prepare Students to Use Detailed Feedback
In general, students want detailed feedback in order to know what they did wrong or right, so that they can improve their knowledge as well as correct what they are doing wrong. With the help of their criteria, rubrics reveal the points that
the students did wrong or missed in their tasks. Consequently, well-prepared rubrics with clear criteria can give effective and detailed feedback to students.

Rubrics Help Us to Refine Our Teaching Skills
One of the main aims of a teacher is to find an answer to the question: how can we teach our students better? To do so, teachers should improve their teaching skills and ask themselves “how can we find out what we need to do to become better teachers?” and “how do we know if we are good teachers?”. At this stage, teachers can realize what their students are experiencing via rubrics. Once teachers realize this, they can develop themselves and identify different teaching methods and strategies.

Rubrics Facilitate Communication with Others
As stated by Stevens & Levi (2013), most teachers teach in collaboration with others. These third parties include teaching assistants, headmasters, fellow teachers, parents, etc. Rubrics facilitate communication with them.

Rubrics Provide Self-Assessment
Formative assessment and evaluation are an important part of student learning. Given that self-assessment is a good tool for student learning, rubrics can be used as self-assessment tools to evaluate student performance; in other words, students’ performance can be directed via rubrics (Andrade & Heritage, 2018).

Rubrics Encourage Critical Thinking
The rubric format gives students a chance to notice the ongoing improvement in their work (Stevens & Levi, 2013). Rubrics can therefore play a major role in students’ critical thinking through their criteria: when a teacher uses rubrics to evaluate performance, students can notice their own performance and discuss what they did wrong or right with their friends and teacher. During such discussions, their critical thinking comes into play.
Types of Rubrics
Rubrics provide effective feedback on students’ performance and products and evaluate students’ performance objectively. Figure 12.2 shows that scoring instruments for performance assessments are divided into two groups, as mentioned by Mertler (2001): checklists and rating scales.
Rating scales include two types of rubrics: analytic rubrics and holistic rubrics. These types are described below.
Figure 12.2 Types of Scoring Instruments for Performance Assessments

Analytic Rubrics
Analytic rubrics are used when different criteria are defined and performance levels are explained for each criterion (Danielson & Dragoon, 2016). In other words, in analytic rubrics, behaviours are scored part by part, not as a whole, so clarity and detail are important. In analytic rubrics, each criterion is scored separately first, and the scores are then combined into a multi-dimensional evaluation (Mertler, 2001). As stated by Mertler (2001), a template for analytic rubrics is presented in Table 12.2.

Table 12.2 Template for analytic rubrics by Mertler (2001)
Criteria 1. Beginning: description reflecting the beginning level of performance; Developing: description reflecting movement toward the mastery level of performance; Accomplished: description of achievement of the mastery level of performance; Exemplary: description reflecting the highest level of performance; Score: …
Criteria 2. Beginning, Developing, Accomplished, Exemplary: descriptions as above; Score: …
…
In Table 12.2, the criteria appear in the left column and the performance levels are ordered across the top. A student with a low score is at the beginning level; a student with the highest score is at the exemplary level. The advantage of an analytic rubric is that it details the criteria being measured; its disadvantage is that scoring takes longer than with a holistic rubric. Teachers can use analytic rubrics to assess students’ homework or research reports, or to give feedback on their projects.

Holistic Rubrics
Holistic rubrics evaluate a piece of work without breaking it into parts or stages of the process, and they deal more with the result than with the process (Jackson & Larkin, 2002; Prins, Kleijn & Tartwijk, 2017; Yune, Lee, Im, Kam & Baek, 2018). Mertler (2001) gave an example of a holistic rubric in Table 12.3.

Table 12.3 Template for holistic rubrics by Mertler (2001)
Score 5: Demonstrates complete understanding of the problem. All requirements of the task are included in the response.
Score 4: Demonstrates considerable understanding of the problem. All requirements of the task are included.
Score 3: Demonstrates partial understanding of the problem. Most requirements of the task are included.
Score 2: Demonstrates little understanding of the problem. Many requirements of the task are missing.
Score 1: Demonstrates no understanding of the problem.
Score 0: No response / task not attempted.

As seen in Table 12.3, holistic rubrics do not have separate criteria like analytic rubrics; instead, they use a single scoring scale, here from 0 to 5, that assesses all criteria together. The aim at this stage is to evaluate a specific task as a whole. While the advantage of a holistic rubric is that it saves time, its disadvantage is that the behaviours to be evaluated cannot be rated in detail.
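The two rubric types differ in how a final score is produced: an analytic rubric sums separately scored criteria, while a holistic rubric assigns a single overall level. The short Python sketch below is illustrative only; the criteria, points, and chosen levels are hypothetical.

```python
# Illustrative sketch of how the two rubric types produce a score.
# The criteria, points, and chosen levels are hypothetical examples.

# Analytic rubric: each criterion is scored on its own, then summed.
analytic_scores = {"content": 3, "use of sources": 2}
analytic_total = sum(analytic_scores.values())
print("Analytic total:", analytic_total)    # 3 + 2 = 5

# Holistic rubric: one overall judgement on Mertler's 0-5 scale.
holistic_scale = {
    5: "Complete understanding; all task requirements included.",
    4: "Considerable understanding; all requirements included.",
    3: "Partial understanding; most requirements included.",
    2: "Little understanding; many requirements missing.",
    1: "No understanding of the problem.",
    0: "No response / task not attempted.",
}
overall_level = 3
print("Holistic:", overall_level, "-", holistic_scale[overall_level])
```

The data shapes mirror the definitions above: the analytic rubric needs one score per criterion, while the holistic rubric needs only one level for the whole task.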
Developing a Rubric
How to Construct Rubrics?
Preparing rubrics is not as easy as it seems; the task may even frighten many researchers and teachers. In this section, the stages of preparing a rubric are clarified. In the literature, these stages are collected under four titles, whether the rubric is prepared alone or together with teaching assistants, colleagues, or even students. The preparation steps drawn from the related literature are given in Figure 12.3 (Andrade & Heritage, 2018; Brookhart, 2013; Danielson & Dragoon, 2016; Menendez-Varela & Gregori-Giralt, 2018; Mertler, 2001; Stevens & Levi, 2013; Tenam-Zemach & Flynn, 2015).
1. Reflecting
2. Listing
3. Grouping and Labelling
4. Application
Figure 12.3 Stages of developing a rubric
As shown in Figure 12.3, the steps of creating a rubric can be divided into four stages, as described by Stevens & Levi (2013): (1) reflecting, (2) listing, (3) grouping and labelling, and (4) application.

Reflecting
The criteria of the assignment and the behaviours to be measured are determined at the reflecting stage. The following questions should be considered when determining these behaviours: “Why did you create this assignment? Have you given this assignment or a similar assignment before? How does this assignment relate to what you are teaching? What skills will students need to have or develop to successfully complete this assignment? What exactly is the task assigned? What evidence can students provide in the assignment that would show they have accomplished what you hoped they would accomplish when you created the assignment? What are the highest expectations you have for student performance on this assignment overall? What is the worst fulfilment of the assignment you can imagine, short of simply not turning it in at all?” (Stevens & Levi, 2013).

Listing
In the listing stage, the criteria, objectives, and targets are listed and written out as items. For example (Example 1): a teacher gave a research assignment based on group work to his graduate students and asked them to present it, and now wants to prepare a rubric for evaluating the students’ work. How can the teacher prepare a rubric for this? Let us think about it.

Table 12.4 Group presentation skills list
Clear and understandable speech
Effective usage of body language
Equal presentation by each member of the group
Good interaction between each member of the group
Enough research about the topic of the presentation
Equal workload for each member of the group
Communication of each member of the group with the instructor before the presentation
Table 12.4 shows the criteria of the rubric in the second stage: the target behaviours are listed. The skills in Table 12.4 are the behaviours expected from students in the group work.

Grouping and Labelling
The behaviours listed in the previous step are grouped at this stage, and then the groups are named. The behaviours of Table 12.4 are labelled and grouped in Figure 12.4 as follows.
Dimension 1: Organization
Sufficient research about the topic of the presentation
Equal workload for each member of the group
Communication of each member of the group with the instructor before the presentation

Dimension 2: Presentation
Clear and understandable speech
Effective usage of body language
Equal presentation by each member of the group
Good interaction between each member of the group

Figure 12.4 Group presentation grouping and labelling
Figure 12.4 presents the group presentation criteria after grouping and labelling: the items listed in the second stage have been divided into specific dimensions, organization and presentation. The organization dimension scores the group’s preparation, while the presentation dimension scores the group’s presentation according to specific criteria. Finally, in the grouping and labelling stage, the listed criteria are grouped and each dimension is given a label together with its criteria.

Application
In the application stage, the teacher transfers the lists and groupings to a rubric grid. At this stage, it must be decided how the student behaviours in the grouped rubric will be rated; accordingly, what the behaviours and scores mean is written into the table. Table 12.5 gives an example of the application of rubric criteria.
Buket Özüm BÜLBÜL
Table 12.5 An Example of the Group Presentation Rubric (columns: Criteria, One point, Two points, Three points, Score)

Organization (Dimension 1)

Criterion: The topic of the presentation should be well researched.
One point: The topic is researched from a single source.
Two points: The topic is researched from two or three sources.
Three points: The topic is researched from more than three sources.

Criterion: Dissemination of the workload as equally as possible.
One point: The workload is given to one person.
Two points: The workload is given to two people.
Three points: The workload is disseminated equally to all members.

Criterion: Each member of the group keeps in touch with the teacher before the presentation.
One point: Just one member keeps in touch with the teacher before the presentation.
Two points: Two members keep in touch with the teacher before the presentation.
Three points: All members of the group keep in touch with the teacher before the presentation.

Presentation (Dimension 2)

Criterion: Clear and understandable speech.
One point: The presentation is complex and hard to understand.
Two points: The presentation is partly clear but hard to follow in places.
Three points: The presentation is clear and understandable.

Criterion: Effective usage of body language.
One point: Body language is not effective and the presentation is monotonous.
Two points: Body language is only partly effective.
Three points: The members of the group use body language effectively during the presentation.

Criterion: Presentation time distribution among the members of the group.
One point: One of the group members did not present.
Two points: Some members used much more presentation time than others.
Three points: All group members used equal time.
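Outside the original chapter, the scoring logic of an analytic rubric such as Table 12.5 (three-point levels per criterion, totals per dimension) can be sketched in code. The following Python fragment is an illustrative addition: the criterion labels are shortened paraphrases of the table, and the function and variable names are hypothetical.

```python
# Illustrative sketch: scoring an analytic rubric like Table 12.5.
# Each criterion is rated 1-3; the total score is the sum of all ratings.

RUBRIC = {
    "Organization": [
        "Well-researched topic",
        "Equal workload dissemination",
        "Contact with teacher before presentation",
    ],
    "Presentation": [
        "Clear and understandable speech",
        "Effective body language",
        "Equal presentation time per member",
    ],
}

def score_group(ratings: dict) -> dict:
    """Sum 1-3 ratings per dimension and overall.

    `ratings` maps criterion name -> points (1, 2 or 3)."""
    result = {}
    for dimension, criteria in RUBRIC.items():
        for criterion in criteria:
            points = ratings[criterion]
            if points not in (1, 2, 3):
                raise ValueError(f"invalid rating for {criterion}: {points}")
            result[dimension] = result.get(dimension, 0) + points
    result["Total"] = sum(result[d] for d in RUBRIC)
    return result

example = {
    "Well-researched topic": 3,
    "Equal workload dissemination": 2,
    "Contact with teacher before presentation": 3,
    "Clear and understandable speech": 2,
    "Effective body language": 1,
    "Equal presentation time per member": 3,
}
print(score_group(example))  # total is 14 of a possible 18
```

Keeping per-dimension subtotals, rather than only the grand total, mirrors the table's two labelled dimensions and lets the teacher give feedback on organization and presentation separately.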
Table 12.5 presents the rubric for the research assignment based on group work. Each criterion in each group is rated in three degrees, and the total score is the sum of the scores across these degrees. In other words, in the application stage each criterion is explained and scored as in Table 12.5.

Benefits and Limitations of the Rubrics
The main purpose of rubrics, which are among the alternative assessment and evaluation tools, is to evaluate students' behaviours in the learning process and to contribute positively to those behaviours. In this respect, rubrics have many benefits in the educational environment. However, poorly prepared rubrics may have some disadvantages. In the literature, the advantages (benefits) and disadvantages (limitations) of rubrics have been described (Menendez-Varela & Gregori-Giralt, 2018).

Benefits of Rubrics
The benefits of rubrics can be listed as follows. Rubrics:
- make learning goals clear,
- guide instructional design,
- allow the evaluation process to be fairer,
- give students the chance to receive feedback and to give feedback to themselves,
- allow students to evaluate themselves individually,
- give the teacher information about students' learning,
- allow learning to be treated as a process.
Limitations of Rubrics
Well-designed rubrics provide reliability and validity in the teaching and learning process, but ill-prepared rubrics do not. A rubric should be prepared according to the students' characteristics, the content and the target behaviours, so extra effort and time are required to prepare it. Moreover, ill-prepared rubrics misdirect students. In short, the main limitation of rubrics is that they take considerable time to prepare.

Conclusion
There is a maxim in education: "The more the teacher tells the students, the more they understand". In other words, even if a teacher prepares a good learning process, he or she cannot teach effectively without the students' willingness to learn. At this point, assessment and evaluation approaches come into play. In recent years, formative assessment has gained importance among these approaches, because when teachers give formative feedback instead of only a grade, they can better support students' conceptual learning. Accordingly, this chapter of the book has addressed rubrics, one of the formative assessment and evaluation tools. A rubric is a detailed scoring guide used to evaluate a performance or a piece of work. The chapter therefore sought answers to the questions "Why do we need rubrics?" and "Why are they important?". The kinds of rubrics were then introduced, followed by the procedure for developing them, illustrated with examples of how a teacher in the educational sciences might evaluate students' research projects; a rubric was developed in this chapter using these examples. The last part of the chapter discussed the benefits and limitations of rubrics. It is hoped that evaluating performance and process in this way will help students to see their own development and their mistakes, that teachers will be able to focus on students' difficulties in their lessons by learning the rubric development stages and the situations in which rubrics are used, and that students will become aware of their individual development through using rubrics.
References
Andrade, H. L. & Heritage, M. (2018). Using Formative Assessment to Enhance Learning, Achievement and Self-Regulation. New York: Routledge.
Bastian, K. C., Lys, K. & Pan, Y. (2018). A framework for improvement: Analyzing performance-assessment scores for evidence-based teacher preparation program reforms. Journal of Teacher Education, 69(5), 448-462.
Brookhart, S. M. (2013). How to Create and Use Rubrics for Formative Assessment and Grading. Alexandria, VA: ASCD Publications.
Danielson, C. & Hansen, P. (2016). Performance Tasks and Rubrics for Early Elementary Mathematics (2nd ed.). New York: Routledge.
Danielson, C. & Marquez, E. (2016). Performance Tasks and Rubrics for High School Mathematics. New York: Routledge.
Danielson, C. & Dragoon, J. (2016). Performance Tasks and Rubrics for Upper Elementary Mathematics. New York: Routledge.
Jackson, M. & Larkin, J. M. (2002). Teaching students to use grading rubrics. Teaching Exceptional Children, 35(1), 40-45.
Menendez-Varela, J. L. & Gregori-Giralt, E. (2018). Rubrics for developing students' professional judgement: A study of sustainable assessment in arts education. Studies in Educational Evaluation, 58, 70-79.
Mertler, C. A. (2001). Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation, 7(25), 1-10.
Prins, F. J., de Kleijn, R. & van Tartwijk, J. (2017). Students' use of a rubric for research theses. Assessment & Evaluation in Higher Education, 42(1), 128-150.
Savaş, H. (2013). A study on the evaluation and the development of computerized essay scoring rubrics in terms of reliability and validity (Unpublished master's thesis). Çağ University Institute of Social Sciences, Turkey.
Stevens, D. D. & Levi, A. (2013). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning (2nd ed.). Sterling, VA: Stylus Publishing.
Şaşmaz-Ören, F., Ormancı, Ü. & Evrekli, E. (2014). The alternative assessment-evaluation approaches preferred by pre-service teachers and their self-efficacy towards these approaches. Education and Science, 39(173), 101-116.
Tenam-Zemach, M. & Flynn, J. E. (2015). A Rubric Nation: Critical Inquiries on the Impact of Rubrics in Education. Charlotte, NC: Information Age Publishing.
Yune, S. J., Lee, S. Y., Im, S. J., Kam, B. S. & Baek, S. Y. (2018). Holistic rubric vs. analytic rubric for measuring clinical performance levels in medical students. BMC Medical Education, 18(124), 1-6.
Chapter 13 Assessment through the Scamper Banuçiçek ÖZDEMİR
Introduction
The SCAMPER technique, which is defined as a guided brainstorming technique, is named after the abbreviation formed from the initials of its seven stages (S.C.A.M.P.E.R.): S: Substitute; C: Combine; A: Adapt; M: Modify, Minify, Magnify; P: Put to other uses; E: Eliminate; R: Reverse, Rearrange (Yıldız & İsrael, 2002). This chapter presents the definition of SCAMPER, its application steps, application examples, and the evaluation steps involved in its application.

What is the Scamper?
Individuals must go through certain meaningful cognitive processes in order to perceive the events occurring in their lives. Every individual who undergoes this process needs to make certain classifications in his or her cognitive process. This classification is very important for identifying the common characteristics of pieces of information and for revealing their differences. An individual's perception is shaped, from the beginning of the educational age onwards, by environmental factors and life processes. Once information is perceived, the classifications the individual makes are important for understanding and organizing that information. The classroom is one of the environments in which individuals express their ideas and in which new information is perceived and understood. The exchange of ideas on the subject at hand in the classroom
or on the spoken subject takes place within certain limits. In the traditional approach, information was delivered by the teacher, and students were regarded as its direct recipients. With the adoption of the constructivist approach, directly giving information has been restricted, and the teacher has taken on a student-centred, guiding role. Certain strategies can be used to guide learners' knowledge. The SCAMPER technique, which is considered among the discussion methods, is thought to develop individuals' creative thinking skills. Unlike open brainstorming, this technique prevents students from drifting away from the subject and focuses on its different aspects without losing the thread. The SCAMPER technique encourages individuals to produce different or original ideas while at the same time encouraging different solutions to problems (Eberle, 1996). It enables individuals to think multidimensionally and to move away from existing thinking patterns (Yıldız & İsrael, 2002). It supports thinking differently about existing objects, developing problem-solving skills, and developing the creativity needed to create original products (Serrat, 2009). It has been observed that it makes a difference in individuals' thinking and positively affects their intellectual skills (Gladding, 2011). SCAMPER is a technique that improves the individual's creativity by stimulating thought (Glenn, 1997). Through different perspectives, different ideas about the same phenomenon are revealed in the individual's thinking. The ideas obtained from individuals with differing opinions may also guide the subsequent learning-teaching process. Students in the same age group, using creative thinking in later learning, connect unknown information with known objects. This makes learning, perception and conceptual frameworks easier to plan.
The SCAMPER technique aims to develop creative thinking skills on known objects or phenomena rather than to teach new knowledge.

Scamper's Practice Steps
SCAMPER is a creative thinking strategy, each step of which is named so as to develop the creative thinking skills of teachers and students and to generate new ideas (Michalko, 2006). It is also one of the methods that allows individuals to internalize creative thinking and to build the habit of producing creative ideas (Alyazad, 2014).
Figure 13.1 SCAMPER practice steps (https://litemind.com/wp-content/uploads/misc/litemind-scamper-reference.pdf)

The steps of the SCAMPER technique, their purposes, and examples of questions for each step are as follows (Glenn, 1997).

Substitute
This is the stage of replacing the object or phenomenon to which the SCAMPER technique is applied with another object. Questions are asked to determine how the replacement object is selected and why. These questions reveal the individual's thoughts and allow us to examine
the effect of imagination on creativity. For this, the following questions can be asked (Glenn, 1997):
a. What object can be used to replace this object?
b. What is currently used instead of this object?
c. What would be more useful if you replaced this object?
d. What would you alter if you wanted to change this object?
e. What materials would you use if you wanted to make this again?
f. How else could you make this?
Combine
Different objects or phenomena are brought together to create new and more useful structures. For this, the following questions can be asked (Glenn, 1997):
a. What else can we combine this object with?
b. What can we combine parts of this object with?
c. If we combine this object with another object, how does it function? (Yıldız & İsrael, 2002)
d. Can we combine this object with another object? (Yıldız & İsrael, 2002)
This stage allows a new product to be created (Michalko, 2006) and aims and ideas to be merged (Serrat, 2009) by combining more than one object or idea.

Adapt
This is the stage of determining the compatibility of the existing object or phenomenon with a different situation or purpose. It involves adapting the existing object, by making changes, to serve a new object, situation or purpose. The questions to be asked in this phase are as follows (Glenn, 1997; Yıldız & İsrael, 2002):
a. How can you benefit from waste materials?
b. What objects are used instead of this object?
c. How can this object be made more functional?
d. What changes would you like to make to this object?
e. What else can we make with this object?
Modify, Minify, Magnify
In this stage, changes are made to the existing object: reducing or enlarging its original size (in dimension or volume), making it lighter or heavier, or making it faster or slower.
The questions to be asked at this stage are as follows (Glenn, 1997; Yıldız & İsrael, 2002):
a. What differences can be made in the appearance of this object?
b. What happens if this object is heavier or lighter?
c. What colours would you prefer if you wanted to change the colours of this object?
d. How would we shorten or lengthen this object?
e. What would change if this object were faster or slower?
f. What else could be added to this object? (Serrat, 2017)

Put to Other Uses
This is the stage in which the object is used in an area different from its usual one. Questions determine in what other fields, for what purposes, and in what ways the object can be used (Glenn, 1997). The following questions can be asked in this phase (Glenn, 1997; Yıldız & İsrael, 2002):
a. What other purpose can this object be used for?
b. How can we use the existing object to solve a problem?
c. How can you promote this object?
d. What can we do with this object together with different objects?
e. What changes can we make to use this object in different places?
f. How would you use this object if you did not know what it was for? (Serrat, 2009)
Eliminate
This is the stage in which creativity is revealed by removing all or part of the object or phenomenon (Glenn, 1997). The following questions can be asked in this phase (Glenn, 1997; Yıldız & İsrael, 2002):
a. What parts of this object would you like to change?
b. What feature do old versions of this object have that new ones do not?
c. Which features of this object were used in the past but are not used now?
d. What is the main part of this object?
e. What happens if you remove all parts except the main part of this object?
f. What happens if you remove any part of this object?

Reverse, Rearrange
This is the stage of rearranging the properties of the existing object with its use in mind (Glenn, 1997). At this stage, the object is considered in reverse or turned inside out. The following questions can be asked in this phase (Glenn, 1997; Yıldız & İsrael, 2002):
a. How else could you have made this object?
b. Which parts of this object can you change?
c. Which features of this object should be rearranged?
d. Which parts of this object would you rearrange?
e. What object could be replaced with this object?
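Since each SCAMPER step reduces to a fixed set of guiding prompts applied to a chosen object, the technique can be represented as a simple question generator. The sketch below is an illustrative addition, not part of the original chapter; the prompt wordings paraphrase the questions listed above, and the names are hypothetical.

```python
# Illustrative sketch: generating one SCAMPER prompt per step for a chosen object.
SCAMPER_PROMPTS = {
    "Substitute": "What could be used to replace {obj}?",
    "Combine": "What else could {obj} be combined with?",
    "Adapt": "How could {obj} be adapted to a new purpose?",
    "Modify": "What happens if {obj} is made larger, smaller, faster or slower?",
    "Put to other uses": "For what other purposes could {obj} be used?",
    "Eliminate": "What happens if a part of {obj} is removed?",
    "Reverse": "Which parts of {obj} could be rearranged or reversed?",
}

def scamper_questions(obj: str) -> list:
    """Return one guiding question per SCAMPER step for `obj`."""
    return [f"{step}: {tmpl.format(obj=obj)}"
            for step, tmpl in SCAMPER_PROMPTS.items()]

for question in scamper_questions("the clothes hanger"):
    print(question)
```

A teacher could extend the dictionary with the full question lists above; the fixed step order also mirrors the S.C.A.M.P.E.R. acronym.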
SCAMPER APPLICATION EXAMPLES

Substitute Sample (Moreno, Yang & Hernandez Wood, 2014)
Question: What can be used instead of hand-wash cloths for small children?
Answer: Disposable cloths (used once and thrown away).

Combine Sample 1 (Glenn, 1997)
Question: What could the coat hanger be combined with to make it more functional?
Answer: A computer-controlled pollution meter could be attached to determine whether the garment hanging on the hanger is clean or dirty.

Combine Sample 2 (Daly et al., 2012)
Question 1: What happens when a phone is combined with a camera?
Question 2: What is the function of combining the phone and the radio?
Question 3: What kind of function results from integrating the printer and the scanner?
Question 4: Why is the combination of bread, cheese and salami called a toast?
Adapt Sample 1 (Glenn, 1997)
Question: How can we make the clothes hanger different and more special?
Answer 1: I would make it from a water-absorbing material so that wet clothes dry without dripping.
Answer 2: I would add different scents to the clothes hanger to remove bad odours from the clothes.

Adapt Sample 2 (Buser et al., 2011)
Question: How can tires be made for bad weather conditions?
Answer: By manufacturing thick, deep-treaded tires.

Modify, Minify, Magnify Sample (Glenn, 1997)
Question: What kinds of changes can be made to the clothes hanger?
Answer 1: It can be made wider and more robust so that heavy items such as blankets can be hung on it.
Answer 2: Hangers can be made in different colours according to the type of clothing.

Put to Other Uses Sample (Glenn, 1997)
Question: For what other purposes can the clothes hanger be used?
Answer 1: It can be used as a radio or television antenna.
Answer 2: It can be bent into a separator or a paperclip shape.

Eliminate Sample (Buser et al., 2011)
Question 1: What if the cables of telephones were removed?
Question 2: What happens if there is no oil when preparing food?

Reverse Sample (Gladding, 2007)
Question: How would you do it if you rearranged the words and letters in video clips and letters?
Conclusion
Education occupies a very important place in individuals' lives. Moving away from tradition, education is turning towards a constructivist approach that supports learning by doing and experiencing. The retention of knowledge that individuals gain through experience in a learning-teaching process in which they actively take part is quite high. In this process it is very important that individuals keep producing different thoughts and different points of view. SCAMPER, one of the creative thinking strategies, provides individuals with a different perspective and a different perception regardless of education level. The feedback received from students at each step is very useful for measuring both the creativity and the learning of the student. The concrete examples given at each stage make it easy to evaluate the areas where students are insufficient. It is a very practical method for students. How to implement the method, and how students should be evaluated at each implementation step, should be explained to the relevant teachers through training. This method, which supports the development of creative thinking skills, is thought to make learning more permanent and productive by having individuals look at an object from a different point of view, so that they learn through different ideas. Within the same peer group, it will also be worthwhile to train teachers on what students have likened, modified or reorganized as a result of their creative thinking. The object chosen for the SCAMPER technique can be interpreted from different perspectives through the different questions of the technique's steps.
The teacher who will carry out a SCAMPER application must have command of the subject matter and must give preliminary information about the object to be used before the application. Drawing the students' attention also helps focus their interest on the object (Eberle, 1996). The teacher's experience with the practice is as important as the application of the SCAMPER technique itself (Roger, 2011). Students and teachers can be evaluated in two ways:
1. While determining the student's own creative thinking tendencies, students can achieve more permanent and effective learning from the different points of view of their peer group.
2. The teacher identifies the different points of view about the object within the same sample group without letting the students stray from the chosen focus object.
In addition, having students add different and creative ideas from their own experiences, and teaching with these ideas in the following years, can be worthwhile. SCAMPER is a kind of acrostic that allows the student to be assessed, by himself or herself and by the teacher, at every step of the training. It is important that the teacher intervenes in the education process when there is a deficiency or inaccuracy at any step, which also helps the student to become aware of it. It has been found that the technique increased teachers' and students' confidence and motivation (Jelena et al., 2014). As a result, the SCAMPER technique is a method that develops creative thinking towards an object or phenomenon from the point of view of both the educator and the student. The method allows the individual to be evaluated at each stage. Used in the learning-teaching process with content suited to the sample group and to the targets of education, the SCAMPER technique will also allow teachers to evaluate their students. It is a method that can be applied at different levels of education, from pre-school to higher education, and, with the teacher acting as a guide, it enables students to think differently without straying from the subject. It is recommended that the method be applied in different disciplines and areas.
References
Alyazad, M. N. (2014). The development of creative thinking in preschool teachers: The effects of SCAMPER program. International Journal of Psycho-Educational Sciences, 6(3), 81-87.
Buser, J. K., Buser, T. J., Gladding, S. T. & Wilkerson, J. (2011). The creative counsellor: Using the SCAMPER model in counsellor training. Journal of Creativity in Mental Health, 6(4), 256-273.
Daly, S. R., Christian, J. R., Yılmaz, S., Seifert, C. M. & Gonzales, R. (2012). Assessing design heuristics for idea generation in an introductory engineering course. International Journal of Engineering Education, 28(2), 463-473.
Eberle, B. (1996). Scamper: Games and activities for imagination development. Waco, TX: Prufrock.
Gladding, S. T. & Microtraining Associates (2007). Becoming creative as a counsellor: The SCAMPER model. United States: Microtraining Associates.
Gladding, S. T. (2011). Using creativity and the creative arts in counselling: An international approach. Turkish Psychological Counselling and Guidance Journal, 4(35), 1-7.
Glenn, R. E. (1997). SCAMPER for student creativity. Education Digest, 62(6), 67-68. Retrieved from http://web.b.ebscohost.com/ehost/detail/detail?vid=2&sid=f183924f-2939-474c-8566b0596747548d%40sessionmgr103&bdata=Jmxhbmc9dHImc2l0ZT1laG9zdC1saXZl#db=eue&AN=503531629 on 15.11.2018.
Jelena, P., Apple, C. Y., Toby, M. Y. & Tong, S. L. (2014). The feasibility of enhancement of knowledge and self-confidence in creativity: A pilot study of a three-hour SCAMPER workshop on secondary students. Thinking Skills and Creativity, 14, 32-40.
Michalko, M. (2006). Thinkertoys: A handbook of creative thinking techniques. Berkeley, CA: Ten Speed.
Moreno, D. P., Yang, M. C. & Wood, K. L. (2014). Design creativity for every design problem: A design by-analogy approach. Design Computing and Cognition, 14, 1-10.
Roger, V. (2011). SCAMPER: A tool for creativity and imagination development for children. New York: Center for Creative Leadership.
Serrat, O. (2009). The SCAMPER technique. Knowledge Solutions. Retrieved from http://www.adb.org/sites/default/files/publication/27643/scampertechnique.pdf on 14.10.2018.
Serrat, O. (2017). The SCAMPER technique. In: Knowledge Solutions. Singapore: Springer.
Yıldız, V. & İsrael, E. (2002). Yaratıcılığı geliştirmede bir yol: SCAMPER [A way to develop creativity: SCAMPER]. Journal of Education for Life, 74-75, 53-55.
URL (Figure 13.1): Litemind's blog on Creative Problem Solving with SCAMPER (available: http://litemind.com/scamper/) suggests more than 60 questions that can be asked, along with almost 200 words and expressions one can create associations with. https://litemind.com/wp-content/uploads/misc/litemind-scamper-reference.pdf
Chapter 14 Self Peer and Group Assessment Ahmet Volkan YÜZÜAK
Introduction
Education programs and the measurement and evaluation process are not standard or valid for everyone. For this reason, maximum diversity and flexibility are essential in the measurement and evaluation process (MoNE, 2018), and different assessment techniques can be used to create an effective learning environment. "The importance of the role of assessment in the teaching and learning process cannot be doubted" (Askham, 1997, p. 300). Students may be involved in assessment in a wide variety of ways, such as self-assessment, peer assessment, feedback provision, self- or peer testing, and negotiation or collaboration with lecturers about some aspect of the process (Hounsell et al., 1996). Assessment is a term that covers all procedures used to obtain information about student learning and to form value judgments about the learning process. The two types of assessment are formative (assessment for learning) and summative (assessment of learning). Formative assessment concerns judgments about the quality of student responses (performances, pieces of work) and can be used to shape and improve students' proficiency. Summative assessment, which is essentially passive but often influences important decisions about students, is concerned with summarizing their achievement status. Feedback plays an important role in formative assessment (Sadler, 1989; Carless et al., 2006; Hounsell, 2006). Figure 14.1 shows a model and seven principles of a good feedback mechanism (Nicol & Macfarlane-Dick, 2006).
Figure 14.1 A model and seven principles of good feedback practice

The seven principles of good feedback are: facilitating the development of self-assessment and reflection in learning; encouraging teacher and peer dialogue around learning; helping to clarify the characteristics of good performance; providing opportunities to close the gap between current and desired performance; delivering high-quality information to students about their learning; encouraging positive motivational beliefs and self-esteem; and providing information to teachers that can be used to help shape teaching (Nicol & Milligan, 2006; Nicol & Macfarlane-Dick, 2004; 2006).

Self-Assessment
Self-assessment is one of the important components of good feedback practice. Work in the feedback field has emphasized the importance of students as self-assessors: self-assessors are able to provide their own feedback, since they understand the standard they are aiming for and can judge and adjust their own performance (Nicol & Macfarlane-Dick, 2006). The self-assessment process includes students' judgments about their own work related
to a learning goal, reflecting on their efforts, identifying improvements, and adjusting the 'quality' of their work (Black et al., 2004). Student self-assessment is "the process by which the student gathers information about and reflects on his or her own learning … [it] is the student's own assessment of personal progress in knowledge, skills, processes, or attitudes. Self-assessment leads a student to a greater awareness and understanding of himself or herself as a learner" (Ontario Ministry of Education, 2002, p. 3; Deveci, 2017; Yılmaz et al., 2016). A learner can achieve an objective to the extent that he or she possesses a concept of the standard (i.e. the learning goal or reference level), compares the actual level of performance with that standard, and engages in appropriate action. For this reason, self-assessment is necessary for learning (Sadler, 1989). It has been shown that self-assessment raises students' achievement significantly, improves student behaviour, and increases student engagement and motivation (White & Stiggins, 2002; Rolheiser & Ross, 2001; White & Frederiksen, 1998; Ross, 2006). Ross (2006) explains teachers' and lecturers' use of self-assessment by its distinctive features: it gives students opportunities to contribute to the criteria by which a piece of work is judged, engages them in tasks, and maintains their interest and attention. In addition, the technique is more cost-effective than other assessment techniques and enhances learning, since students may learn more when they share responsibility for the assessment. Two examples of self-assessment are shown in Figure 14.2 and Figure 14.3.
Laboratory report

Student name: …………………   Date: …………….   Lecturer: ………………………
Rating scale (1: I am bad at this, 2: I am neither bad nor good, 3: I am good at this; for each criterion the form provides columns for the 1-3 rating and for "How can I improve?")

1. Explaining the aim of the experiment
2. Using the laboratory equipment
3. Carrying out the experiment
4. Recording the data
5. Analysing the data
6. Reporting the process

I want to add that: ……………………………………………………………

Figure 14.2 An example of a self-assessment form
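A form such as the one in Figure 14.2 can also be tallied automatically, for example to flag the criteria a student rates lowest. The following Python sketch is a hypothetical illustration added here: the criterion labels follow the figure, but the function and variable names are invented.

```python
# Illustrative sketch: summarising a 1-3 self-assessment form (cf. Figure 14.2).
CRITERIA = [
    "Explaining the aim of the experiment",
    "Using the laboratory equipment",
    "Carrying out the experiment",
    "Recording the data",
    "Analysing the data",
    "Reporting the process",
]

def summarise(ratings: dict) -> dict:
    """Return the mean rating and the criteria self-rated 1 ('I am bad at this')."""
    values = [ratings[c] for c in CRITERIA]
    return {
        "mean": round(sum(values) / len(values), 2),
        "needs_work": [c for c in CRITERIA if ratings[c] == 1],
    }

# One student's self-ratings, in the order of the criteria above.
ratings = dict(zip(CRITERIA, [3, 2, 3, 1, 2, 3]))
print(summarise(ratings))
```

The "needs_work" list plays the role of the form's "How can I improve?" column: it points the student (and the lecturer) to the criteria that most need attention.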
My opinions related to my experiment

Student name: …………………   Student ID: ………………………   Date: …………….
1. What did I do well in this experiment?
2. What did I learn in this experiment?
3. What did I need for the experiment?
4. What did I expect while I was doing my work?
5. What can I do to improve my skills and make a better experiment?

Figure 14.3 A questionnaire related to self-assessment

Figures 14.2 and 14.3 give examples of self-assessment forms. Boud (2003) stated that students are always self-assessing. Self-assessment enables students to become effective and responsible learners. The self-assessment process plays an important role in lifelong and effective learning, and it needs to be developed in university courses as well. Self-assessment can be used for individual self-monitoring, promoting good learning processes, diagnosis and remediation, supporting other forms of assessment, improving professional or academic practice, consolidating learning, reviewing achievements, and building self-knowledge and understanding. El-Maaddawy (2017, p. 87) introduced an assessment paradigm that integrates formative, summative and self-assessment (Figure 14.4).
1. Criteria/Standards: the instructor discusses the assessment criteria/standards with students.
2. Student Response: students work on the assessment task and then submit their work to the instructor.
3. Feedback: the instructor makes a judgement on the student work and provides minimal feedback and a tentative grade.
4. Self-Assessment: students identify possible sources of errors, suggest corrections, and submit a self-assessment report.
5. Final Assessment: the instructor makes a judgement on the self-assessment report and provides the final grade and feedback.

Figure 14.4 Assessment paradigm

As shown in Figure 14.4, the assessment paradigm comprises five main steps: criteria, student response, feedback, self-assessment and final assessment. The second, third and fourth steps can be regarded as formative assessment, while the final assessment can be regarded as summative assessment.

Peer Assessment

Peer and self-assessment are methods of marking and creating feedback, and both techniques have been used for a long time. For example, George Jardine, who was a professor at the University of Glasgow from
1774 to 1826, described a plan which included methods, rules and advantages of peer assessment of writing (Gaillet, 1992; Kara & Bakırcı, 2018). Self- and peer-assessment techniques can be used in alternative assessment. Self-assessment and peer assessment are interrelated collaborative learning techniques (Berry, 2005), and they can be used together. The two techniques have some similarities; for instance, in the peer-assessment process students assess each other's individually produced work by using a predetermined list of criteria (Falchikov, 2005). Peer assessment can be considered part of peer tutoring (Donaldson & Topping, 1996), which has been defined as "students learning from and with each other in both formal and informal ways" (Boud et al., 2001, p. 4). Self- or peer-assessment studies involve four main stages: preparation, implementation, follow-up and evaluation, and replication. These stages are illustrated in Figure 14.5 (Falchikov, 2004).
Figure 14.5 Stages to conduct and evaluate a self- or peer-assessment study
Falchikov (2004, p. 105) identified stages for evaluating a self- or peer-assessment study and, in the same study, examined some common problems and their solutions. The problems include the following: students often dislike either the idea or the experience of being involved in assessment; colleagues are suspicious of, or hostile to, the idea; and setting up studies involves too much time. The suggested solutions include discussing these problems with students and preparing them thoroughly, considering using student assessment for formative purposes or reducing the amount that student-derived marks 'count', and stressing that the time is well spent because of the great benefits to students. Self- and peer-assessment can promote autonomy (Ashraf & Mahninezhad, 2015). The autonomous learning model shows the interaction of four main elements: the learner, the teacher, the task and the environment (Higgs, 1988). Learner autonomy is one of the main goals of education (Boud, 1988, p. 18). Moreover, Race (2001, pp. 85-86) stated seven reasons for using peer assessment:
1. Students are already doing it: students are, in fact, continuously assessing their peers.
2. Students find out more about our assessment cultures: one of the biggest dangers with assessment is that students often do not really know how their assessment works.
3. We cannot do as much assessing as we used to: with more students, heavier teaching loads and shorter timescales, the amount of assessment that lecturers can cope with is limited.
4. Students learn more deeply when they have a sense of control over the agenda.
5. The act of assessing is one of the deepest learning experiences: applying criteria to someone else's work is one of the most productive ways of developing and deepening understanding of the subject matter involved.
6. Peer assessment allows students to learn from each other's successes.
7. Peer assessment allows students to learn from each other's weaknesses.
Peer assessment is also important for group processes, which are a necessary component of student preparation for a business career. Helms and Haynes (1990) identified factors that are involved in group dysfunction and result in problematic behaviours, such as divergent individual characteristics, lack of mutual trust or support, and disagreement on task purpose or direction… Although many mechanisms are available to facilitate group participation, peer evaluations are essential among them, as Helms and Haynes (1990) state. An example of peer assessment is shown in Figure 14.6.

Peer Evaluation Form (for a micro-teaching activity)
Assessment of: ………………….. Educational method used: ……… Date: …………..
(1: Weak; 2: Fair; 3: Good)

Criteria (rate each from 1 to 3):
- An effective introduction
- Expressing the objectives
- Increasing motivation
- Effective communication
- Commitment to the lesson plan
- Introduction to the experiment
- Compliance between the instruction and the experiment
- Introduction of the experiment materials
- Concluding the experiment
- An effective evaluation related to the technique
- Use of time

Explanations: ………………………………………………………

Name of the student (assessor): ………………. Grade: …..

Figure 14.6 An example of a peer-assessment form
Group Assessment

A study group generally includes three to six students in a classroom. Groups can work according to various principles: the students may work as a group while each student submits his or her own study, or only a single group report may be submitted to the lecturer. Group assessment may therefore take different forms, but it can generally be defined as the assessment of the group working process. Group projects can be poster presentations, a laboratory report, a project on the use of scientific methods, an analysis of a documentary, the preparation of a video or a website, etc. Group assessment can sometimes be problematic. Knight (2004, p. 64) stated some advantages and disadvantages of individual and group tasks from the teachers' point of view (Table 14.1).

Table 14.1 Some advantages and disadvantages of individual and group work tasks from the teachers' point of view

Individual work
  Advantages: gives an individual assessment of performance; is a personal piece of work; promotes deep learning approaches.
  Disadvantages: a lot of marking; may not be interesting for students.
Group work
  Advantages: encourages teamwork; maximises available resources; cuts down on marking; shares workload.
  Disadvantages: individuals get subsumed within the whole; some students get away with doing little.

Even though group assessments can be problematic, there are many reasons to carry out group assessment projects. In addition to the advantages listed in Table 14.1, group assessment also:
* fosters teamwork
* is related to a student-centred approach
* increases communication
* maximises available resources
* shares workload
* …
An example of a group-assessment form is shown in Figure 14.7 (MoNE, 2004; Deveci & Önder, 2015).
Project A

Name of the Lecturer/Assessor: ……… Name of Group: ………………….. Date: ………
(5: Strongly agree; 4: Agree; 3: Undecided; 2: Disagree; 1: Strongly disagree)

Evaluation criteria (rated separately for each group member):
1. Ready to work
2. Listening to others
3. Sharing responsibilities
4. Supporting group friends
5. Participation in discussions
6. Justification of opinions
7. Respect for different views
8. Willingness for the project
9. Using time efficiently
10. Completing homework
11. Keeping homework
Total points

Explanations: ………………………………………………………

Figure 14.7 Group-assessment form

Conclusion

This part of the book introduced the self-assessment, peer-assessment and group-assessment techniques, together with example forms related to the techniques, and identified their advantages. For example, self- and peer-assessment can promote autonomy (Ashraf & Mahninezhad, 2015). The autonomous learning model is shown in Figure 14.8 (Higgs, 1988).
Figure 14.8 Autonomous learning

Figure 14.8 indicates the interaction of the four main elements of the autonomous learning model: the learner, the teacher, the task and the environment (Higgs, 1988). Learner autonomy is a main goal of education (Boud, 1988, p. 18), and peer assessment is important for autonomy. Moreover, Race (2001, pp. 85-86) stated seven reasons for using peer assessment. These assessment techniques are an essential aspect of formative assessment. As discussed, assessment practices such as self-, peer- and group-assessment are used to improve the learning process, students' self-confidence, their ability to express their thoughts and feelings, and communication within and outside the group.
References

Ashraf, H., & Mahninezhad, M. (2015). The role of peer-assessment versus self-assessment in promoting autonomy in language use: A case of EFL learners. Iranian Journal of Language Testing, 5(2), 110-120.
Askham, P. (1997). An instrumental response to the instrumental student: Assessment for learning. Studies in Educational Evaluation, 23(4), 299-317.
Berry, R. (2005). Entwining feedback, self, and peer assessment. Academic Exchange Quarterly, 9(3), 225-229.
Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2004). Working inside the black box: Assessment for learning in the classroom. Phi Delta Kappan, 86(1), 8-22.
Boud, D. (Ed.) (1988). Developing student autonomy in learning. New York: Kogan Page.
Boud, D. (2003). Enhancing learning through self-assessment. London: Kogan Page.
Boud, D., Cohen, R., & Sampson, J. (2001). Peer learning in higher education: Learning from and with each other. London, UK: Kogan Page.
Carless, D., Joughin, G., & Mok, M. M. C. (2006). Learning-oriented assessment: Principles and practice. Assessment and Evaluation in Higher Education, 31(4), 395-398.
Chappuis, S., & Stiggins, R. J. (2002). Classroom assessment for learning. Educational Leadership, 60(1), 40-43.
Deveci, İ., & Önder, İ. (2015). Views of middle school students on homework assignments in science courses. Science Education International, 26(4), 539-556.
Deveci, İ. (2017). Fen bilimleri öğretmen adaylarının girişimci özellikler ile ilgili öz değerlendirmeleri. Mehmet Akif Ersoy Üniversitesi Eğitim Fakültesi Dergisi, 44, 202-228.
Donaldson, A. J. M., Topping, K. J., Aitchison, R., Campbell, J., McKenzie, J., & Wallis, D. (1996). Promoting peer assisted learning among students in further and higher education (SEDA Paper 96). Birmingham: Staff and Educational Development Association.
El-Maaddawy, T. (2017). Enhancing learning of engineering students through self-assessment. Global Engineering Education Conference (EDUCON) (pp. 86-91). IEEE.
Falchikov, N. (2004). Involving students in assessment. Psychology Learning & Teaching, 3(2), 102-108.
Falchikov, N. (2013). Improving assessment through student involvement: Practical solutions for aiding learning in higher and further education. London: Routledge.
Gaillet, L. I. (1992). A foreshadowing of modern theories and practices of collaborative learning: The work of the Scottish rhetorician George Jardine. Paper presented at the 43rd Annual Meeting of the Conference on College Composition and Communication, Cincinnati, OH, March 19-21, 1992.
Helms, M., & Haynes, P. J. (1990). When bad groups are good: An appraisal of learning from group projects. Journal of Education for Business, 66(1), 5-8.
Higgs, J. (1988). Planning learning experiences to promote autonomous learning. In D. Boud (Ed.), Developing student autonomy in learning (pp. 40-58). London, UK: Kogan Page.
Hounsell, D. (2006). Towards more sustainable feedback to students. Paper presented at the Northumbria EARLI SIG Assessment Conference, Darlington, 29 August - 1 September.
Hounsell, D., McCulloch, M. L., & Scott, M. (Eds.) (1996). The ASSHE inventory: Changing assessment practices in Scottish higher education. Edinburgh: Centre for Teaching, Learning and Assessment, The University of Edinburgh and Napier University, in association with the Universities and Colleges Staff Development Agency.
Kara, Y., & Bakırcı, H. (2018). A scale to assess science activity videos (SASAV): The study of validity and reliability. Journal of Education and Training Studies, 6(1), 43-51.
Knight, J. (2004). Comparison of student perception and performance in individual and group assessments in practical classes. Journal of Geography in Higher Education, 28(1), 63-81.
MoNE - Ministry of National Education (2004). Science education program for 6th, 7th and 8th grades. Ankara: MoNE.
MoNE - Ministry of National Education (2018). Science education program. Retrieved from http://mufredat.meb.gov.tr/ProgramDetay.aspx?PID=325
National Science Teachers Association (2001). Assessment. NSTA Position Statement. Retrieved from https://www.nsta.org/about/positions/assessment.aspx
Nicol, D., & Macfarlane-Dick, D. (2004). Rethinking formative assessment in HE: A theoretical model and seven principles of good feedback practice. www.heacademy.ac.uk/assessment/ASS051D_SENLEF_model.doc
Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.
Nicol, D. J., & Milligan, C. (2006). Conceptualising technology-supported assessment in terms of the seven principles of good feedback practice. In G. Gibbs, C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education. London: Routledge.
Ontario Ministry of Education (2002). The Ontario curriculum unit planner. Toronto, ON: Queen's Printer for Ontario.
Peters, J. M., & Stout, D. L. (2006). Science in elementary education: Methods, concepts, and inquiries. USA: Pearson Merrill Prentice Hall.
Race, P. (2007). The lecturer's toolkit: A practical guide to assessment, learning and teaching (3rd ed.). London: Kogan Press Limited.
Rolheiser, C., & Ross, J. A. (2000). Student self-evaluation: What do we know? Orbit, 30(4), 33-36.
Ross, J. A. (2006). The reliability, validity, and utility of self-assessment. Practical Assessment, Research & Evaluation, 11(10), 1-13.
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119-144.
Ürey, M., Çepni, S., & Kaymakçı, S. (2015). Fen temelli ve disiplinlerarası okul bahçesi programının bazı sosyal bilgiler öğretim programı kazanımları üzerine etkisinin değerlendirilmesi. Uludağ Üniversitesi Eğitim Fakültesi Dergisi, 2, 7-29.
White, B. Y., & Frederiksen, J. R. (1998). Inquiry, modeling, and metacognition: Making science accessible to all students. Cognition and Instruction, 16(1), 18-31.
Yılmaz, R., Reisoğlu, İ., Topu, F. B., Yılmaz, T. K., & Göktaş, Y. (2016). The development of a criteria list for the selection of 3D virtual worlds to design an educational environment. Croatian Journal of Education, 17, 1037-1069.
Chapter 15
Diagnostic Branched Tree and Vee Diagram
Sevilay ALKAN, Ebru SAKA
Introduction

This chapter discusses the Diagnostic Branched Tree and Vee Diagram techniques, which are alternative assessment and evaluation tools. The chapter is divided into two parts: the first contains information about the Diagnostic Branched Tree and the second about the Vee diagram. Each part is further divided into sub-sections on the definition of the technique, its advantages and disadvantages, preparation procedures, and how to score it. The final sub-sections contain examples of the techniques in various courses.

What Is a Diagnostic Branched Tree?

Permanence of learning is known to be closely related to a correct understanding of events or concepts which are associated with each other (Selvi & Yakışan, 2004). Incorrect knowledge and observations of individuals manifest themselves as misconceptions or alternative concepts at school age. Students create alternative concepts in their minds based on knowledge and experiences which they believe to be meaningful and coherent (Tekkaya, Çapa, & Yılmaz, 2000). Such alternative concepts negatively affect later learning and prevent students from understanding new concepts accurately. If students misconceptualise basic concepts, they may not be able to comprehend higher-order concepts correctly; they may have difficulties in associating their prior knowledge with new information, and their learning processes may be impeded (Canpolat & Pınarbaşı, 2012). For these reasons, it is important to carry out activities to identify and eliminate the alternative concepts created by students. Various assessment tools are believed to be useful for identifying such alternative concepts. One of these tools is the Diagnostic Branched Tree, which this section discusses.
The Diagnostic Branched Tree (DBT) was described for the first time in 1986 by Johnstone, McAlpine, and MacGuire as an alternative assessment tool in education. DBT is a diagnostic tool used to determine whether or not meaningful learning has taken place and to identify any misconceptions or imperfect knowledge. While the process allows students to assess themselves, it also helps the teacher recognise and correct students' misconceptions. Considering the importance, for an individual's learning, of associating concepts with everyday events or with other concepts that have similar properties (Chi, 1992; Coştu, Ünal & Ayas, 2007; Köğce, Yıldız & Aydın, 2010; Yadigaroğlu & Demircioğlu, 2012), the significance of the Diagnostic Branched Tree technique becomes even clearer. DBT is one of the assessment tools used to identify what students have learned and what kinds of misconceptions they have about a specific subject. DBT plays an important role in the learning process, since it is used to reveal false knowledge by identifying incorrect associations and strategies in the student's mind (Bahar, Nartgün, Durmuş & Bıçak, 2009, p. 61; Bahar et al., 2012; Çepni & Çil, 2009; Karahan, 2007; Kocaarslan, 2012). In addition to its use in teaching and learning processes, DBT may also be used as an assessment tool to check what students have learned at the end of the process (Polat, 2011). In this technique, students are given ready-made statements ordered from general to specific or from simple to complicated and are expected to decide whether each statement is true or false. The tree consists of 8 or 16 branches in total and contains 7 or 15 statements with true or false answers; the difficulty level increases with the number of branches (Çelikkaya, 2014, p. 177; Nartgün, 2006; Karamustafoğlu, Yaman & Karamustafoğlu, 2005). Students are expected to read each statement and decide whether it is true or false (Bahar, 2001; Karahan, 2007).
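The branching logic just described can be modelled as a small binary tree and traversed mechanically. The Python sketch below is a hypothetical illustration only: the wiring for statement 1 (true leads to statement 2, false to statement 3), for statement 3 (true leads to statement 7) and for statement 7 (false reaches exit 8) follows the chapter's later worked example, while the remaining branches and exit labels are assumptions made purely to complete the sketch.

```python
# A minimal sketch of an 8-exit Diagnostic Branched Tree (DBT).
# Known from the chapter: statement 1 answered "true" leads to
# statement 2 and "false" leads to statement 3; statement 3 answered
# "true" leads to statement 7; statement 7 answered "false" reaches
# exit 8.  The rest of the wiring is an assumed layout.

NEXT = {
    1: {True: 2, False: 3},
    2: {True: 4, False: 5},               # assumed wiring
    3: {True: 7, False: 6},               # "true" -> statement 7 (chapter example)
    4: {True: "exit1", False: "exit2"},   # assumed exit labels
    5: {True: "exit4", False: "exit3"},
    6: {True: "exit6", False: "exit5"},
    7: {True: "exit7", False: "exit8"},   # "false" -> exit 8 (chapter example)
}

def traverse(decisions):
    """Follow a student's true/false decisions from statement 1 to an exit.

    `decisions` maps a statement number to the student's True/False answer;
    returns the exit label and the list of statements the student met.
    """
    node, path = 1, []
    while not isinstance(node, str):      # statements are ints, exits are strings
        path.append(node)
        node = NEXT[node][decisions[node]]
    return node, path

# The chapter's worked example: false, then true, then false -> exit 8.
print(traverse({1: False, 3: True, 7: False}))   # ('exit8', [1, 3, 7])
```

Note that only three of the seven statements are consulted on any single path, which is why each exit score in the scoring tables later in this chapter sums at most three point values.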
Although it resembles a test containing true-false questions, which are traditional assessment tools, the technique has an important place among alternative assessment and evaluation techniques because of differences in form and intended use. In traditional True (T)-False (F) tests, questions are dealt with separately and are usually independent of each other. Unlike a T-F test, DBT consists of questions which are related to each other, and each answer affects the next one (Bahar, 2001; Bahar et al., 2012; Çalışkan & Yiğittir, 2008; Çelikkaya, 2014, p. 177; Çepni & Çil, 2009): each true-false decision affects the next true-false decision (Çelikkaya, 2014, p. 177). Therefore, it is an assessment tool which considers not only correct answers but also wrong answers.
What Are the Advantages of a Diagnostic Branched Tree?

The Diagnostic Branched Tree technique has many advantages both as a learning tool in the teaching process and as an assessment tool at the end of a unit. The advantages of this technique mentioned in the literature (Bahar, 2001; Bahar et al., 2012; Çalışkan & Yiğittir, 2008; Çelikkaya, 2014, p. 184; Geçgel & Şekerci, 2018; Gülşen Turgut & Turgut, 2018; Karahan, 2007, p. 16; Kocaarslan, 2012; Okur, 2008; Öztürk, 2011; Polat, 2011; Turan, 2010) are listed below.

1. As a tool aimed at evaluating students' knowledge level and meaningful learning, DBT determines students' readiness level and allows students to establish connections between concepts. In the DBT technique, students' starting points are examined to reveal their knowledge about the subject and the kinds of misconceptions they hold. Using this technique, imperfections and alternative concepts related to the concepts in students' minds can be identified. For example, Figure 15.1 shows a DBT diagram consisting of 8 exits.
Figure 15.1 Diagnostic Branched Tree diagram

If the student chooses exit 4 instead of exit 3, which is the correct exit, it can be concluded that the student understands statement 1 as well as statement 2, but has a misconception about statement 5. This shows the teacher that various activities should be conducted to eliminate the identified misconception. It is also believed that discussing all of the statements included in the
DBT diagram and explaining the scientifically correct answers in a classroom environment will be useful in terms of conceptual change in students' minds.

2. In the DBT technique, students continue to learn during the assessment process as well. All true-false statements in DBT are related to each other, and each decision requires another thinking process and a review of existing knowledge, because the student encounters a similar situation at the next stage. At that point, the student may realise that a decision was wrong and change it. This characteristic gives the DBT technique its superiority over traditional tests containing true-false statements, where the statements are often independent of and unrelated to each other; this may cause the student to give random answers without thinking them through. Also, the DBT technique requires the student to make the right decision each and every time in order to reach the exit with the highest score, so the chance of making the right decision by pure luck is much lower than in traditional true-false tests.

3. The DBT technique allows the student to feel more relaxed during the learning process, avoid exam anxiety, and see his or her mistakes immediately. The student can also perform self-assessment during the application process, which allows for a more enjoyable class.

4. In the DBT technique, students set themselves the goal of reaching the correct exit, which distinguishes DBT from traditional true-false tests. The DBT technique encourages students to create links between concepts in their minds and to apply their full mental effort by thinking in detail with the aim of finding the correct exit. In this sense, it is believed that students may work with more dedicated effort to solve the diagram.

5. It is more likely that students will be successful in an assessment performed through the DBT technique.
The presence of various instructions to follow while deciding about the true-false statements, and the relationship between each decision and the next, allow students to make a higher number of right decisions and achieve more success. Also, DBT is a more comprehensive technique than standard assessment tools, helping the teacher to assess knowledge and see the concepts in students' minds.

6. While it is possible to draw a DBT diagram with pen and paper, it can also be created through various computer programs (such as Inspiration or Coggle) or office programs, which makes it more convenient to use.
7. Statements on each branch of the DBT diagram can be developed further based on the previous statement, and the student may realise that he or she made a wrong decision and go back to the previous branch. This can easily be achieved on a computer, and even if the student changes a decision, the teacher can see the previous decision and what kinds of changes the student made. This allows the teacher to identify what kind of help the student needs. In this sense, DBT is an assessment tool which considers not only correct answers but also wrong answers; identifying students' wrong decisions helps the teacher reveal the reasons behind them. DBT therefore provides good feedback: it reflects the student's success or failure in a detailed manner and shows a picture of the student's mind. While traditional assessment tools fall short in displaying student performance in a clear way, they also fail to give students the opportunity to track their own development (Chen & Martin, 2000; Enger & Yager, 1998; Manning & Gary, 1995; Shepard, 2000). In this sense, DBT can be said to be superior to traditional assessment tools.

What Are the Limitations of a Diagnostic Branched Tree?

While the use of the Diagnostic Branched Tree technique in the learning-teaching process has its advantages, it also has some limitations, as do many other assessment and evaluation tools. These limitations are listed below based on the relevant literature (Bahar, 2001; Bahar et al., 2009; Başoğlu, 2017; Çelen, 2014; Çelikkaya, 2014, p. 184; Kocaarslan, 2012; Öztürk, 2011; Polat, 2011; Turan, 2010; Turgut & Doğan Temur, 2017).

1. The fact that the statements are related to each other in the DBT technique may make it time-consuming for teachers preparing a DBT diagram for the first time. Moreover, it is important that each statement in a DBT diagram involves elements which may eliminate students' misconceptions and allow students to achieve the outcomes of the course. Preparing a DBT diagram therefore requires a lengthy research process and expertise, and teachers with limited experience of the DBT technique may be reluctant to use it for these reasons.

2. Because DBT involves true-false statements, it is difficult to prepare statements which assess students' higher-order learning. For this reason, it may fall short with respect to statements involving learning at the synthesis and evaluation levels.
3. Because students proceed by deciding whether each statement is true or false, they may choose the correct option by luck.

4. Since it is possible for students to reach an exit after giving a certain number of answers, they may not read each and every statement, or may ignore some of them.

5. The branches are related to each other and ordered from general to specific in the DBT technique, which may make it difficult to apply to every subject, outcome or course.

6. Since the DBT diagram consists of true-false statements, it is possible to identify concepts about which students have incomplete or incorrect knowledge, but it does not provide detailed information.

How Do We Prepare a Diagnostic Branched Tree?

Preparation of an assessment tool according to the DBT technique consists of 7 steps, as follows:

1. Review and determine the outcomes related to the subject at hand (Kocaarslan, 2012). Since DBT allows misconceptions to be identified and eliminated, it is most useful to pick outcomes about which students are likely to have misconceptions.

2. Prepare true-false statements/questions which students can easily understand. It is important that the statements are prepared in a way that assesses an interconnected and interrelated knowledge network (Köklükaya, 2010). This is one of the most significant properties which set DBT apart from an ordinary true-false test.

3. Draw the DBT diagram consisting of 7 or 15 true-false statements/questions in line with the determined outcomes. While it is possible to draw the diagram by hand, a computer program may also be used. For example, when drawing a DBT diagram on a computer, the following sequence can be used: MS Office Word > Insert > SmartArt > Hierarchy > Horizontal Labelled Hierarchy. The letters T and F can be added by left-clicking on the lines (Çelikkaya, 2014, p. 178). Various template programs can also be used to draw DBT diagrams.
For example, programs such as Inspiration and Coggle provide ready-made templates, which makes it convenient to prepare diagrams. Figure 15.2 shows the structure of an 8-exit DBT diagram (MS Office Word > Insert > SmartArt > Hierarchy > Horizontal Labelled Hierarchy).
[Diagram: seven numbered statement boxes in a three-level hierarchy; Statement 1 branches to Statements 2 and 3, Statement 2 to Statements 4 and 5, and Statement 3 to Statements 6 and 7.]
Figure 15.2 A sample 8-exit DBT diagram

When preparing a DBT diagram with Inspiration, the following sequence can be used: Diagram > Symbols > Create (bottom right and bottom left, respectively). The letters T and F can be added by left-clicking on the lines. Figure 15.3 shows a sample 16-exit DBT diagram prepared with this program.
Figure 15.3 A sample 16-exit DBT diagram

DBT diagrams can be prepared quickly and conveniently with these programs, which offer ready-made templates.
4. Add the statements/questions to the appropriate places in the DBT diagram. Statements should be ordered from easy to difficult and, in addition, from general to specific and from simple to complex (Çelikkaya, 2014, p. 181).

5. After placing the statements/questions in the diagram, add the exits of the branched tree to complete the diagram. Figure 15.4 shows the final form of the diagram.

[Diagram: Statement 1 branches to Statements 2 and 3; Statement 2 to Statements 4 and 5; Statement 3 to Statements 6 and 7; Statements 4-7 lead to Exits 1-8.]

Figure 15.4 The final form of the DBT diagram

6. Write a short instruction for students about the intended use of the DBT technique. This instruction should explain the outcomes measured by the DBT diagram, the intended use of the assessment tool, and the path to follow when answering the questions. For example: "Reach one of the exits at the end of the branched tree by deciding whether each of the following statements related to the functions unit is true or false. Remember, you can only reach a single exit at the end of this journey. Please mark the exit that you have reached."

7. Create an answer key in the form of a table depending on the number of exits in the diagram (8 or 16). Detailed information about scoring is given in the section titled "How Do We Score a Diagnostic Branched Tree?". Table 15.1 shows a sample scoring for an 8-exit DBT diagram.
Table 15.1 Sample scoring for an 8-exit DBT

Exit     15 points   25 points   60 points   Total
Exit 1   +           +           +           100
Exit 2   +           +           -           40
Exit 3   +           -           -           15
Exit 4   +           -           +           75
Exit 5   -           +           +           85
Exit 6   -           +           -           25
Exit 7   -           -           -           0
Exit 8   -           -           +           60
How Do We Score a Diagnostic Branched Tree?

A diagram prepared with the DBT technique is used not only as an assessment tool but also as a diagnostic tool. The technique may be used in the learning-teaching process to identify students' misconceptions and the alternative concepts they have created in place of proper concepts. When scoring a diagram prepared with the DBT technique, the teacher should first determine how each exit is reached, in other words, how many correct and incorrect decisions the student must make to reach a specific exit. The score for each true-false statement should be assigned based on its difficulty. Since the true-false statements are ordered from easy to difficult, the scoring system should follow the same order. The scoring may be out of 100 points for a specific unit or subject, and the score assigned to each statement is decided based on its difficulty (Karacaoğlu, 2011). For example, the scoring may follow the order 15-25-60 for an 8-exit DBT diagram, and 10-15-25-50 or 15-20-25-40 for a 16-exit DBT diagram; the rationale behind the scoring should be the difficulty level. If the purpose is to identify misconceptions, right decisions may be scored as 1 point and wrong decisions as 0 points (Başoğlu, 2017; Kocaarslan, 2012). The correct option for each statement in the DBT diagram should be shared with students at the end of the assessment, and students should be given the opportunity to see their mistakes. It will be useful to show the results to each student in the classroom and create a discussion environment. Figure 15.5 shows the scores which students can achieve in an 8-exit DBT diagram scored out of 100 (15-30-55) points, depending on their exit number.
Diagnostic Branched Tree and Vee Diagram
Figure 15.5 Eight-exit DBT

If the student decides that the first statement is true, the student proceeds to statement 2 and can only reach exit 1, 2, 3, or 4 depending on the following decisions. If the student decides that the first statement is false, the student proceeds to statement 3 and can only reach exit 5, 6, 7, or 8 depending on the following decisions. For example, if the student decides that the first statement is false, the student proceeds to statement 3; if the student decides that statement 3 is true, the student proceeds to statement 7; and if the student decides that statement 7 is false, the student reaches exit 8. The first statement is scored out of 15 points, statements 2 and 3 are scored out of 30 points, and statements 4, 5, 6, and 7 are scored out of 55 points. The scoring is performed based on the actual correct values of these statements. Table 15.2 shows a sample 8-exit DBT diagram which can be scored out of 100 points.

Table 15.2 Sample scoring for 8-exit DBT out of 100 points

EXIT     15 POINTS   30 POINTS   55 POINTS   TOTAL
EXIT 1   +           +           +           100
EXIT 2   +           +           -           45
EXIT 3   +           -           -           15
EXIT 4   +           -           +           70
EXIT 5   -           +           +           85
EXIT 6   -           +           -           30
EXIT 7   -           -           -           0
EXIT 8   -           -           +           55
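As a concrete illustration of the scoring rule above, the short Python sketch below (not part of the chapter; the function name and the sample answer key are our own) walks one student's true/false decisions through an 8-exit tree and totals the 15-30-55 points earned for correct decisions.

```python
# Sketch of 8-exit DBT scoring. Statement numbering follows Figure 15.5:
# from statement i, a "True" decision leads to statement 2i and a "False"
# decision to statement 2i+1; statements 4-7 lead to exits 1-8.

LEVEL_POINTS = {1: 15, 2: 30, 3: 55}     # easy -> difficult, totals 100

def score_path(answer_key, student_answers):
    """Walk the tree; return (exit_number, score).

    answer_key / student_answers map statement number -> True/False.
    """
    node, score = 1, 0
    for level in (1, 2, 3):
        decision = student_answers[node]
        if decision == answer_key[node]:      # a correct decision earns
            score += LEVEL_POINTS[level]      # that level's points
        node = 2 * node if decision else 2 * node + 1
    return node - 7, score                    # nodes 8..15 -> exits 1..8

# Hypothetical answer key: statements 1, 2, and 4 are true, the rest false.
key = {1: True, 2: True, 3: False, 4: True, 5: False, 6: False, 7: False}

# A student who marks every statement "True" makes three correct decisions:
print(score_path(key, {i: True for i in range(1, 8)}))  # -> (1, 100)
```

The misconception-oriented variant mentioned above (1 point per correct decision, 0 otherwise) is obtained by setting every value in `LEVEL_POINTS` to 1.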
Notice that there are a total of four “+” signs and four “-” signs in each score column of the 8-exit DBT diagram. Accordingly, the possible scores of students are shown based on this scoring. Table 15.3 shows a sample 16-exit DBT diagram which can be scored out of 100 points.

Table 15.3 Sample scoring for 16-exit DBT out of 100 points

EXIT      10 POINTS   15 POINTS   25 POINTS   50 POINTS   TOTAL
Exit 1    +           +           +           +           100
Exit 2    +           +           +           -           50
Exit 3    +           +           -           +           75
Exit 4    +           +           -           -           25
Exit 5    +           -           -           -           10
Exit 6    +           -           -           +           60
Exit 7    +           -           +           -           35
Exit 8    +           -           +           +           85
Exit 9    -           +           -           +           65
Exit 10   -           +           -           -           15
Exit 11   -           +           +           +           90
Exit 12   -           +           +           -           40
Exit 13   -           -           +           +           75
Exit 14   -           -           +           -           25
Exit 15   -           -           -           +           50
Exit 16   -           -           -           -           0
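The possible totals in a table like Table 15.3 can be enumerated mechanically. The sketch below (our own illustration, not from the chapter) lists all 16 decision paths of a 16-exit tree under a simplified hypothetical key and confirms that each point value is earned on exactly half of the paths, which is why every score column balances its “+” and “-” signs.

```python
# Sketch: enumerating the score table for a 16-exit DBT with level
# points 10/15/25/50 under a hypothetical all-"True" answer key.
from itertools import product

LEVEL_POINTS = [10, 15, 25, 50]   # easy -> difficult, summing to 100

def score_table(key):
    """Totals for all 16 exits; key[i] is the correct decision at depth i.

    Simplification (ours): both statements at a given depth are assumed
    to share the same correct answer, so a path's score depends only on
    where it agrees with `key`.
    """
    table = {}
    for exit_no, path in enumerate(product([True, False], repeat=4), start=1):
        table[exit_no] = sum(pts for pts, d, k in zip(LEVEL_POINTS, path, key)
                             if d == k)
    return table

table = score_table([True, True, True, True])
# Each level's points are earned on 8 of the 16 paths, so the totals sum
# to 8 * 100 = 800, mirroring the balanced sign columns in Table 15.3.
print(table[1], table[16], sum(table.values()))   # -> 100 0 800
```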
Notice that there are a total of eight “+” signs and eight “-” signs in each score column of the 16-exit DBT diagram. Accordingly, the possible scores of students are shown based on this scoring.

Sample Questions for Diagnostic Branched Tree

This section contains various examples of the use of the DBT technique in certain courses.

Example 1: A DBT related to the Introduction to Physics subject in the 9th grade Physics course.
GRADE: 9th Grade
UNIT: Introduction to Physics
OUTCOMES: 1. Relates the application areas of physics to its sub-branches and other disciplines. 2. Classifies physical quantities.
INSTRUCTION: Reach one of the exits at the end of the branched tree by deciding whether each statement is true or false. Remember, you can only reach a single exit at the end of this journey. Please circle the exit that you have reached.

The true-false statements appearing in the diagram are the following:
- The unit of temperature in the SI system of units is the calorie.
- All base quantities are also scalar quantities.
- Multiple quantities are required to obtain a derived quantity.
- The sub-branch of physics which deals with force, movement, and balance is mechanics.
- The origin of an archaeological remain is determined using nuclear physics.
- Physics is closely related to music, performing arts, cinema, and visual arts.
- Pressure is a vector quantity.

Figure 15.6 Example diagnostic branched tree
Table 15.4 Answer key for the example diagnostic branched tree

EXIT      15 POINTS   25 POINTS   60 POINTS   TOTAL
1. EXIT   +           +           +           100
2. EXIT   +           +           -           40
3. EXIT   +           -           -           15
4. EXIT   +           -           +           75
5. EXIT   -           +           +           85
6. EXIT   -           +           -           25
7. EXIT   -           -           -           0
8. EXIT   -           -           +           60
Example 2: 9th Grade Mathematics Course, Absolute Value Subject
INSTRUCTION: Reach one of the exits at the end of the branched tree by deciding whether each statement is true or false. Remember, you can only reach a single exit at the end of this journey. Please circle the exit that you have reached.

Figure 15.7 Diagnostic branched tree (Example 2)

Example 3: 9th Grade Mathematics Course, Ratio-Proportion Subject
INSTRUCTION: Below is a set of statements related to the Ratio-Proportion subject. Reach one of the exits at the end of the branched tree by deciding whether each statement is true or false. Remember, you can only reach a single exit at the end of this journey. Please circle the exit that you have reached.
Figure 15.8 Diagnostic branched tree (Example 3)

2. What Is a Vee Diagram?

The Vee diagram was developed by Gowin in the 1970s based on the hypothesis that it would bring about meaningful, deep, and permanent learning in the learning-teaching process. It is a technique based on achieving meaningful and permanent learning by answering some critical questions at the beginning of, during, and at the end of the learning-teaching process (Canbazoğlu, 2008). This technique is used instead of a research or test report in class practices, especially in classes like science that include laboratory practices. The Vee diagram, which was created by Gowin in 1977 for the laboratory practices of students, has been applied not only in science education but also in various other fields after its effectiveness on learning was demonstrated. The Vee diagram is used both for teaching the information on basic concepts in mathematics and for solving problems and exercises that urge students to reflect more and see the correlations among the concepts (Thiessen, 1993). The goal of the Vee diagram is to help students learn the concepts, to function as a bridge between the ways followed during the creation of those concepts, and to let students relate them. The Vee diagram helps students achieve meaningful learning and concept learning during the process of producing information. In addition, this technique enables students to relate the information that they already have to the information that they produce or are trying
to get, and to correlate them. The Vee diagram also helps detect the misconceptions students already have and make the necessary corrections. This diagram can be used both as an in-class application tool during the concept teaching process and as an assessment tool to see what students have learned about a subject at the end of the class. In that respect, it holds an important place among assessment and evaluation tools. The Vee diagram basically consists of three main parts. While making a Vee diagram, a capital letter “V” is drawn and the focus question is placed in the middle. The conceptual part is on the left side of the focus question, while the methodological part is on the right side of it (see Figure 15.9).

Figure 15.9 Vee Diagram (Thiessen, 1993)

The left part and the center of the Vee diagram are filled in before the class to ensure that the students reflect on their preliminary information, and the right side of the diagram is filled in after the class to present the process followed to answer the focus question. While filling in the Vee diagram, the cognitive level of the students is important. For example, if the students are at the primary school level, the conceptual part should be filled in by the teacher, either with the students or before the class (Nakiboğlu, Benlikaya & Karakoç, 2001). While preparing the focus question, which is a part of the diagram, before the solution of the question teachers should decide on what the question will be about, why it is prepared and, as a result, how it will contribute to students. A good focus question correlates with both the conceptual left part and the methodological right part of the diagram, and provides the transition between the two parts. While preparing a Vee diagram for primary school students, the focus question should be created by the teacher. Under the focus question, the pointed end of the V shape is the place where the information
begins to be structured and where the cases and objects are located. On the conceptual part (the left side) of the Vee diagram, there are theories, principles, and concepts; on the methodological part (the right side), there are the claims (information and value claims), data transformations, and records (Thiessen, 1993). When the studies on the Vee diagram, which was first generated by Novak and Gowin (1984) (see Figure 15.10), are analyzed, it is observed that several researchers used it after changing some of its elements (Afamasaga-Fuata’i, 2004; Nakiboğlu & Meriç, 2000; Thiessen, 1993). For instance, Nakiboğlu and Meriç (2000) omitted the philosophy element in the conceptual part, used the expression “experimental claims” instead of “value claims” in the methodological part, and treated the cases and objects part as tools and equipment. Afamasaga-Fuata’i (2004) ignored the element of philosophy in the conceptual part and the element of value claims in the methodological part.
Figure 15.10 Vee Diagram (Novak and Gowin, 1984)

Although the Vee diagram is used in different forms, the main parts of the diagram are the same and achieve the same goal. Information on the main parts and elements of the Vee diagram is discussed in detail under the title “The Elements of Vee Diagram”.

The Elements of Vee Diagram

The Vee diagram basically consists of three main parts. In the middle part of the Vee diagram, there are the focus question and/or the objects.

The focus question: The focus question is located in the middle of the Vee diagram. It is an important question that expresses the starting point or the goal of the research and draws attention to the cases and the objects. While preparing
a focus question, one should make sure that the question clearly indicates a condition, addresses a concept, and requires data collection. The focus question is about the tools and the main case of the research. It also shows what kind of records should be taken during the research. The answer to the focus question is deduced from the background information in the conceptual left part, which affects the methodological right part as well (Gurley-Dilger, 1992).

The cases and/or the objects: In this part, there are the procedure and the tools and equipment to be employed in order to answer the focus question. The cases express how the experiment is carried out or how the related problem is solved; the objects express the tools and equipment to be used. In this part, which is located at the base of the Vee diagram at the downward-pointed end of the V shape, the shape of the experiment set-up or the related problem, or the process steps of the experiment/problem, can be specified.

On the left side of the Vee diagram, the conceptual part is located. The conceptual part consists of two main units: “Theories and Principles” and “Concepts” (Thiessen, 1993).

Theories and principles: They are written in the upper left corner of the Vee diagram, above the concepts. The principles are guiding rules that make the research understandable. They describe the relationship between two or more key concepts. The principles are based on the information claims of former research (Gurley-Dilger, 1992). Like the principles, the theories explain the connections among the concepts; different from the principles, they organize the concepts and principles with the aim of defining the cases and the information on the cases. Theories are more extensive than principles and include many concepts and principles. While the principles define how the cases or objects come into being, the theories define why they happen (Novak & Gowin, 1984).
Concepts: This is the part where the concepts, terms, and symbols of the subject are written. If the student fills in these parts before starting the experiment or the problem, or completes the missing parts, it provides the student with a ground to solve the focus question. To ensure that the conceptual part of the Vee diagram can be easily filled in by the students, the concept maps of the Vee diagram should be introduced to the students (Novak & Gowin, 1984).
On the right side of the Vee diagram, there is the methodological part. The methodological part consists of four main units: “Records”, “Data Transformations”, “Value Claims” and “Information Claims” (Thiessen, 1993).

Records: All the results, evaluations, and observations obtained during the research and problem-solving process are written in this part. With the solution of the focus question, all the results that are obtained should be located here.

Data transformations: Data transformations allow the focus question to be answered more easily by making the observations more understandable. In this part, graphics and statistics can be employed. By presenting the information this way, it becomes easier for the students to answer the focus question (Novak & Gowin, 1984).

Value claims: They are the answers to questions such as “Is this result good for us?”, “Is it right?”, “Can we do better?”. The benefits that the students will gain for their daily lives and other classes, and the importance of the research/problem for the student in relation to the concepts and relations learned, are defined in this part.

Information claims: Information claims are the answers that can possibly be given to the focus question, and they propose new questions that will illuminate new research. These claims should be consistent with the conceptual and methodological information that led to the focus question in the first place (Gurley-Dilger, 1992).

The left side of the Vee diagram includes the stage of reflection. It is the place to present the conceptual and methodological information used in developing the hypothesis. The right side of the diagram includes the stage of practice, and the methodological and operational activities are presented in this part. During the study, indicating examples of the concepts specified at the beginning provides meaningful and permanent teaching of those concepts (Nakiboğlu, Benlikaya & Karakoç, 2001).
Advantages of the Vee Diagram Technique

During the learning-teaching process, the Vee diagram provides many benefits. They can be specified as follows with the support of the literature (Alvarez & Risko, 2007; Nakiboğlu & Özkılıç-Arık, 2006; Novak & Gowin, 1984; Roehrig, Luft & Edwards, 2001; Roth & Bowen, 1993; Tatar, Korkmaz & Ören, 2007; Tortop, Çiçek-Bezir, Uzunkavak & Özek, 2007):
1. The Vee diagram helps students better organize their information, conduct research more effectively, and generate an outline for learning.
2. Students understand how scientific information is generated while creating Vee diagrams, and their communication skills improve as a result of collaboration.
3. The Vee diagram makes it easier to detect and correct students’ misconceptions when it is used during the teaching of concepts.
4. Vee diagrams contribute to raising students as individuals who are eager to research and question and who adopt scientific thinking. Vee diagrams are tools that help students learn successive concepts and comment on them.
5. The Vee diagram not only indicates the procedure by which information is learned but also reveals the internalization of the learned information.
6. Vee diagrams improve the performance of students in solving problems. They enable students to correlate old and new information, and they are effective in mathematics classes in showing the connection of concepts to daily life.
7. The Vee diagram makes it easier for the teacher to teach the class and saves time for the teacher.
8. Besides guiding science teachers, Vee diagrams can be used as an alternative to classical laboratory reports.

Limits (Disadvantages) of the Vee Diagram Technique

Besides its benefits, the Vee diagram also has some limits. These limits can be summed up as follows with the support of the literature (Eryılmaz Toksoy & Çalışkan, 2015; Tatar et al., 2007):
1. If Vee diagrams are not fully or correctly used, they go beyond their aim and can become ineffective. If the teachers who will use the Vee diagram lack knowledge about applying and assessing the technique, or the students lack information about preparing a Vee diagram, it might be harder to practice the technique in line with its aim.
2. Preparing a Vee diagram might be boring and hard for some students. Especially lower-grade students might have difficulties in correlating
the conceptual and methodological parts of the diagram. This situation might make the technique harder to apply.
3. The teachers who will apply the technique might have difficulties in preparing an effective focus question.

Preparing a Vee Diagram

While preparing the Vee diagram, the following sequence consisting of 8 stages should be followed (Nakiboğlu, Benlikaya & Karakoç, 2001):
1. The preparation of the Vee diagram begins with the drawing of a capital V.
2. The conceptual side is prepared before coming to the class. Various textbooks can be used for this purpose. Theories and principles that will facilitate the solution of the problem are determined and noted in the theories and principles section of the conceptual side. The concepts related to the question are listed under the concepts section.
3. Before the solution of the question, the focus question is determined by considering what the problem is about, why it is done, and what it accomplishes. There should be at most two focus questions (at the primary level, the focus question is determined by the teacher).
4. The way to find the answer to the focus question and the necessary events or objects are written at the lower point of the letter V in the diagram.
5. The solved question, and the measurements, observations, and results related to the solution, are written in the records section of the methodological side.
6. Records are rearranged as comparisons, differences, tables, graphs, and drawings in accordance with the question. It is determined whether there are special cases such as assumptions, limitations, or points to be considered in solving the problem. This information and these records are re-edited in accordance with the question in the data transformation section of the diagram.
7. The experimental results obtained through the records and data transformations, and the comments that can be made about these results, are written in the value claims section.
8.
Knowledge claims are created by explaining the experimental claims at a general level by using the theories and principles in the conceptual section, and they are written in the relevant section of the diagram. Knowledge claims are the answers to the focus questions. These claims should be consistent with the conceptual and methodological knowledge that directs the focus question.

In the preparation of the Vee diagram, answering the questions listed in the table below will facilitate the creation of the diagram.

Table 15.5 Guiding Questions in Preparing Vee Diagram (Afamasaga-Fuata’i, 2004)

Sections                Guiding questions
Focus question          What is the problem about?
Theory                  What are the main principles and theories that guide the research?
Principles              How are the concepts linked? What are the general rules, principles and formulas that need to be used?
Concepts                What concepts are used in the problem expression? Is there a need for related concepts in the problem solving?
Events and/or objects   What is the problem expression?
Records                 What is the information given in the problem?
Transformations         How can we use records, concepts, principles, theories to determine the method?
Knowledge claim         What is the answer to the focus question given in the given event?
Value claim             Is the result true? Can we do better?

Scoring a Vee Diagram

Novak and Gowin (1984) stated that the Vee diagram can be used as an evaluation tool and that the elements of the Vee diagram can be scored separately according to certain criteria. This scoring system is given in Table 15.6.

Table 15.6 The Scoring of Vee Diagram as an Evaluation Tool (Novak and Gowin, 1984)

Elements of Vee diagram   Score   Criteria
Focus Question            0       If the focus question is not defined
                          1       If the focus question is defined but not focused on objects and basic events or on the conceptual side of the Vee diagram
                          2       If the focus question is defined and contains concepts, but does not suggest basic objects and events, or if laboratory work supports incorrect objects and events
                          3       If a clear focus question is defined that contains the concepts to be used and supports the main event and the objects accompanying it
Events and/or Objects     0       If the event and object are not defined
                          1       If events and objects are defined but not consistent with the focus question
                          2       If events and accompanying objects are defined and consistent with the focus question
                          3       Same as above, but also suggesting the records to keep
Theories, Principles,     0       If the conceptual side is not defined
Concepts                  1       If several concepts are defined without theories and principles
                          2       If one type of concept and one type of principle are defined (conceptual or methodological), or if concepts and a related theory are defined
                          3       If both types of concepts and principles are defined, or if one type of concept, one type of principle, and one relevant theory are defined
                          4       If concepts, two types of principles, and a related theory are defined
Records and Data          0       If data records and transformations are not defined
Transformations           1       If the data records are defined but not consistent with the focus question or the main event
                          2       If only one of the data records and data transformations is defined
                          3       If records are defined for the basic event but the data transformations are not consistent with the purpose of the focus question
                          4       If all records and data transformations are defined, the data transformations are consistent with the focus question, and they are compatible with the level and capabilities of the students
Knowledge claims and      0       If knowledge claims and value claims are not defined
value claims              1       If the claims are not related to the conceptual side of the Vee diagram
                          2       If knowledge claims contain concepts foreign to the content, or if generalizations are inconsistent with records and data transformations
                          3       If knowledge claims include concepts related to the focus question and can be obtained from records and data transformations
                          4       If knowledge claims contain concepts in the focus question and can be derived from records and data transformations, and at the same time the value claim guides a new focus question
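To make the rubric's arithmetic explicit, here is a small Python sketch (the helper and element names are our own, not from Novak and Gowin) that validates element scores against the per-element maxima of the rubric and totals them; the highest possible total is 18.

```python
# Sketch: totaling a Vee diagram score under the Novak and Gowin (1984)
# rubric. The maxima below follow Table 15.6; dictionary keys are ours.

RUBRIC_MAX = {
    "focus_question": 3,
    "events_objects": 3,
    "theories_principles_concepts": 4,
    "records_data_transformations": 4,
    "knowledge_value_claims": 4,
}

def total_vee_score(element_scores):
    """Sum element scores after checking each is within its rubric range."""
    for element, score in element_scores.items():
        if not 0 <= score <= RUBRIC_MAX[element]:
            raise ValueError(f"{element}: score {score} outside 0..{RUBRIC_MAX[element]}")
    return sum(element_scores.values())

print(total_vee_score({
    "focus_question": 3,
    "events_objects": 2,
    "theories_principles_concepts": 4,
    "records_data_transformations": 3,
    "knowledge_value_claims": 4,
}))  # -> 16, out of a maximum of 18
```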
Considering the scoring key proposed by Novak and Gowin (1984), studies in which Vee diagrams are used as a measurement tool and which address the reliability of measurement and evaluation conclude that this technique is a reliable tool in determining the success of students. In this respect, reliable decisions can be made about the success of students based on the scores obtained from Vee diagrams. In this context, teachers can use Vee diagrams to measure both the conceptual and the operational knowledge of their students (Polat-Demir, 2016).

Examples of Vee Diagram

Some examples of the use of the Vee diagram are given under this heading.

Example 1: Mathematics (This example was created by Afamasaga-Fuata’i (2008) for the “equality and equations” subject.)

Focus Question: What are the equations of the lines passing through (3, -3) in general form?
Figure 15.11 Vee diagram example 1

Example 2: Mathematics (This example was created by Calais (2009) for the “probability” subject.)

Focus Questions: 1. What is the probability of selecting each of the five integers at any given time? 2. How many times should each of the five integers occur in this batch of 100 randomly generated integers?
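The theoretical answers to these two focus questions are 1/5 and about 20 occurrences each. As a classroom aside (our own illustration, not part of Calais's original example), a quick simulation shows how a batch of 100 random integers approximates that expectation.

```python
# Sketch: simulating 100 random draws from five equally likely integers.
import random

random.seed(7)                      # fixed seed so the run is repeatable
integers = [1, 2, 3, 4, 5]
draws = [random.choice(integers) for _ in range(100)]
counts = {k: draws.count(k) for k in integers}

# Theoretical answers: P(each integer) = 1/5; expected count = 100 / 5 = 20.
print(counts)
```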
Figure 15.12 Vee diagram example 2

Example 3: Chemistry (This example was created by Nakiboğlu, Benlikaya and Karakoç (2001) for the effect of temperature on solubility.)
Focus Question: What is the effect of temperature on the solubility of solids?
Figure 15.13 Vee diagram example 3
Conclusion

The DBT was described for the first time in 1986 by Johnstone, McAlpine, and MacGuire as an alternative assessment tool in education. The DBT is a diagnostic tool used to determine whether or not meaningful learning has taken place and to identify any misconceptions or imperfect knowledge. While the process allows students to assess themselves, it also helps the teacher notice and correct the misconceptions of students (Reisoğlu, Gedik & Göktaş, 2013). Students proceed to the next statement based on whether they have decided a statement is “True (T)” or “False (F)”. Statements are ordered from general to specific. Each statement gives an indicative result by itself. In this sense, the DBT is different from traditional true-false tests. The questions are asked from the general to the particular concerning the subject.

The second part of the chapter discusses the Vee diagram, which was introduced by Gowin for the first time in the 1970s in order to answer some critical questions at the beginning, during, and at the end of the learning-teaching process and to provide more meaningful, permanent, and deeper learning. The Vee diagram not only determines and removes misconceptions but also fills in the gap between theory and practice. The Vee diagram consists of three main sections, with a focus question placed right in the middle of a large letter “V”. The conceptual section is on the left of the focus question and the procedural section on the right. The left side and center of the Vee diagram are filled in before the course, whereas the right side is filled in after the course. In light of these points, information about the definitions, advantages and disadvantages, preparation procedures, and scoring of both techniques is presented under subheadings. The last subheading, on the other hand, gives examples from various courses.
References Afamasaga-Fuata’, K. (2004). Concept Maps & Vee Diagrams as a tool for learning new mathematics topics. Concept Maps: Theory, Methodology, Technology. In A. J. Cañas, J. D. Novak, & F. M. González (Eds). Proceedings of the First International Conference on Concept Mapping. (Vol.1, pp. 13-20). Pamplona, Spain: University of New England. Afamasaga-Fuata’i, K. (2008). Vee diagrams as a problem-solving tool: Promoting critical thinking and synthesis of concepts and applications in mathematics. Published on AARE’s website. Retrieved from http://www.aare.edu.au/07pap/code07.htm/afa07202.pdf. Alwarez, M. C., & Risko, V. J. (2007). The use of vee diagrams with third graders as a metacognitive tool for learning science concepts. Online Journal of Teaching and Learning. 1(1). Retrieved from http://eresearch.tnstate.edu/pres. Bahar, M. (2001). A critical approach to multiple choice questions and alternative methods. Educational Sciences: Theory & Practice, 1(1), 23-38. Bahar, M., Nartgün, Z., Durmuş, S. & Bıçak, B. (2009). TraditionalComplementary Assessment and Evaluation Techniques: Teachers Handbook (3rd Edition). Ankara: Pegem Academy Publishing. Bahar, M., Nartgün, Z., Durmuş, S., & Bıçak, B. (2012). TraditionalComplementary Assessment and Evaluation Techniques: Teachers Handbook (5th Edition). Ankara: Pegem Academy Publishing. Başoğlu, S. (2017). The effect of classıcal and technology assısted dıagnostıc branched tree technıque on students' academıc achıevement, mısconceptıons and cognıtıve load (Unpublished Master’s Thesis). Ordu University, Ordu, TURKEY. Retrieved from http://tez2.yok.gov.tr/ Calais, G. J. (2009). The Vee diagram as a problem solving strategy: Content area reading/writing implications. In National Forum Teacher Education Journal, 19(3), 1-8. Canbazoğlu, S. (2008). Assessment of pre-service elementary science teachers’ pedagogical content knowledge regarding the structure of matter (Unpublished Master’s Thesis). 
Ankara University, Ankara, TURKEY. Retrieved from http://tez2.yok.gov.tr/ 328
Sevilay ALKAN, Ebru SAKA
Canpolat, N. & Pınarbaşı, T. (2012). Prospective chemistry teachers’ understanding of boiling: a phenomenological study. Erzincan University Journal of Education Faculty, 14(1), 81-96. Chen, Y., & Martin, M.A. (2000). Using Performance Assessment and Portfolio Assessment Together in Elementary Classroom. Reading Improvement, 37(1). 32-38. Chi, M.T.H. (1992). Conceptual change within and across ontological categories Examples from learning and discovery in science (pp. 129-160). In R. Giere (Ed) Cognitive Models of Science: Minnesota Studies in the philosophy of Science Minneapolis. MN: University of Minnesota Press.Coştu, B., Ayas, A., & Ünal, S. (2007). Misconceptions about boiling and theır possible reasons. Kastamonu Education Journal, 15(1), 123-136. Çalışkan, H. & Yiğittir, S. (2008). Measurement and Evaluation in Social Studies. In B.Tay and A.Öcal (Eds.) Social Studies Teaching. (217-278). Ankara: Pegem Academy Publishing. Çalışkan, İ. (2014). The perceptions of pre-service science teachers' about using vee diagrams and electronic portfolios in physics laboratuary course. Educational Research and Reviews. 9(6). 173-182. doi: 10.5897/ERR2014.1759 Çelen, Ü. (2014). Psychometric Properties of Diagnostic Branched Tree. Educationand Science, 39(174), 201-213. doi: 10.15390/EB.2014.2630 Çelikkaya, T. (2014). Diagnostic branched tree. In S. Baştürk (Ed.), Measurement and Evaluation in Education (pp. 175-194). Ankara: Nobel Academic Publishing. Çepni, S. & Çil, E. (2009). Science and Technology Program (Recognition, Planning, Implementation and Associating with Sbs) Primary Education 1st and 2nd Level Teacher Handbook. Ankara: Pegem A Publishing. Enger, S. K., & Yager, R. E. (1998). The Iowa Assessment Handbook. ERIC Document Reproduction Service No: Ed 424286. Retrieved from https://eric.ed.gov/?id=ED424286 Eryılmaz Toksoy, S. & Çalışkan, S. (2015). Fizikte Kullanılan Problem Çözme Stratejileri Ölçeğinin Lise Öğrencileri İçin Uygulanabilirliğinin Test
329
Diagnostic Branched Tree and Vee Diagram
Part IV Subject Specific Evaluation
Chapter 16
Alternative Assessment and Evaluation Practices in Primary School
Gizem SAYGILI
Introduction

Assessment and evaluation studies are carried out in primary school to identify students' knowledge, skills, interests and abilities. On that basis, activities are planned and put into practice to help students move beyond their present level; in other words, every activity in primary school has a purpose. The degree to which educational activities achieve their objectives is determined through assessment and evaluation studies. Measuring educational activities and drawing conclusions from the resulting data reveal the suitability and effectiveness of the work conducted, and these conclusions in turn become a determining factor for further studies (Göçer, 2014, p.17).

Education aims to change students' behaviour by presenting certain experiences. To achieve this goal, action is taken in a planned manner, and various decisions are made at the various stages of education by performing assessment and evaluation (Özçelik, 1992, p.231). Assessment and evaluation start with the student's status before the program and continue throughout it; after the program is completed, feedback is obtained on the student's situation (Doğan, 1997, p.318). This feedback indicates the level of student behaviour and what kinds of shortcomings remain (Kutlu, 2003). Assessment and evaluation studies are carried out to determine students' readiness level and development, to give feedback on their progress, to identify learning difficulties, to examine the effectiveness of teaching and of teaching materials, to provide data for planning future learning processes, to reveal students' strengths and weaknesses, and to judge the adequacy of the curriculum (Adanalı, 2008, p.21).
Assessment (measurement) refers to expressing the results of our observations with numbers or symbols; in other words, measurement is a systematic method of observing a feature and expressing the result of that observation numerically or symbolically. Whatever is the subject of measurement is a feature. Measurement emerged in order to reveal differences and to determine, between individuals or objects, the level at which a feature is possessed. In a broad sense, assessment is to state whether a certain object or objects possess a certain feature, to observe the level at which that feature is possessed, and to express the observation results with symbols, especially numerical symbols (Şimşek, 2009; Tan, 2012; Tekin, 1993).

Evaluation is comparing an assessment result with a criterion and, in this way, making a decision about the feature the assessment result describes (Özçelik, 2010, p.221). Evaluation involves judging the student's product and performance; it is the process of determining whether the student is successful or unsuccessful. Evaluation questions whether the elements of the education system function well, reveals the aspects that do not, and thus allows the system to be repaired (Zorbaz, 2005, p.2). Evaluation is a concept that includes assessment: while assessment shows the amount of a feature with numbers and symbols, evaluation is deciding whether this amount is sufficient or appropriate for the purpose. For this reason, assessment is generally objective, whereas evaluation is subjective because it rests on interpretation and judgment. In other words, assessment is based on observation, while evaluation is based on comparison, interpretation and decision-making (Semerci, 2009, p.3; Tekindal, 2009, p.312; Yılmaz & Sümbül, 2003, p.316).
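The distinction just drawn, measurement as expressing an observation with a number and evaluation as comparing that number with a criterion and making a judgment, can be illustrated with a short sketch. The following Python fragment is a schematic illustration only; the item counts and the 50-point pass criterion are hypothetical examples, not values taken from this chapter.

```python
def measure(correct_answers: int, total_items: int) -> float:
    """Measurement: express an observation with a number (percent correct)."""
    return 100.0 * correct_answers / total_items


def evaluate(score: float, criterion: float = 50.0) -> str:
    """Evaluation: compare the measurement result with a criterion and judge.

    The criterion (cutoff) of 50.0 is a hypothetical example.
    """
    return "successful" if score >= criterion else "unsuccessful"


# A hypothetical student answering 18 of 25 items correctly:
score = measure(correct_answers=18, total_items=25)  # 72.0 (objective)
judgement = evaluate(score, criterion=50.0)          # interpretive decision
print(score, judgement)
```

The first function is purely descriptive, which is why the chapter calls measurement objective, while the second embeds a judgment relative to a chosen criterion, which is exactly why evaluation is described as interpretive and subjective.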
Alternative assessment and evaluation

Alternative assessment and evaluation approaches have gained importance as a result of the search for quality in education. Traditional assessment and evaluation approaches are not sufficient on their own for evaluating students (Çakıcı, 2008; Korkmaz, 2004; Şaşmaz-Ören, Ormancı & Evrekli, 2011). Therefore, alternative assessment and evaluation approaches, which give students multiple opportunities to demonstrate their knowledge, skills and attitudes, should be used. They should also be used to create a learning environment in which students participate actively, gain effective learning experiences, and learn through their own curiosity rather than by imposition. Alternative assessment is a student-centred approach that takes students' individual characteristics into consideration and employs different assessment techniques to reveal the extent to which students can apply their knowledge and skills to real life (Çalışkan & Yiğittir, 2008, p.235). It gives students the opportunity to analyse their own learning styles and thoughts (Aydoğdu & Kesercioğlu, 2005). In contrast to traditional assessment and evaluation approaches, this approach argues that assessment has a wider meaning than responding to test items and should be considered along different dimensions. Its aim is to give students a task in a desired area of learning, in order to determine their knowledge and skills in that area and their effectiveness on the task, using assessment tools whose validity and reliability have been ensured (Çepni & Ayvacı, 2007).

According to the alternative assessment and evaluation approach, in which student qualifications are of great importance and the process is prioritized, assessment and evaluation are part of the learning process. For this reason, they take place not only at the beginning and end of learning but at every important point during it. Since the focus is on the process, more and more varied assessment tools and methods are required than in the traditional approach. In this approach, students' behaviour in and outside the classroom is monitored alongside traditional paper-and-pencil tests; students' performance is observed during learning and teaching; their interest and attitude are assessed; and students themselves participate in the evaluation process. In this way, students' performance is evaluated in all its aspects (Gelbal & Kelecioğlu, 2007). With this approach, which handles assessment and evaluation from a broad perspective, teachers can get to know their students in all respects and can thus better support their development in all areas.
At the same time, students become aware of their own potential through this approach, and can therefore make more realistic decisions in order to realize themselves. In this section, which was prepared to encourage wider use of alternative assessment and evaluation practices in primary school, complementary and contemporary applications such as portfolio, performance evaluation, self and peer assessment, interest and attitude scales, checklist, rating scale, rubric, frequency scale, poster, vee diagram, structured grid, diagnostic branched tree, word association test, and observation form are discussed within the framework of alternative assessment and evaluation practice. Alternative assessment and evaluation practices that can be applied for effective and efficient teaching in primary school classes are listed as follows by Gündüz and Saygılı (2015):
Portfolio (Final Evaluation, Student Development File)

A portfolio is the collection of all the work a student produces during a semester. Its purpose is to monitor students' development over the process and to inform them of that development. Portfolios are prepared by the students, and the criteria for evaluating the products in them should be prepared together with the students. Teachers can evaluate students, or students can evaluate themselves. The portfolio is an authentic (realistic) form of study: students can put into their files everything that helps us recognize them and follow them through the process, including unfinished work. In this context, audio and video recordings, photographs, projects and performance assignments, autobiographies, book lists, parents' opinions, self and peer evaluation forms, diaries, and so on can be included in the portfolio.

Portfolios give students responsibility and increase their self-confidence. With their help, talented students can be noticed, and professional and educational guidance can be provided. Portfolios can also be used to select students for special programs, to develop students' cognitive, affective and psychomotor skills together, to involve parents in supporting students' learning, and to evaluate the curriculum. Alongside these benefits, portfolios also have limitations. Students may copy one another's work before putting it in the file. Portfolio evaluation is not fully objective, is difficult and time-consuming, and storing student assignments is hard. Because students may include different assignments, there is no common goal, and individual evaluation is therefore required.

Performance Evaluation (Performance Task, Assignment)

Performance evaluation takes students' individual differences into account by means of situations and assignments that make them transform their knowledge and skills into action and into real life.
Performance evaluation results in an observable performance or a concrete product. It is the process of evaluating the work, products or activities that students exhibit using their knowledge and skills; there must therefore be a process and a product at the end of that process. Something that happens only at a single moment cannot be a performance. Performance evaluation can address cognitive, affective, or psychomotor domains. Performance tasks are assignments that require students to use, develop and produce skills such as critical thinking, problem solving, creativity, and research: for example, conducting experiments, making tools, writing poems, articles or essays, playing a musical instrument or singing a song, making presentations, and so on. A performance assignment consists of an activity or piece of homework that is carried out or completed by the student and evaluated by the teacher, the student, or classmates according to certain performance criteria. A performance task requires students to use their knowledge and skills in the context of real-life situations; because it provides real-life experience, it is authentic. Instant feedback is given, and students can see their shortcomings, strengths and weaknesses. With performance tasks, the focus is not on students' knowledge but on their abilities. A well-defined graduated grading key (rubric) should be used to score performance tasks as objectively as possible.

Check List

A check list is a chart indicating whether a certain behaviour is performed or not. It is based on observation: if the behaviour is observed it is recorded as "yes", and if not, as "no". In other words, a check list measures the existence or non-existence of behaviour, and in this way teachers are informed about it. Check lists show what is to be done, in which order, and how; by looking at the list, one can determine how many behaviours have or have not been observed. The check list is largely process-oriented. Its use also has some drawbacks. One is that it offers the teacher only two options, observed or not observed. Another is that the student's performance is evaluated according to a fixed scoring system: although the check list helps identify students' strong and weak learning behaviours, the teacher must express the performance in scores and cannot record the performance process itself.

Rating Scale (Likert)

A rating scale shows the extent to which an individual possesses a certain feature, that is, which behaviour is observed and to what extent.
In a rating scale, the extent to which an individual possesses a feature is shown with numbers. The scale indicates how frequently a feature is observed and thus measures the quantity of a behaviour, which is why it is more product-oriented than process-oriented.

Rubric (Graduated Grading Key - Answer Key)

A rubric shows how many points are awarded according to the degree to which a certain behaviour is observed; which behaviour earns how many points is set out through predetermined criteria. The most important feature of the graduated grading key is that it enables students to know exactly what their grades correspond to and how far they have fulfilled what is expected of them. The rubric is prepared with the students beforehand, so that they know what is expected of them. The predetermined rules increase the reliability, and therefore the objectivity, of the assessment. With the help of a rubric, students receive information about themselves and discover their insufficiencies. Rubrics also make self and peer evaluation possible, and students can be evaluated in cognitive, affective and psychomotor respects. On the other hand, rubrics are difficult to prepare: they require experience and expertise, and a different form must be prepared for every course and field. The check list, rubric and rating scale are tools; portfolio and performance evaluation can be carried out with these tools.

Self and Peer Evaluation

When a student evaluates himself/herself according to predetermined criteria, this is self-evaluation; in the same manner, when he/she evaluates a peer, it is peer evaluation. The most important purpose of self and peer evaluation is for students to gain self-confidence and responsibility. It also helps students learn to make decisions, enables them to look at themselves from an objective perspective, motivates them, and makes them active. If it is not used properly, however, it can deviate from its purpose and bias can emerge; for example, a student may give a low grade to someone he/she does not like.

Interest and Attitude Scale

An interest scale shows whether students like something or someone; curiosity falls within its scope. An attitude scale, on the other hand, indicates students' perspectives and ideas about a phenomenon; attitudes can be positive or negative. Interests and attitudes both belong to the affective domain, in other words, to the emotions.

Frequency Scale

A frequency scale shows how many times a given behaviour is observed, that is, the frequency of a behaviour.

Poster

A poster is the presentation of a text accompanied by a picture.
A poster is one of the methods students use to learn something in detail, to search the literature, and to access related resources. The purpose of preparing a poster is to ensure that students learn in a meaningful and permanent way.

Vee Diagram

The vee diagram is a technique based on the assumption that one learns more meaningfully, deeply and permanently at the cognitive level if one answers certain critical questions at the beginning, during, and at the end of the learning/teaching process. The vee diagram is not an activity in itself; rather, it should be considered a supporting tool that helps in-class or out-of-class activities to be internalised and made sense of. It is most often used in science classes, and it combines theory and practice: if there is an experiment, there is proof of knowledge. It has three stages: planning, practice and evaluation. It draws students' attention to the subject, helps them think at a higher level, and supports a more meaningful learning process. It is both a method and an assessment tool.

Structured Grid

Scientific knowledge is intertwined, and students must sort it out; the grid should not contain irrelevant data. In this technique, different kinds of statements (numbers, formulas, texts, pictures) are placed in small boxes, which improves students' visual as well as verbal thinking skills. In the structured grid technique it is almost impossible for students to answer the questions by luck alone, without knowing the subject; they are required to have information about it. The boxes students choose bring out their mistakes and insufficiencies regarding the subject. While in multiple-choice tests only the correct answers are scored, in the structured grid both correct and wrong answers are evaluated. The technique can be applied by students in a short time.

Diagnostic Branched Tree

With this technique, the statements on which students make mistakes are identified: existing concept errors, the subjects in which students are insufficient or hold misconceptions, and students' prior knowledge are all determined. The technique can also be implemented in a computer environment, where a student who makes a mistake and notices it can undo it and answer the question again. Preparing a diagnostic branched tree takes experience and time.
There is also the possibility that students make the right choice by guessing.

Word Association Test (WAT)

The word association test is applied to gather all the associations a student holds about a concept in his/her cognitive structure. It is administered in writing, and because of the time limit, the first association that comes to mind is given; it is, in effect, a written version of brainstorming by one person or a group. If the links between the associations are weak, the level of concept error is high. The greatest advantage of word association tests is that they are easy to prepare and apply, both to an individual and to a group. Their disadvantage is that evaluation takes a long time.

Observation Forms

In areas where the outcomes can be observed, this method is very important, since observation provides accurate and rapid information about students. Teachers observe students' answers to questions and suggestions, their participation in class discussions and group work, and their reactions to learning tasks and materials. Every student should be observed in different circumstances, on different days, and several times; the measure used should be the same for every student; and every student should be evaluated with respect to different features, abilities and behaviours.

Conclusion

Assessment and evaluation practices constitute an important and formative part of educational activities at the primary school level, as at all levels of education. Assessment and evaluation studies, which are of key importance in measuring and improving the quality of primary education, should be enriched with modern practices, and the alternative assessment and evaluation approach should become widespread in practice so that students can reach better levels in terms of their knowledge, skills, interests and abilities. The various assessment tools and methods used in alternative assessment and evaluation practices, in which student qualifications are central and process evaluation is important, ensure that students are recognized in all respects and thus allow students to demonstrate their full performance in the learning process. Evaluating not only the product but also the process makes teaching effective and efficient.
Enriching educational activities in primary school with alternative assessment and evaluation practices provides both teachers and students with various benefits. When the alternative assessment and evaluation approach is applied correctly and effectively, it becomes easier to help students acquire the desired knowledge, skills and values. In addition, learning activities become more enjoyable and richer; students enjoy learning and are motivated to learn. Thus, students' desire to learn and their participation in class increase, and they learn more quickly and easily through alternative assessment and evaluation in primary education.
References

Adanalı, K. (2008). Alternative evaluation in social studies education: Evaluation of 5th class social studies education in terms of alternative evaluation activities (Unpublished Master's Thesis). Çukurova University Institute of Social Sciences, Adana.
Aydoğdu, M., & Kesercioğlu, T. (2005). Teaching science and technology in primary education. Ankara: Anı Publishing.
Çakıcı, Y. (2008). Constructivist approach in science and technology education. In Ö. Taşkın (Ed.), New approaches in science and technology teaching (pp. 1-19). Ankara: Pegem A Publishing.
Çalışkan, H., & Yiğittir, S. (2008). Assessment and evaluation in social studies. In B. Tay & A. Öcal (Eds.), Teaching social studies with special teaching methods. Ankara: Pegem A Publishing.
Çepni, S., & Ayvacı, H. Ş. (2007). Alternative (performance) evaluation approaches in science and technology education. In S. Çepni (Ed.), Teaching science and technology from theory to practice. Ankara: Pegem A Publishing.
Doğan, H. (1997). Curriculum and instructional design in education. Ankara: Önder Printery.
Gelbal, S., & Kelecioğlu, H. (2007). Competence perceptions of teachers about assessment and evaluation methods and problems they face. H. U. Journal of Education, 33, 135-145.
Göçer, A. (2014). Assessment and evaluation in Turkish education. Ankara: Pegem A Publishing.
Gündüz, M., & Saygılı, G. (2015). Assessment and evaluation in primary school. Ankara: Maya Akademi Publishing.
Korkmaz, H. (2004). Alternative evaluation approaches in science and technology education. Ankara:
Kutlu, Ö. (2003). On the 80th anniversary of the Republic: Assessment and evaluation. National Education Journal, 160.
Özçelik, D. A. (1992). Education programs and teaching: General teaching method. Ankara: ÖSYM Publishing.
Özçelik, D. A. (2010). Assessment and evaluation. Ankara: Pegem A Publishing.
Semerci, Ç. (2009). Assessment and evaluation. In E. Karip (Ed.), Assessment and evaluation. Ankara: Pegem A Publishing.
Şaşmaz-Ören, F., Ormancı, Ü., & Evrekli, E. (2011). Pre-service science and technology teachers' self-efficacy levels and opinions regarding alternative assessment and evaluation approaches. Kuram ve Uygulamada Eğitim Bilimleri, 11(3), 1675-1698.
Şimşek, N. (2009). Assessment and evaluation in social studies. In M. Safran (Ed.), Teaching social studies. Ankara: Pegem A Publishing.
Tan, Ş. (2012). Assessment and evaluation in teaching. Ankara: Pegem Akademi Publishing.
Tekin, H. (1993). Assessment and evaluation in education. Ankara: Yargı Books and Publishing.
Tekindal, S. (2009). Assessment and evaluation methods in schools. Ankara: Nobel Publishing, Yeryüzü Publishing House.
Yılmaz, H., & Sümbül, A. M. (2003). Planning and evaluation in teaching. Konya: Çizgi Publishing House.
Zorbaz, K. Z. (2005). Evaluation of primary school second level Turkish teachers' opinions on assessment and evaluation and of questions asked in written exams (Unpublished Master's Thesis). Mustafa Kemal University Institute of Social Sciences, Hatay.
Chapter 17
Measurement and Evaluation in Special Education
Sibel ERNAS, Şenay DELİMEHMET DADA & Hava İPEK AKBULUT
Introduction

The first efforts towards special education are known to date back to the 16th century, a period marked by negative attitudes towards individuals with special needs (Nal & Tüzün, 2011). Only in later eras did individuals with special needs begin to be noticed as individuals, rather than regarded as sick or damned. The first school for individuals with special needs was opened in France in 1760, followed by schools for students with hearing disabilities opened in the US in 1817 (Stainback & Stainback, 1996). The schools of that era provided care services as well as education. In the early 1900s, numerous special education establishments were opened for a wide range of disabilities. Their purpose was to provide students with special needs with education in environments designed specifically for their needs, by teachers trained for that purpose alone; the prevailing assumption was that students with special needs could not be educated in classrooms designed for general education. In the same era, however, the first steps were taken towards introducing special education classes in ordinary schools, together with the earliest attempts at mainstreaming. In 1928, the UK decided to operate special and general education establishments together, and a law passed in 1944 allowed suitable students with special needs to be educated in general education classes. Since the 1960s, mainstreaming activities have been gaining further pace (Sucuoğlu & Kargın, 2006). Many families and experts in the US voiced objections to segregated education (Kargın, 2004), leading to a trend whereby mainstreaming approaches became the norm in various countries from the 1970s onwards, in line with international human rights regulations. Regulations requiring students with special needs to be educated alongside their peers in general education classes were enacted into law in Italy (1971), the UK (1974), France and the US (1975), and Norway (1976).

Several reasons led to this change (Skiba et al., 2008). First, it was observed that special education classes did not facilitate integration even partially and, on the contrary, led to the separation of students with disabilities from students without disabilities: students with special needs were separated from their peers during classes, extracurricular activities, lunch and other breaks, and various ceremonies (Sucuoğlu & Kargın, 2006). In 1975, the rights of children with special needs to attend public schools, to have free access to the services they need, and to receive education in general education classes alongside their peers without special needs, as far as possible, were secured through the Education for All Handicapped Children Act. This law, enacted in the US, covers important matters of regulation regarding "the education of individuals with special needs, such as bringing an end to exclusion from the programs and services of public schools, assessment without discrimination, individualized education program (IEP) preparation, education in the least restrictive environment, and the right to judicial review" (Kargın, 2004). It was followed by the Individuals with Disabilities Education Act (IDEA) in 1997, which expanded the rights of children with special needs to enjoy free education in the least restrictive environment. The least restrictive environment is introduced as an element of IDEA, requiring individuals with special needs to be educated in the least restrictive settings (U.S. Department of Education, 2010); this refers to spending the highest possible amount of time with peers showing typical development.
Providing the least restrictive environment essentially guarantees the right to education, through assistance and adjustments to enable the individual with special needs to receive education under the conditions applicable to peers without such special needs (Friend & Bursuck, 2009). Moreover, IDEA encourages legal protection for the education rights of the students with special needs, and the revision of the education policies and classroom practices on part of the states and schools. In this context one can forcefully argue that the efforts to facilitate the education of individuals with special needs had been increasing in the US as well as the wider world. The emphasis has been on the assumption that allowing these individuals to share the same educational environments with their peers, rather
Sibel ERNAS, Şenay DELİMEHMET DADA, Hava İPEK AKBULUT
than being confined to secluded education environments, would lead to more effective results (Gürgür, Kış & Akçamete, 2012; Hornby, 2015).

Mainstreaming/Inclusion

Several forms have been proposed for the education of individuals with special needs, and no debate about individuals with special needs would be complete without some reference to education through mainstreaming and inclusion. Mainstreaming refers to the practice of providing students with special needs education in general education classes on a full-time or part-time basis, with support services for the teacher and the students. Mainstreaming is often defined as the practice whereby students with special needs receive education in the same environment as, and alongside, their typically developing peers (Acarlar, 2013; Kargın, 2004; Sucuoğlu & Kargın, 2006). Mainstreaming practices support the social, cognitive, physical and psychomotor development of individuals with special needs by enabling them to share the same environment with their peers, in line with what they need. Education provided through mainstreaming therefore extends several benefits to all stakeholders involved. A glance at the literature reveals no universal definition of mainstreaming (Nal & Tüzün, 2011), though the perspectives offered share common points such as social development and social acceptance as the goals of the process. Inclusion, on the other hand, embraces the core tenets of this perspective, yet adds the requirement that school designs consider the needs of each child (Nal & Tüzün, 2011). In this context, all existing schools should be organized as mainstreaming schools, requiring them to provide education in line with the needs of all individuals with special needs. In the beginning, this approach to education was called mainstreaming or integration, whereas the change in the human rights outlook led to the adoption of the term inclusion.
Since inclusion is an outgrowth of mainstreaming, it is only natural that both concepts share several goals. The supporters of inclusion believe that students with special needs should be educated in general education environments on a full-time basis, and that they can be excluded from general education classes only when the required support services are unavailable (Gürgür, 2008). The arrangement of the mainstreaming/inclusion environments, which are considered the least restrictive environments for individuals with special needs, is regarded as the most important factor in achieving success through these
Measurement and Evaluation in Special Education
approaches (Friend & Bursuck, 2009). To that end, the regulations required for mainstreaming/inclusion activities should be introduced both inside and outside classrooms. Moreover, suitable education environments, tools and equipment, and support services should be provided and utilized in an environment based on cooperation.

Assessment Processes for Students with Special Needs

Assessment in special education entails the gathering of information with a view to reaching a decision regarding the student. According to the literature, such a process entails the systematic collection and interpretation of a wide range of information, as well as making appropriate decisions in terms of teaching/intervention, classification and placement. The assessment process includes individual assessment of the student: the needs of the individual are identified through the process, and these definitions lead to arrangements to meet such needs. The primary purpose of this kind of process is to place the individual in the educational environment involving the least amount of dependence for him/her. The process therefore supports the student's socialization and learning processes simultaneously. Assessment involves two fundamental elements: decision-making and process assessment (Kargın, 2013). Assessment for decision-making emphasizes the need to choose the skills to be developed in individuals with special needs, and how to teach them those skills. Thus, one should first assess the existing performance levels of the children in order to be able to decide on the skills to be developed (Kargın, 2013; Sucuoğlu & Kargın, 2006). Process assessment, on the other hand, is performed in several stages. First, the assessment process is not just the application of certain tests; it is, in effect, a complex process entailing both standard tests and criterion-referenced tools of assessment (Kargın, 2013).
Second, assessment is not just a process taking place within a certain time frame, coupled with a single score assigned in the end. The process should consider all activities and the overall performance of the student. Third, assessment is a flexible process. It can be renewed continuously and reorganized with reference to the progress achieved by the student. Finally, assessment is not a process carried out by a specific individual employing a specific tool, in a specific setting (Kargın, 2013). As a rule, assessment should be an interdisciplinary process performed in various settings, employing a multitude of means (Lundy et al., 2011).
Six goals are mentioned to summarize the objectives of assessment (Pierangelo & Giuliani, 2006):

1. Deciding on assessment: The strengths and weaknesses of the student should be established with the collected information.
2. Deciding on diagnosis: Detailed information should be collected on the student's area of disability.
3. Deciding on eligibility for special education: The information collected should indicate in detail whether the student is eligible for special education services.
4. Deciding to develop an IEP (Individualized Education Plan): The information collected should inform, in detail, the development of the student's Individualized Education Plan.
5. Deciding on placement: The information collected should provide detailed input for appropriate decisions in the student's educational placement process.
6. Instructional planning: The information gathered during the evaluation process regarding the social, academic, physical and behavioral needs of the student is important for proper instructional planning.

IDEA (2004) states that the assessment of individuals should be performed by an interdisciplinary team including at least one teacher or special education expert knowledgeable in the suspected area of disability (Batu, 2000; Pierangelo & Giuliani, 2006). The experts included in such a team should employ multiple assessment tools and strategies, including but not limited to the information provided by the family, with a view to obtaining a functional and developmental assessment. The interdisciplinary assessment team should include the general education teacher, school psychologist, special education teacher, speech-language therapist, medical personnel (if necessary), social worker, school principal, family, school nurse, and physiotherapist (Pierangelo & Giuliani, 2006). The assessment materials and process must not involve discrimination based on race, language, religion or culture.
The location, term, and purpose of the education for students with special needs, and the people who will provide that education, shall be specified in the IEP, with a view to fixing these in a regulatory framework. The families of children with special needs, as well as the specialists working on these issues, are entitled to voice objections against assessment,
placement and/or the program (Friend & Bursuck, 2009). The main emphases of the Education for All Handicapped Children Act (PL 94-142) are:
- A parental consent form must be approved by the family before any evaluation, testing or placement is made.
- All children with special needs should be placed in the least restrictive educational environment.
- An Individualized Education Plan (IEP) should be prepared for all individuals with special needs.
- Evaluation should be made without any discrimination.
- Individuals with special needs should be evaluated individually in all suspected areas of inadequacy.
- All tests applied to the individual with special needs, and all reports given to the family, should be in the individual's mother tongue.
- Families should be included in the evaluation process.
- Zero reject: it is mandatory for individuals with special needs to be placed in appropriate educational settings, no matter how severe their inadequacy. This principle prohibits the exclusion of any child, even from school.
Assessment Models in Special Education

Assessment is an inseparable part of education. The existing performance level of the student is determined using the information obtained through assessment activities, leading to education planned and customized in light of the needs of the student (Sucuoğlu & Kargın, 2006). Assessment provides information about the abilities as well as the inabilities of the student (Gürsel, 2008). Students with special needs receive education in general education settings with their peers, as well as in separate education environments; assessment is therefore necessary to provide them with the environment they need (Avcıoğlu, 2015). Accordingly, the assessment of individuals and the adjustment of education services are often based on two distinct models of assessment: medical assessment and educational assessment.

Medical Assessment

A further step would be taken when a student suspected to have certain disabilities cannot reach the developmental objectives despite the necessary adjustments in the school setting. This step would begin with the
diagnosis, which is the beginning of the whole referral process. Diagnosis entails efforts to see whether the student suffers from developmental problems or shortcomings, along with their level and nature. In other words, students who continue to experience academic, behavioural, social or emotional problems despite all the educational adjustments and arrangements provided by their teachers would be referred to the relevant units and boards for a more detailed evaluation (Gürsel, 2008). The students would first be sent to a Children's Mental Health Centre at a government or university hospital for diagnosis and detailed assessment (Kargın, 2013). The process carried out by the hospital is called medical diagnosis/assessment. Considering the medical data and psychometric analyses, the medical diagnosis process leads to a decision on the part of the doctors regarding the inadequacies of the child. The resulting diagnosis should also be approved by a board composed of special education staff.

Educational Assessment

Educational assessment is performed for several objectives, such as identifying and categorizing students who are disabled or at certain risks, placing them in certain programs, developing customized education programs, identifying learning strategies, and choosing appropriate goals (Taylor, 1997). The points mentioned by Ali are interesting at this juncture, before providing a detailed picture of the steps to be taken in student assessment.
Every child is special
Ali

As Ali noted, every child is special, and it is crucial to provide equal education opportunities to every student, in line with his or her skills and competences. In this context, a brief description of the steps involved in identifying children with special needs would be handy. For instance, Sibel, the teacher of the class, would ask the following question to the children:
What substances are translucent?
Sibel

To respond, Ali would go to the blackboard and write "naylan kad". However, Ali intended to write "nylon cap".
Ali often confuses letters and repeats certain mistakes over and over. Sibel, the teacher, notices that Ali makes the same mistakes 4-5 times. He also has some problems complying with the rules of the school. Sibel, the teacher, starts to think that Ali may have special needs. The figure provided above refers to the stage where doubts arise about any potential shortcomings the student may have, often called the "initial determination stage". At this stage, children who have difficulty complying with the basic requirements of a classroom can be identified. The process should be based on several evaluation methods (copying the text on the blackboard into a notebook, worksheets, open-ended exams, homework, etc.). This stage involves all the students at the school, whereas the further steps of the assessment process are carried out with a smaller number of students. The second stage of the assessment process is called the "pre-referral process". The general goal of this stage is summarized below, in the words of Sibel, the teacher.
Our aim at this stage is to prevent the unnecessary evaluation and labelling of a student who does not need special education. Therefore, I should start the pre-referral process for my student Ali.
Sibel
To summarize the process prior to referral: a detailed assessment of the student with special needs would be in order, to see whether there is a need for special education services or not. It is crucial to implement the process prior to referral to the Counselling Research Centre in a systematic manner, in terms of identifying the students who have special needs. Like the first stage, this is an informal procedure. The support provided by the school counsellors, as well as the other teachers in the school, is most crucial in this process, along with the active efforts of the teacher of the class. Sibel, the teacher, perhaps provides the best description of this systematic process.

In the pre-referral period, we aim to ensure that students with special needs participate in general education classes. Research results show that when this process operates effectively, many children with special needs do not require detailed evaluation. In this way, children can continue their education in general education classes without being labelled. But my student Ali did not show the necessary development in the process.
Sibel
Ali was observed to achieve only very limited success in his efforts to reach the planned objectives, despite all the support provided in the pre-referral process. That is why Sibel, the teacher, initiates the referral process for a detailed assessment of Ali.

Since I could not improve in line with the planned goals, my teachers decided that I should enter the referral process. In this process, they will decide whether I am eligible for special education services.
Ali

In this process, all the details gathered about the student are compiled in a report. The report presents all the practices and measures undertaken for the student. The views of Sibel, the teacher, on this process are provided below.
We decided to refer Ali for a detailed evaluation. But I did not make this decision alone, as Ali's class teacher: we reached it together with the school guidance service and Ali's family.
Sibel
Standard tests are used in the detailed assessment process, which is a formal one. Utilizing informal assessment tools in support of this process would also help in making more accurate decisions about the student. The detailed assessment process is performed by the "Educational Diagnosis, Monitoring and Assessment Teams" at the Counselling Research Centres. In the words of Sibel, the teacher, the detailed assessment process serves the following objectives:
To briefly summarize, the aim of the detailed evaluation process is to determine whether the student needs special education services or not. The "Educational Diagnosis, Monitoring and Evaluation Team" will make the decision for my student Ali.
Sibel
The diagnosed problem should not only constitute a condition requiring the student to utilize special education services, but should also have a negative impact on the student's performance in education. The goal is to do away with such negative effects. That is why the process should be based on programs, personnel, tools and specifically developed equipment. The detailed assessment process culminates in classification and placement.

Assessment Methods in Special Education

The needs of children requiring special education should be identified, followed by the provision of a suitable program of education. For this purpose, the children should first be diagnosed with the help of several assessment tools (Avcı & Ersoy, 1999; Bruns & Mogharreban, 2007). Through the use of various assessment tools, the physical features of the student, his or her academic achievements, the programs to be offered, any applicable special requirements, the teaching methods to be applied, and the type of applicable special education services are identified by personnel who have received special training, such as the school counsellor, speech-language therapist, special education and general education teachers, and a medical specialist if necessary (Kargın, 2013). These specialists may employ a number of assessment methods, preparing formal and informal evaluation tools in the process of assessment.

Formal Assessment

Formal assessment refers to a set of pre-determined, structured assessments, entailing preparation, application, scoring and interpretation steps that follow a specific timeline. The formal assessment process entails the assessment of individual differences by determining the score, the time required, the behaviour patterns on the part of the practitioner, and the nature of the process itself.
The findings would be used to compare the individual against the wider group, with a view to understanding the student's condition. These assessments are usually applied using reliable and valid standard tests (Kargın, 2007). Assessment based on standard tests entails comparing the child against others with comparable performance levels, age, gender, etc. (Avcıoğlu, 2011). Doing so reveals the position of the child within the norm group, along with his/her strengths and weaknesses and performance in various areas. The standard tests are used to evaluate students under the same terms as their peers with comparable characteristics (Pierangelo & Giuliani, 2006). The practice assumes that children with similar scores would exhibit similar features. However, the scores from standardized tests alone are not enough to answer the question of what to teach students who need special education. All they provide is input for the identification, placement and development processes of the students (Ercan, 2015). The data gathered through these tests provide a starting point for a more detailed assessment process. IQ tests, general skills tests, standard achievement tests, and development inventories are a few examples of such standard tests.

Informal Assessment

Informal assessment, on the other hand, evaluates the student's performance on an individual basis, rather than in comparison with peers in the norm group (Gürsel, 2004). Doing so enables the teacher to get an idea about the student's performance using ad hoc, non-standard tools which are neither validated nor certified. The findings from informal assessment, with due consideration of the individual differences of the students, help in providing education in line with such differences (Ercan, 2015).
Thus, the performance levels or skills that the students have in terms of academic success and psycho-motor, sensory and cognitive aspects are identified, and the educational environment is arranged accordingly. In some cases, the performance that the student showed at two distinct times is compared, to understand his/her development over time. The data gathered through informal assessment can be used as input in the preparation of customized education programs. The informal assessment tools include criterion-referenced evaluation tools, checklists prepared by the teacher, and portfolios containing the in-class activities and work performed by the students, enabling a close review of the development of the students: (i) the portfolio leading to the final product through a process involving the teacher, the student, and the parents; (ii) the portfolio containing only the best works of the students, chosen by the
student, rather than all works produced through the process; and (iii) the portfolio kept by the teacher, containing examples of the tests applied and the student's works, ranking scales, what the student writes as part of the homework assigned by the teacher, the worksheets, the observations of the teacher, surveys, interviews, and the student's ability to apply knowledge to real life. These materials can be used alongside a wide range of teaching methods (Batzle, 1992, as cited in Bigge & Stump, 1999; Kargın, 2007).

Example of an Enriched Worksheet for Mainstreamed Students

This section includes a sample material that science teachers can use when teaching mainstreamed students. The material is designed as an enriched worksheet comprising "Engage", "Explore" and "Evaluate" phases. Researchers can embed various discussion techniques/methods into the worksheets. A sample enriched worksheet is presented below.
Figure 17.1 A sample provocative question for brainstorming technique in the “Engage” phase of the student worksheet
The provocative question "Which substances does a magnet attract?" is first directed to the students. The teacher then writes their ideas on the board and has the class vote on the ideas/responses. Next, the teacher asks them to write their predictions on the worksheet. After implementing the brainstorming technique in this phase, the "buzz 22" technique is carried out to discuss another provocative question.
Figure 17.2 A sample provocative question for the buzz 22 technique in the "Engage" phase of the student worksheet

The "buzz 22" technique is intended to elicit the students' pre-conceptions and engage them in minds-on activities that contribute to their social skills. After the question "Does the magnet attract every item?" is asked, the students discuss it in small groups of two. Such a process aims to arouse their curiosity for the "Explore" phase of the worksheet.
Let's do our activity together and look for the answer to this question.

Tools and equipment: magnet (1 pc), steel spoon (1 pc), plastic spoon (1 pc), tweezers (1 pc)

Preparation of the Activity:
1. Form a group with your friends.
2. Place all materials on a table.
3. Hold the magnet over each material, one by one.
4. Check which ones are attracted by the magnet.
5. Record your observations.

Write down what you observed as a result of the experiment in the space below.
Our observations: ………………………………………………………………………

Conclusion: Does the magnet attract every item? Please explain. ………………………………………………………………………

Figure 17.3 A sample "Explore" phase of the student worksheet

This phase encourages the students to develop their social skills and share their conceptual understanding with their peers. The teacher asks them to complete their experiments by following the directions in the student worksheets. The students then write their observations down on the worksheet. The teacher re-asks the buzz 22 question so that students can state and defend their arguments. Hence,
they have an opportunity to make a conclusion/inference through their first-hand experiences.
You can watch the experiment from here.
Figure 17.4 A sample QR-code-embedded experiment in the "Explore" phase of the student worksheet

The teacher distributes tablet computers to the students and lets them scan the QR codes and watch the embedded experiment videos with the help of the tablets. The evaluation section starts after this experiment-watching process is completed.

Question:
The refrigerator you see in the photo is covered with magnets. How do these magnets stick to the refrigerator?
Let's discuss the question above, and write down the points on which we agree in the blank space on the worksheet.
Let’s discuss the attachment of magnets to the refrigerator.
………………………………………………………………………………… …………………………………………… ………………
Figure 17.5 A sample screenshot for the snowball technique in the "Evaluate" phase of the student worksheet

In the "Evaluate" phase of the student worksheet, the students transfer their newly generated knowledge/conceptions to novel situations. For this purpose, the students are asked to write their thoughts about the question in the space under it. First, students write their opinions individually. Second, they discuss the question in groups of two, and then in groups of four. In this way, the whole class is included in the discussion, with the number of students in each group increasing continuously. This is called the snowball technique. The results of the discussions are presented to the students and, at the end of the activity, written by the teacher in the corresponding space.

Evaluating Mainstreamed Students' Conceptual Understanding

A concept test, a drawing test and interview questions can be used to determine the conceptual understanding levels of mainstreamed students. Examples of questions about the prepared worksheet are presented in Figure 17.6.
Concept test
1. Which materials does the magnet attract? Why? Please explain.
2. Which materials does the magnet not attract? Why? Please explain.

Drawing test
1. Draw a substance that you think is attracted to the magnet, and write its name.
2. Draw a substance that you think is not attracted to the magnet, and write its name.

Interview questions
1. Which materials does the magnet attract? Please explain with an example.
2. Which materials does the magnet not attract? Please explain with an example.

Figure 17.6 Examples of questions that can be used to evaluate the effectiveness of the worksheet
References

Acarlar, F. (2013). Mainstreaming model and characteristics of children with special needs. In B. Sucuoğlu & H. Bakkaloğlu (Eds.), Mainstreaming in pre-school (pp. 21-74). Ankara: Kök Publishing.

Avcı, N. & Ersoy, Ö. (1999). Okul öncesi dönemde entegrasyonun önemi ve uygulamalarda dikkat edilecek noktalar [The importance of integration in preschool and considerations in applications]. Milli Eğitim Dergisi, 144, 68-70.

Avcıoğlu, H. (2011). Özel eğitime gereksinim duyan öğrencilerin eğitsel ve davranışsal değerlendirmesi [Educational and behavioral assessment of students who need special education]. Ankara: Vize Publications.

Avcıoğlu, H. (2015). Özel gereksinimli olan bireylerin değerlendirilmesi [Evaluation of individuals with special needs]. Ankara: Vize Publications.

Batu, S. (2000). Kaynaştırma, destek hizmetler ve kaynaştırmaya hazırlık etkinlikleri [Inclusion, support services and preparation activities for inclusion]. Journal of Special Education, 2(4), 35-45.

Bigge, J. L. & Stump, C. S. (1999). Curriculum, assessment and instruction for students with disabilities. The Wadsworth special educator series. Belmont, CA: Wadsworth Publishing.

Bruns, D. A. & Mogharreban, C. C. (2007). The gap between beliefs and practices: Early childhood practitioners' perceptions about inclusion. Journal of Research in Childhood Education, 21(3), 229-241.

Ercan (2015). Educational diagnosis and evaluation in early childhood period. In Koleva, Efe, Atasoy & Kostova (Eds.), Education in the 21st century: Theory and practice (pp. 100-110). Sofia: St. Kliment Ohridski University Press.

Friend, M. & Bursuck, W. D. (2009). Including students with special needs (5th ed.). Columbus, OH: Pearson.

Gürgür, H., Kış, A. & Akçamete, G. (2012). Examining pre-service teachers' opinions about providing individual support services to mainstreaming students. Elementary Education Online, 11(3), 689-701.
Gürsel, O. (2004). Bireyselleştirilmiş eğitim programlarının geliştirilmesi [Development of individualized education programs]. Eskişehir: Anadolu University Publications.

Gürsel, O. (2008). Evaluation in special education. In İ. H. Diken (Ed.), Students with special education needs and special education (pp. 29-58). Ankara: Pegem Academy.

Hornby, G. (2015). Inclusive special education: Development of a new theory for the education of children with special educational needs and disabilities. British Journal of Special Education, 42(3), 234-255.

Kargın, T. (2004). Kaynaştırma: Tanımı, gelişimi ve ilkeleri [Mainstreaming: Definition, principles and its development]. Journal of Special Education, 5(2), 1-13.

Kargın, T. (2007). Eğitsel değerlendirme ve bireyselleştirilmiş eğitim programı hazırlama süreci [Educational evaluation and the individualized training program preparation process]. Ankara University Faculty of Educational Sciences Journal of Special Education, 8(1), 1-16.

Kargın, T. (2013). Performance evaluation in pre-school and preparation of individualized education plans. In B. Sucuoğlu & H. Bakkaloğlu (Eds.), Mainstreaming in pre-school (pp. 77-127). Ankara: Kök Publishing.

Lundy, C., Hill, N., Wolsley, C., Shannon, M., McClelland, J., Saunders, K. & Jackson, J. (2011). Multidisciplinary assessment of vision in children with neurological disability. The Ulster Medical Journal, 80(1), 21-27.

Nal, A. & Tüzün, I. (2011). Türkiye'de kaynaştırma/bütünleştirme yoluyla eğitimin durumu [The state of education through mainstreaming/integration in Turkey]. İstanbul: Sabancı Vakfı.

Pierangelo, R. & Giuliani, G. A. (2006). Assessment in special education: A practical approach (2nd ed.). Boston, MA: Allyn & Bacon.

Skiba, R. J., Simmons, A. B., Ritter, S., Gibb, A. C., Rausch, M. K., Cuadrado, J. & Chung, C. G. (2008). Achieving equity in special education: History, status, and current challenges. Exceptional Children, 74(3), 264-288.

Stainback, S. B. E. & Stainback, W. C.
(1996). Inclusion: A guide for educators. Paul H. Brookes Publishing.
Sucuoğlu, B. & Kargın, T. (2006). Implementation of inclusive education in primary education: Approaches, methods and techniques. İstanbul: Morpa Publishing.

Taylor, L. R. (1997). Assessment of exceptional students: Educational and psychological procedures (4th ed.). Boston: Allyn and Bacon.

U.S. Department of Education. (2010). 32nd annual report to Congress on the implementation of the Individuals with Disabilities Education Act. Washington, DC: Author.
Lütfiye ÖZALEMDAR
Chapter 18

A Complementary Assessment and Evaluation Technique in Biology Education (Word Association Tests)

Lütfiye ÖZALEMDAR
Introduction

Today's increasingly used technology, along with the growth of knowledge, has brought people's skills in selecting, collecting and using information to the fore, while placing education in an important position (Adnan, 2015). In a broad sense, "education is the activity and process of making the required changes in the behaviours of an individual so that they can adapt to the society and improve their abilities" (Adnan, 2015). Training is defined as "the process of trying to ensure learning through school activities that are conscious, controlled, purposeful, planned and organized" (Özmen, 2008, p. 36). Since a large part of our daily lives involves events related particularly to the natural sciences, the competition between countries in science and technology has placed science education at the centre of science and technology (Kıncal, Ergül and Timur, 2007). In the information age, the contribution of science-based technologies to the development of countries increases the importance of the natural sciences and science education (Enginar, Saka and Sesli, 2002). The natural sciences include all branches of science that encompass information obtained as a result of research on human beings themselves and their natural environment (Gökçen, 2012). Among the natural sciences, information obtained in biology in particular affects our lives directly, constantly increasing society's need for the teaching of biology topics and bringing biology education into prominence (Altunoğlu and Atav, 2005).
Education and training processes aim to help individuals contribute both to society and to themselves, prepare for life, and gain the behavioural skills that enable effective communication with their environment (Yiğit, Devecioğlu and Ayvacı, 2002). Biology education, which aims to raise individuals who think, research, question, observe and draw conclusions from the data they obtain, enables students to think scientifically and to gain problem-solving skills (Gökmenoğlu, 2011). Understanding biology allows people to know about themselves, other living beings and the environment; to think and interpret; to gain a sense of healthy living and a positive environmental attitude; and to find solutions to problems they might encounter in daily life (Gökçen, 2012).

Assessment, and thus evaluation, is needed to determine whether the expected behaviour has been realized by the end of an education that aims to add certain behaviours to people or to change some of their behaviours (Nalbantoğlu Eyitmiş, 2007). Assessment is the process of observing the extent to which an object possesses some characteristic and describing the results of this observation with numbers. Evaluation is an act of judging that gives meaning to the results of the assessment and assigns a value to the measured objects (Tekin, 2010, p. 39). Assessment in education is the process of describing, with numbers and symbols and through various methods, to what extent students have realized the intended change in behaviour. Evaluation in education can be defined as determining the learning level of a student during the education-training process (Karahan, 2007). Assessment and evaluation, which vary by the objectives of training practices and the qualities targeted, are generally discussed under two approaches: the traditional approach and the alternative approach (Nartgün, 2006, pp. 357-358).
The traditional approach uses techniques including written examinations and short-answer, true-false, multiple-choice, and matching tests, while the alternative approach involves grading keys, the diagnostic tree, the structured grid, word association tests, portfolios, projects, performance evaluation, problem-solving, observation, concept maps, interviews and student evaluation techniques (Bahar, Nartgün, Durmuş and Bıçak, 2012, pp. 25-139).
The traditional assessment and evaluation approach, based on standardized tests, mostly focuses on cognitive abilities, performance against test objectives, success independent of development, and in-class learning. The alternative assessment and evaluation approach, based on performance-based, realistic, constructive and practicable tests, gives prominence to cognitive, affective and psychomotor skills, real performance, success based on development, and both in-class and out-of-class learning (Korkmaz, 2004, pp. 62-63). Alternative evaluation approaches and tools allow students to connect real life with their own knowledge and to produce multiple solutions to the problems they encounter. In this respect, because the traditional evaluation approach and its tools fail to evaluate real, performance-based science skills, the alternative evaluation approach and its tools are increasingly preferred (Korkmaz, 2004, pp. 60-61). One alternative assessment and evaluation technique used in biology education, one of the natural science fields, is the word association test.

Word Association Tests (WAT)

In this technique, the student responds within a certain timeframe with the words that come to mind in reaction to a presented keyword. The responses given to the keyword are thought to reveal the cognitive connections and semantic closeness between concepts: the closer two concepts are in semantic memory, the more they are related to each other (Bahar and Özatlı, 2003). In applying a word association test, the teacher determines 5 to 10 different concepts that represent the structure of the subject. Each keyword is written ten times on a single page, to prevent a chain of responses. Thirty seconds are allowed per keyword, as this is considered the optimum timeframe in academic studies. When the time runs out, the student turns to the next page and the process continues (Özsevgeç, 2008, pp. 409-410; Nartgün, 2006, pp. 411-412).
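As a rough illustration of the administration procedure just described, the short Python sketch below builds one answer-sheet page per keyword, each keyword repeated ten times. The keyword list and the dot-leader layout are only an example (mirroring the vertebrate-animals test of Table 18.1), not a format prescribed by the cited sources.

```python
# Sketch: generating WAT answer-sheet pages as described in the text.
# 5-10 keywords, each printed ten times on its own page, so that the
# student keeps responding to the keyword itself rather than to a
# chain of earlier responses.

KEYWORDS = ["Fishes", "Amphibians", "Reptiles", "Birds", "Mammals"]
REPEATS = 10           # each keyword appears ten times on one page
SECONDS_PER_PAGE = 30  # the timeframe the text reports as optimal

def make_pages(keywords, repeats=REPEATS):
    """One page per keyword: a list of identical prompt lines."""
    return {kw: [f"{kw} " + "." * 15] * repeats for kw in keywords}

pages = make_pages(KEYWORDS)
print(len(pages), "pages,", SECONDS_PER_PAGE, "seconds each")
```

In use, each page would be printed separately and the thirty-second limit enforced per page, exactly as the procedure above describes.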
The responses given by the student to each keyword are graded, and an evaluation can be made. However, the responses may not always bear a meaningful relation to the keywords. For this reason, the teacher may ask the student to make a meaningful sentence that includes both the keyword and the given response. In that case, within a two-stage grading system, the response given to the keyword can be graded separately from the meaningful sentence that is made. Aside from assessment and evaluation, word association
tests can also be used as a diagnostic tool. In this case, the teacher counts how many times each response is given to each keyword and prepares a frequency table. A concept map model can then be created from the data in this table, which may reveal the network in the students' cognitive structure (Nartgün, 2006, p. 412).

Table 18.1 Example of Word Association Test: Vertebrate Animals

Fishes ..........  Amphibians ..........  Reptiles ..........  Birds ..........  Mammals ..........
Fishes ..........  Amphibians ..........  Reptiles ..........  Birds ..........  Mammals ..........
Fishes ..........  Amphibians ..........  Reptiles ..........  Birds ..........  Mammals ..........
Fishes ..........  Amphibians ..........  Reptiles ..........  Birds ..........  Mammals ..........
Fishes ..........  Amphibians ..........  Reptiles ..........  Birds ..........  Mammals ..........
Fishes ..........  Amphibians ..........  Reptiles ..........  Birds ..........  Mammals ..........
Fishes ..........  Amphibians ..........  Reptiles ..........  Birds ..........  Mammals ..........
Fishes ..........  Amphibians ..........  Reptiles ..........  Birds ..........  Mammals ..........
Fishes ..........  Amphibians ..........  Reptiles ..........  Birds ..........  Mammals ..........
Fishes ..........  Amphibians ..........  Reptiles ..........  Birds ..........  Mammals ..........
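The diagnostic use described above (counting how often each response occurs and deriving a concept map from the frequency table) can be sketched as follows. The student responses here are invented for illustration, and the stepwise cut-off idea follows the technique common in the WAT literature cited in this chapter (e.g. Bahar, Johnstone and Sutcliffe, 1999), not a procedure the chapter spells out verbatim.

```python
from collections import Counter

# Hypothetical student responses per keyword (invented for illustration)
responses = {
    "Fishes":     ["gills", "scales", "water", "gills", "fins"],
    "Amphibians": ["water", "frog", "metamorphosis", "water"],
    "Birds":      ["feathers", "wings", "eggs", "wings"],
}

# Frequency table: how many times each response was given per keyword
freq = {kw: Counter(r) for kw, r in responses.items()}

def concept_map_edges(freq_table, cutoff):
    """Keep keyword-response links whose frequency reaches the cut-off;
    lowering the cut-off step by step exposes weaker associations."""
    return {kw: [resp for resp, n in counts.items() if n >= cutoff]
            for kw, counts in freq_table.items()}

print(concept_map_edges(freq, cutoff=2))
# -> {'Fishes': ['gills'], 'Amphibians': ['water'], 'Birds': ['wings']}
```

At a cut-off of 2 only the strongest associations remain; repeating the call with cut-off 1 would add every response, sketching the layered concept map the text describes.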
Literature Samples on the Use of Word Association Tests in Biology Education

As biology topics are taught at different learning levels, such as primary school, middle school, high school and university, and in different classes, it was ensured that the research samples reflect this. In a study by Bahar, Johnstone and Sutcliffe (1999), the aim was to map university first-grade biology students' cognitive structures on genetics through word association tests (WAT). In a study by Bahar and Özatlı (2003), the aim was to study high school first-grade students' cognitive structure on the principal components of living beings
before and after the learning process, to determine post-learning cognitive changes, and to find out their pre-existing misconceptions through WAT.

In a study by Hovardas and Korfiatis (2006), the aim was to analyse the cognitive change in science education before and after a university ecology course through WAT.

In a study by Kostova and Radoynovska (2008), the aim was to find out the scientific cognitive structures of teachers and students through WAT.

In a study by Yalvaç (2008), the aim was to determine whether an education based on the cooperative learning approach had any influence on teacher candidates' relations among concepts regarding the environment through WAT.

In a study by Cardak (2009), the aim was to determine university science students' cognitive structure regarding the energy relations of organisms and energy flow through WAT.

A study carried out by Özatlı and Bahar (2010) aimed to identify high school students' cognitive structures regarding the excretory system through WAT, the structured grid and V diagrams.

In a study by Ercan, Taşdere and Ercan (2010), the aim was to analyse the effectiveness of WAT in uncovering students' cognitive structure regarding the unit "Solar System and Beyond: Space Puzzle", determining their cognitive changes and revealing misconceptions.

In a study by Kurt, Ekici, Aktaş and Aksu (2013), the aim was to investigate student biology teachers' cognitive structures related to "diffusion" through WAT and the drawing-writing technique.

In a study by Polat (2013), the aim was to determine 9th-grade students' cognitive structures on the nature of science after teaching, and the retention of the information in their cognitive structures, through WAT.

In a study by Taşdere, Özsevgeç and Türkmen (2014), the aim was to determine science and technology teacher candidates' cognitive structures on the nature of science by using WAT.
In a study by Özata Yücel and Özkan (2015), the aim was to determine secondary education students' cognitive structures and their misconceptions on ecological concepts through WAT.
In a study by Yüce and Önel (2015), the aim was to determine science teacher candidates' levels of cognitive association for biodiversity through WAT.

In a study by Tokcan and Yiter (2017), the aim was to determine 5th-grade students' cognitive structures on natural disasters through WAT.

In a study by Kalaycı (2017), the aim was to analyse science teacher candidates' cognitive structures on prokaryotes and eukaryotes by using WAT and the drawing-writing technique.

In a study by Derman and Yaran (2017), the aim was to determine high school students' cognitive structures on the concept of the "water cycle" by using WAT and the drawing-writing technique.

In a study by Yener et al. (2017), the aim was to find out science teacher candidates' cognitive structures on basic concepts related to astronomy, and their pre-existing misconceptions, through WAT.

In a study by Özaslan and Çetin (2018), the aim was to analyse 9th-grade biology students' cognitive structures related to the principal components of living beings, and the relations between the concepts that form these components, through WAT.

The literature shows that word association tests have also been used in other fields as an alternative assessment and evaluation technique (Kempa and Nicholls, 1983; Maskill and Cachapuz, 1989; Cardellini and Bahar, 2000; Bahar and Kılıç, 2001; Çiftçi, 2009; Nakiboğlu, 2008; Işıklı, Taşdere and Göz, 2011; Şimşek, 2013; Keskin and Örgün, 2015; Kaya and Taşdere, 2016; Önal, 2017; Kurtaslan, 2018).

Conclusion and Evaluation

It is vital for societies to adapt to the rapid developments of recent years in almost all aspects of science and technology, and to put these developments to good use for their future (Tan and Temiz, 2003). The development of societies is possible through guiding the education system in line with these developments (Kılıç, 2004).
The natural sciences, which bring together facts about the world, require logical and critical thinking; an effective science education therefore plays a significant role in raising individuals who meet the needs of today's information societies (Atıcı, Keskin Samancı and Özel, 2007). The education of
biology, one of the core fields of the natural sciences, is one of the most important tools for preparing people for life and ensuring that they can make sense of the events of their daily lives (Özay Köse and Gül, 2016). Biology education is becoming increasingly important, as knowledge obtained especially in biology, together with new technological developments, directly affects human lives (Altunoğlu and Atav, 2005).

There have been changes in education and training practices in order to raise the kind of person required in today's age, and teacher-centred teaching practices have begun to be replaced by student-centred practices (Nartgün, 2006, p. 413). Assessment and evaluation, a core component of the education and training process carried out in order to determine the quality of education and to eliminate deficiencies (Bektaş and Akdeniz Kudubeş, 2014), have been influenced by these changes, and new assessment and evaluation techniques have come into use (Nartgün, 2006, p. 413). Through alternative assessment and evaluation techniques, students' thoughts and ideas, the research they carry out, and the difficulties they face in this process can be reflected, and information can be collected about what students can do with their knowledge (Mamur, 2010). Word association tests are one of these alternative assessment and evaluation techniques, used to demonstrate whether, or to what degree, the intended goals of biology education have been achieved. Research on the use of word association tests in biology education has focused on determining individuals' cognitive structure on a given biology topic and the relations among concepts, uncovering cognitive changes, revealing misconceptions, and establishing the memorability of the information in the cognitive structure (Eryılmaz Toksoy & Kaya, 2017).
In this respect, this assessment and evaluation technique, which makes it possible to identify existing deficiencies and to eliminate them, is considered to play a significant role in realizing meaningful conceptual learning in biology, one of the sciences through which we understand life, and in facilitating the transfer of what is learned to daily life. Thanks to its flexible structure, the word association test can be used in various fields beyond the natural sciences and can be applied individually or collectively. Word association tests, which can also be used as a diagnostic tool, can serve to determine the effectiveness of learning (Ercan et al., 2010).
References

Adnan, Y. A. (2015). Ortaöğretim 12. sınıf biyoloji ders kitabında kullanılan analojiler üzerine bir araştırma [A study on analogies in secondary 12th grade biology textbook]. Master's Thesis, Necmettin Erbakan Üniversitesi, Konya.

Altunoğlu, B. D., & Atav, E. (2005). Daha etkili bir biyoloji öğretimi için öğretmen beklentileri [Teacher expectations for a more efficient biology instruction]. Hacettepe Üniversitesi Eğitim Fakültesi Dergisi, 28, 19-28.

Atıcı, T., Keskin Samancı, N., & Özel, Ç. A. (2007). İlköğretim fen bilgisi ders kitaplarının biyoloji konuları yönünden eleştirel olarak incelenmesi ve öğretmen görüşleri [Critical analysis of primary education science textbooks in terms of the biology subjects and teachers' opinions]. Türk Eğitim Bilimleri Dergisi, 5(1), 115-131.

Bahar, M., Johnstone, A. H., & Sutcliffe, R. G. (1999). Investigation of students' cognitive structure in elementary genetics through word association tests. Journal of Biological Education, 33(3), 134-141.

Bahar, M., & Kılıç, F. (2001). Kelime iletişim testi yöntemi ile Atatürk ilkeleri arasındaki kavramsal bağların araştırılması. X. Eğitim Bilimleri Kongresi, Abant İzzet Baysal Üniversitesi, Bolu.

Bahar, M., & Özatlı, N. S. (2003). Kelime iletişim test yöntemi ile lise 1. sınıf öğrencilerinin canlıların temel bileşenleri konusundaki bilişsel yapılarının araştırılması. Balıkesir Üniversitesi Fen Bilimleri Enstitüsü Dergisi, 5(2), 75-85.

Bahar, M., Nartgün, Z., Durmuş, S., & Bıçak, B. (2012). Geleneksel-tamamlayıcı ölçme ve değerlendirme teknikleri öğretmen el kitabı (5. bs). Ankara: Pegem Akademi.

Bektaş, M., & Akdeniz Kudubeş, A. (2014). Bir ölçme ve değerlendirme aracı olarak: yazılı sınavlar [As a measurement and evaluation tool: written exams]. Dokuz Eylül Üniversitesi Hemşirelik Fakültesi Elektronik Dergisi, 7(4), 330-336.

Cardak, O. (2009). The determination of the knowledge level of science students on energy flow through a word association test.
Energy Education Science and Technology Part B: Social and Educational Studies, 1(3), 139-155.
Cardellini, L., & Bahar, M. (2000). Monitoring the learning of chemistry through word association tests. Australian Chemistry Resource Book, 19, 59-69.

Çiftçi, S. (2009). Kelime çağrışımlarının cinsiyet değişkenine göre gösterdiği temel nitelikler üzerine bir deneme [An essay about basic qualities of word associations that points out gender variable]. Turkish Studies, 4(3), 633-654.

Derman, A., & Yaran, M. (2017). Lise öğrencilerinin su döngüsü konusuyla ilgili bilgi yapıları [High school students' knowledge structure related to water cycle topic]. Mustafa Kemal Üniversitesi Sosyal Bilimler Enstitüsü Dergisi, 14(39), 255-274.

Enginar, İ., Saka, A., & Sesli, E. (2002). Lise 2 öğrencilerinin biyoloji derslerinde kazandıkları bilgileri güncel olaylarla ilişkilendirebilme düzeyleri. V. Ulusal Fen Bilimleri ve Matematik Eğitimi Kongresi, Ortadoğu Teknik Üniversitesi, Ankara.

Ercan, F., Taşdere, A., & Ercan, N. (2010). Kelime ilişkilendirme testi aracılığıyla bilişsel yapının ve kavramsal değişimin gözlenmesi [Observation of cognitive structure and conceptual changes through word association tests]. Türk Fen Eğitimi Dergisi, 7(2), 136-154.

Eryılmaz Toksoy, S., & Kaya, Ö. (2017). Fizik öğretmenlerinin kavram yanılgıları, kavram yanılgılarının tespiti ve giderilmesine yönelik düşünceleri. 3rd Ulusal Fizik Eğitimi Kongresi (UFEK-III), 14-16 September, Ankara.

Gökçen, B. B. (2012). Kavram haritalarının genel biyoloji dersine yönelik tutum ve akademik başarı üzerine etkileri [Concept maps for the general biology course on the effects of academic achievement and attitude]. Master's Thesis, Çanakkale Onsekiz Mart Üniversitesi, Çanakkale.

Gökmenoğlu, R. (2011). Lise 9. sınıf öğrencilerinde inorganik maddelerle ilgili karşılaşılan kavram yanılgılarının araştırılması [The investigation of misconceptions about inorganic substances of 9th level secondary school students]. Master's Thesis, Selçuk Üniversitesi, Konya.

Hovardas, T., & Korfiatis, K. J. (2006).
Word associations as a tool for assessing conceptual change in science education. Learning and Instruction, 16, 416-432.
Işıklı, M., Taşdere, A., & Göz, N. L. (2011). Kelime ilişkilendirme testi aracılığıyla öğretmen adaylarının Atatürk ilkelerine yönelik bilişsel yapılarının incelenmesi [Investigation of teacher candidates' cognitive structure about principles of Atatürk through word association test]. Uşak Üniversitesi Sosyal Bilimler Dergisi, 4(1), 50-72.

Kalaycı, S. (2017). Fen bilgisi öğretmen adaylarının "prokaryot" ve "ökaryot" kavramları hakkındaki bilişsel yapılarının belirlenmesi [Determining preservice science teachers' cognitive structure on the concepts of "prokaryote" and "eukaryote"]. E-Uluslararası Eğitim Araştırmaları Dergisi, 8(3), 46-64.

Karahan, U. (2007). Alternatif ölçme ve değerlendirme metodlarından grid, tanılayıcı dallanmış ağaç ve kavram haritalarının biyoloji öğretiminde uygulanması [Application of alternative measurement and evaluation methods that are grid, diagnostic tree and concept maps within biology education]. Master's Thesis, Gazi Üniversitesi, Ankara.

Kaya, M. F., & Taşdere, A. (2016). İlkokul Türkçe eğitimi için alternatif bir ölçme değerlendirme tekniği: Kelime İlişkilendirme Testi (KİT) [An alternative measurement and assessment method for elementary Turkish education: Word Association Test (WAT)]. Turkish Studies, 11(9), 803-820.

Kempa, R. F., & Nicholls, C. E. (1983). Problem solving ability and cognitive structure – an exploratory investigation. European Journal of Science Education, 5, 171-184.

Keskin, E., & Örgün, E. (2015). Kelime ilişkilendirme testi aracılığıyla sürdürülebilir turizm olgusunun kavramsal analizi: Ürgüp örneği [Conceptual analysis of sustainable tourism phenomenon by means of word association test: Ürgüp sample]. Journal of Tourism and Gastronomy Studies, 3(1), 30-40.

Kılıç, D. (2004). Biyoloji eğitiminde kavram haritalarının öğrenme başarısına ve kalıcılığına etkisi [The effect of the concept maps on achievement and retention of learning in biology education]. Master's Thesis, Hacettepe Üniversitesi, Ankara.

Kıncal, R.
Y., Ergül, R., & Timur, S. (2007). Fen bilgisi öğretiminde işbirlikli öğrenme yönteminin öğrenci başarısına etkisi [Effect of cooperative
learning method to student achievement in science teaching]. Hacettepe Üniversitesi Eğitim Fakültesi Dergisi, 32, 156-163.

Korkmaz, H. (2004). Fen ve teknoloji eğitiminde alternatif değerlendirme yaklaşımları. Ankara: Yeryüzü Yayınevi.

Kostova, Z., & Radoynovska, B. (2008). Word association test for studying conceptual structures of teachers and students. Bulgarian Journal of Science and Education Policy, 2(2), 209-231.

Kurt, H., Ekici, G., Aktaş, M., & Aksu, Ö. (2013). Determining biology student teachers' cognitive structure on the concept of "diffusion" through the free word-association test and the drawing-writing technique. International Education Studies, 6(9), 187-206.

Kurtaslan, Z. (2018). İlkokul öğrencilerinin kelime ilişkilendirme testi ile "nota" kavramı konusundaki bilişsel yapılarının belirlenmesi [Identifying primary school students' cognitive structure oriented to note concept with word association test]. Fine Arts (NWSAFA), 13(4), 77-90.

Mamur, N. (2010). Görsel sanatlar eğitiminde ölçme ve değerlendirme [Measurement and assessment in visual art education]. Pamukkale Üniversitesi Eğitim Fakültesi Dergisi, 28, 175-188.

Maskill, R., & Cachapuz, A. F. C. (1989). Learning about the chemistry topic of equilibrium: The use of word association tests to detect developing conceptualizations. International Journal of Science Education, 11, 57-69.

Nakiboğlu, C. (2008). Using word associations for assessing non major science students' knowledge structure before and after general chemistry instruction: the case of atomic structure. Chemistry Education Research and Practice, 9, 309-322.

Nalbantoğlu Eyitmiş, A. (2007). Ortaöğretim öğretmenlerinin ölçme değerlendirme tekniklerini etkin kullanabilme yeterliliklerinin araştırılması (Kahramanmaraş örneği) [Searching for the proficiency of the teachers at secondary schools in using the techniques of measurement and evaluation in an active way (sample of Kahramanmaraş)].
Master's Thesis, Kahramanmaraş Sütçü İmam Üniversitesi, Kahramanmaraş.

Nartgün, Z. (2006). Fen ve teknoloji öğretiminde ölçme ve değerlendirme. In M. Bahar (Ed.), Fen ve teknoloji öğretimi (pp. 355-415). Ankara: Pegem A.
Önal, N. (2017). Bilişim teknolojileri öğretmen adaylarının bölümlerine yönelik bilişsel algılarının KİT aracılığıyla incelenmesi [Investigation of information technologies pre-service teachers' cognitive perceptions towards their departments with WAT]. Ahi Evran Üniversitesi Kırşehir Eğitim Fakültesi Dergisi (KEFAD), 18(2), 255-272.

Özaslan, M., & Çetin, G. (2018). Biology students' cognitive structures about basic components of living organisms. Science Education International, 29(2), 62-74.

Özata Yücel, E., & Özkan, M. (2015). Determination of secondary school students' cognitive structure, and misconception in ecological concepts through word association test. Educational Research and Reviews, 10(5), 660-674.

Özatlı, N. S., & Bahar, M. (2010). Öğrencilerin boşaltım sistemi konusundaki bilişsel yapılarının yeni teknikler ile ortaya konması [Revealing students' cognitive structures regarding excretory system by new techniques]. Abant İzzet Baysal Üniversitesi Dergisi, 10(2), 9-26.

Özay Köse, E., & Gül, Ş. (2016). Sınıf öğretmeni adaylarının biyoloji bilgilerini günlük yaşamla ilişkilendirme düzeyleri. Amasya Üniversitesi Eğitim Fakültesi Dergisi, 5(1), 84-103.

Özmen, H. (2008). Öğrenme kuramları ve fen bilimleri öğretimindeki uygulamaları. In S. Çepni (Ed.), Kuramdan uygulamaya fen ve teknoloji öğretimi (pp. 33-98). Ankara: Pegem Akademi.

Özsevgeç, T. (2008). Eğitimde ölçme ve değerlendirme. In Ö. Taşkın (Ed.), Fen ve teknoloji öğretiminde yeni yaklaşımlar (pp. 365-421). Ankara: Pegem Akademi.

Polat, G. (2013). Determination of the cognitive structures of 9th year secondary school students through word association test techniques. Necatibey Eğitim Fakültesi Elektronik Fen ve Matematik Eğitimi Dergisi (EFMED), 7(1), 97-120.

Şimşek, M. (2013).
Sosyal Bilgiler öğretmen adaylarının Coğrafi Bilgi Sistemleri (CBS) konusundaki bilişsel yapılarının ve alternatif kavramlarının kelime ilişkilendirmesi testi ile belirlenmesi [Definition of cognitive structure for the Geographical Information Systems (GIS) and alternative issues of
candidates of social studies teachers via a word association test]. Researcher: Social Science Studies, 1(1), 65-75.

Tan, M., & Temiz, B. K. (2003). Fen öğretiminde bilimsel süreç becerilerinin yeri ve önemi [The importance and role of the science process skills in science teaching]. Pamukkale Üniversitesi Eğitim Fakültesi Dergisi, 1(13), 89-101.

Taşdere, A., Özsevgeç, T., & Türkmen, L. (2014). Bilimin doğasına yönelik tamamlayıcı bir ölçme aracı: kelime ilişkilendirme testi. Fen Eğitimi ve Araştırmaları Derneği Fen Bilimleri Öğretimi Dergisi, 2(2), 129-144.

Tekin, H. (2010). Eğitimde ölçme ve değerlendirme. Ankara: Yargı Yayınevi.

Tokcan, H., & Yiter, E. (2017). 5. sınıf öğrencilerinin doğal afetlere ilişkin bilişsel yapılarının kelime ilişkilendirme testi (KİT) aracılığıyla incelenmesi [Examining the 5th grade students' perceptions about natural disaster through the word association tests]. Ahi Evran Üniversitesi Kırşehir Eğitim Fakültesi Dergisi (KEFAD), 18(1), 115-129.

Yalvaç, H. G. (2008). İşbirlikli öğrenme yaklaşımının öğretmen adaylarının çevreye ilişkin zihinsel yapılarına etkisi [Effect of cooperative learning approach on intellectual structure of teacher candidates regarding environment]. Master's Thesis, Abant İzzet Baysal Üniversitesi, Bolu.

Yener, D., Aksüt, P., Somuncu Demir, N., Aydın, F., Fidan, H., Subaşı, Ö., & Aygün, M. (2017). Öğretmen adaylarının astronomi konusundaki kavramlara yönelik bilişsel yapılarının incelenmesi. Amasya Üniversitesi Eğitim Fakültesi Dergisi, 6(2), 531-565.

Yiğit, N., Devecioğlu, Y., & Ayvacı, H. Ş. (2002). İlköğretim fen bilgisi öğrencilerinin fen kavramlarını günlük yaşamdaki olgu ve olaylarla ilişkilendirme düzeyleri. V. Ulusal Fen Bilimleri ve Matematik Eğitimi Kongresi, Orta Doğu Teknik Üniversitesi, Ankara.

Yüce, Z., & Önel, A. (2015).
Fen bilgisi öğretmen adaylarının biyoçeşitliliğe ilişkin kavramsal ilişkilendirme düzeyleri [The cognitive binding levels of the science teacher candidates in relation to biodiversity]. Abant İzzet Baysal Üniversitesi Eğitim Fakültesi Dergisi, 15(1), 326-341.
Chapter 19
Assessment and Evaluation Process in Art Education

Hüseyin UYSAL
Introduction

Art education develops in individuals many aesthetically oriented perceptive, interpretive and analytical skills. As with all educational processes, this education concludes with an assessment and evaluation stage. The assessment and evaluation process in art education involves certain difficulties, and this chapter provides some useful information about it.

Art is one of the strongest modes of expression, through which an individual can convey and interpret her/his inner states, expectations and needs with an aesthetic attitude. Different materials such as color, line, sound, motion, word and object can be used in the expression process. The diversity of the materials used has led to the development of different branches of art. Although the materials used in these branches differ from one another, the course followed in the application process and the ultimate result amount to the same thing, which can be defined as "presenting an aesthetic product".

For an artistic application to be qualitatively mature, the practitioner must have received a disciplined art education or have acquired certain knowledge, skills and perceptions concerning the relevant branch of art. Art education comprises all efforts made to provide competence in material, technique, historical development, philosophy and style concerning the application. It is a comprehensive process that draws on numerous areas such as art history, art sociology, aesthetics, art philosophy, art criticism, art theories and art pedagogy (Unver, 2002). Every branch of art has its own distinctive art education, named, for instance, painting education, music education or dance education, because the aesthetic products presented as a result of the materials (such as color, sound, word, motion) and applications used by each of them are
materially different, the design and application elements used in the art educational process also differ from each other.

Painting is a branch of art that enables the individual to seek beauty through lines and colors on a two-dimensional surface, to create a composition, and to express herself/himself through that composition. Painting education, in turn, is the education provided to the individual so that she/he can practise the art of painting with quality. In the narrow sense, San (2010) suggests that this education consists of the lessons provided in schools in this area. The assessment and evaluation process is conducted in order to understand whether the target behaviors and perceptions have taken root in the individual by the end of these lessons.

Although assessment and evaluation complement each other, they are in fact different concepts. Assessment is the act of observing a quality and expressing the results with numbers or symbols. Assessment gives us information only about the quantity of things; it is the task of evaluation to determine whether that information is sufficient and fit for purpose. Evaluation is the process of comparing assessment results with a criterion and making a decision. As can be seen, there are three basic elements within the existential integrity of evaluation: the assessment result, the criterion and the decision. The assessment result is the transformation of a quality observed by the implementer, through the relevant assessment process, into numbers and symbols. Criterion and decision are two concepts that support one another.
While the criterion is the knowledge and skills to be taken as a reference in the evaluation process, the decision is a judgment made according to how well the assessment result meets the criterion (Turgut & Baykul, 2010; Yildirim, 1973; Oncu, 1999).

The major goal of the assessment and evaluation process applied in education is to determine the levels of knowledge, skills and perceptions acquired by students as a result of learning. The most important problem of this process, however, is that it does not pay enough regard to students' different talents and improvable potentials (Cepni, 2006). Assessment and evaluation are considered difficult in art education. Many art educators and domain experts study assessment and evaluation applications, which are known to be difficult in art education; yet the individuality of artistic application and evaluation makes the process even more difficult, because, according to Kirisoglu (2005), an evaluator's artistic
Hüseyin UYSAL
personality may directly drag her/him into subjectivity in evaluation. The evaluator may usually reflect her/his own aesthetic taste on the result of assessment and evaluation as expressed with words and gestures. In art education, the assessment and evaluation processes are grounded on two basic phenomena: academic and artistic assessment and evaluation. However, it should be noted that both are difficult processes.

Academic Assessment and Evaluation in Art Education

In academic assessment and evaluation, target behaviors and perceptions are assessed and evaluated as a result of the educational process. The process prioritizes the individual's skills in comprehending and applying the relevant knowledge, skills and perceptions qualitatively. It is based on assessing and evaluating with grades. Determining criteria is diverse and interpretation is difficult. According to Buyurgan & Buyurgan (2012), evaluation with grades should be handled in five different categories: creative elements, technical skills, completion of work in time, application of the information provided, and opinion. Kirisoglu (2005) suggests that technical skill, aesthetic quality in the product and expression should be handled under two criteria: elements and creative elements in the product. In addition to these criteria, the design principles and elements forming the plastic infrastructure of an artistic product, such as point, line, color, stain, style, direction, tone, tissue, emptiness-fullness, rhythm, balance, emphasis, contrast, unity, variety, repetition and harmony, as well as their usage qualities, can also be given as examples. According to Eisner (1997) and Kirisoglu (2005), who supports his opinions, the assessment and evaluation process in art education is handled in three different approaches. 
They are: evaluating the student through comparing her/him with herself/himself, evaluating the student through comparing her/him with her/his classmates, and evaluating the student according to a detected criterion. Evaluating the student through comparing her/him with herself/himself is an evaluation approach performed by considering all the works of the student from the beginning of the educational process until the end. In this approach, the student's knowledge and skill level in the first work is examined along with her/his knowledge and skill levels in all subsequent works in chronological order. If the student shows an improvement, this is reflected in the final evaluation. Thus, the student who realizes her/his own success will also develop confidence in art. This approach is individual and examines the student within herself/himself.
Assessment and Evaluation Process in Art Education
Therefore, it is necessary to develop a separate evaluation form for each student within this approach. Evaluating the student through comparing her/him with her/his classmates is an evaluation approach performed by comparing the artistic application product presented by the student with the artistic products of her/his classmates. Regardless of the fact that students have different individual characteristics, in other words different interests, talents and intelligences, the teacher separates their products into three categories (the best, moderate and the weakest) and then evaluates them accordingly. Each student may show a parallelism with the others in their artistic development stages according to age group. According to Lowenfeld, children's stages of artistic development are: the scribble stage (2-4 years old), the preschematic stage (4-7 years old), the schematic stage (7-9 years old), the preadolescence stage (9-11 years old), the reasoning stage (11-13 years old), and adolescence crisis and adulthood (13 years and over). However, it is not considered appropriate to generalize this condition and evaluate students through comparing them with each other in art education, because every individual is different. In addition, artistic perception and application levels may also differ (Buyurgan & Buyurgan, 2012; Kirisoglu, 2005). Evaluating the student according to a detected criterion is an evaluation approach based on the student's predetermined target behaviors and perceptions that are already available in the lesson content, as well as her/his acquisitions. Mainly technical skills and doctrinal behaviors are assessed and evaluated. For example, if, while forming the dark values of colors in an application, a color is darkened by mixing it with its transverse (complementary) color and this is the target behavior of the lesson, then the results should be evaluated according to that criterion. Figure 19.1 can be analyzed as an example.
Figure 19.1 Examples of Two Different Paintings Concerning the Use of the Color Blue
As is seen in Figure 19.1, two different paintings are presented side by side. The dark stains of the blue linen in the painting on the left were obtained by mixing with orange, while those of the blue linen in the painting on the right were obtained by mixing with black. Because the transverse (complementary) of blue is orange, the painting on the left reached a more positive result according to the relevant criterion. Although criterion-based evaluation may seem easy to apply, in this approach there is a possibility that expressive and creative behaviors will not be detected within the frame of the predetermined criteria.

Artistic Assessment and Evaluation in Art Education

While grades in the academic assessment and evaluation process are given according to a standard formed, however tentatively, in the classroom, no such standard actually exists in the field of art. Every student studying art, and their works, should be assessed and evaluated within themselves in an artistic sense, because artistic evaluations consider a number of factors such as genre, difference, the new discourse brought to art, technique, dominance, being in one's own way, the ability to express emotions, and style. The student's individual efforts, development stage and psychological condition also affect the evaluation (Erbay, 2013). Artistic assessment and evaluation is a high-level process performed with individuals who have completed a certain cognitive, affective and psychomotor development. This process is more difficult than academic assessment and evaluation. As the individuals who produce, assess and evaluate the work all present their entity with their subjective self, there might be difficulties due to the absence of standard assessment and evaluation criteria. In the process, art history, art criticism and art philosophy are a great reference for the evaluator. 
It is required to fully comprehend both the stylistic concerns in the individual's work and its content, in other words the manifesto that she/he intends to express, and to attach importance to the parallelism between the discourses of these two phenomena. Otherwise, it is possible that an unqualified or random aesthetic product will be accepted positively, or that a high-level artistic application will be ignored before it is appreciated. There are many similar examples in art history. For example, there is a direct parallelism between the manifestos of numerous art movements formed by masters, such as Dadaism or Futurism, and their plastic applications. They presented unique works of art on the basis of their own arguments. At first they provoked strong reactions, but the quality of their works was later appreciated. An art trainer should be able to comprehend the significant bond
between the implementer's purpose and the outcome very well, because mistakes made in the assessment and evaluation process may either offend the individual or cause her/him to quit artistic applications for good. For example, Cezanne's works were not immediately accepted in the exhibition halls of his era; they could not be understood in terms of the relationship of style and content and were widely criticized. Today, however, he is mentioned as the "Father of Modern Art" due to the maturity of his works and his inspiration to his successors (Gompertz, 2018). As the artistic assessment and evaluation process is based on style and content, it is a sensitive process, and the impact of style and content on one another has been an important problem from ancient times until today (Fischer, 2015). The objective qualities of the concept of beauty, which is the target in artistic assessment and evaluation, are also handled by Tunali (2008) from the aspect of style and content. He handles the internal-contextual qualities of beauty from three different aspects, namely "convenience for id, type or style", "competence" and "vitality and expression", and the external-stylistic qualities of beauty from three different complex aspects, namely "proportion and symmetry", "harmony" and "principle of unity and economy in majority". As is understood, in this process the relationship of style and content is of prime importance. Figure 19.2 can be analyzed as an example of the relationship of style and content.
Figure 19.2 Two Works from Uysal's "Tarantella I-Poisoning" Series

The two works in Figure 19.2 are oil paintings from Huseyin Uysal's "Tarantella I-Poisoning" series. The works are inspired by the tarantella dance
and music played at a vivid and quick tempo, which dates back to the 15th-17th centuries and used to be performed by people poisoned by spider bites in order to recover. Uysal shapes his works on the basis of the relationship between painting and music, in parallel with the "Tarantella I-Poisoning Violin Sonata" that he composed for violin.
Figure 19.3 First Phrase of Uysal’s “Tarantella I-Poisoning Violin Sonata”
Figure 19.3 includes the first phrase of Uysal's "Tarantella I-Poisoning Violin Sonata". It depicts the first injection of the poison into the body. This spiritual condition is shaped plastically in the colors and facial expressions of both paintings in Figure 19.2.
Figure 19.4 A Sequence from Uysal's "Tarantella I-Poisoning Violin Sonata"

Figure 19.4 shows a sequence from Uysal's "Tarantella I-Poisoning Violin Sonata". It depicts the rhythmic transformation caused by the poison in the body. Feeding also on the rhythmic pattern of the music, Uysal handles the color and line equivalents of that rhythmic pattern in the characters' hair in his paintings in Figure 19.2. The stylistic expressions in the paintings are reflections of the musical content. The paintings of Uysal given as examples here should definitely be evaluated based on the relationship of style and content, or else the aesthetic product will not be understood and there will probably be deficiencies in the evaluation process.
An Application Example of Assessment and Evaluation in Art Education

In this section, an application example of assessment and evaluation in art education is presented in detail. The application example was formed within the frame of three main themes, determined as "Aesthetic Object", "Process" and "Task".

Aesthetic Object

This theme includes all artistic products made by the student in the workshop/classroom throughout the relevant educational process. Each of these products is grounded in design principles and elements such as color, point, line, stain, style, direction, tone, tissue, emptiness-fullness, rhythm, balance, emphasis, contrast, unity, variety, repetition and harmony, and is revealed by the supporting relationship of technical skill and creativity.

Point, one of the design elements, is the smallest unit and design element in painting art. It may differ in size according to the qualitative features of the surface it is on. Points can create lines, stains and color areas when they come side by side. A point may not only have a meaning alone, but also create new meanings through the convergent relationship of multiple points (Odabasi, 1996).

Line is an imaginary border that is accepted to exist between two colors, stains or styles. Being a major design element of painting art, this border is applied to the surface via an object that leaves a mark. This element has existed in human life, on a large spectrum from primitive man to today's modern artists, as a means to imitate nature and shape what one feels (Oztuna, 2007).

Color is the effect made by light on our eyes after striking substances and being reflected by them. For colors to exist, light needs to exist. All substances, except transparent ones, have an individual color. Colors are categorized among themselves as primary, accent, bright, cold and transverse colors. 
Primary colors are yellow, red and blue; accent colors are green, orange and purple; bright colors are yellow, red and orange; cold colors are blue, purple and green; and the transverse (complementary) pairs are yellow-purple, red-green and blue-orange. New colors can be obtained by mixing colors (Buyurgan & Buyurgan, 2012).

Tone corresponds to the gradation of a color according to light in paintings. Every color perceived by the human eye through light has a certain tone, and these color tones provide variety and richness in a composition (Yolcu, 2009).

Tissue is a judgment made about the surface of a substance as a result of feeling and perceiving that surface. All organic and inorganic beings on earth have
a tissue. Smooth, rough, soft and solid surfaces can be given as examples of tissues. Tissue in painting is of prime importance for the quality of an application, and it may appear not only as the tissue of an objective reality, but also as unrealistic stains, lines and colors (Artut, 2007).

Stain is the state in which the middle, dark and light degrees of a color are given meaning. Every color has an equivalent stain value, and the stain value equivalents of colors may differ from each other. For example, the stain value of yellow is considered lighter than that of purple. In addition, different tones of the same color may correspond to different stain values. For an area to be read as a distinct stain value, the value should be distributed homogeneously within its own boundaries (Yilmaz, 2007).

Style is an appearance on a two-dimensional surface restricted by other design elements such as color, line or stain. Form and style are two concepts that may lead to ambiguity; the difference between them is that the meaning of form is built on a three-dimensional basis, whereas the meaning of style is built on a two-dimensional basis. Every form has an equivalent style in painting art (Gokaydin, 2002).

Emptiness and fullness are two concepts that complete one another. Fullness is an integrity formed by elements that come to the fore in a restricted area. Numerous adjoining elements may reduce each other's perceptive value and thus lead to the need for emptiness. Emptiness and fullness play an important role in qualitatively concretizing an argument that is intended to be expressed in a painting, because they may be encountered as a great part of a space and sometimes as an object (Deliduman Gence & Istifoglu Orhon, 2006).

Direction is the sum of the lines created, on the basis of an axis, by the forms of all phenomena perceived by the human eye. Direction perception begins after birth and walking, and develops with maturation. 
These lines are handled in three categories: horizontal, vertical and diagonal (Atalayer, 1994).

Rhythm, one of the basic design principles, is formed by using differences in such a way that they constitute a meaningful whole. Differences also enable us to perceive motion. Motion is the basic factor that constitutes rhythm in art, and there is no rhythm without motion (Say & Balci, 2002).

Balance occurs when homogeneous or different objects exist in harmony. In a painterly composition, there may be either a symmetrical or an asymmetrical balance (Buyurgan & Buyurgan, 2012).

Emphasis is the prioritization of one phenomenon to be expressed over another. Emphasis in painting art is based on the noticeability and dominance of certain pieces. This dominance
may be formed by contrast elements such as transverse colors, light, shade or horizontal-vertical relations (Tepecik & Toktas, 2014).

Contrast develops when entities establish a bond between each other via a contradiction. Black-white, cold-hot and big-little can be given as examples of contrast. The use of contrast values in paintings may lead to mess and incompatibility on one hand, yet transform into mobility and liveliness when used with quality on the other (Yolcu, 2009).

Unity is a meaningful whole constituted by all the elements and principles in a design, with a certain cognitive condition and an aesthetic attitude. The principle of unity is formed when all the pieces being used reach a mature whole (Ertok Atmaca, 2014).

Variety is a design principle including differences and contrasts. Its presence in a painting removes monotony from the composition. Variety can be constituted by all design elements (Buyurgan & Buyurgan, 2012).

Repetition is the repeating of entities perceived by the human senses in a similar way. Repetition in painting art creates not only visual pleasure, but also emphasis (Oztuna, 2007).

Harmony means adaptation. It denotes the integrity, in conformity, of two or more elements in painting art. Similarities, differences and contrasts in a composition constitute harmony.

The usage and application quality of these design principles and elements is an important criterion for the assessment and evaluation process. In addition, technical application skills (such as oil painting, crayon and watercolor) and creative elements are important as well. Creativity, together with technical skill and instrument usage skill, covers all the steps the student takes beyond the ordinary.

Process

According to Kirisoglu (2005), the process in evaluation is as important as the product (aesthetic object). Many behaviors that are ignored in assessments performed only on the basis of the product remain within scope in evaluation. 
In this context, evaluation can be defined as "approaching all behaviors validated via education, in the process and the outcome, with value judgment". According to San (2010), art education should be considered a multi-directional and complex "process". In this process, applied studies in the visual area and theoretical, art-oriented knowledge concerning the same area should be involved within a certain system and organization for a certain purpose. The competences and skills here are built by seeing, kneading and drawing in connection with objects and reality, and the relevant behaviors and consciousness-raising are directed accordingly. This process should be examined by the teacher very well.
Buyurgan & Buyurgan (2012) suggest that evaluation within the process occurs when the teacher walks among students and interacts with them during the course. During students' workshop/classroom applications, the teacher takes observation notes and determines the score equivalents of these notes. By interacting and communicating with students, the teacher determines through indirect inferences whether or not they have acquired the relevant knowledge, skills and perceptions, and notes down the score equivalents. The average of the notes taken down at the end of each course is written in the relevant section.

Task

The task theme includes academic activities performed by students, either theoretically or practically, outside the class. Tasks aim to make acquisitions more permanent in parallel with in-class acquisitions and to make the educational process more productive through preliminary preparation in line with the theme. It is necessary to check the tasks regularly and give feedback to students. Thus, students will have an opinion about their developmental process and discover their deficiencies. In addition, Turkoglu, Iflasoglu Saban and Karakus (2014) suggest that tasks can also be evaluated as assessment instruments in the educational process. However, it is not appropriate for the teacher to evaluate the tasks prepared by students with grades only in her/his own assessment and evaluation process. Students should be informed not only with grades, but also with verbal interpretation.

Application Process of Assessment and Evaluation

Considering that art education is based on an individual aesthetic attitude, there should be two different evaluators for a more positive and reliable assessment and evaluation process.

Table 19.1 Example Painting Evaluation Form
STUDENT | AESTHETIC OBJECT | PROCESS                                          | TASK        | TOTAL
        | A    B    C    D | Observation Notes | Communication and Interaction | a  b  c  d  |
1       |                  |                   |                               |             |
2       |                  |                   |                               |             |
3       |                  |                   |                               |             |
As is seen in Table 19.1, the evaluation form consists of three different themes: "Aesthetic Object", "Process" and "Task". The themes are evaluated individually for each student. Before starting the evaluation process, the evaluator fills in the sections related to "Process" and "Task" beforehand, according to the notes taken earlier. The score equivalent of general observations about the student is written in the "Observation Notes" section under the process theme, and the score equivalent of opinions about the student's knowledge and skill levels, formed by communicating and interacting with the student throughout the educational process, is written in the "Communication and Interaction" section. In the task section, on the other hand, the scores given by the evaluator at the end of each task check are written. The evaluation process then continues with the theme of "Aesthetic Object". "A, B, C and D" here represent imaginary aesthetic objects. Each work is presented to the evaluators in chronological order, graded with scales developed according to specific criteria, and the result is written in the relevant section. Themes are formed, criteria are determined and scores are given according to the distinctive values of each work. It is difficult to evaluate all works with one standard painting evaluation scale. For example, in the painting evaluation scale developed by Uysal (2016) to analyze the impact of different music genres on children's paintings, the theme of "Subject" includes the item "fiction preference suitable for the subject of the music" and the theme of "Visual Image" includes the item "use of musical images (such as musical notes, instruments, dance)". These themes and items can be used in a music-related work, but it might be inappropriate to use them in a work related to ratio-proportion. 
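The bookkeeping behind such a form can be sketched as follows. This is only an illustrative computation: the score values, the 40/30/30 weighting (one of the weightings mentioned in this chapter) and the simple mean of the two evaluators' totals are assumptions chosen for demonstration, not a prescribed method.

```python
# Illustrative sketch of computing a "Total" score from the evaluation
# form in Table 19.1. All scores and weights below are made-up examples.

def theme_average(scores):
    """Average the scores recorded under one theme (e.g. works A-D)."""
    return sum(scores) / len(scores)

def student_total(themes, weights):
    """Combine per-theme averages into one weighted total (0-100 scale)."""
    return sum(theme_average(scores) * weights[name]
               for name, scores in themes.items())

# One record per evaluator for a single student (scores out of 100).
evaluator_1 = {
    "aesthetic_object": [80, 85, 75, 90],  # works A, B, C, D
    "process": [85, 80],                   # observation notes; communication
    "task": [70, 75, 80, 85],              # tasks a, b, c, d
}
evaluator_2 = {
    "aesthetic_object": [78, 83, 77, 88],
    "process": [80, 85],
    "task": [75, 75, 85, 80],
}

# Assumed weighting: 40% aesthetic object, 30% process, 30% task.
weights = {"aesthetic_object": 0.4, "process": 0.3, "task": 0.3}

# The chapter recommends agreeing on a common score between evaluators;
# a simple mean of the two totals stands in for that discussion here.
total = (student_total(evaluator_1, weights) +
         student_total(evaluator_2, weights)) / 2
print(round(total, 1))  # → 81.0
```

In practice the common score would be negotiated critically rather than averaged mechanically; the sketch only makes the arithmetic of the form explicit.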
Every work is graded according to its own criteria and the result is written in the relevant section of the evaluation form. Following the assessment and evaluation process, every work of the student should be checked over and over, photographed and documented. Then the average score of each theme is taken, compared with the other evaluator's, and the common score is written in the "Total" section with a critical attitude. Even if there is no other evaluator, it is recommended to obtain the opinion of another domain expert. As art education involves special ability and an artistic application area, mistakes made in the assessment and evaluation process may determine the individual's attitudes and behaviors toward the area. The percentage of each theme in the evaluation may vary according to the teacher, the student, the conditions, and the quality of the aesthetic object, process and tasks. For example, it is possible that the aesthetic object will be 40%, process 30% and
the task 30% in one application, and the aesthetic object 70%, process 20% and the task 10% in another. It should not be forgotten that both the themes and the percentages may vary according to the target behavior, perception and the quality of the course application.

Conclusion

In this section, the assessment and evaluation process in art education has been discussed and useful information about the area has been shared. Assessment and evaluation is a difficult process in art education. Thus, the process might be shaped according to different approaches depending on the course objective and the basic characteristics of the age group. In basic terms, however, assessment and evaluation in art education are categorized as "Academic Assessment and Evaluation" and "Artistic Assessment and Evaluation". It is considered useful to use these two different approaches correctly. In this section, an application example aiming to facilitate the assessment and evaluation process for art trainers has been presented. This application example consists of three different themes: "Aesthetic Object", "Process" and "Task". Each of these themes supports the others so that the process arrives at a correct conclusion. The more correctly the assessment and evaluation process concludes, the more correct the steps toward art education will be for both art trainer and student, because mistakes in assessment and evaluation may alienate the individual from art and art education.
REFERENCES
Artut, K. (2007). Theories and Methods of Art Education (5th ed.). Ankara: Ani Publishing.
Atalayer, F. (1994). Basic Art Elements. Eskisehir: Anadolu Universitesi Publications.
Buyurgan, S. & Buyurgan, U. (2012). Art Education and Teaching (3rd ed.). Ankara: Pegem Akademi.
Cepni, S. (2006). Planning and Evaluation in Education. Ahmet Doganay, Emin Karip (Eds.). Ankara: Pegem Akademik Publications.
Deliduman Gence, C. & Istifoglu Orhon, B. (2006). Basic Art Education. Ankara: Gerhun Publishing.
Erbay, M. (2013). About Art Education. Istanbul: Beta Printing Publishing and Distribution.
Ertok Atmaca, A. (2014). Basic Design. Ankara: Nobel Akademik Publishing.
Fischer, E. (2015). Necessity of Art (4th ed.). Istanbul: Sozcukler Publications.
Gokaydin, N. (2002). Teaching System and Knowledge Scope of Art Education, Basic Art Education. Ankara: The Ministry of National Education Publications.
Gompertz, W. (2018). Think Like an Artist. Istanbul: Yapi Kredi Publications.
Kirisoglu, O. T. (2005). Training, Learning and Creating in Art. Ankara: Pegem Publishing.
Odabasi, H. (1996). Basic Design in Diagram. Istanbul: Yorum Sanat Publications.
Oncu, H. (1999). Assessment and Evaluation in Education. Ankara: Yaysan INC.
Oztuna, H. Y. (2007). Basic Design in Visual Communication. Istanbul: Yorum Sanat Publications.
San, I. (2010). Theories of Art Education (3rd ed.). Istanbul: Utopya.
Say, N. & Balci, Y. B. (2002). Basic Art Education. Istanbul: Ya-Pa Publishing.
Tepecik, A. & Toktas, P. (2014). Basic Art Education in Faculties of Fine Arts. Ankara: Gece Library.
Tunali, I. (2008). Aesthetic (11th ed.). Istanbul: Remzi Bookstore.
Turgut, M. F. & Baykul, Y. (2010). Assessment and Evaluation in Education. Ankara: Pegem Akademi.
Turkoglu, A., Iflasoglu Saban, A. & Karakus, M. (2014). Task in The Process of Education (2nd ed.). Ankara: Anı Publishing.
Unver, E. (2002). Art Education. Ankara: Nobel Publications.
Uysal, H. (2016). The Impact of Different Music Genres on Children's Paintings (Doctoral thesis). Ankara: Gazi University Institute of Education Sciences.
Yildirim, C. (1973). Assessment and Evaluation in Education. Istanbul: Milli Egitim Printing House.
Yilmaz, M. (2007). Applications in Visual Arts Education. Ankara: Gunduz Egitim Publishing.
Bilge SULAK AKYÜZ
Chapter 20

Assessing Student Learning Outcomes in Counselling

Bilge SULAK AKYÜZ
Introduction

In modern life, an employer seeks an employee who knows how to apply knowledge and contributes to the whole system. Thus, knowing only the content is not adequate in today's competitive world. Spady (1994) asserted that the information age requires not only a firm knowledge and skill set but also continuous learning and performance. Employers determine the necessary skills for certain occupations and look for individuals who have them. As a result, the way formal education trains individuals is changing accordingly. It is no longer based on traditional education that focuses on lectures, exams, grades, and the idea of failing or passing. Education has transformed in such a way that students need to demonstrate certain skills necessary for their future occupations (Goodman, Henderson, & Stenzel, 2006). Consequently, grades are not the definitive assessment criterion for learning; instead, a variety of methods should be employed to assess the competency levels of students. Universities want their programs to be more marketable, and their graduates to be unique in terms of their abilities and their adaptability to work under every condition (Goodman, Henderson, & Stenzel, 2006). That is why universities are challenged to prepare graduates for the information age who can handle the demands of the 21st century (Stenzel, 2006). Subsequently, it is obvious that the education system is moving away from knowledge-based approaches. Contemporary society seeks individuals who are competent to do their work. That means college students need to be trained based on competencies. Professions that value skills and abilities, such as medicine, pharmacy, the helping professions, engineering, etc., take desired outcomes into consideration while training prospective professionals.
Background of Outcome-Based Education

OBE became popular around the 1990s in the public school system, and William Spady is considered the father of OBE (Berlach & O'Neil, 2008). According to Spady (1994), OBE has 500 years of history and can be traced back to craftsmen in Europe. In the contemporary world, OBE can be found in training that is based on demonstrating certain competencies, such as martial arts, skiing, the military, and cooking. Examples of professionals whose training is based on outcomes are medical doctors, lawyers, dentists, nutritionists, and so forth (Spady, 1994). Tower and Tower (1996) said that there are two approaches from which OBE derived: competence-based education and mastery learning. In competence-based education the focus is the demonstration of certain predetermined competencies and skills, and students are evaluated based on levels of mastery. Mastery learning, in turn, gives students time to advance the necessary skills before they move to the next level, and students are assessed according to their current levels. According to Harden (1999), "[outcome-based education] is an approach to education in which decisions about the curriculum are driven by the outcomes that the students should display by the end of the course" (p. 2). According to this statement, the outcomes are significant components of the decision-making process, because if students do not achieve the desired outcomes, then the whole system needs to be reviewed. To be able to implement the OBE philosophy, an education system should have five elements. These five dimensions of OBE are defining outcomes, designing curriculum, delivering instruction, documenting outcomes, and determining advancement (Spady, 1994).

Dimensions of OBE

Dimension 1: Defining outcomes. The first dimension of OBE is to determine learning outcomes (LOs). According to Spady (1994), outcomes are not just what students know but also what they can do with that knowledge. 
In OBE, LOs are derived from the program's mission statement and educational goals (Spady, 1994). Congruency among LOs, mission, and goals is a requirement because this alignment can assure accountability (Spady, 1994). Hence, accountability is an evaluative tool for education. The first requirement of accountability is to communicate intended outcomes explicitly to students, teachers, parents, and other stakeholders (Harden, 2007). These outcomes need to be observable and measurable, because Huba and Freed (2000) said that "learning outcomes are statements describing our intentions about what students should know,
Bilge SULAK AKYÜZ
understand, and be able to do with their knowledge when they have graduated” (pp. 9-10). Thus, when writing outcomes, active verbs such as “analyse”, “exhibit”, and “design” should be used instead of “learn”, “understand”, and “know”. The use of active verbs emphasizes the transferability of knowledge and comprehension. Education is then more than memorizing facts and taking multiple-choice exams; it ought to prepare students for competitive and ever-changing workplaces.

Dimension 2: Designing curriculum. The second dimension of OBE is to design the curriculum. A curriculum is the combination of courses that a program offers. Hence, the courses are expected to be congruent with the intended outcomes as well as with one another. Spady (1994) indicated that OBE does not force institutions to implement a predetermined mandatory curriculum; instead, it requires institutions to review the existing curriculum and make the necessary changes.

Dimension 3: Delivering instruction. The third dimension of OBE is to decide on instructional methods. Throughout the instruction delivery planning phase, OBE assumptions should be considered. According to Spady (1994), the assumptions of OBE are as follows: 1) all individuals can learn and succeed, but at different times and levels; 2) successful learning enhances further learning, which is why OBE is rooted in Skinner’s operant conditioning and its hypothesis that reinforced behaviours have a higher chance of being repeated (Luksik & Hoffecker, 1995); 3) school personnel control the conditions that influence school learning. The most important component is teachers, who do their best to facilitate student learning. Spady (1994) asserted that OBE teachers are encouraged to discover effective ways of teaching and reaching students to achieve the desired outcomes. Consequently, teachers need to know the expected outcomes and students’ personal differences, and then diversify their delivery methods, because only by doing so can learning be assured.
Dimension 4: Documenting results. The only way to document results in OBE is through successful assessment. An important element of OBE is to measure outcomes to determine whether students reach the desired outcomes (Spady, 1994). Using more than one assessment tool also increases the validity of the assessment (Mukhopadhyay & Smith, 2010) and facilitates comprehensiveness (Maki, 2002). Even when teachers utilize a variety of assessment methods, these methods are required to be exhaustive and valid. Comprehensive and cumulative assessment methods help to detect
Assessing Student Learning Outcomes in Counselling
learning insufficiencies (Ben-David, 1999). Regardless of the range of assessment tools, teachers oversee the creation of generic criteria against which learning will be compared. These criteria ought to link the desired outcomes to the assessment method or to the assignment. Consequently, each student is assessed at their own level instead of against a general comparison.

Dimension 5: Determining advancement. Spady (1994) asserted that institutional advancement relies on data, and institutions need to be cognizant of how to incorporate the data into their improvement approaches. The data can be collected from students, teachers, parents, employers, and other stakeholders. The data collected through assessment help an institution to evaluate itself (Maki, 2002) because assessment results reveal the differences between actual and desired outcome attainment levels. The results can also be evidence of the effectiveness of the program on student learning. Based on the results, the content and the instructional methods can be reviewed for advancement (Harden, 2007).

OBE in Higher Education

Stenzel (2006) said that “… producing thoughtful, well-informed, reflective, and competent graduates” (p. 114) is the main goal of higher education. In higher education curricula, competencies are considered measurable and observable outcomes (Goodman, Henderson, & Stenzel, 2006). Therefore, OBE transforms into competency-based education (CBE) in higher education. Many colleges are changing their educational approaches, so the traditional understanding of grades as the determinants of learning is being abandoned (Faouzi et al., 2003). That means higher education values complex cognitive skills instead of a letter grade as an indicator of student learning. CBE aims at supporting students to attain higher-level outcomes (Goodman, Henderson, & Stenzel, 2006), such as critical thinking, analysing, and evaluating.
However, it is impossible to remove knowledge-based courses from the curriculum because, without a foundation, an individual cannot develop cognitive complexity. One of the aims of undergraduate and graduate programs is to prepare students to function effectively after graduation (Stenzel, 2006) because learning is life-long. Even after graduation, students will be expected to go through job-related training in order to keep up with current advancements in the field. Hence, they need to transfer their previous learning into new schemas.
OBE in Mental Health Professions

Mental health professionals work with individuals who need help, so any mistake can harm the clients. To avoid harm, many disciplines determine certain competencies that all students are required to demonstrate prior to graduation. Across the U.S., efforts to enhance competencies increased because of public calls for qualified practitioners in the field of health (Gehart, 2011). Accordingly, disciplines such as psychiatry, psychology, social work, marriage and family therapy, addiction counselling, and counselling emphasized the importance of competencies in their training. The current status of each of these disciplines is summarized below.

Psychiatry

General medical competencies are established by the Accreditation Council for Graduate Medical Education (ACGME). This accreditation body is also in charge of determining training standards and requirements for general physicians. According to ACGME (2008), a general physician needs to have six core competencies prior to graduation: patient care, medical knowledge, practice-based learning, communication skills, professionalism, and system-based practice. In addition to the general competency requirements, the Psychiatry Residency Review Committee had one more requirement for psychiatry residents: demonstrating competence in five major psychotherapies. However, the committee did not further specify the assessment criteria for this competence; instead, programs were given the freedom to decide how to assess psychotherapy competencies (Hoge et al., 2005).

Psychology

The outcome-based approach was brought into psychology during the 1990s (Rubin et al., 2007). The American Psychological Association (APA) (2005) requires doctoral, internship, and postdoctoral programs to assess student learning according to core competency attainment levels.
The graduates of psychology programs are expected to demonstrate competencies in eight areas (Kaslow, Celano, & Stanton, 2005). Most psychology programs utilize a three-dimensional competency model called the cube. In this model, trainees at different developmental levels are assessed in relation to foundational and functional competencies. The model takes into consideration the developmental level of the trainees; for example, an intern is not expected to have the same competency level as a postdoctoral trainee (Fouad et al., 2009). The model aims at assessing trainees at their developmental level, but the challenge is to generate various valid and
reliable assessment methods. As a result, the profession is still searching for innovative methods to achieve its intended goal (Rubin et al., 2007). Counselling psychology adopts the scientist-practitioner model as a training approach. In this model, scientific approaches are integrated into professional practice (Stoltenberg et al., 2000).

Social Work

The field of social work educates its students based on outcomes (Carpenter, 2011). According to the Council on Social Work Education’s (CSWE) Educational Policy and Accreditation Standards (EPAS), social work curricula are based on competencies as outcomes. When students graduate, they are expected to have ten core competencies, all of which have knowledge, value, and skill subcategories (CSWE, 2008). As a result, social work programs offer several courses that address these competencies (Carpenter, 2011).

Marriage and Family Therapy (MFT)

Marriage and family therapy programs moved from content-based education to outcome-based education (Miller, Todahl, & Platt, 2010). They adopted the American Association for Marriage and Family Therapy (AAMFT, 2004) core competencies, which consist of over a hundred skill and knowledge areas for trainees (Gehart, 2011). Additionally, the Commission on Accreditation for Marriage and Family Therapy Education (COAMFTE, 2005) requires programs to develop a set of competencies and to assess student learning and mastery, because these are fundamental requirements for a program to maintain accreditation. After this revision, programs were no longer allowed to demonstrate student learning based on hours of experience or supervisee-supervisor ratios, which led MFT programs to abandon the content-based training model (Gehart, 2011).

Addiction Counselling

In 1998, the Addiction Technology Transfer Center (ATTC) attempted to generate a competency list for addiction counsellors that consists of over a hundred competencies (Hoge et al., 2005).
In 2005, this list was revised by the Center for Substance Abuse Treatment, and subcategories were included for the competencies.

Counselling

Practicing within the boundaries of competence and maintaining competence are ethical obligations of a counsellor; indeed, the ACA (2005) devotes Section C.2 of its Code of Ethics to professional competence. This section indicates that professional counsellors need to practice within their scope of practice and
based on their education and training. If competencies are derived from education and training, then counsellor training needs to respond to questions such as: “What kinds of counsellors will the training produce?”, “What are some characteristics of competent counsellors?”, “What kinds of competencies will graduates have?”, “Will graduates be ethical in their practice?”, and “Will they be able to conduct research to advance services?”. The responses to these questions suggest that society expects counsellors to be knowledgeable, skilful, and ethical in their practice. However, it is difficult to answer these questions in a definite way because counsellor competence is difficult to identify due to its complexity (Shepherd, Britton, & Kress, 2008). Although it is difficult to compile a definite list of competencies, counsellor education programs need to prepare students to practice in complex clinical or school settings, so counsellors-in-training are required to demonstrate certain competencies prior to graduation. This is why the counselling field adopted OBE: outcome-based educational systems require students to demonstrate certain skills prior to graduation (Harden, Crosby, & Davis, 1999). In the counselling field, the CACREP standards define competencies as student learning outcomes (SLOs) because the accreditation body requires all students to meet certain knowledge, skill, and competency requirements prior to graduation (Harden et al., 1999). As a result of accreditation, unity among institutions is assured because graduates are expected to demonstrate the same knowledge and skills/practice based on the standards. When a counselling program is accredited, its SLOs, which relate to knowledge, skills, and values, are derived from the standards, and the graduate of the program is portrayed via the standards as well.
If a program’s graduates attain the determined outcomes, then the program satisfies accountability requirements, because accrediting bodies require programs to maintain accountability by attaining specific outcomes (Goodman, Henderson, & Stenzel, 2006). When CACREP revised its standards, the emphasis was placed on SLOs during the assessment process (Liles & Wagner, 2010), and the CACREP 2009 standards required all programs to assess SLOs in the program area standards. Even though the core curriculum areas remain content oriented, the program area standards included outcome-based standards specific to knowledge and skills/practice (Urofsky & Bobby, 2012). Bobby and Urofsky (2008) asserted that programs need to demonstrate that students acquire the required knowledge and skills related to the program area standards. According to the authors, the outcome-based standards
shift occurred in order to provide assurance and accountability to stakeholders and the public. The CACREP 2016 standards added “professional dispositions” as an assessment criterion in addition to knowledge and skills. It can be said that CACREP wants to provide assurance that the graduates of an accredited program are competent to provide services for those in need of help. The evidence supplied by programs, based on reliable data, is direct verification that graduates received adequate preparation in knowledge and skills during their education. Data collection in counselling programs is not easy because good counsellors are not defined by “what they do” but by “how they do it”. Collecting data based on competencies can be perceived as difficult, but it is not impossible. The lack of a definite list of competencies and of valid and reliable assessment tools might hinder data collection (Minton & Gibson, 2012). However, institutions that are pressured to demonstrate their accountability to stakeholders need comprehensive data showing that graduates of the program are competent to practice in the workplace (Toutkoushian, 2007). Not having useful data complicates the process of accountability and of informing the stakeholders that the education makes a significant contribution to the public (Keller & Hammang, 2007). It is obvious that what graduates know and demonstrate is a reflection of the credibility of the program. For instance, if a counsellor demonstrates ethical behaviours and is aware of diversity, these behaviours increase the credibility of the counselling program because the program fulfils its mission statement by having ethically competent graduates. The CACREP (2016) standards in Section 4.F indicate that evaluation can be accomplished “…by examining student learning in relation to a combination of knowledge and skills.
The assessment process includes the following: (1) identification of key performance indicators of student learning in each of the eight core areas, (2) measurement of student learning conducted via multiple measures and over multiple points in time, and (3) review or analysis of data” (pp. 17-18). Although assessment and data collection are necessities, programs have the freedom to create their own unique methods of assessment. The standards assert that programs need to have a systematic and comprehensive assessment plan (CACREP, 2016). Programs and faculty are not compelled to implement particular assessment styles; rather, programs are free to choose and implement whatever approach they deem best, as long as there is reliable evidence of student learning. Because the CACREP standards are so broad, it is
difficult to evaluate student learning outcomes based on the standards alone. In the counselling literature, there is only one article, by Minton and Gibson (2012), that conceptually provides information regarding the assessment of student learning outcomes in the counselling field. The lack of studies suggests that researchers in the counselling field tend not to write about assessment and SLOs, perhaps because the assessment requirement is a new concept and appropriate assessment methods are still being sought. Since there is a gap in the literature with regard to assessment, the following sections aim to offer possible program-level assessment systems for student learning outcomes in the counselling field.

Program Level Assessment System

Prior to entering graduate programs, each student has different knowledge of and experience with counselling. For this reason, educators in graduate programs hope to reach each student at their level. Professors want their teaching to make significant contributions to student learning. However, regardless of their effort, each student has a unique ability to accomplish the task at their own level. One could argue that there might be other environmental or situational confounding variables affecting learning. Even so, these variables cannot account for the total change, which is why programs need to assess the effectiveness of the education. For programs to be considered credible and accountable, there needs to be reliable data. These data communicate to the stakeholders and policy makers whether the education is significant and worth the investment. To accomplish this goal, programs require a comprehensive assessment method to collect reliable and usable data.

Readiness survey

Furthering knowledge and experience through graduate education requires a considerable investment of both time and financial resources.
Due to rapid changes in society, higher education is more of a requirement for employment than in the past. Education is expected to produce graduates who are competent and multiskilled. Even though competencies in counselling are broad and attainment levels vary, the CACREP (2009) program area standards are treated as SLOs (Urofsky & Bobby, 2012). Counselling programs are unique in their training and expectations, so it would be beneficial if students were assessed in terms of their readiness prior to admission by means of a readiness survey. This survey would assist faculty in evaluating whether students are ready and fully understand the expectations of the program. Based on the results, faculty
would inform students of their level of readiness and their admission status. This resembles the practice whereby, during the university admission process, a prospective student who does not meet the requirements but demonstrates potential is given conditional admission and then one semester to exhibit full performance before being granted full admission. The same idea can be employed for counselling admissions as well. This kind of preview can answer the question asked by Miller et al. (2010): “How many times Type I or Type II errors occur?” (p. 7) during the admission process. Such early identification also helps students to save money, time, and energy because they will know whether they can meet the expectations before committing themselves. The comprehensive data can then be used to determine and clarify program expectations; for example, if many students struggle with the same items, those items need revision.

Comprehensive assessment

Counsellor educators act as gatekeepers, so they need to decide which type of error best protects the welfare of clients and society. To fulfil this role, student learning and professional development should be monitored through comprehensive and periodic evaluation. A comprehensive assessment would be given at the very beginning of the education, and it should use hybrid assessment methods. Comprehensive means that all the intended outcomes are included and measured; hybrid means the assessment includes multiple-choice questions, short answers, ethical vignettes, and case studies. This assessment provides significant information about students’ entry levels. A similar comprehensive assessment would be given to students at the end of each semester. Comparing pre- and post-test results provides a general understanding of where students were, where they are at the time of assessment, and where they are heading. The collected data then establish a baseline for each student.
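As a rough illustration, the pre/post comparison just described could be computed from paired assessment scores. The scoring scale, normalized-gain formula, flagging threshold, and student names below are invented for illustration; nothing here is prescribed by the chapter or by CACREP.

```python
# Hypothetical sketch: computing a per-student baseline and learning gain
# from pre- and post-semester comprehensive assessment scores (0-100 scale).

def learning_gain(pre: float, post: float) -> float:
    """Normalized gain: the proportion of the possible improvement achieved."""
    if pre >= 100:
        return 0.0
    return (post - pre) / (100 - pre)

# Illustrative scores for three counselling students (invented data).
scores = {
    "student_a": {"pre": 40.0, "post": 70.0},
    "student_b": {"pre": 55.0, "post": 85.0},
    "student_c": {"pre": 80.0, "post": 82.0},
}

# The pre-test score serves as each student's baseline.
baselines = {sid: s["pre"] for sid, s in scores.items()}
gains = {sid: learning_gain(s["pre"], s["post"]) for sid, s in scores.items()}

# Students whose gain falls below an (assumed) 0.25 threshold could be
# flagged for the faculty intervention the monitoring process suggests.
flagged = [sid for sid, g in gains.items() if g < 0.25]
```

With the invented data above, `student_c` would be flagged: a high baseline leaves little room for raw growth, which is exactly why a normalized gain is preferable to a raw score difference.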
This baseline information eases the process of monitoring, so the faculty can intervene and take preventive action. Baseline assessment helps to monitor student development and learning at the same time (Driscoll & Wood, 2007). A similar method would be administered periodically for each course as well: at the beginning and at the end of the course, students would be given a short assignment relevant to the course material. The results would help faculty to review their teaching and recognize student development. However, not all students can be expected to articulate their knowledge well in a written exam, because some of them might be better at oral communication. Smith and Dollase (1999) noted that beginning medical students might know the necessary information but
be unable to demonstrate adequate competence in application, so they are given an oral exam to demonstrate their level of knowledge and skills. Thus, like medical students, counsellors-in-training might be asked to take an oral exam to explain their rationale, since each student has a different level of competence due to their developmental and learning levels. Students could still pass the course but be asked to accomplish additional tasks as evidence of their learning.

Electronic Delivery Map

Online learning and distance learning are products of technological improvements, so technological reforms need to be employed in the assessment system as well. To that end, an electronic outcome delivery map can be generated. This map would serve as a guide for each course and for the program overall. In this map, each course would be marked with a different colour, and all outcomes would be numbered. The numbering system would make it easy to identify the cross-cutting competencies that are relevant to all other courses, such as ethical practice and individual and multicultural diversity. This system would be both a visual map and a contract for students. Moreover, it would push students to take responsibility for their learning rather than following a pre-prepared program of study. Each student would have an online account, and faculty would have access to students’ accounts to review their progress. Faculty would enter the accomplished outcomes and the percentage attainment level at the end of each course or throughout the course, depending on the faculty’s choice of assessment. At the end of the semester, students would be able to determine where they currently stand in relation to accomplishing the expected outcomes. Ideally, students would have zero difference between accomplished and desired outcomes, but this is not realistic, since higher education has never achieved complete congruence between intended and attained outcomes (Shupe, 2007).
The following semester, students would review their accounts and pick the courses they deem helpful for completing the outstanding outcomes or raising low percentages. This electronic system helps students to be aware of their areas of strength and areas needing improvement. In counselling programs, most classes are cumulative and prerequisites of one another. If the system is arranged in a cumulative fashion for the program and students are unable to demonstrate certain outcomes, they can register for independent studies to enhance their knowledge. Independent studies can focus on the essential outcomes that students could not attain, with assignments and assessment methods arranged accordingly. After they demonstrate evidence of outcome attainment, they can be
released to move further. At the end of each semester, faculty would meet to discuss student progress. The mapping would provide valuable information about which outcomes are common across courses; hence, if a student did not meet a required outcome in one course, he or she may have met it in another. In this way, faculty can give each other significant feedback to improve learning and teaching experiences. Electronic mapping might have profound benefits for the program because this system eases the data collection process. A properly designed program can provide graphs, students’ attainment levels, the outcomes that are most or least attained, the outcomes shared between courses, and so forth. These data can be utilized in program enhancement and advancement.

Self-evaluation form

Students are not given a chance to reflect on their own learning in course evaluations. Accordingly, another version of the course evaluation form would be generated collaboratively. This Likert scale would ask students to rate their learning and their perceived confidence and competence levels. For instance, if the outcome were “Students will be able to understand the impacts of crisis and trauma-related events on people and will be able to utilize crisis-related interventions when needed”, the student rating statement about knowledge would be “I understood that crisis and trauma-related events have psychological impacts on people, such as post-traumatic stress disorder (PTSD), depression, etc., and I know how to differentiate these reactions diagnostically”, the confidence statement would be “I feel confident utilizing crisis interventions if my client is in need”, and the competency statement would be “I have adequate competency to develop a school-wide crisis prevention plan”. The results of this self-assessment would be compared with students’ actual attainment levels and with the faculty’s perception of student learning.
Subsequently, appropriate interpretations would be made to detect deficiencies. CACREP (2016) requires counselling programs to assess and provide evidence that the data were used for program modification. If program modifications are to be based on data, the data must be usable. The electronic system would support data triangulation because data would be collected from various sources (faculty and students) and by various methods (faculty perceptions, student self-assessments, Likert scales, student self-reflection journals, etc.). Consequently, comprehensive, reliable, and usable data would be acquired.
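The electronic delivery map and the self-evaluation triangulation described above could be modelled, in a deliberately simplified form, as follows. All course codes, outcome numbers, thresholds, and rating conversions here are invented for illustration; this is a sketch of the proposal's data model, not a prescribed implementation.

```python
# Hypothetical sketch of the electronic outcome delivery map: numbered
# outcomes, per-course attainment percentages entered by faculty, and
# reports that flag gaps and triangulation conflicts.
from dataclasses import dataclass, field

@dataclass
class OutcomeRecord:
    outcome_id: int            # numbered outcome, e.g. 7 = "ethical practice" (invented)
    course: str                # course in which the outcome was assessed
    faculty_attainment: float  # percentage entered by faculty (0-100)
    student_self_rating: int   # Likert self-evaluation (1-5)

@dataclass
class StudentAccount:
    name: str
    records: list = field(default_factory=list)

    def best_attainment(self, outcome_id: int) -> float:
        """An outcome missed in one course may be met in another, so the
        map keeps the highest attainment recorded for that outcome."""
        levels = [r.faculty_attainment for r in self.records
                  if r.outcome_id == outcome_id]
        return max(levels, default=0.0)

    def gaps(self, outcome_ids, threshold: float = 70.0):
        """Outcomes still below the (assumed) threshold: candidates for
        an independent study focused on the unattained outcomes."""
        return [oid for oid in outcome_ids
                if self.best_attainment(oid) < threshold]

    def triangulation_conflicts(self, tolerance: float = 30.0):
        """Records where the student self-rating (scaled to 0-100) diverges
        sharply from the faculty percentage: worth a closer look."""
        return [r for r in self.records
                if abs(r.student_self_rating * 20 - r.faculty_attainment) > tolerance]

# Illustrative account with invented course codes and data.
acct = StudentAccount("student_a")
acct.records += [
    OutcomeRecord(7, "COUN 501", 85.0, 4),
    OutcomeRecord(7, "COUN 540", 60.0, 3),
    OutcomeRecord(12, "COUN 540", 55.0, 5),
]
```

In this invented example, outcome 7 counts as met (85% in one course, despite 60% in another), outcome 12 appears in the gap report, and the optimistic self-rating on outcome 12 surfaces as a triangulation conflict for faculty review.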
Conclusion

OBE emphasizes the idea that everyone is capable of learning given enough time. It is an educational plan to determine outcomes, design educational experiences, and document student learning. OBE is a student-centred instructional approach that focuses strongly on outcomes and student learning. Ideal OBE implementation is possible, but it makes demands on everyone from teachers to the graduates of the program. In an ideal OBE system, teachers are knowledgeable about writing outcomes, delivery methods, and the assessment process; students are responsible for their learning; and graduates are responsible for providing follow-up data for program improvement and accountability. Nevertheless, not all of these conditions exist simultaneously in the real world. Because education systems from elementary through higher education now value competencies rather than knowledge alone, OBE is an alternative way to produce students who are competent instead of students who merely have high GPAs. OBE has transformed into CBE in higher education. Outcomes are perceived as competencies because higher education prepares the next generation of professionals, who should be able to make a difference and contribute to their professions. Higher education graduates are expected to be competent enough to meet the endless demands of workplaces. These professionals, ranging from engineers to doctors, bankers, politicians, nurses, counsellors, teachers, lawyers, and technicians, are going to shape the future, which is why they must attain sufficient skill sets during their education. The current workforce demands graduates who are knowledgeable and can apply their knowledge to fulfil various societal needs. That means graduates need to be prepared to meet these conditions competently. OBE activities in counselling are similar to those in other professions such as psychiatry, psychology, social work, marriage and family therapy, and addiction counselling. Counsellor education has transitioned to outcome-based approaches.
Although the transformation has been completed, the process of assessing the outcomes is lacking. To accomplish this goal, there is a need to develop reliable and valid measurement methods to identify students who need remediation. Hence, student progress needs to be reviewed periodically, and there need to be benchmark assessment points throughout the education. The lack of appropriate and comprehensive assessment strategies and systems is one of the obstacles to collecting usable data. Although individual efforts are valuable, faculty members need to work collaboratively to create a unified assessment
system for the program. In doing so, they can fulfil one of their most important duties: gatekeeping. It is valuable for counsellor educators to develop and share the most appropriate assessment methods for measuring competencies. This paper aims at providing a new perspective on program-level assessment methods. However, further studies are needed to demonstrate the effectiveness of these methods in assessing student learning outcomes in the counselling field. Overall, assessment-related research is limited in the counselling field, so future research might focus on this significant topic.
References

Accreditation Council for Graduate Medical Education. (2008). Common program requirements. Retrieved from http://www.acgme.org/acgmeweb/Portals/0/PDFs/commonguide/IVA5_EducationalProgram_ACGMECompetencies_Introduction_Explanation.pdf

American Association for Marriage and Family Therapy. (2004). Marriage and family therapy core competencies. Retrieved from http://www.aamft.org/imis15/Documents/MFT_Core_Competencie.pdf

American Counseling Association. (2005). ACA code of ethics. Retrieved from www.counseling.org

American Psychological Association. (2005). Guidelines and principles for accreditation of programs in professional psychology. Retrieved from http://www.apa.org/ed/accreditation/about/policies/guiding-principles.pdf

Ben-David, M. F. (1999). Assessment in outcome-based education. Medical Teacher, 21, 23-25.

Berlach, R. G., & O’Neil, M. (2008). Western Australia’s ‘English’ course of study: To OBE or not to OBE, perhaps that is the question. Australian Journal of Education, 52(1), 49–62.

Bobby, C. L., & Urofsky, R. I. (2008). CACREP adopts new standards. Counseling Today, August, 59-60.

Carpenter, J. (2011). Evaluating social work education: A review of outcomes, measures, research designs and practicalities. Social Work Education, 30(2), 122-140.

Center for Substance Abuse Treatment. (2006). Addiction counseling competencies: The knowledge, skills, and attitudes of professional practice. Technical Assistance Publication (TAP) Series, 21. Retrieved from http://www.kap.samhsa.gov/products/manuals/pdfs/TAP21.pdf

Commission on the Accreditation of Marriage and Family Therapy Education. (2005). COAMFTE version 11 standards. Retrieved from http://www.aamft.org/imis15/Documents/Accreditation_Standards_Version_11.pdf
Council for Accreditation of Counseling and Related Educational Programs [CACREP]. (2009). 2009 standards for accreditation. Retrieved from http://www.cacrep.org/doc/2009%20Standards%20with%20cover.pdf

Council for Accreditation of Counseling and Related Educational Programs [CACREP]. (2016). 2016 standards for accreditation. Retrieved from http://www.cacrep.org/wp-content/uploads/2017/08/2016-Standards-with-citations.pdf

Council for Accreditation of Counseling and Related Educational Programs [CACREP]. (n.d.). Guiding statement on student learning outcomes. Retrieved from http://www.cacrep.org/doc/Guiding%20Statement%20on%20Student%20Learning%20Outcomes.pdf

Council on Social Work Education. (2008). Educational policy and accreditation standards. Retrieved from http://www.cswe.org/File.aspx?id=13780

Driscoll, A., & Wood, S. (2007). Developing outcomes-based assessment for learner-centered education: A faculty introduction. Sterling, VA: Stylus Publishing, LLC.

Faouzi, B., Lansari, A., Al-Rawi, A., & Abonamah, A. (2003). A novel outcome-based education model and its effect on student learning, curriculum development, and assessment. Journal of Information Technology Education, 2, 203–214.

Fouad, N. A., Grus, C. L., Hatcher, R. L., Kaslow, N. J., Hutchings, P. S., Madson, M. B., . . . Crossman, R. E. (2009). Competency benchmarks: A model for understanding and measuring competence in professional psychology across training levels. Training and Education in Professional Psychology, 4(Suppl.), 5–26.

Gehart, D. (2011). The core competencies and MFT education: Practical aspects of transitioning to a learning-centered, outcome-based pedagogy. Journal of Marital and Family Therapy, 37, 344-354. doi: 10.1111/j.1752-0606.2010.00205.x

Goodman, B., Henderson, D., & Stenzel, E. (2006). An interdisciplinary approach to implementing competency-based education in higher education. Lewiston, NY: Edwin Mellen Press.
Bilge SULAK AKYÜZ
Chapter 21
Elementary Mathematics and the Most Used Alternative Assessment Techniques
Sedat TURGUT
Introduction
Mathematics has an important role in the daily, educational, and professional lives of individuals. Moreover, the role of mathematics in individual and social life evolves with scientific developments, and mathematical competence can be regarded as a requirement for keeping up with this evolution. The literature shows that mathematics education should start at early ages (e.g., Clements, 2001; Flevares & Schiff, 2014; Frye et al., 2013; National Council of Teachers of Mathematics [NCTM], 2000; Ürey et al., 2013). In the curricula of developed countries, mathematics is present at all stages of education, from kindergarten to higher education, and these curricula aim to provide students with mathematical comprehension and proficiency. At this point, elementary mathematics in particular creates a basis for further learning, because elementary school is the starting point for introducing mathematics. Students therefore first experience success and failure in mathematics during the elementary school years and, consequently, develop attitudes towards mathematics such as enjoying or hating it. Effective mathematics instruction in elementary school depends on the mathematical content and on the interaction between teacher and students (Aydın et al., 2010). Assessment needs to be used as an effective tool in this interaction process, because assessment helps students to develop their learning while supporting teachers in developing their teaching. Accordingly, assessment supports the teaching of mathematics when elementary teachers fully understand what assessment means in mathematics and how the assessment process should be carried out. This chapter explains major issues in mathematics education
such as the meaning of assessment in mathematics, the dimensions of assessment in mathematics, and the relationship between elementary mathematics and assessment. In addition, traditional evaluation practices in elementary mathematics teaching are discussed, and the most commonly used alternative techniques are introduced and exemplified.

Elementary Mathematics and the Most Used Alternative Assessment Techniques
Mathematics is present in every area of daily life. Those who succeed in mathematics are likely to have many professional and personal opportunities; mathematical proficiency and quality open the doors to a better future. Therefore, students should be adequately supported and provided with opportunities to comprehend mathematics and to become competent in it (NCTM, 2000). A review of the literature shows studies emphasizing that mathematics teaching should start at early ages. Elementary school mathematics creates a basis for later learning, because advanced mathematics builds on the basic forms of earlier levels (Ma & Kessel, 2001). Comprehension of elementary school mathematics thus enables the understanding of more advanced mathematics. An effective introduction to mathematics in elementary school requires mathematical comprehension on the part of elementary teachers. In addition, elementary teachers need to be able to understand what they know and what they need to know (NCTM, 2000). They should prefer methods and strategies that affect student learning positively, encourage students to explain their opinions about given tasks, and closely follow the individual mathematical development of each student. At this point, the assessment process has an important role: conducting an effective and accurate assessment is necessary for reaching the target outcomes.
Assessment is the process of collecting information about student development and learning, recording and analyzing this information, and making decisions about student performance after interpreting the results of the analysis (Berry, 2008; McAfee, Leong & Bodrova, 2015; Russell & Airasian, 2012). Assessment is an irreplaceable part of teaching and learning. It helps students to recognize their learning, to become aware of their progress, and to identify the difficulties they have experienced. Assessment guides students in learning and teachers in instruction.
In elementary school, traditional evaluation practices in mathematics instruction generally serve the purpose of grading students at the end of a chapter or semester and determining whether a student is successful or not. These traditional practices cause discrepancies between assessment and teaching; consequently, elementary teachers cannot use assessment as a tool to support education. In recent years, on the other hand, international assessments have been conducted by external evaluators to measure the mathematical knowledge of students in different countries and to compare the findings. The gradual increase in international assessment leads educators working at different levels to focus on assessment, and assessment is becoming more and more crucial in the teaching process. Elementary teachers have begun to use different assessment techniques to determine the mathematical learning of their students. International assessments thus influence the practices of elementary teachers in mathematics instruction as well as the educational policies of states. Assessment will support learning, and fully serve learning, when elementary teachers develop an understanding of the meaning and process of assessment in mathematics. In mathematics, assessment is “the process of gathering evidence about a student’s knowledge of, ability to use, and disposition toward mathematics and of making inferences from that evidence for a variety of purposes” (NCTM, 1995, p. 3). In the literature, concepts such as evaluation, measurement, and test are sometimes used instead of assessment. Although these concepts are associated with each other, they have different meanings. Evaluation is “the process of determining the value of a thing or an assignment on the basis of an attentive investigation and judgement” (NCTM, 1995, p. 3). In mathematics, tests are tools for evaluation.
Test scores are generally obtained as a result of tests, and these scores constitute the measurement. Hence, evaluation reflects only one dimension of assessment. Evaluation should help students form an opinion about the importance of mathematics and should provide useful information for both students and teachers (NCTM, 2000). The elementary teacher should periodically share with students the assessments of their mathematical learning. In this way, students can come to recognize their own knowledge, skills, weaknesses, strengths, and attitudes, as well as the learning of their peers. Assessment thereby becomes a tool that contributes to learning. The dimensions of assessment in mathematics are presented from a holistic perspective in Figure 21.1.
Figure 21.1 Dimensions of Assessment in Mathematics (NCTM, 1995)
In Figure 21.1, the dimensions of assessment in mathematics education are pictured as monitoring students’ progress, making instructional decisions, evaluating students’ achievement, and evaluating programs. The details of these headings answer the question “Why do we assess in mathematics?”.

Monitoring Students’ Progress
Within this scope, assessment should provide information about mathematical outcomes in the short term and about the overall objectives of the grade level in the long term. Assessment should provide clear evidence about the progress of each student. Sharing this evidence with the students enables them to monitor their own learning; thus, the students can set their own objectives for the behaviors they are expected to acquire and shape their own learning. By monitoring learning progress, the elementary teacher can make observations about students’ attitudes towards mathematics learning, their mathematics learning processes, and how the objectives of mathematics learning are being met (Köğce, Aydın & Yıldız, 2009). In addition, evidence can be gathered for identifying students who have specific learning needs. Determining the learning objectives of the class, checking student comprehension during the process, and providing feedback to the students can help the elementary teacher monitor progress effectively. In-class activities, observations, questions, tests, homework, and projects can be utilized
for this purpose. All activities should aim to improve the students’ mathematical learning.

Making Instructional Decisions
Within this scope, assessment should determine the most efficient way of presenting mathematical content to the student. The data obtained through tests, exams, homework, projects, and so on, which aim to determine the mathematical achievement of the students, are significant evidence for making instructional decisions. Data should be gathered from as many different sources as possible, taking into account individual differences in learning and educational level. These data can be interpreted in accordance with the elementary mathematics curriculum and the objectives of the course, and the effectiveness of instruction can thereby be assessed. Effective instructional strategies and practices should be developed depending on the learning needs and performances of the students; thus, all students can be assisted in achieving in mathematics. A systematic way of making instructional decisions should be followed, since interest in and attitude towards mathematics, the value placed on mathematics, and accordingly achievement in mathematics can change over time. The system for making instructional decisions is presented in Figure 21.2.
Figure 21.2 Dimensions of Making Mathematical Instructional Decisions (NCTM, 1995, p. 27)
A comprehensive assessment plan is needed before making instructional decisions. In planning, it should be kept in mind that the purpose of the assessment is to determine the effectiveness of the instruction and to improve the teaching, and evidence needs to be collected for this purpose. The effectiveness of the teaching, its advantages and disadvantages, the improvements made, and the room for further improvement can be determined by utilizing the obtained data.

Evaluating Student Achievement
The evaluations conducted within this scope should reveal the performance reflecting each student’s mathematical knowledge and comprehension. The data used for evaluating mathematical achievement in elementary school need to be obtained through observable and assessable learning outcomes, and the learning outcomes should be clearly defined for the evaluation to proceed effectively. The data obtained via various evaluation techniques provide information about the quality and extent of learning. Predetermined criteria are used to judge the student’s learning outcomes, comprehension, and practice. Achieving relational comprehension on the basis of conceptual and operational comprehension should be the major criterion when evaluating achievement. Comparing students with each other should be avoided; instead, evaluation should aim to improve the mathematical achievement of each student. Evaluation needs to improve the learning of low-achieving students and support high-achieving students. Determining school achievement should take a backseat in elementary mathematics; the major purpose is to determine the learning shortcomings of the students and to plan learning activities for remedying them. Thus, individual and attainable objectives can be set for the achievement of each student, and in this way the students can take conscious responsibility for their own learning.
Evaluating Programs
The evaluation conducted within this scope should reveal the success and effectiveness of the mathematics program. In evaluating an elementary mathematics program, the main points of focus are the purpose, the achievement objectives and their attainment, the activities designed to meet them, the implementation of these activities, and the resources used. The degree to which elementary teachers are able to put the program into practice should also be considered. Thus, both the process and the outputs of the program can be evaluated, which serves to reveal and improve the quality of the program. The data should be collected
systematically and then used for improving and developing the program. Determining the areas to be improved and the progress to be made provides the opportunity to realize the program objectives more effectively. The suitability of the program needs to be judged in relation to its design and the way it proceeds. The differences between the objectives and the outputs, and the underlying reasons for them, should be determined on the basis of the program outputs and the degree to which the objectives were realized. Consequently, the program can be updated by identifying the learning needs of the students and the ways of meeting these needs. Increasing the effectiveness and efficiency of the program should be the aim of all evaluations.

Collecting Evidence in Assessment
Formative or summative evidence can be collected for evaluating elementary mathematics. Indeed, assessment is not in itself either formative or summative (William, 2007), because assessment can be practiced in various ways and the results can be used for various purposes. In formative assessment, evidence reflecting student learning is collected to provide information about teaching and to promote learning (Haylock & Thangata, 2007). Feedback is continuously provided to students in order to improve learning; thus, supportive activities can be planned by determining the students’ learning needs regarding the concepts they have trouble understanding and the skills they have trouble acquiring. The purpose of summative assessment, on the other hand, is to make an overall judgement about the learning process (Haylock & Thangata, 2007); student learning is generally assessed against certain criteria or standards at the end of a learning period. Regardless of its purpose, assessment is supposed to improve the mathematical learning of students. There are various techniques for assessing elementary mathematics.
Identifying the purpose of the assessment makes it possible to select the most suitable assessment technique for the intended use of the resulting data. In this part, some alternative assessment techniques are introduced.

Performance-based Assessment
Students’ practical skills in using mathematical knowledge can be assessed through performance assessment, via activities taking a few minutes in the classroom or projects conducted outside the classroom. Performance tasks encourage students to use advanced thinking skills to make a product or complete a process (Chun, 2010). Students are assigned tasks
that they can accomplish in various ways. Performance tasks need to be meaningful and original to the students; they are supposed to reveal the students’ practical capability to use knowledge rather than focus on the amount of knowledge they have. A sample in-classroom performance-based task is presented in Figure 21.3.
Figure 21.3 In-classroom Mathematics Performance Task Example
In the figure, the land is divided into equal parts. The short edge of each piece is 1 m long and the long edge is 2 m long. The hatched area shows a farmer’s orchard. There are three types of fruit trees in the orchard, and part of the orchard is empty. Accordingly:
a. Could you explain how the area of the farmer’s orchard can be measured in square meters by demonstrating it on the figure?
b. Knowing that ¼ of the orchard is empty, could you show on the figure the area where fruit trees are planted?
c. ⅓ of the planted area consists of cherry trees; could you show on the figure the area where the cherry trees are planted?
d. Knowing that ½ of half of the whole land consists of peach trees and the remaining part consists of apricot trees, could you show on the figure the areas where these two kinds of fruit trees are planted?
This task will direct students to use advanced mathematical thinking skills. The student’s operational, conceptual, and relational comprehension skills, problem-solving strategies, and development of reasoning and exploration processes can be assessed through this task. In addition, the students can be prompted to think about what was learnt, how the learning happened, and the degree of
improvement made, and can thereby become more conscious of their own learning. The task thus encourages students to develop their metacognitive skills and to do self-assessment. Different students might be asked to work on each stage of a performance task completed in the classroom; in this way, peer interaction can be fostered among the students. Through peer interaction, each student gets the opportunity to express personal opinions, discuss problem-solving strategies, show respect for the opinions of others, and benefit from others’ problem-solving strategies. A solution strategy proposed by any student is questioned by all the students in the classroom in terms of how the strategy is applied, whether it leads to a conclusion, and whether the conclusion is reasonable. This also contributes to the development of self-confidence in mathematics and of a positive attitude towards mathematics. A performance task prepared within the scope of an elementary school mathematics course should:
- include various mathematical solutions,
- be interesting, meaningful, challenging, and original,
- build on previous learning and experience,
- require using different mathematical skills together, such as conceptual understanding, operational fluency, and strategic competency,
- provide opportunities for self-assessment, critical thinking, peer interaction, etc.,
- support assessing both the process and the conclusions.
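For the teacher’s answer key, the fractions in the Figure 21.3 task can be checked with a short derivation (a sketch only; A denotes the total land area in square meters, whose value depends on the grid in the figure):

```latex
% A = total area of the land; 1/4 of it is empty.
\begin{align*}
\text{planted area}  &= A - \tfrac{1}{4}A = \tfrac{3}{4}A\\
\text{cherry trees}  &= \tfrac{1}{3}\cdot\tfrac{3}{4}A = \tfrac{1}{4}A\\
\text{peach trees}   &= \tfrac{1}{2}\cdot\tfrac{1}{2}A = \tfrac{1}{4}A\\
\text{apricot trees} &= \tfrac{3}{4}A - \tfrac{1}{4}A - \tfrac{1}{4}A = \tfrac{1}{4}A
\end{align*}
```

Each type of fruit tree thus covers one quarter of the land, which makes the internal consistency of the task easy to verify before giving it to students.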
Anecdote
Anecdotal records consist of noting observations about student learning as short paragraphs (McAfee, Leong & Bodrova, 2015). The mathematical development of the students can be recorded as short notes while observing or listening to them, either during or after the class. These notes include the observed event and what it indicates. Judgmental expressions should be avoided when recording observation notes. An example of an anecdotal record is presented in Figure 21.4.
Figure 21.4 An Anecdote Example
By gathering the anecdotal notes recorded about a student’s mathematical development, important information can be revealed that could not be noticed during class. For example, a clue about a skill that the student uses effectively can be used to support a skill in which the student is not yet proficient. Anecdotal notes can also serve as additional evidence for assessing the student’s mathematical achievement. When anecdotal notes are used to support assessment, relying on the observation notes of the whole class should be avoided. Anecdotal notes recorded systematically for each student can provide valuable evidence for giving feedback, planning activities, and grading.

Checklists
Checklists are assessment tools on which development and learning are recorded against determined performance criteria (McAfee, Leong & Bodrova, 2015). A checklist lists the specific behaviors (performance criteria) to be observed; these behaviors must be consistent with the course objectives. Student performances are not graded; it is only recorded whether the performance has been demonstrated or not. Checklists can be used repeatedly at
different times in order to determine shifts in student performances as well as their weaknesses and strengths (Russell & Airasian, 2012). A checklist used repeatedly over time provides powerful evidence for identifying a student’s mathematical achievement. A sample checklist is presented in Figure 21.5.
Figure 21.5 A Checklist Example
The following points should be considered when checklists are designed (Cunningham, 1998):
- The behaviors to be assessed should be identified concretely by determining the relevant performance.
- All the important behaviors required for demonstrating the performance must be listed.
- Common errors must be included in order to detect inappropriate student behaviors.
- The behaviors to be assessed must be arranged so that they can be understood and checked easily.
Checklists that are directly associated with instructional or developmental objectives should list sequential skills or behaviors. General behaviors, such as problem-solving skills, can also be listed in a checklist, in addition to more specific behaviors such as the concepts required for mathematical operations. The mathematical development or mathematical skills of elementary students can be identified through checklists; elementary teachers can then use the evidence they obtain to plan upcoming classes for better teaching. Checklists also give parents information about the development of their children. Systematically prepared checklists provide valuable information for understanding the mathematical development of students and for improving the curriculum. A separate checklist can be used for each student in mathematics classes; in this way, individual assessment can be accomplished more professionally.

Grading Scale
Grading scales are recording tools on which performance is placed on a scale according to certain development and learning criteria (Cunningham, 1998; Wortham, 2008). The difference between a grading scale and a checklist is that a grading scale can describe performance with the help of multiple grades (McAfee, Leong & Bodrova, 2015); it provides the opportunity to compare performance according to meaningful and ordered categories (Berry, 2008). Grading scales can help to determine the performance level of elementary students in mathematics. While doing so, teachers need to know well which performance they are assessing and what the grades on the scale stand for. Grading on the scale should be as objective as possible: when the performance of a student is assessed, only the performance in question should be considered, and the student’s performance in previous mathematics classes should not have any positive or negative influence on the evaluation.
The three most commonly used types of grading scales are numerical, graphic, and descriptive (Russell & Airasian, 2012). An example of a grading scale is presented in Figure 21.6.
Figure 21.6 Grading Scale Example
In numerical grading scales, a number is assigned to each performance behavior. These numbers express the degree to which the performance has been demonstrated, and the assessor chooses the corresponding number to indicate the performance degree. The biggest number on the scale expresses the maximum degree of performance, while the smallest number reflects the minimum.
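As an illustrative sketch only (the criterion names and the 1-4 scale below are hypothetical, not taken from the chapter or Figure 21.6), a numerical grading scale can be represented as a mapping from performance criteria to numbers, with a verbal label recovered from the scale:

```python
# Hypothetical numerical grading scale: 4 is the maximum degree of
# performance, 1 the minimum (labels are illustrative, not from the chapter).
SCALE = {1: "rarely", 2: "sometimes", 3: "usually", 4: "always"}

def summarize(ratings):
    """Average the criterion ratings and attach the nearest verbal label."""
    avg = sum(ratings.values()) / len(ratings)
    return avg, SCALE[round(avg)]

# Ratings a teacher might record for one student (criteria are made up).
ratings = {
    "explains the solution strategy": 3,
    "uses mathematical terms correctly": 4,
    "checks the reasonableness of the result": 2,
}
avg, label = summarize(ratings)
print(avg, label)  # 3.0 usually
```

Keeping such records per student and per date would also support the repeated use over time that the chapter recommends for checklists.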
Graphic grading scales are similar to numerical grading scales; unlike numerical scales, however, the performance is indicated along a line of performance degrees. Descriptive grading scales, on the other hand, include different statements describing the actual performance, and the assessor chooses the statement closest to the student’s performance.

Rubric
Rubrics are grading tools that list performance evaluation criteria (Berry, 2008). In a rubric, the performance is both identified through numbers and explained in terms of what those numbers mean; hence, the performance is summarized holistically. The criteria listed in a rubric for evaluating the mathematical performance of elementary students should be closely associated with the learning objectives of the mathematics course. At the same time, the criteria should be prepared so as to accommodate individual learning differences. A rubric example is presented in Figure 21.7.
Figure 21.7 Rubric Example
Explaining the expected performance degrees helps students figure out better what is expected from them. It also helps develop a common perception of what is valuable in the delivered performance, and it allows students to think about the quality of other performances in addition to their own. The descriptions in the rubric help elementary teachers to focus their mathematics teaching. In addition, rubrics provide evidence for explaining the mathematics performance of students to their parents.

Portfolio
A portfolio is a purposeful and holistic collection of student works selected on the basis of learning objectives (Butler & McMunn, 2006; Russell & Airasian, 2012). A portfolio contains multiple performance samples or student products. These are not random student works; they are specifically selected works tied to predetermined learning objectives. A portfolio prepared within the scope of an elementary mathematics course includes samples of the student’s mathematics homework, reports of mathematics projects, developed materials, mathematical drawings, various mathematical tasks reflecting the characteristics of the student and valuable to them, and so on. A portfolio example is presented in Figure 21.8.
Figure 21.8 Portfolio Example
The student works collected in the portfolio can help to determine whether the learning objectives have been achieved. In a portfolio, student performance is assessed from a long-term perspective, and the student performance provides a basis for assessment for both the students and the parents. The students can be encouraged to reflect on and compare their mathematical works under the guidance of the elementary teacher; thus, the students can see their mathematical development and progress by examining their own works. A portfolio can be an effective tool for students to recognize their mathematical achievement, since it assesses their ability to use mathematical knowledge and to perform on the basis of that knowledge. It provides the opportunity to identify the domains in which students are strong in mathematics and those in which they need support. Depending on the predetermined learning objectives, portfolio assessment can be used for both summative and formative purposes; accordingly, the purpose of the assessment, the relevant performance criteria, and the grading format should be explained clearly. Portfolio assessment focuses on both the learning process and the product. Since the portfolio includes tangible works, it helps to reveal the relationship between the student’s learning process and performance and makes this relationship easy to review; hence, students come to comprehend the importance of process and performance in learning mathematics. A portfolio also presents knowledge and evidence about the previous learning of students who move on to the next grade, which contributes to the elementary teacher’s planning of the new mathematics program. The performance of the students measured against the mathematics curriculum helps teachers draw conclusions about the suitability of the curriculum.

Interview
Interviews are assessment tools that allow students’ knowledge to be analyzed in depth through verbal interaction (Berry, 2008).
An interview provides rich information about students' opinions, mental strategies and misconceptions regarding a given concept (Van De Walle, Karp & Bay-Williams, 2009). Students might be given a mathematical task (problem) and asked to explain their solution process in order to understand what they know and how they think about elementary mathematics. At this point, the assigned task should be consistent with the learning objectives. What students know, how they think and interpret, and their deficiencies and errors can be revealed in this way. A couple of example questions that can be addressed to students in interviews are presented below.
Sedat TURGUT
Figure 21.9 Interview Question Example I
Figure 21.10 Interview Question Example II
Elementary Mathematics and the Most Used Alternative Assessment Techniques
During interviews, elementary teachers can ask students questions that help them notice and correct their mistakes. The questions should prompt students to give detailed explanations of their mental processes, including the mathematical concepts and relations involved. This information helps teachers support and improve students' opinions and strategies.

Self and Peer Assessment

Self-assessment means a student's evaluation of his or her own learning, while peer assessment refers to a student's assessment of the work of peers. In both, students make inferences about learning by monitoring and evaluating their own and their peers' learning; thus, the two assessments are closely related. Within the scope of elementary mathematics, self-assessment is an assessment method that depends on reviewing, reflecting on and reporting one's own learning (Fan, 2011). The student can think about mathematical learning and make relevant decisions for improving it; in other words, the student takes responsibility for his or her own learning. Under the right conditions and with the guidance of elementary teachers, students can also make suggestions for further learning by analyzing their own and their peers' mathematical learning. In addition, students' evaluation, decision-making and meta-cognitive skills can develop. What is important at this point is knowing which criteria will be followed when evaluating the product or performance in self or peer assessment. The influencing factors and any complicated or unclear points should be focused on when assessing a mathematical product or performance. Expectations of qualified mathematical products or performances can serve as tangible examples of assessment criteria. Multiple tools might be used for self or peer assessment. Self and peer assessment examples are presented in Figure 21.11 and Figure 21.12.
Figure 21.11 Self-Assessment Form Example
Figure 21.12 Peer Assessment Form Example

Self and peer assessment help students reflect on their mathematics learning process and their performance in it. Reviewing and assessing their own mathematical learning might increase students' learning motivation and performance; it might also positively affect their self-confidence and attitude towards mathematics. In peer assessment, students can interpret the quality of their own works or performances in relation to their peers' mathematical works. When students actively participate in assessment, their feedback on their own learning provides valuable information to teachers. It helps
students to figure out their strengths and weaknesses and the domains where they need help, and to gauge their current learning against the targeted learning. Based on this, teachers can reorganize their teaching plans.

Conclusion

Mathematics is one of the courses taught in elementary school. During mathematics courses, students establish cause-and-effect relationships, reason, and think systematically. Using these ways of thinking is one of the first steps of a long journey that will continue for years. With this starting point, the mathematics learnt at elementary school creates a basis for constructing further learning. Therefore, all students must be provided with the opportunities necessary for comprehending and learning mathematics through effective mathematics instruction. One of the important components of effective mathematics instruction is assessment, because assessment provides important information for making instruction-related decisions. It guides learning for the student and instruction for the teacher; thus, assessment is a significant part of education. In mathematics, assessment is the process of collecting evidence about the student's mathematical knowledge, ability to apply this knowledge, and overall disposition towards mathematics, and of using this evidence to improve instruction. In elementary mathematics, assessment should aim at determining students' deficiencies and planning instructional activities to remedy them; grading student achievement should remain in the background. An attentive assessment process will support students' learning and teachers' instruction, and assessment will thus serve to improve mathematics instruction. Traditional assessment methods and techniques are used to determine whether the student succeeds or fails, and for grading. This does not serve assessment as a tool to support the development of mathematics instruction.
On the other hand, alternative assessment methods create an extensive process that encourages students to apply mathematical knowledge and skills while offering teachers the opportunity to review their instruction. Elementary teachers can obtain extensive information about students' mathematical learning by combining multiple alternative assessment methods and techniques with mathematics instruction. Thus, continuous feedback is provided by determining the mathematical concepts students have trouble understanding, the mathematical skills they find challenging, and their
strengths in mathematics. Besides, supportive and developmental instructional activities can be planned by determining students' learning needs. Assessment will support the learning of mathematics and fully serve learning when elementary teachers internalize the meaning and function of assessment in mathematics.
References

Aydın, M., Baki, A., Yıldız, C., & Köğce, D. (2010). Mathematics teacher educators' beliefs about teacher role. World Conference on Educational Sciences 2010: Creativity and Innovation, 2(2), 5468-5473.

Berry, R. (2008). Assessment for learning. Hong Kong: Hong Kong University Press.

Butler, S., & McMunn, N. (2006). A teacher's guide to classroom assessment: Understanding and using assessment to improve student learning. San Francisco, CA: Jossey-Bass.

Chun, M. (2010). Taking teaching to (performance) task: Linking pedagogical and assessment practices. Change: The Magazine of Higher Learning, 42(2), 22-29.

Clements, D. H. (2001). Mathematics in the preschool. Teaching Children Mathematics, 7(5), 270-275.

Cunningham, G. K. (1998). Assessment in the classroom: Constructing and interpreting texts. London: The Falmer Press.

Fan, L. (2011). Implementing self-assessment to develop reflective teaching and learning in mathematics. In B. Kaur & W. K. Yoong (Eds.), Assessment in the mathematics classroom (pp. 275-290). Singapore: World Scientific Publishing.

Flevares, L. M., & Schiff, J. R. (2014). Learning mathematics in two dimensions: A review and look ahead at teaching and learning early childhood mathematics with children's literature. Frontiers in Psychology, 5, 1-12. https://doi.org/10.3389/fpsyg.2014.00459

Frye, D., Baroody, A. J., Burchinal, M., Carver, S. M., Jordan, N. C., & McDowell, J. (2013). Teaching math to young children: A practice guide (NCEE 2014-4005). Washington, DC: National Center for Education Evaluation and Regional Assistance (NCEE), Institute of Education Sciences, U.S. Department of Education. Retrieved January 26, 2019, from http://whatworks.ed.gov

Haylock, D., & Thangata, F. (2007). Key concepts in teaching primary mathematics. Thousand Oaks, CA: SAGE Publications.
Köğce, D., Aydın, M., & Yıldız, C. (2009). A comparison of attitudes between first- and fourth-year teacher candidates on the profession of teaching. First International Congress of Educational Research, Çanakkale Onsekiz Mart University, Çanakkale.

Ma, L., & Kessel, C. (2001). Knowledge of fundamental mathematics for teaching. In Mathematics Teacher Preparation Content Workshop Steering Committee, Knowing and learning mathematics for teaching (pp. 12-16). Washington, DC: National Academy Press.

McAfee, O., Leong, D. J., & Bodrova, E. (2015). Assessing and guiding young children's development and learning. USA: Pearson.

National Council of Teachers of Mathematics. (1995). Assessment standards for school mathematics. Reston, VA: Author.

National Council of Teachers of Mathematics. (2000). Principles and standards for school mathematics. Reston, VA: NCTM.

Russell, M. K., & Airasian, P. W. (2012). Classroom assessment: Concepts and application. New York: McGraw Hill.

Ürey, M., Çepni, S., Köğce, D., & Yıldız, C. (2013). Serbest etkinlik çalışmaları dersi kapsamında geliştirilen fen temelli ve disiplinlerarası okul bahçesi programının öğrencilerin bazı matematik kazanımları üzerine etkisinin değerlendirilmesi. Türk Fen Eğitimi Dergisi, 10, 37-58.

Van De Walle, J. A., Karp, K. S., & Bay-Williams, J. M. (2009). Elementary and middle school mathematics. US: Pearson.

Wiliam, D. (2007). Keeping learning on track: Classroom assessment and the regulation of learning. In F. K. Lester, Jr. (Ed.), Second handbook of research on mathematics teaching and learning (pp. 1053-1098). Charlotte, NC: Information Age Publishing.

Wortham, S. C. (2008). Assessment in early childhood education. NJ: Pearson Education.
Part V Issue Specific Evaluation
Hasan BAKIRCI
Chapter 22 Analysis of the Assessment and Evaluation Approach Based on the Common Knowledge Construction Model in Science Education Programmes Hasan BAKIRCI
Introduction

People measure and evaluate at every stage of their daily lives. Life conditions change day by day as technology develops, and people make an intense effort to find a job, which pushes them to evaluate the opportunities they have. For example, a person who wants to buy a house needs to calculate whether there is enough money and decide whether the house should be close to the workplace, the hospital or the school. Factors such as the neighbourhood of the house and its earthquake resistance also need to be taken into consideration. This decision-making process requires evaluation. Measurement and evaluation are likewise an indispensable component of the education process: they play an important role in determining students' learning outcomes and learning deficiencies and in identifying how students can learn better. In recent years, alternative measurement and evaluation approaches have become dominant in learning environments. This underlines the importance of instructional models and methods based on student-centred and process-oriented assessment. In Turkey, the evaluation system assesses both the product and the process of education programmes. Science education curricula have recently been revised considering the conditions of the era, research in the field and individual differences, and they have adopted alternative assessment and evaluation approaches (Ministry of National Education [MoNE], 2018). Recent research has shown that the common knowledge construction model (CKCM) can improve students' learning in science, chemistry and physics education (Bakırcı, Çalık & Çepni, 2017; Bakırcı & Ensari, 2018; Ürey & Çepni, 2015). These studies argued that the
process-based evaluation and assessment approach plays an important role in the extent to which CKCM is effective. Therefore, this chapter examines the assessment and evaluation approach of CKCM. It presents the importance of assessment and evaluation in education, an overview of assessment and evaluation in the science curriculum, the Common Knowledge Construction Model, the assessment and evaluation approach of this teaching model, and common alternative assessment and evaluation techniques used in CKCM-based learning environments.

The Importance of Assessment and Evaluation in Education

Education is defined as the process of producing desired changes in behaviour. This change in people's behaviour is realized through the education system, which consists of elements such as inputs, processes, outputs and feedback. Feedback is not only a component of the education system; it also constitutes evaluation in the learning environment. Assessment and evaluation make it possible to draw conclusions about students and identify their strengths and weaknesses in the learning process; they thereby also help identify problematic issues in the education system. Therefore, assessment and evaluation should be concerned with whether students learn the subject and, if not, why. This also helps increase the quality of education (Airasian, 1996; Binbaşıoğlu, 1983). Assessment and evaluation are elements of the curriculum that answer the question "How much have we taught?" and specify the measurement tools used in the learning environment. Different assessment and evaluation techniques are used in the evaluation of learning environments, since many variables affect learning. Consequently, the assessment and evaluation components of curricula are directly or indirectly affected, and curricula are updated at regular intervals. The science curriculum is an example of this.
Therefore, it is considered useful to examine the evaluation approach of the science course curricula introduced after the 2000s.

A General Overview of the Understanding of Assessment and Evaluation in the Science Education Curriculum

Scientific research and the rapid development of technology have led to revisions of curricula and education programmes. Adapting to these developments and technologies contributes to the development and effective learning of 21st-century skills (Günbatar & Bakırcı, 2018). Especially in recent years, the
inclusion of technology in learning environments, the development of curricula focusing on students' individual differences, and research on effective teaching have made it necessary to change the assessment and evaluation approach of curricula. Until 2005, assessment and evaluation in Turkey followed a traditional approach. The constructivist learning approach was introduced only in 2005, which also led to a student-centred and process-oriented assessment and evaluation approach in learning environments. This evaluation approach was included in the curricula as an alternative assessment and evaluation approach (Ministry of National Education, 2005). Many studies were carried out after the introduction of the constructivist learning programme, analysing the assessment and evaluation approach through the views of teachers, principals and students. Although teachers and principals found the theoretical evaluation framework of the curriculum robust and well-formulated, they expressed concerns such as the difficulty of implementation due to crowded classrooms, the traditional approach employed in teacher training, and the lack of sufficient instructional materials (Doğan, 2010). Research on the constructivist learning programme argued that the education programmes introduced in 2005, although theoretically comprehensive, were not sufficient to equip students with 21st-century skills. It therefore highlighted the need to revise the education programmes, which led to a number of changes in 2013 (Ayas et al., 2015). The education programmes, including the science curricula, were revised in 2013, and their assessment and evaluation approach adopted a complementary assessment and evaluation understanding (MoNE, 2013). The evaluation of students' learning needs to be based on the process, and teachers should therefore implement this evaluation.
This programme includes socio-scientific topics and the nature of science under the section of Science-Technology-Society and Environment (STSE). New teaching models, methods and techniques are therefore necessary. In recent years, one of the teaching models used in science, physics and chemistry teaching is the Common Knowledge Construction Model (CKCM) (Bakırcı, Çalık & Çepni, 2017). This teaching model places a distinct emphasis on socio-scientific issues, the nature of science, STEM and, especially, process-oriented evaluation, and it draws attention to the evaluation of these issues (Bakırcı et al., 2018).
STEM education was integrated into the science education teaching programmes in 2017. The integration of STEM into science education and training was influenced by factors such as the goal of raising qualified individuals and encouraging female students to pursue engineering (Akgündüz et al., 2015). This revision of the curriculum also directly affected the understanding of assessment, because every point emphasized in the curriculum was reflected in the evaluation. In the assessment and evaluation approach of the science education programme, products and processes are evaluated together; in other words, both the learning product and the student's performance are evaluated (Ministry of National Education, 2018). Besides, in the evaluation approach of the CKCM, traditional assessment methods (gap-filling, multiple-choice, true-false and matching questions, etc.) that require a single correct answer are not considered effective evaluation practices for probing conceptual understanding (Ebenezer et al., 2010). In other words, CKCM has a more process-oriented and performance-based assessment and evaluation approach. Therefore, the use of complementary measurement and evaluation methods, in addition to the traditional assessment methods, is of great importance for the evaluation stage of the CKCM. From this point of view, it can be said that the assessment and evaluation approaches of the science education curriculum and CKCM align well, so teachers' use of this model in the course will serve the purpose of the programme. The next section presents CKCM and explains the model's understanding of assessment and evaluation.

Common Knowledge Construction Model

The Common Knowledge Construction Model is a teaching model that synthesizes many learning theories (Ebenezer & Connor, 1998). The purpose of this teaching model is the structuring of knowledge and the realization of conceptual change (Ebenezer & Haggerty, 1999).
The model is theoretically based on Marton's variation theory of learning and Piaget's studies of conceptual change (Ebenezer et al., 2010). It also draws on Bruner's view of language as part of the symbolic system of culture, Vygotsky's zone of proximal development transmitted within the social environment, and Doll's post-modern ideas on scientific discourse and curriculum development (Biernacka, 2006). The model lies at the intersection of Piaget's conceptual change theory and phenomenology and is composed of four phases: exploring and categorizing, constructing and negotiating, extending and translating, and reflecting and assessing. The schematic structure of CKCM is given in Figure 22.1.
Figure 22.1 The Common Knowledge Construction Model (Biernacka, 2006; Ebenezer et al., 2010)

The first three stages of CKCM are briefly presented below, and the evaluation phase is explained in detail.

Exploring and Categorizing

At this stage, activities are conducted that aim to reveal students' prior knowledge and attract their attention. Students' existing knowledge is uncovered, and students classify this knowledge; they also become aware of the nature of science. Activities to create phenomenological categories, discover alternative conceptions and increase students' awareness take place at this stage.

Constructing and Negotiating

Students realize that knowledge can be produced not only through scientific methods such as experimentation, observation and proof, but also through social processes such as debating, sharing and discussing (Ebenezer & Connor, 1998). The role of the teacher at this stage is to raise students' performance to the maximum level they can reach (Vygotsky, 1987). Discourse is carried out under the guidance of the teacher to construct knowledge socially
(Duschl & Osborne, 2002). This helps students gain the ability to understand others' ideas and develop social skills such as empathy. Through discourse, students learn that scientists interact and discuss with other scientists to advance their knowledge (Biernacka, 2006; Kilinc, Demiral & Kartal, 2017). The structuring of knowledge in the mind takes place through teacher-student and student-student interaction. In the first two stages, students try to structure the knowledge (Brown & Ryoo, 2008).

Extending and Translating

At this stage of CKCM, students are provided with opportunities to apply the scientific ideas developed in the second stage to problems in socio-scientific issues (Bakırcı, 2014). Students can transfer their understanding of science to other contexts such as technology, society and the environment. Therefore, Science-Technology-Society and Environment (STSE) relations are established with the concepts addressed at this stage. Understanding these relationships is necessary for scientific literacy (Hodson, 2003). The aim of the STSE cycle in science education is to give students the awareness of "taking social responsibility during joint decision-making on science and technology related issues" (Biernacka, 2006). It is important to include STSE in the teaching process to discuss socio-scientific issues such as the ozone layer, global warming, forest degradation, and soil, air and water pollution (Biernacka, 2006; Çalık & Coll, 2012; Hodson, 2003).

Reflecting and Assessing

One of the features that distinguishes CKCM from other teaching models is that the evaluation of students' learning depends on a process-based evaluation. This means that the model rejects traditional assessment and evaluation, because traditional assessment techniques are insufficient to measure students' conceptual knowledge and conceptual change. Traditional methods focus on correct or incorrect answers, knowledge, and results.
However, complementary assessment and evaluation methods also evaluate the process together with the product. In the process of conceptual change, evaluation investigates not only what the students learn, but also how they learn, how they discover knowledge, and how they construct the knowledge. This evaluation also measures scientific research skills, behaviours, attitudes, beliefs and social skills of students as well as their scientific knowledge (Biernacka, 2006).
CKCM's assessment and evaluation approach plays an important role in students' discovery and classification of concepts and in the structuring and negotiation of knowledge. In addition, it is very important for students to explain their personal opinions about scientific and socio-scientific issues along with their justifications (Ebenezer et al., 2010). Traditional assessment methods (gap-filling, multiple-choice, true-false and matching questions, etc.) that require a single correct answer are not considered effective assessment practices for probing conceptual understanding (Ebenezer et al., 2010). In probing conceptual understanding, evaluation should measure how students discover, disclose, revise or reject concepts, and how effective the learning has been in producing conceptual understanding. The evaluation should also address the concepts to be explored in the future and how students designed, conducted and evaluated scientific and socio-scientific research (Ebenezer et al., 2010). For this reason, complementary assessment and evaluation methods are used in CKCM instead of traditional evaluation methods. At the evaluation stage of the CKCM, the teacher determines how the teaching process will be guided by four main questions and reflects on the process. These questions are: i) What do my students know? ii) What do I want my students to learn? iii) How can I help them to learn? iv) What have my students learned? (Barba, 1998). With these questions, the teacher measures content knowledge, scientific research skills, behaviours, attitudes, beliefs and social skills (Biernacka, 2006; Demiral & Turkmenoglu, 2018). The assessment and evaluation approach reflects the way students reach knowledge, how effective the teaching is in producing conceptual change, and students' conceptual understanding of the knowledge (Bakırcı, 2014). The first question is "What do my students know?".
In a CKCM-based learning environment, the purpose of this question is to determine students' prior knowledge and readiness level and to link newly learned knowledge with existing knowledge. This constitutes the basic philosophy of many learning theories and forms the basis of CKCM, which is a synthesis of many learning theories (Bakırcı, 2014). In this context, it can be said that the teaching model is influenced by Ausubel's learning theory, which is based on the statement that "the most important factor affecting learning is the student's prior and existing knowledge; instruction should be planned after uncovering it." Hence, students' prior knowledge is important. Class
discussion and concept maps can be used to reveal students' prior knowledge in CKCM (Driver, 1990). Often, even students' diaries can reveal their prior knowledge. The second question is "What do I want my students to learn?". The aim of this question is to realize the goals of the curriculum, the school and national education. These goals are mostly realized through scientific discourse between student and student and between teacher and student in the second stage of the teaching model, "Constructing and Negotiating". At this stage of the model, students' attitudes towards scientific studies are evaluated. Students realize that there are many different explanations of the same phenomenon, and the teacher can assess whether the students are ready to accept these different ideas or opinions. In other words, the teacher gets a chance to assess whether students in the classroom are open-minded, free of prejudice, honest, and ready to respect others' ideas. For example, students can be asked how scientists conduct research. The aim is to enable students to understand what is necessary to obtain scientific knowledge. They become aware that scientific research explains the natural world. In this process, the teacher can see how students do research, gather data, and analyse and interpret the data; thus, the teacher assesses students' scientific research skills. The third question is "How do I help my students to learn?". The aim is to identify the instructional model, strategy, method and technique that the teacher will use in the learning environment. The teacher should select methods and techniques considering students' individual differences, contemporary teaching approaches, the nature of the topic, and students' active participation in learning.
These teaching methods ensure that students learn the subject permanently and transfer the knowledge to daily life (Shulman, 1987). In summary, it is necessary to choose specific teaching approaches so that students learn the subject effectively and to provide the conditions necessary for meaningful learning (Zembal-Saul, Krajcik, & Blumenfeld, 2002). At this stage, the teacher looks at whether students are transferring their knowledge to daily life and integrating what they have learned with technology. For example, brainstorming and discussion techniques are used to reveal students' views on socio-scientific issues. The aim is to evaluate whether students can explain their opinions on scientific grounds and whether they respect others' opinions. In addition, students are expected to present their suggestions on these issues (Ebenezer & Connor, 1998). Therefore, in this
process, the teacher evaluates students' ability to make decisions, to implement these decisions, and to use their leadership skills. The fourth question is "What have my students learned?". Different alternative assessment and evaluation techniques are used to reveal students' learning, such as portfolios, concept maps, diagnostic branched trees, structured grids and word association tests. In addition, performance and project assignments are used to measure students' practical skills. These assessment techniques are effective because they are student-centred and connected to daily life. They also measure high-level mental and thinking skills and allow self-assessment. In short, they make it possible to evaluate cognitive, affective and psychomotor skills (Bakırcı, Artun & Şenel, 2016). Some of the alternative assessment techniques used in the evaluation of students in CKCM-based science teaching are briefly explained below.

Portfolio

Portfolios are systematically compiled development files that reflect students' knowledge, skills, abilities, weaknesses and strengths over a period of time. In other words, a portfolio is a file that contains the student's assignments, project results, reports, and any other written work and materials associated with learning. It is a collection that reflects the student's original work and process of development. The portfolio includes evaluation criteria so that students understand how they are evaluated and what value their work has (MoNE, 2005). Gilman, Andrew & Cathleen (1998) explained the benefits of using portfolios in a learning environment as follows:
- Students feel that they are part of the evaluation process.
- Teachers evaluate both the product and the process.
- The evaluation is not limited to a single score.
- Portfolios give more information about students' development.
- They develop skills needed for lifelong learning.
- They integrate learning and evaluation.
Portfolios improve students' high-level cognitive skills, instil research habits, help the product and process to be evaluated together, provide more feedback on student development, allow learning and evaluation to support each other, and provide information about the student in the following years (Erdemir & Bakırcı, 2010). Therefore, science education based on CKCM uses portfolios, because they increase students' academic success and conceptual understanding.
Analysis of Assessment and Evaluation Approach based in Common Knowledge Construction Model in Science Education Programmes
Academic success and conceptual understanding are important skills that a student gains in a learning environment. Performance Assignments Performance assignments are short-term studies that enable students to associate their knowledge and skills with daily life. These assignments help students to use their knowledge and skills in their everyday lives. Teachers have a big role in these assignments; perhaps their most important roles are to track student progress and to provide good guidance. Performance assignments cannot be effective if teachers do not fulfil their tasks and parents are not aware of them (Deveci, Önder & Çepni, 2013). The literature shows that these assignments create disadvantages when teachers do not provide enough guidance, when parents do the assignments on behalf of their children, when students plagiarise their work using the internet, or when the assignments are costly (Kutlu, Doğan & Karakaya, 2017). Teachers should consider these points to make an effective assessment. Performance assignments have many benefits for students:
• Students gain independent research skills.
• Students develop their sense of responsibility.
• Students better understand what they read.
• Students develop their problem-solving and creativity skills.
• Parents pay more attention to their children and get involved in the learning process.
• Students gain self-confidence.
Projects A project is an evaluation type that focuses on the detailed analysis of a topic. It is a comprehensive evaluation that aims to activate students' high-level skills. Projects can be conducted individually or in groups. The project topic is identified in collaboration between teachers and students and should be related to daily life. After determining the topic of the project, students identify its purpose, the paths to be followed, and the tools and equipment required. They can get help from the teacher when they have difficulty in this process. Teachers often assign projects in science classes, since the nature of science courses is suitable for project assignments and such assignments allow process-oriented evaluation (Bakırcı, Çalık & Çepni, 2017). The use of process-based project assignments in the evaluation phase of CKCM-based science teaching can facilitate students' learning and enable them to learn
permanently and effectively. The benefits of the project assignments used in education are as follows:
• They develop students' high-level skills such as research, creativity and communication.
• They help students socialise.
• They contribute to the transfer of the learned knowledge to daily life.
• They encourage students to use technology.
• They improve students' problem-solving skills.
Diagnostic Branched Tree The diagnostic branched tree is an assessment and evaluation technique that aims to identify whether students have learned the topic and to uncover students' misconceptions about it (Çepni & Çil, 2009). This technique is concerned with the key concepts of the subject and students' misconceptions. While creating this measurement tool, one should start with general statements and then move to more detailed statements (Şahin & Çepni, 2011). This measurement tool differs from true/false tests: in a true/false test, each question is evaluated separately and is independent of the other questions, whereas in the diagnostic branched tree the statements are interconnected. In this technique, no student gets zero points, which also distinguishes this measurement tool from others. While the student who answers all the statements correctly receives full points, the student who gives some wrong answers still gets some points, which is one of the advantages of this test. It also gives the students an opportunity to correct their mistakes. These advantages make the diagnostic branched tree desirable as an assessment tool. The benefits of this measurement tool in the learning environment are given below (Şahin & Çepni, 2011).
• It enables the cognitive structuring of knowledge.
• It reveals the topics that students fail to learn.
• It provides instant feedback on students' misconceptions.
• Students get points for what they have learned.
Structured Grid The structured grid is one of the evaluation tools commonly used in learning environments to find out whether students have learned the subject. It is mostly used to determine students' level of knowledge, deficiencies, misinformation and misconceptions (Egan, 1972). This measurement tool resembles a multiple-choice test in that students choose the correct answers. However, the structured grid differs from multiple-choice tests in the following ways:
• While multiple-choice tests have one correct answer, structured grids have multiple correct answers.
• Since there is only one correct answer in a multiple-choice test, the student gets either full score or zero. In grids, the student gets points for each box s/he has answered correctly.
• While multiple-choice tests may have negative questions, structured grids do not have any negative questions.
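One scoring scheme reported in the structured-grid literature gives credit for correctly selected boxes, a penalty for incorrectly selected ones, and then rescales the result to a positive range. The Python sketch below implements that scheme; the 9-box grid, the answer key and the 10-point scale are illustrative assumptions, not values from this chapter.

```python
def grid_score(correct_boxes, selected_boxes, total_boxes, max_points=10):
    """Score one structured-grid question.

    raw = C1/C2 - C3/C4, where
      C1 = correct boxes the student selected,   C2 = all correct boxes,
      C3 = incorrect boxes the student selected, C4 = all incorrect boxes.
    raw ranges from -1 to +1 and is rescaled to 0..max_points, so a
    student who selects some correct boxes never ends up with nothing.
    """
    c1 = len(selected_boxes & correct_boxes)
    c2 = len(correct_boxes)
    c3 = len(selected_boxes - correct_boxes)
    c4 = total_boxes - c2
    raw = c1 / c2 - c3 / c4
    return (raw + 1) * max_points / 2

correct = {1, 4, 7}                       # hypothetical answer key for one question
print(grid_score(correct, {1, 4, 7}, 9))  # perfect selection -> 10.0
print(grid_score(correct, {2, 3}, 9))     # only wrong boxes selected -> low score
```

This mirrors the contrast drawn above: instead of the all-or-nothing scoring of a multiple-choice item, each correctly chosen box contributes to the score and each wrongly chosen box subtracts from it.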
Structured grids can measure how students' knowledge of the content is structured, which multiple-choice tests cannot (Egan, 1972). In addition, the structured grid is an alternative assessment and evaluation technique used to address misconceptions in CKCM-based teaching (Bakırcı, 2014). Word Association Tests The Word Association Test (WAT) is one of the techniques used to identify cognitive structure, conceptual change and misconceptions (Bahar, Johnstone & Sutcliffe, 1999). It is one of the most common and oldest techniques used to analyse the cognitive structure of students and the interconnections between the concepts in this structure, and to determine whether the relationships between the concepts in long-term memory are sufficient (Johnstone & Moynihan, 1985; Kempa & Nicholls, 1983). The biggest advantage of this technique is that it is easy to prepare and can be applied to many students at the same time (Özatlı & Bahar, 2010). WAT is an instructional material, but it can also be used as an assessment and evaluation tool. Key concepts related to the topic should be identified to prepare a WAT (Özatlı & Bahar, 2010). Thus, it can be determined whether the concepts in the cognitive structure of students, and the relations between these concepts, are sufficient and meaningful. Some of the advantages of WATs are being easy to prepare and implement, revealing and mapping the relationships between the concepts in the cognitive structure of the students, determining misconceptions and promoting conceptual change. In addition, WATs are used to analyse the relationship of conceptual associations with problem-solving ability and achievement (Gussarsky & Gorodetsky, 1990; Johnstone & Moynihan, 1985; Kempa & Nicholls, 1983). WATs are used in the first stage of the CKCM to explore the
ideas of the students on the topic, and in the evaluation stage of the model to find out whether the students have learned the topic. Concept Maps Concept maps are two-dimensional diagrams that enhance inductive and deductive thinking and make it easy for students to see, learn and remember the relationships between the concepts of any subject. Concept maps serve functions such as interpretation, concretization, classification and visualization (Novak & Gowin, 1984). An important feature is that they are appropriate for every teaching level. The use of concept maps in the learning environment has been found effective for students' conceptual understanding (Evrekli, İnel & Balım, 2012) and has many benefits:
They facilitate learning of key concepts related to the content.
They contribute to the learning of the relationships between key concepts and sub-concepts.
They provide the opportunity to understand the basic concepts.
They make learning more meaningful and permanent.
They help to establish meaningful relationships between learned knowledge and new knowledge.
They are used effectively in revealing misconceptions and in structuring students' knowledge.
CKCM is a teaching model that focuses on students' conceptual understanding and academic achievement (Kıryak, 2013). The model is effective in determining and eliminating misconceptions and in associating the new subject with previous topics. In addition, the model focuses on the relationships between concepts and on teaching the socio-scientific nature of science. Concept maps, which are process-oriented evaluation techniques, are used to teach the content effectively. All the stages of CKCM provide teachers with many opportunities to evaluate the scientific skills, behaviours, attitudes, beliefs and social skills of the students. As can be seen, students are evaluated by teachers in each stage of CKCM. Assessment in the CKCM-based learning environment is thus not confined to the final Reflection and Assessment stage but spans all stages. The reflection and evaluation phase of the CKCM also allows teachers to evaluate the
teaching of the nature of science and to reflect on their own practices. At the same time, it provides students with the opportunity to reflect on the knowledge they have learned. Conclusion In the learning environment based on the Common Knowledge Construction Model (CKCM), complementary measurement and evaluation techniques should be used in the evaluation of students, whereas traditional measurement and assessment techniques should not. Traditional techniques are insufficient to measure students' conceptual understanding and conceptual change, and they limit students' ability to acquire 21st century skills and high-level thinking skills (Ebenezer & Connor, 1998). Complementary measurement and evaluation techniques have advantages such as measuring deeper and more meaningful learning, revealing how students have structured knowledge, and informing students about how conceptual change takes place. Due to these advantages, complementary measurement and evaluation techniques are used in the evaluation of students in learning environments based on the CKCM (Bakırcı, 2014; Biernacka, 2006). In the understanding and evaluation approach of the CKCM, it is important how and what the student learns in the process of conceptual change. In this process, it is important to see how students structured and correlated information in their minds as they explored and learned. According to the CKCM, this behaviour change in students can only be revealed through complementary assessment and evaluation techniques.
References Airasian, P. W. (1996). Assessment in the classroom. Boston: McGraw-Hill. Akgündüz, D., Aydeniz, M., Çakmakçı, G., Çavaş, B., Çorlu, M. S., Öner, T., & Özdemir, S. (2015). A report on STEM Education in Turkey: A provisional agenda or a necessity? İstanbul, Turkey: Aydin University. Ayas, A., Çepni, S., Akdeniz, A. R., Özmen, H., Yiğit, N., & Ayvacı, H. Ş. (2015). Teaching science and technology from theory to practice. Ankara: Pegem Academy. Bahar, M., Johnstone, A. H., & Sutcliffe, R. G. (1999). Investigation of students’ cognitive structure in elementary genetics through word association tests. Journal of Biological Education, 33, 134-141. Bakırcı, H., & Ensari, Ö. (2018). The effect of common knowledge construction model on academic achievement and conceptual understandings of high school students on heat and temperature. Education & Science, 43(196), 171-188. Bakırcı, H., Çalık, M., & Çepni, S. (2017). The effect of the common knowledge construction model-oriented education on sixth grade pupils’ views on the nature of science. Journal of Baltic Science Education, 16(1), 43-55. Bakırcı, H., Artun, H., Şahin, S., & Sağdıç, M. (2018). Investigation of opinions of seventh grade students about socio-scientific issues by means of science teaching based on common knowledge construction model. Journal of Qualitative Research in Education, 6(2), 207-237. Bakırcı, H. (2014). The study on evaluation of designing, implementing, and investigating the effects of teaching material based on common knowledge construction model: Light and voice unit sample (Unpublished doctoral dissertation). Karadeniz Technical University, Institute of Educational Sciences, Trabzon. Bakırcı, H., Artun, H., & Şenel, S. (2016). The effect of common knowledge construction model-based science teaching on the seventh-grade students’ conceptual understanding-let’s learn celestial bodies. Yuzuncu Yıl University Journal of Education Faculty, 13(1), 514-543.
Barba, R. H. (1998). Science in the multicultural classroom: A guide to teaching and learning. Needham Heights, MA: Allyn and Bacon. Biernacka, B. (2006). Developing scientific literacy of grade five students: A teacher researcher collaborative effort (Unpublished doctoral dissertation). University of Manitoba. Binbaşıoğlu, C. (1983). Measurement and evaluation in education. Ankara: Ani Publishing. Brown, B., & Ryoo, K. (2008). Teaching science as a language: A ‘‘content-first’’ approach to science teaching. Journal of Research in Science Teaching, 45, 525–664. Çalık, M., & Coll, R. K. (2012). Investigating socio-scientific issues via scientific habits of mind: Development and validation of the scientific habits of mind survey. International Journal of Science Education, 34(12), 1909-1930. Demiral, U., & Turkmenoglu, H. (2018). The relationship of preservice science teachers’ decision-making strategies and content knowledge in socioscientific issues. Journal of Uludag University Faculty of Education, 31(1), 309-340. Deveci, İ., Önder, İ., & Çepni, S. (2013). Parents’ views regarding homework given in science courses. Journal of Baltic Science Education, 12(4), 497-508. Doğan, Y. (2010). The problems encountered during the implementation of science and technology curriculum. Yuzuncu Yıl University, Journal of Education Faculty, 7(1), 86-106. Duschl, R., & Osborne, J. (2002). Supporting and promoting argumentation discourse in science education. Studies in Science Education, 38, 39-72. Ebenezer, J. V., & Connor, S. (1998). Learning to teach science: A model for the 21st century. Upper Saddle River, NJ: Prentice-Hall, Simon and Schuster/A Viacom Company.
Ebenezer, J., Chacko, S., Kaya, O. N., Koya, S. K., & Ebenezer, D. L. (2010). The effects of common knowledge construction model sequence of lessons on science achievement and relational conceptual change. Journal of Research in Science Teaching, 47(1), 25-46. Ebenezer, J. V., & Haggerty, S. (1999). Becoming secondary school science teachers: Preservice teachers as researchers. Upper Saddle River, New Jersey: Prentice-Hall, Inc., Simon and Schuster/A Viacom Company. Egan, K. (1972). Structural communication: A new contribution to pedagogy. Programmed Learning and Educational Technology, 1, 63-78. Erdemir, N., & Bakırcı, H. (2010). A study on validity and reliability of the portfolio’s scale of applicability in different settlements. Electronic Journal of Social Sciences, 9(32), 75-91. Evrekli, E., İnel, D., & Balım, A. G. (2012). The effects of the use of concept and mind map on students’ conceptual understandings and attitudes toward science and technology. Journal of Education Faculty of Abant İzzet Baysal University, 12(1), 229-250. Gilman, D. A., Andrew, R. R., & Cathleen, D. (1995). Making assessment a meaningful part of instruction. NASSP Bulletin, 79(573), 20-24. Gussarsky, E., & Gorodetsky, M. (1990). On the concept “chemical equilibrium”: The associative framework. Journal of Research in Science Teaching, 27(3), 197-204. Günbatar, M. S., & Bakırcı, H. (2019). STEM teaching intention and computational thinking skills of pre-service teachers. Education and Information Technologies, 9(2), 1-15. Hodson, D. (2003). Time for action: Science education for an alternative future. International Journal of Science Education, 25(6), 645-670. Johnstone, A. H., & Moynihan, T. F. (1985). The relationship between performances in word association tests and achievement in chemistry. European Journal of Science Education, 7(1), 57-66. Kıryak, Z. (2013).
The effect of common knowledge construction model on grade 7 students' conceptual understanding of ‘water pollution’ subject (Unpublished master’s thesis). Karadeniz Technical University, Institute of Educational Sciences, Trabzon.
Kempa, R. F., & Nicholls, C. E. (1983). Problem solving ability and cognitive structure: An exploratory investigation. European Journal of Science Education, 5(2), 171-184. Kilinc, A., Demiral, U., & Kartal, T. (2017). Resistance to change from monologic through dialogic socioscientific discourse: The impact of an argumentation-based workshop, first teaching practicum and beginning year experiences on a preservice science teacher. Journal of Research in Science Teaching, 54(6), 764-789. Kıryak, Z., & Çalık, M. (2017). Improving grade 7 students’ conceptual understanding of water pollution via common knowledge construction model. International Journal of Science and Mathematics Education, 16(6), 1025-1046. Kutlu, Ö., Doğan, C. D., & Karakaya, İ. (2017). Measurement and evaluation: Performance- and portfolio-based assessment. Ankara: Pegem Akademi Publishing. Ministry of National Education (2018). Science courses curriculum (elementary school and middle school 3, 4, 5, 6, 7 and 8 grades). Ankara: State Books Printing House. Ministry of National Education (2017). Science courses curriculum (3, 4, 5, 6, 7, and 8th grade) presentation. Retrieved from https://tegm.meb.gov.tr/ meb_iys_dosyalar. Ministry of National Education (2005). Elementary school’s science and technology courses (6, 7 and 8th grades) curriculum. Ankara: State Books Publishing House. Ministry of National Education (2013). Curriculum of science courses of primary education institutions (3, 4, 5, 6, 7 and 8th grades). Ankara: State Books Publishing House. Novak, J. D., & Gowin, D. B. (1984). Learning how to learn. New York: Cambridge University Press. Özatlı, N. S., & Bahar, M. (2010). Revealing students’ cognitive structures regarding excretory system by new techniques. Journal of Education Faculty of Abant İzzet Baysal University, 10(2), 9-26.
Shulman, L. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1-23. Şahin, Ç., & Çepni, S. (2010). Developing of the concept cartoon, animation and diagnostic branched tree supported conceptual change text: “Gas pressure”. Eurasian Journal of Physics and Chemistry Education, 1(1), 25-33. Ürey, M., & Çepni, S. (2015). Evaluation of the effect of science-based and interdisciplinary school garden program on some science and technology course from different variables. Hacettepe University Journal of Education, 30(2), 166-184. Vygotsky, L. (1987). The collected works of Vygotsky: Problems of general psychology, including the volume thinking and speech (R. W. Rieber & A. S. Carton, Eds.). New York: Plenum Press. Zembal‐Saul, C., Krajcik, J., & Blumenfeld, P. (2002). Elementary student teachers' science content representations. Journal of Research in Science Teaching, 39(6), 443-463.
Ümit DEMİRAL
Chapter 23 Assessment and Evaluation of Socioscientific Issues Ümit DEMİRAL
Introduction One of the aims of contemporary education systems is to integrate scientific knowledge into students' worlds by encouraging them to participate in the discussion and evaluation of socio-scientific issues, which helps them understand the events and problems they face every day and increases their interest in scientific activities (Feierabend & Eilks, 2010). Within this context, when the national science curriculum is examined, it states that teachers should use original and creative assessment and evaluation processes to ensure the effectiveness of assessment and evaluation practices (MoNE, 2018; Deveci, Konuş & Aydız, 2018). In this chapter, some assessment and evaluation tools which are frequently used in the literature are presented to give teachers ideas for classroom activities. The aim of this chapter is to examine the assessment and evaluation methods related to the current and controversial socio-scientific issues in the national science curriculum. For this purpose, a literature review was first conducted, and the psychometric factors related to socio-scientific issues were identified. Then, the assessment tools used to measure these psychometric factors were determined. Finally, it was examined how evaluations are made with the assessment tools used in these studies. As a result, it was found that socio-scientific issues are closely related to psychometric factors such as content knowledge, attitude, self-efficacy belief, epistemological belief, and belief in the nature of science. It was determined that content knowledge tests, Likert-type scales, and a wide range of prepared scenarios addressing content knowledge, dilemmas, critical thinking, risk, decision-making, and economic and social aspects are used to measure these factors.
It was also determined that basic statistical methods are used in evaluating the Likert-type scales, while some models, and the rubrics obtained by adapting these
models to the field, are used in the evaluation of scenarios. At the end of the chapter, examples are given of the assessment and evaluation methods frequently used in the context of socio-scientific issues in the field of science education. This chapter aims to provide teachers who implement the curriculum in schools, as well as teacher training programs, with important information on how to assess and evaluate current and controversial issues, and to make important contributions to the field of science education. Current Situation in Assessment and Evaluation The Programme for International Student Assessment (PISA), coordinated by the Organisation for Economic Co-operation and Development (OECD), represents a new assessment and evaluation approach compared with other national and international initiatives for assessing student progress in science education. One of the remarkable features of PISA is its conception of science literacy. Science literacy in many countries' science programs emphasizes the use of science concepts in real-life contexts. PISA is designed to assess students': 1) qualities such as being prepared for future challenges, 2) analysis, reasoning and effective communication, and 3) capacity to continue lifelong learning. It is stated that the assessment and evaluation offered by the OECD (2007) in PISA differs from the traditional style. Unlike many traditional assessments, student performance in science education is not limited in PISA to success in specific science content. PISA also measures students' ability to define scientific problems, to explain and perceive phenomena scientifically, to interpret, decide on and resolve the problems they face in life situations involving science and technology, and to use scientific evidence in these stages.
In the PISA exam, three process-oriented competencies stand out: (1) identifying scientific issues, (2) explaining phenomena scientifically, and (3) using scientific evidence. Exam questions focus not only on scientific phenomena but also on the importance of students' practices in everyday experiences, and give priority to complex processes such as interpretation, decision making, problem solving and argumentation (Sadler & Zeidler, 2009; Ürey, Çepni & Kaymakçı, 2015). In the OECD (2007) report, it was emphasized that students should be able to solve problems that do not have clear solutions, rather than merely remember scientific information, and to communicate complex scientific ideas in a clear and
convincing way, so that students can fully adapt to the global economic system. The science competencies measured in the PISA exam are shown in Table 23.1.

Table 23.1 PISA exam science competencies

Scientific explanation of the cases:
• Remembering and applying appropriate scientific knowledge
• Defining, using and designing explanatory models
• Making appropriate estimates and verifying them
• Offering explanatory hypotheses
• Explaining the potential implications of scientific knowledge for society

Scientific interpretation of the data and findings:
• Converting data from one display to another
• Analysing and interpreting data and drawing appropriate conclusions
• Defining the assumptions, findings and reasoning in science-related texts
• Distinguishing scientific findings and arguments based on theory from arguments based on other opinions
• Evaluating scientific arguments and findings in different sources (e.g. newspaper, internet, magazines)

Designing and evaluating the scientific inquiry method:
• Distinguishing the research question in a specific scientific study
• Distinguishing the questions that can be researched scientifically
• Proposing a way to scientifically investigate a specific question
• Evaluating the ways of researching a specific question scientifically
• Expressing and evaluating how scientists ensure the reliability of data and the objectivity and generalizability of explanations
The evaluation criteria for science literacy in the PISA tests are given in Figure 23.1 below.
Figure 23.1 General characteristics of science literacy evaluation framework

Turkey has participated in the PISA exam since 2000. The results of the exam, which was last applied in 2015, were announced in 2018. Turkey's science literacy scores in the 2006, 2009, 2012, and 2015 exams are given in Table 23.2.

Table 23.2 Average scores of science literacy over the years

                                      2006   2009   2012   2015
The average of OECD                    498    495    501    493
The average of all countries           478    471    477    465
The average of Turkey                  424    454    463    425
Ranking                                 47     42     43     54
Number of participating countries       57     65     65     72
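As a quick check on the trend, the gap between the OECD average and Turkey's average can be computed directly from the scores in Table 23.2 (values transcribed from the table):

```python
# Science literacy scores transcribed from Table 23.2
oecd   = {2006: 498, 2009: 495, 2012: 501, 2015: 493}
turkey = {2006: 424, 2009: 454, 2012: 463, 2015: 425}

# Points by which Turkey trails the OECD average in each cycle
gap = {year: oecd[year] - turkey[year] for year in oecd}
print(gap)  # {2006: 74, 2009: 41, 2012: 38, 2015: 68}
```

The gap narrows through 2012 and widens again in 2015, which is consistent with the discussion that follows.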
As shown in Table 23.2, Turkey's average science literacy score is below both the average of all countries and the OECD average. Failure in international examinations such as PISA has led to changes in science education programs (Çepni & Ormancı, 2016). The approach of the science education program in Turkey changed in 2015: science process skills, life skills, skills in areas such as engineering and design, current issues such as socio-scientific issues, and innovations in classroom practice such as argumentation were included in the program, while Turkey continued to score below the OECD countries. Investigating the problems underlying the PISA results, researchers have determined that factors such as students' social environment, technological support, reading habits and attitudes towards science affect PISA scores (Erenler, Eryılmaz Toksoy & Reisoğlu, 2017; Kartal, Doğan & Yıldırım, 2017; Ormancı, 2016). Anagun (2011) emphasized that exam-centred education in Turkey inevitably pushed students towards an education system based on rote learning. SSI and PISA The scope of science literacy was expanded with the revised science curriculum in Turkey in 2013. One of the main aims of the Science Curriculum, which seeks to educate all individuals as science literate, is to develop reasoning skills, scientific thinking habits and decision-making skills by using socio-scientific issues (MoNE, 2018). Socio-scientific issues are complex in nature and often have no clear solution (Kilinc, Demiral & Kartal, 2017). Although they have a basis in science, they cannot be solved only by applying scientific knowledge. On the contrary, they involve a variety of social aspects and need to be resolved by integrating different, often competing perspectives (Eggert & Bögeholz, 2010). There are some basic characteristics of socio-scientific issues. These are:
• It usually has a scientific basis at the boundaries of scientific knowledge.
• It involves forming ideas and making choices on a personal or social level.
• It is mostly based on media reports.
• It is related to incorrect information due to contradictory scientific evidence.
• It has local, national and international dimensions due to political and social issues.
• It includes cost-benefit analysis where risk interacts with values.
• It considers sustainable development.
• It includes values and ethical reasoning.
• It requires knowledge about probability and risk.
• It includes current topics.
There are some skills that students are expected to demonstrate in the process of teaching socio-scientific issues. These are:
• Demonstrating a correct perception of scientific concepts and processes involved in scientific research,
• Understanding and recognizing the nature of decision making at the personal and social level,
• Comprehending and describing the nature, strengths and limitations of media documents related to scientific issues,
• Recognizing and discussing the missing parts of the information.
The pedagogical components of SSI introduced by Zeidler, Sadler, Applebaum and Callahan (2009) are shown in Figure 23.2.
Figure 23.2 Pedagogical relationships between teacher and students in SSI instruction
1. Topic/Subject Matter Introduction
• a. Magazine headlines, articles, and advertisements
• b. YouTube video presentation of controversy associated with subject matter
• c. Photographs
• d. Models
• e. Other media formats

2. Challenging Core Beliefs
• a. Contentious questions that “attack” commonly held beliefs
• b. Challenging “common knowledge” of subject matter
• c. Misconceptions

3. Formal Instruction
• a. Anatomy
• b. Physiology
• c. Related science information

4. Group Activity
• a. Development of related, but unconventional topic/subject matter questions
• b. Individual investigation of data and evidence
• c. Small group negotiation of evidence
• d. Group presentation of consensus understanding

5. Develop Contextual Questions
• a. Fundamental science concepts of subject matter
• b. Defeating misconceptions
• c. Contemporary claims regarding subject matter

6. Class Discussion
• a. Evidence reliability of contemporary issues
• b. Importance of specific knowledge for informal decision-making

7. Teacher Reiteration of Content/Subject Matter
• a. Essential learning of subject matter content
• b. Purpose and relevance of specific knowledge
• c. Application of content knowledge
• d. Negotiating contemporary issues

8. Knowledge and Reasoning Assessments
• a. Group presentations
• b. Posters
• c. Argumentation/debate activities
• d. Paper production of selected topics
• e. Written tests of subject matter
Figure 23.3 Patterns of socio-scientific issues

The comparison of traditional teaching and teaching with socio-scientific issues by Wilmes and Howarth (2009) is given in Table 23.3 below.
Table 23.3 Comparison of traditional teaching and teaching with socio-scientific issues

• Less emphasis on discussing science in isolation; more emphasis on discussing science concepts and understanding in the context of personal and social issues.
• Less emphasis on working alone; more emphasis on collaborating with a group that simulates the work of a scientific community or represents authentic groups found in society.
• Less emphasis on acquiring scientific information; more emphasis on acquiring conceptual understanding in making and evaluating personal, social, and global decisions.
• Less emphasis on close-ended questions with one correct answer; more emphasis on open-ended questions that require students to explain phenomena or take positions supported by evidence.
• Less emphasis on multiple-choice assessments; more emphasis on authentic assessments.
Although the purpose and components of socio-scientific issues are clearly defined, approaches that do not fully reflect the nature of these issues are used in both national and international examinations when assessing this field. For example, in 2018, students were asked a question about global warming, one of the socio-scientific issues, in the national placement test for middle school students. The competency assessed by this national exam item was defining scientific problems.
Figure 23.4 SSI question example
In the PISA exam, one question was asked about acid rain, which is one of the socio-scientific issues. In this question, students' skills at identifying scientific issues and explaining phenomena scientifically, as well as their attitudes towards socio-scientific issues, were measured (Sadler & Zeidler, 2009).

Question: Acid Rain
Below is a photo of statues called Caryatids that were built on the Acropolis in Athens more than 2500 years ago. The statues are made of a type of rock called marble. Marble is composed of calcium carbonate. In 1980, the original statues were transferred inside the museum of the Acropolis and were replaced by replicas. The original statues were being eaten away by acid rain.

QUESTION 1
Normal rain is slightly acidic because it has absorbed some carbon dioxide from the air. Acid rain is more acidic than normal rain because it has absorbed gases like sulphur oxides and nitrogen oxides as well. Where do these sulphur oxides and nitrogen oxides in the air come from?
………………………………………………………………………………
The effect of acid rain on marble can be modelled by placing chips of marble in vinegar overnight. Vinegar and acid rain have about the same acidity level. Gas bubbles form when a marble chip is placed in vinegar. The mass of the dry marble chip can be found before and after the experiment.

QUESTION 2
A marble chip has a mass of 2.0 grams before being immersed in vinegar overnight. The chip is removed and dried the next day. What will the mass of the dried marble chip be?
A. Less than 2.0 grams
B. Exactly 2.0 grams
C. Between 2.0 and 2.4 grams
D. More than 2.4 grams

QUESTION 3
Students who did this experiment also placed marble chips in pure (distilled) water overnight. Explain why the students included this step in their experiment.
………………………………………………………………………………

QUESTION 4 (ATTITUDE)
How much do you agree with the following statements? Tick only one box in each row.
QUESTION 5 (ATTITUDE)
How much do you agree with the following statements? Tick only one box in each row.
When national and international exams are examined, it is seen that national examinations measure only the subject matter dimension of socio-scientific issues, while international exams measure the subject matter and, partially, the argumentation skills. However, most of the actual PISA items, at least the publicly released ones, do not promote the objectives expressed by the SSI movement. The items associated with defining scientific problems and, especially, explaining phenomena scientifically are far from the purpose of the SSI movement (Sadler & Zeidler, 2009). Existing written exams in the context of socio-scientific issues do not assess the dimensions of life experience, reasoning, character and reflection, decision making, and argumentation.

Assessment and Evaluation of Socio-scientific Issues
Many paper-and-pencil tests, Likert-type (agree/disagree) scales, multiple-choice tests, open-ended items and interviews have been developed to assess scientific issues related to society. However, this area is also fraught with difficulties. First, the subjects to be evaluated are dialectical and complex. Second, the difficulty of assessment raises important methodological questions about the validity and reliability of the assessment instrument, which in turn affect the validity of the results. The main criticisms of assessment instruments and approaches are as follows:
1. The Immaculate Perception Hypothesis: This hypothesis assumes that the investigator and the respondent perceive and understand the questions in the same way. This problematic assumption seriously affects the validity of most instruments, as it is implicitly built into most of them.
2. Assessment instruments carry the philosophy and prejudices of the researchers who develop them. Certain labels (e.g., empiricist, reductionist, relativist, inductivist) attached to respondents are sometimes a product of the assessment instrument used, rather than a representation of the respondents' views.
3. The correspondence between the score and the construct is uncertain when the unidimensionality of the construct is violated, which lowers content validity and weakens support for the results.

To overcome the existing limitations of quantitative and qualitative data, Aikenhead (1988) stated that it would be methodologically appropriate, for such controversial issues, to use quantitative and qualitative data together. It was emphasized that questionnaires based on empirically developed qualitative research, Likert-type scales, multiple-choice tests and interviews should be used together, and that the results should be matched across these instruments.

Number and letter grade systems are used to assess the knowledge and skills of students in the teaching process. However, socio-scientific issues in the curriculum encourage practical and conceptual understanding of science in the "real world". In this context, the assessment of empirical knowledge is standard, but although student presentations, arguments, posters and evidence provide a wealth of information about student understanding, the assessment of these products can be subjective. Neither traditional written examinations nor performance tests can fully demonstrate that a student has developed his or her understanding of a socio-scientific issue. Therefore, it is recommended to use process and product evaluations together, including the assessment of skills such as taking opposing views into consideration and judging the quality of the evidence the student uses to defend his or her views. Although individual perception is difficult to assess, assessment should include awareness of content knowledge and use of evidence.
The final exams applied to students should include both the content knowledge related to these subjects and questions involving the informal and moral judgments students may face (Kara, 2012; Zeidler, Applebaum & Sadler, 2011). New assessment paradigms suggest that multiple levels of assessment should be applied to reveal whether an innovative curriculum has effects in the desired direction. To better assess the potential effects of socio-scientific based learning, relevant measures should capture both the immediate impact of the curriculum and its impact on exams such as national and
international examinations (Klosterman & Sadler, 2010). This multiple-level assessment approach has been used by several researchers (Cross et al., 2008; Roehrig & Garrow, 2007; Ruiz-Primo et al., 2002) to determine the effectiveness of innovative science teaching curricula in various contexts. According to Ruiz-Primo et al. (2002), the convergent and divergent evaluation of students' achievements provides a better view of the scope of the impact. Convergent evaluation depends on a specific curriculum. Divergent assessments, on the contrary, are not dependent on a specific curriculum; they fit a generalized context rather than a particular curriculum. Divergent assessments are closely related to the types of standardized assessments used by administrators and policy-makers to identify general content knowledge or skills. This type of assessment measures how much knowledge students can transfer to new contexts and situations (Klosterman & Sadler, 2010). In a study by Cross et al. (2008), a significant pre-test/post-test difference was found in students' academic achievement on the curriculum-based test, whereas no such difference was found on the standardized test that was not based on the curriculum. Given that SSI often involve scientific ideas at the frontiers of research, most people must rely on external sources of information to form positions on these issues. Such information reaches decision-makers through various sources such as newspapers, magazines, the internet, politicians, teachers, friends and family (Sadler, 2003).
Authentic Assessment
If the main goal of socio-scientific issues instruction is to develop the student's ability to take responsibility and to act on the basis of knowledge when faced with a current and real controversial issue, then process-based (authentic) and validated testing and assessment approaches should be used to assess the extent to which these skills have developed (Ratcliffe & Grace, 2003). In performance evaluations, students are asked to demonstrate specific behaviours or abilities within the test used. In this type of evaluation, students perform certain tasks, such as writing an assignment, conducting an experiment, or solving and commenting on a problem, rather than responding to questions in paper-and-pencil tests. Authentic assessment takes performance evaluation one step further and emphasizes the importance of applying a skill in the context of a real-life situation (Arends, 2012). Researchers have found that students use high-level knowledge in such performances to solve real-world problems
(Stiggins, 2007). Examples of authentic assessment include exhibiting works at events such as a science fair or art performance, demonstrating skills in a portfolio, participating in discussions, and presenting original work to peers or parents. However, there are difficulties in implementing such authentic assessments. The first is that the relationship between the instruction and the behaviour change that results from it cannot be clearly revealed. Another difficulty is that authentic assessment is not practical; in other words, it is not easy to apply (Ratcliffe & Grace, 2003). Depending on whether it serves assessment for learning or assessment of learning, an evaluation may be formative or summative. Normally, classroom evaluation is expected to be formative (process) evaluation. This type of evaluation provides feedback on learning to guide subsequent learning and teaching. However, if it is not conducted in accordance with its nature, it turns into summative assessment. Conversely, written tests (large-scale or otherwise) can act as formative evaluation if their results are used to support future learning. Multiple-choice tests, especially national exams, are generally considered measurement-based instruments that give information about students' final achievements. Summative evaluation mainly focuses on reliability. National exams focused on selection and placement have a high degree of reliability and facilitate comparison of the performance of students and teachers. This test format, which can be applied to students of all age groups, is mostly a written test consisting of multiple-choice or open-ended items with time limits. The student evaluation process in these exams depends on the national evaluation system and on the wishes and abilities of the teachers.
If summative evaluation methods spill over into formative evaluation, the advantages of authentic evaluation are ignored, and students are merely prepared for written tests. Studies of authentic and performance assessment, which are process-based types of assessment and evaluation, have found that authentic process evaluation assesses students' skills and learning outcomes in a more qualified way than results-based evaluation (Ratcliffe & Grace, 2003). In this type of assessment and evaluation, portfolios are used. These portfolios provide information about the student's performance, and students are expected to become aware of their own growth curve from lower to better performance. Recognizing the improvement in their performance increases students' motivation to learn and positively affects their subsequent performance. It is common to use a scoring rubric in the evaluation of student portfolios: the elements expected from the student are checked off where they appear in the work, and the results are evaluated. Authentic evaluation is high in validity but can be time consuming. Criteria determined before the evaluation can help make authentic evaluation reliable. The comparison of authentic evaluation and written tests is shown in Table 23.4.

Table 23.4 Comparison of authentic and large-scale written evaluations

Authentic evaluation | Large-scale written test
Provides an assessment of the actual ability shown in the face of a real situation. | Provides an assessment of some specific abilities shown in the face of an artificial situation.
Tends to be holistic, bringing together different abilities and understandings. | Tends to be atomistic; each question focuses on a specific skill/content.
The real-life context gives it high validity. | The real-life context has limited validity.
High reliability can be difficult to achieve. | High reliability can be achieved.
Students take part in the evaluation of their own learning. | The student is not involved in the evaluation of his or her own learning.
The first step in implementing effective authentic assessment is for teachers to be clear about their learning objectives. The fact that socio-scientific issues cannot be separated into "true" and "false" and contain a wide variety of situations makes it difficult for teachers to determine their learning objectives clearly. To overcome these challenges, students can be encouraged to present their evidence in various ways (such as written work, group discussion, or oral presentations). In addition, in an authentic evaluation of socio-scientific issues, holistic assessment can be used to assess the skills and conceptual learning gained in the teaching process. In a holistic assessment, the student is expected to:
detect a problem in his or her immediate environment and consider its national and global impact;
follow and research the scientific and social views on the issue;
identify the missing information;
make a risk-benefit analysis based on scientific evidence;
assess human/environment interaction ethically;
propose actions based on all these considerations;
implement those actions.
Evaluation of SSI
Either an atomistic or a holistic approach can be used for learning and evaluation. In the atomistic approach, each of the basic characteristics of socio-scientific issues can be turned into a learning objective in its own right. An atomistic approach may produce good performance in some areas; however, it is limited in coping with the overall complexity of socio-scientific issues. A holistic approach can cope with this complexity but carries the risk of failing to assess learning objectives that emphasize specific skills and perceptions. Research on how people evaluate information about SSI shows that most people are unprepared for the task. Individuals generally adopt two strategies: evaluating the information provided, or evaluating the source of the information (Sadler, 2003). One of the things that can be done within formal education is to evaluate the ideas and decisions developed in the learning process. For example, two students may have completely different views on the use of animals in experimental research. In such a case, the evaluation of these two different views raises the question: is it appropriate to regard one student's view as better than the other's? Researchers think that such an evaluation is not appropriate. What should be assessed instead is the reasoning by which the student arrived at the view. Can students construct clear arguments? Can students evaluate by considering the evidence? Do students take the opinions of others into account or ignore them? By examining students' performance on these questions, such skills can be identified and their development supported through focused learning activities, formative assessment and peer assessment. As in other science subjects, test items can be prepared on socio-scientific issues. It is thought that authentic evaluation can capture the procedural knowledge, attitudes and beliefs gained as a result of experience.
In any case, all measurement methods must meet valid and reliable criteria. Some assessment and evaluation methods, organized by purpose and strategy, are given in Table 23.5.
Table 23.5 Learning goals, learning strategies and assessment methods

Learning goal | Learning strategy | Assessment method

Conceptual knowledge
Understanding of scientific endeavour | Use of media reports | Formal test item, classroom task
Understanding of probability and risk | Risk analysis | Formal test item, classroom task
Recognition of scope of the issue | Community project; media reports | Classroom task
Sustainable development | Community project | Classroom task

Procedural knowledge
Decision-making using evidence | Decision-making task | Formal test item, classroom task
Cost-benefit analysis | Risk-benefit analysis | Formal test item, classroom task
Evidence evaluation, including media reports | Use of media reports | Formal test item, classroom task
Ethical reasoning | Ethical analysis | Classroom task

Attitudes and beliefs
Clarification of personal and societal values | Ethical analysis; decision-making task | Classroom task
Recognition of values impinging on issues | Ethical analysis; decision-making task | Classroom task
Pedagogical Components of Socio-scientific Issues and Assessment and Evaluation Instruments
The pedagogical components of socio-scientific issues were given in Figure 23.2. Focusing on the student in particular, this section provides information on the subject matter knowledge, decision-making, argumentation, reasoning, character and reflection components, and on the assessment and evaluation instruments in use for these components. The combined use of performance tests and conventional tests can make the assessment process more objective. For example, student presentations, arguments, posters and evidence provide rich insights into student understanding, while written examinations can be scored objectively. While assessing individual perceptions is difficult, student assessment should include an assessment of the awareness that content
knowledge and reliable evidence are fundamental building blocks for formulating well-reasoned stances. The final exam confronted students with controversial socio-scientific issues that required an understanding of empirical knowledge as well as informal and moral reasoning skills (Zeidler, Applebaum & Sadler, 2011). In addition, socio-scientific issues both contribute to the development of a discussion environment in the classroom, through the dilemmas inherent in them, and support the development of content knowledge by giving students the opportunity to present their personal views through argumentation (Barrue & Albe, 2013). From this point of view, students can be assessed with performance-based and authentic assessment and evaluation approaches through classroom argumentation activities. When presented with a contemporary SSI, students can transfer conceptual understanding from one context and apply it in a new and/or different context. For example, examining the stem cell problem, nervous and muscular system diseases, the effect of smoking on respiratory tissue, osteoporosis, and infectious diseases such as AIDS and influenza has given students the opportunity to study cell structure and the principles of homeostasis. The students showed that complementarity and the relationship between form and function were better understood (Zeidler, Applebaum & Sadler, 2011). A test that assesses content knowledge about GMOs is given in Table 23.6 (Sönmez & Kılınç, 2012; Demiral & Çepni, 2018; Demiral & Türkmenoğlu, 2018a).

Table 23.6 The questions below relate to what you know about GMOs
Each statement is marked True (O), False (O), or I don't know (O):
1. One of the areas in which gene transfer in plants is used is to obtain strains that are more resistant to diseases.
2. Genetically modified tomatoes contain genes, while normal tomatoes contain no genes.
3. Main foods such as carbohydrates, proteins and fats can be produced by biotechnological methods using genetic engineering.
4. Mad cow disease is a result of changing the genetics of animals.
5. By changing the genetic structure of a plant, the need for fertilizer and medicine is reduced.
6. To alter the genes of a plant, the cells must be killed.
7. Genetic engineering techniques are used to increase the flavour of food.
8. GMOs cannot be digested.
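Instruments like the one in Table 23.6 are typically scored by counting correct, incorrect, and "I don't know" responses against an answer key. The short Python sketch below shows one way such scoring could be implemented; the item identifiers and the answer key are hypothetical, invented purely for illustration, and are not taken from the cited studies.

```python
# Hypothetical scoring sketch for a True/False/"I don't know" knowledge test
# in the style of Table 23.6. The answer key is ILLUSTRATIVE ONLY.

ANSWER_KEY = {
    "gene_transfer_resistance": True,
    "gm_tomatoes_only_have_genes": False,  # all tomatoes contain genes
    "macronutrients_by_biotech": True,
    "mad_cow_from_gm": False,
    "gm_reduces_fertilizer_need": True,
    "cells_must_be_killed": False,
    "ge_increases_flavour": True,
    "gmos_cannot_be_digested": False,
}

def score_knowledge_test(responses):
    """Return (correct, incorrect, dont_know) counts.

    `responses` maps item id -> True, False, or None ("I don't know").
    "I don't know" and unanswered items are counted separately rather than
    as wrong, so the profile distinguishes misconception from ignorance.
    """
    correct = incorrect = dont_know = 0
    for item, key in ANSWER_KEY.items():
        answer = responses.get(item)
        if answer is None:
            dont_know += 1
        elif answer == key:
            correct += 1
        else:
            incorrect += 1
    return correct, incorrect, dont_know

# One (hypothetical) student's answers; omitted items count as "I don't know".
responses = {
    "gene_transfer_resistance": True,
    "gm_tomatoes_only_have_genes": True,  # a common misconception
    "mad_cow_from_gm": None,              # "I don't know"
}
print(score_knowledge_test(responses))
```

Reporting the three counts separately, rather than a single total, preserves the diagnostic value of the "I don't know" option.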
Socio-scientific Issues and Decision-making
The "Chocolate Selection" scenario and interview questions were prepared by Demiral and Türkmenoğlu (2018a) to assess decision-making skills on socio-scientific issues. The scenario concerns a real-world problem, involving content knowledge, risk and morality, in the daily life of parents. The issue chosen for the scenario, GMOs, one of the issues of science and technology, requires ordinary citizens to use their knowledge to choose among alternatives in a situation they may face in daily life. The scenario is designed to reveal decision making through reasoning about a real, current situation, which is an objective of current science education reforms.

Scenario: Chocolate Selection
Oktay is a worker on minimum wage. After work, he buys as much of his son's favourite chocolate as he can. Yigit is a fussy eater: he only sits down for dinner on the condition that his father brings the chocolate, and if there is no chocolate, he quarrels with his parents and refuses to eat. Although Oktay knows about his child's negative behaviour, he brings chocolate every day so that his child will eat. One day, Oktay reads an article in the newspaper in which members of a non-governmental organization give information about the substances found in chocolate.

Newspaper Title: Exclusive Interview: Stunning statements from Dr. Zeynep Deniz of the civil society organization.
Interviewer: What are your thoughts on the use of GMOs in Turkey?
Dr. Zeynep Deniz: Today, many agricultural products have been genetically modified, such as alfalfa, canola, cotton, linen, lentil, corn, melon, plum, potato, rice, soybean, sugar beet, sunflower, tobacco, tomato and wheat. Of all these products, the most widely used in the food industry are corn, soybean, cotton and canola. The sowing area of these products has reached 134 million hectares worldwide. In Turkey, GMOs are used in over 800 different products. You will not find any chocolate without GMOs: all chocolate, cake, biscuit and similar products containing soya lecithin are made with GMOs, and almost all cocoa is as well. If a product contains corn starch, as almost all such products do, it is at least 75-80 percent GMO. Some products carry a label stating that they contain modified corn starch; this is a statement that the product contains GMOs.
Interviewer: Are you telling us that much of the chocolate our children consume is made with GMOs?
Dr. Zeynep Deniz: Look, let me show you a table.

Criterion | Chocolate 1 | Chocolate 2 | Chocolate 3 | Chocolate 4
Cultivation of cocoa plants | Chemical | Organic | Chemical | Organic
Price | 2 TL | 30 TL | 10 TL | 15 TL
Supplier | Many supermarkets | Luxury organic food groceries | Many supermarkets | Many supermarkets / some organic grocery stores
Genetically modified substances | Milk obtained from cows fed with genetically modified feed | None | None | None

In this table, you see the chocolate brands most preferred by parents. The decision is yours.
Interviewer: So, are you telling us not to consume GMOs?
Dr. Zeynep Deniz: I am not saying such a thing. There are people who say that GMOs are useful and people who say that GMOs are harmful. Such issues carry a certain degree of risk, with their own pros and cons, and there is a lot of positive and negative information about them. I think it is the families who should decide which products to consume.
Interviewer: You have given very important information. Thank you.
Dr. Zeynep Deniz: I thank you, too.
Oktay shows the article to his wife. She tells him, "It is your decision!"

Interview Questions
1. How would you decide if you were Oktay? Explain why you took this decision. (Being aware of decision alternatives)
2. Why did you prefer chocolate X and not the others? Why did you not buy the others? (Being aware of the advantages and disadvantages of the decision)
3. Do you have an evaluation system in your mind explaining why you prefer chocolate X and not the others? How does it work? (Weighing advantages and disadvantages by importance)
4. After all, do you think you have made the right decision? (Making the most appropriate decision as a result of the evaluation above)

As a result of the interviews, the interview data are evaluated to determine whether the students have sound decision-making mechanisms. In this evaluation, the five-step model developed by Beyth-Marom et al. (1991), which describes students' decision-making mechanisms, can be used (Demiral & Türkmenoğlu, 2018b). This model is given in Figure 23.5 below. Its steps are:
1. Definition of the problem
2. Awareness of decision alternatives
3. Awareness of the advantages and disadvantages of the decision
4. Significant evaluation of advantages and disadvantages
5. Weighting and decision making
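The final "weighting and decision making" step can be illustrated with a simple weighted-criteria calculation over the four chocolates in the scenario. The sketch below is illustrative only: the criterion names, weights, and 1-5 scores are hypothetical values invented for this example, not data from the cited studies.

```python
# Hypothetical weighted-criteria sketch of the "weighting and decision making"
# step. Scores (1 = worst, 5 = best) and weights are ILLUSTRATIVE ONLY.

# Alternatives from the "Chocolate Selection" scenario, rated on three
# invented criteria derived from the table in the scenario.
alternatives = {
    "Chocolate 1": {"health": 1, "price": 5, "availability": 5},
    "Chocolate 2": {"health": 5, "price": 1, "availability": 1},
    "Chocolate 3": {"health": 4, "price": 4, "availability": 5},
    "Chocolate 4": {"health": 4, "price": 3, "availability": 4},
}

# How much each criterion matters to this (hypothetical) decision-maker.
weights = {"health": 0.5, "price": 0.3, "availability": 0.2}

def weighted_score(scores):
    """Weighted sum of criterion scores for one alternative."""
    return sum(weights[c] * s for c, s in scores.items())

best = max(alternatives, key=lambda name: weighted_score(alternatives[name]))
for name, scores in alternatives.items():
    print(f"{name}: {weighted_score(scores):.2f}")
print("Decision:", best)
```

The point of the sketch is the procedure, not the answer: with different weights (a different family's priorities), the same table can justify a different decision, which is exactly what the scenario is designed to elicit.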
Figure 23.5 Decision making mechanism 5 step model

Socio-scientific Issues and Argumentation
SSI researchers have adopted various frameworks to conceptualize and evaluate learning outcomes related to SSI-based teaching. For example, many projects use scientific argumentation as an outcome variable and apply various strategies to evaluate student performance. Zohar and Nemet (2002) created written instruments with discrete argumentation tasks that require students to justify claims, develop counter-arguments, and create rebuttals. These approaches are similar to some of the PISA items addressing the "using scientific evidence" competency. Dori, Tal and Tsaushu (2003) presented a series of open-ended questions and case studies that required students to create arguments based on the context of the case studies. Walker and Zeidler (2007) examined what students used to justify and discuss their positions in SSI activities. The quality of the students' arguments was analysed in terms of the claims, grounds, warrants, backings, and rebuttals supporting their discussion positions. These approaches have seemed promising in the evaluation of relatively small-scale research projects (Sadler & Zeidler, 2009). In many studies in the field of argumentation, "Toulmin's Argument Pattern (TAP)" is used to analyse both the argumentation skills exhibited by participants in the argumentation process (Sandoval & Millwood, 2005; Lin & Mintzes, 2010; Demiral & Çepni, 2018) and the quality of the argumentation (Erduran, Simon & Osborne, 2004). The TAP model is given in Figure 23.6.
Figure 23.6 Toulmin's argument pattern (Toulmin, 1958)

The TAP model has been a fundamental model for many researchers working on socio-scientific issues. One of the reasons Toulmin's model is so often preferred in argumentation studies is its simplicity: through this model, argumentation, a very abstract concept, becomes concrete for teachers and researchers (Demiral, 2014). The Toulmin model and the basic skills measured in some studies based on this model are given in Table 23.7 below.
Table 23.7 Skills assessed in the TAP model

Demiral & Çepni (2018) | Topçu, Sadler & Tüzün (2011) | Lin & Mintzes (2010) | Sadler (2003) | Kuhn (1991) | Toulmin (1958) | Descriptive question
Claim | Position | Claim | Position | — | Claim | What is your claim?
Warrant | Justification | Warrant | Explanation | Give reason | Data, warrant | What is the reason for your claim?
Counterarguments | Counterposition | Counterarguments | Counterarguments | Alternatives, counterarguments | — | What are your counterarguments?
Rebuttal | Rebuttal | Rebuttal | Rebuttal | Rebuttal | Rebuttal | What do you refute by using the opposite arguments?
Evidence | Evidence | Evidence | — | — | Backing | What are the supporters of your arguments?
To assess students' argumentation skills, Demiral (2014) prepared a scenario called "Help to Somalia" together with four questions about it, and developed a scoring rubric (Table 23.8) to evaluate the student data.

Scenario: Help to Somalia
Somalia is experiencing its driest period of recent years. Because of the African countries' unplanned sharing of water, some areas suffered water shortages, agricultural products dried up and, finally, famine followed. People are trying to survive in camps with aid that will come from other countries. The United Nations (UN) is planning to provide food and medicine, paid for with money collected from many countries, to the people suffering from hunger in Somalia under the World Food Programme (WFP). Turkey is at the head of the countries participating in this aid campaign. You are the person responsible for the charity campaign, and you will provide food aid to Somalia with the money collected. Approximately 12,000,000 TL (approximately $3,000,000) in cash was collected in Turkey. With the money in hand, you can buy genetically modified food that will be enough for 1 year for the people who need it, or you can buy natural food for up to 6 months. There is a chance that more money will come from Turkey in the future, but it is not clear when. If you distribute GMO food, you will face its potential risks; if you distribute normal food, the people in the camp will again live in hunger for a while.
Interview Questions
1. As a member of the Somalia Aid Commission, would you send GMO food or natural food to Somalia? What would you do in such a case? (Evaluates the skill of taking a position on an issue)
2. What are your reasons for choosing the position you take on the aid? (Evaluates the skill of justifying a position)
3. If someone disagrees with the view you stated in the first and second questions, is against some of your views and is not convinced by some of your reasons, what could the opposing opinions be? (Evaluates the participants' skill in creating counter-arguments)
4. How would you rebut someone who is against your views, and how would you convince him or her? (Evaluates the participants' skill in producing supportive arguments to refute claims)
Assessment and Evaluation of Socioscientific Issues
Table 23.8 Scoring rubric for argumentation skills

| Argumentation skills | Explanation | Scoring |
|---|---|---|
| Claims and warrants | No answer or invalid warrant | 0 |
| | Only an acceptable claim and no warrant | 1 (one point for a claim) |
| | An acceptable claim and a valid warrant | 1+1 (one point for the claim plus one point for each warrant) |
| | An acceptable claim and more than one valid warrant | 2+1 (one point for each additional warrant) |
| Counterarguments (compare claims) | No answer or invalid warrant | 0 |
| | One or more valid warrants | 1+ (one point for each warrant) |
| Supportive arguments | No answer or invalid warrant | 0 |
| | Rebuttal to counterargument | 2+ (two points for each rebuttal) |
| Evidence | No evidence or supplementary explanation | 0 |
| | Valid evidence | 1+ (one point for each piece of evidence) |
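Because the rubric is additive, a total argumentation score can be computed from simple counts of its components. The following Python sketch is a hypothetical illustration of that scoring logic (the chapter does not provide an implementation; the function name and parameters are our own):

```python
def argumentation_score(has_claim, warrants, counter_warrants, rebuttals, evidence):
    """Illustrative, additive scoring in the spirit of Table 23.8.

    has_claim        -- True if an acceptable claim was stated
    warrants         -- number of valid warrants supporting the claim
    counter_warrants -- number of valid warrants in counterarguments
    rebuttals        -- number of rebuttals to counterarguments
    evidence         -- number of pieces of valid evidence
    """
    score = 0
    if has_claim:
        score += 1               # one point for an acceptable claim
        score += warrants        # one point for each valid warrant
    score += counter_warrants    # one point per counterargument warrant
    score += 2 * rebuttals       # two points per rebuttal
    score += evidence            # one point per piece of valid evidence
    return score

# A claim with two warrants, one counterargument warrant,
# one rebuttal, and one piece of evidence: 1 + 2 + 1 + 2 + 1
print(argumentation_score(True, 2, 1, 1, 1))  # -> 7
```

In this sketch warrants only earn points when an acceptable claim is present, mirroring the rubric's "no answer or invalid warrant → 0" rows.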
Socio-scientific Issues and Reasoning
A novel framework was also proposed for conceptualizing the learning goals and assessment associated with SSI-based instruction. For SSI negotiation, socio-scientific reasoning was proposed as a construct that underpins important practices from a scientific literacy perspective (Sadler & Zeidler, 2009). Socio-scientific reasoning (SSR) is a set of cognitive competencies ranging from naive or low performance levels to a target performance level representing informed and sophisticated ways of thinking about and resolving SSI
Ümit DEMİRAL
(Romine, Sadler & Kinslow, 2017). The basic idea behind this approach is that there are generalizable practices and understandings common to the contexts of many SSIs (Sadler & Zeidler, 2009). As suggested at the outset, the SSR construct consists of four dimensions: (i) recognizing the intrinsic complexity of SSI; (ii) examining issues from multiple perspectives; (iii) appreciating that SSI are subject to ongoing investigation; and (iv) examining potentially prejudicial information with scepticism (Romine, Sadler & Kinslow, 2017). To evaluate socio-scientific reasoning, students are engaged in tasks presenting a set of questions about contextualized socio-scientific issues, giving them opportunities to demonstrate these practices. Unlike some of the PISA questions reviewed above, many of these tasks are not only theoretically related to SSI but also closely tied to their contexts. The first attempts to assess socio-scientific reasoning were based on an interview format, but follow-up work arrived at a written form that could reasonably be used with large samples (Sadler & Zeidler, 2009). In the following activity on socio-scientific reasoning, after the participants read each scenario prompt, the interviewer posed questions to elicit positions, rationales, counter-positions, and rebuttals. Because the interest lay in reasoning patterns, and not necessarily in the quality of argumentation, prompts and probes explicitly requested positions, rationales, counter-positions and rebuttals to encourage the expression of reasoning (Sadler & Zeidler, 2009). During the second interview, an author examined two scenarios (one based on gene therapy for Huntington's disease and the other on reproductive cloning), as well as the positions and rationales originally presented by the participant in the first interview.
The participant was given the opportunity to reflect on his/her position and to clarify any interpretations made by the researcher. The interviewer then put forth a series of questions to reveal details of the impact of personal experiences, social considerations, and moral construal on general informal reasoning patterns. For example, one scenario to which the participants reacted involved gene therapy for Huntington's disease. The participant stated a position (during the first interview) on whether they would consider using gene therapy in this context. The interviewer asked the participant whether he/she had considered several factors, such as feelings, valid ethical principles or perspectives, rights and responsibilities, and his/her own immediate response, in determining that position (Sadler & Zeidler, 2009). Eggert and Bögeholz (2010) developed the idea of “socio-scientific
decision-making”, a structure that includes noteworthy links with socio-scientific reasoning.

Scenario: Huntington's Disease Gene Therapy

Prompt
Huntington's disease (HD) is a neurological disorder caused by a single gene. Its symptoms usually start between the ages of 35 and 45. The first symptoms include uncontrollable body spasms and cognitive impairment. As the disease progresses, patients become physically incapacitated, suffer from emotional instability, and eventually lose mental faculties. HD usually runs its course over a period of 15–20 years and always results in death. No conventional treatments are known to work against HD. Because HD is controlled by one gene, it could be a candidate for gene therapy. Should gene therapy be used to eliminate HD from sex cells (egg cells or sperm cells) that will be used to create new human offspring?

First Interview Questions
1. Should gene therapy be used to eliminate HD from sex cells (egg cells or sperm cells) that will be used to create new human offspring? Why or why not?
2. How would you convince a friend or acquaintance of your position?
3. (If necessary) Is there anything else you might say to prove your point?
4. Can you think of an argument that could be made against the position that you have just described? How could someone support that argument?
5. If someone confronted you with that argument, what could you say in response? How would you defend your position against that argument?
6. (If necessary) Is there anything else you might say to prove that you are right?
Second Interview Questions
1. What factors were influential in determining your position regarding the HD issue?
2. Did you immediately feel that gene therapy was the right/wrong course of action in this context? Did you know your position on the issue before you had to consciously reflect on the issue?
3. In arriving at your decision, did you consider the perspective or feelings of anyone involved in the scenario? (a) Did you consider the position or feelings of a parent faced with giving birth to a child that has HD? If so, how did this affect your decision making? (b) Did you consider the feelings of a potential child carrying the HD gene? If so, how did this affect your decision making?
4. Did you try to put yourself in the place of either a potential parent or child? If so, how did this affect your decision making?
5. Do you think that gene therapy as described in this case is subject to any kind of moral rules or principles? If so, how did this affect your decision making?
6. Did you consider the responsibility of parents? If so, what are the responsibilities of the parents in this scenario?
7. Did you consider whether a parent has the right to alter the child's genes? If so, how did this affect your decision making?
8. Did you consider the rights of the future child? If so, how did this affect your decision making?
9. Did you think about the roles and responsibilities of the doctors who would perform the gene therapy? If so, how did this affect your decision making?
10. Did you consider the child's future with and without gene therapy? What aspects of the child's future did you think about, and how did it shape your position?
11. Did you consider possible side effects for either the mother or the potential child? If so, how did this affect your decision making?
12. Were you concerned with any technological issues associated with gene therapy? If so, what issues did you think about?
13. Did you think about who would have access to gene therapy? If so, how did this affect your decision making?
14. Is there anything else that I should know about your thinking process or decision making as you considered this gene therapy issue?
Socio-scientific Issues, Character, and Reflective Judgment
Moral education and the associated forms of character education require conscience. In the process of raising science-literate individuals, the aim is to create a collective social conscience. Students can develop or strengthen character traits such as reliability, trustworthiness, dependability, altruism and compassion by participating in carefully designed activities that raise awareness of social responsibility. The argumentation activities frequently mentioned in SSI education provide an opportunity to educate individuals of character. Class decisions on a socio-scientific issue, reached in a democratic atmosphere at the end of the argumentation process, foster emotional intelligence and contribute to the development of human values; such decision-making lies at the centre of socio-scientific issues and is regarded as fundamental to character formation. Character education is known to have a direct effect on academic achievement. Recent research showed that teaching socio-scientific issues can increase the moral sensitivity of students and thus contribute to general moral development (Zeidler & Nichols, 2009).

Conclusion
Classroom assessment has contributed to an appreciation of the role that assessment can play in increasing student learning and achievement. It provides a better understanding of the link between learning and assessment and enables learning to be a social rather than an individual process. Classroom assessment should be used throughout the learning process, and it is formed from open-ended tasks in a variety of modes, such as the artefacts that students create through individual or collaborative work, performance tasks, and so on. The literature often offers complex, authentic problems that refer to interdisciplinary content and have multiple possible solutions.
When used with students of a wide range of abilities, performance tasks that allow more than one correct solution can provide maximum benefit in evaluating student skills. Recent studies have shown that low achievers mostly show greater improvement on performance tasks or case studies involving relevant authentic subjects (Tal & Kedmi, 2006). When the studies are examined, it is seen that a real problem is given in the assessment and evaluation process related to socio-scientific issues, and that knowledge, skills, attitudes and beliefs are assessed within this context. However, teachers' inadequacy in pedagogical and pedagogical content
knowledge, the presence of questions aiming to assess only content knowledge in national and international examinations, the physical properties of classes, and the lack of time, materials and so on significantly impede performance or authentic assessment. As a result, 21st-century skills such as argumentation, reasoning, decision-making, and life skills cannot be assessed, and training activities for students cannot be adequately regulated because there is no evaluation result. To eliminate this disadvantage, new assessment instruments on socio-scientific issues that are not very time consuming should be placed in textbooks, and teachers should be supported in using performance and authentic assessment.
References

Aikenhead, G. S. (1988). An analysis of four ways of assessing student beliefs about STS topics. Journal of Research in Science Teaching, 25(8), 607-629.
Anagun, S. S. (2011). The impact of teaching-learning process variables to the students' scientific literacy levels based on PISA 2006 results. Education and Science, 36(162), 84-102.
Arends, R. (2012). Learning to teach (9th ed.). New York: McGraw-Hill.
Barrue, C., & Albe, V. (2013). Citizenship education and socioscientific issues: Implicit concept of citizenship in the curriculum, views of French middle school teachers. Science and Education, 22, 1089-1114.
Beyth-Marom, R., Fischhoff, B., Quadrel, M. J., & Furby, L. (1991). Teaching decision making to adolescents: A critical review. In J. Baron & R. V. Brown (Eds.), Teaching decision making to adolescents (pp. 19-59). Hillsdale, NJ: Lawrence Erlbaum Associates.
Cross, D., Taasoobshirazi, G., Hendricks, S., & Hickey, D. T. (2008). Argumentation: A strategy for improving achievement and revealing scientific identities. International Journal of Science Education, 30(6), 837-861.
Çepni, S., & Ormancı, Ü. (2016). TIMSS Uygulamalarının Tanıtımı [An introduction to TIMSS applications]. In S. Çepni (Ed.), PISA ve TIMSS Mantığını ve Sorularını Anlama. Ankara: PegemA Akademi.
Deveci, İ., Konuş, F. Z., & Aydız, M. (2018). Investigation in terms of life skills of the 2018 Science Curriculum Acquisitions. Çukurova Üniversitesi Eğitim Fakültesi Dergisi, 47(2), 765-797.
Demiral, Ü. (2014). Investigating argumentation skills of pre-service science teachers in a socio-scientific issue in terms of critical thinking and knowledge level: GM foods case. Doctoral dissertation, Karadeniz Technical University, Institute of Education Sciences, Trabzon.
Demiral, Ü., & Çepni, S. (2018). Examining argumentation skills of preservice science teachers in terms of their critical thinking and content knowledge levels: An example using GMOs. Journal of Turkish Science Education, 15(3), 128-151.
Demiral, Ü., & Türkmenoğlu, H. (2018a). The relationship of preservice science teachers' decision-making strategies and content knowledge in socioscientific issues. Journal of Uludag University Faculty of Education, 31(1), 309-340.
Demiral, Ü., & Türkmenoğlu, H. (2018b). Examining the relationship between preservice science teachers' risk perceptions and decision-making mechanisms about GMOs. Van Yuzuncuyil University Journal of Education, 15(1), 1025-1053.
Dori, Y. J., Tal, R. T., & Tsaushu, M. (2003). Teaching biotechnology through case studies—can we improve higher order thinking skills of nonscience majors? Science Education, 87(6), 767-793.
Eggert, S., & Bögeholz, S. (2010). Students' use of decision-making strategies with regard to socioscientific issues: An application of the Rasch partial credit model. Science Education, 94(2), 230-258.
Erduran, S., Simon, S., & Osborne, J. (2004). TAPping into argumentation: Developments in the application of Toulmin's argument pattern for studying science discourse. Science Education, 88(6), 915-933.
Erenler, S., Eryılmaz Toksoy, S., & Reisoğlu, İ. (2017). Öğretmen Adaylarının Sosyo Bilimsel Konulara İlişkin Argümantasyon Kalitelerinin İncelenmesi [Examining preservice teachers' argumentation quality on socioscientific issues]. VII. Uluslararası Eğitimde Araştırmalar Kongresi (ULEAD), 27-29 April, Çanakkale.
Feierabend, T., & Eilks, I. (2010). Raising students' perception of the relevance of science teaching and promoting communication and evaluation capabilities using authentic and controversial socio-scientific issues in the framework of climate change. Science Education International, 21(3), 176-196.
Kara, Y. (2012). Preservice biology teachers' perceptions on the instruction of socio-scientific issues in the curriculum. European Journal of Teacher Education, 35(1), 111-129.
Kartal, E. E., Doğan, N., & Yıldırım, S. (2017). Exploration of the factors influential on the scientific literacy achievement of Turkish students in PISA. Necatibey Faculty of Education Electronic Journal of Science & Mathematics Education, 11(1), 320-339.
Kilinc, A., Demiral, U., & Kartal, T. (2017). Resistance to dialogic discourse in SSI teaching: The effects of an argumentation-based workshop, teaching practicum, and induction on a preservice science teacher. Journal of Research in Science Teaching, 54(6), 764-789.
Klosterman, M., & Sadler, T. D. (2010). Multi-level assessment of content knowledge gains in the context of socioscientific issues-based instruction. International Journal of Science Education, 32, 1017-1043.
Lin, S. S., & Mintzes, J. J. (2010). Learning argumentation skills through instruction in socioscientific issues: The effect of ability level. International Journal of Science and Mathematics Education, 8(6), 993-1017.
Ministry of National Education (MNE). (2018). Science education curricula (Grades 3–8). Retrieved from http://mufredat.meb.gov.tr. Accessed on December 12, 2018.
OECD. (2007). PISA 2006. Science competencies for tomorrow's world. Volume I: Analysis. Paris: OECD.
Ormancı, Ü. (2016). Türkiye'deki Ulusal Sınavların Tanıtımı [An introduction to national examinations in Turkey]. In S. Çepni (Ed.), PISA ve TIMSS Mantığını ve Sorularını Anlama. Ankara: PegemA Akademi.
Ratcliffe, M., & Grace, M. (2003). Science education for citizenship: Teaching socio-scientific issues. UK: McGraw-Hill Education.
Roehrig, G., & Garrow, S. (2007). The impact of teacher classroom practices on student achievement during the implementation of a reform-based chemistry curriculum. International Journal of Science Education, 29(14), 1789-1811.
Romine, W. L., Sadler, T. D., & Kinslow, A. T. (2017). Assessment of scientific literacy: Development and validation of the Quantitative Assessment of Socio-Scientific Reasoning (QuASSR). Journal of Research in Science Teaching, 54(2), 274-295.
Ruiz-Primo, M. A., Shavelson, R. J., Hamilton, L., & Klein, S. (2002). On the evaluation of systemic science education reform: Searching for instructional sensitivity. Journal of Research in Science Teaching, 39(5), 369-393.
Sadler, T. D. (2003). Informal reasoning regarding SSI: The influence of morality and content knowledge. Doctoral dissertation, Florida.
Sadler, T. D., & Zeidler, D. L. (2009). Scientific literacy, PISA, and socioscientific discourse: Assessment for progressive aims of science education. Journal of Research in Science Teaching, 46(8), 909-921.
Sandoval, W. A., & Millwood, K. A. (2005). The quality of students' use of evidence in written scientific explanations. Cognition and Instruction, 23(1), 23-55.
Sönmez, A., & Kılınç, A. (2012). Preservice science teachers' self-efficacy beliefs about teaching GM foods: The potential effects of some psychometric factors. Necatibey Faculty of Education Electronic Journal of Science and Mathematics Education, 6(2), 49-76.
Stiggins, R. (2007). Assessment through the student's eyes. Educational Leadership, 64(8), 22.
Tal, T., & Kedmi, Y. (2006). Teaching socioscientific issues: Classroom culture and students' performances. Cultural Studies of Science Education, 1(4), 615-644.
Walker, K. A., & Zeidler, D. L. (2007). Promoting discourse about socioscientific issues through scaffolded inquiry. International Journal of Science Education, 29(11), 1387-1410.
Wilmes, S., & Howarth, J. (2009). Using issues-based science in the classroom. The Science Teacher, 76(7), 24.
Zeidler, D. L., Applebaum, S. M., & Sadler, T. D. (2011). Enacting a socioscientific issues classroom: Transformative transformations. In Socio-scientific issues in the classroom (pp. 277-305). Dordrecht: Springer.
Zeidler, D. L., Sadler, T. D., Applebaum, S., & Callahan, B. E. (2009). Advancing reflective judgment through socioscientific issues. Journal of Research in Science Teaching, 46(1), 74-101.
Zeidler, D. L., & Nichols, B. H. (2009). Socioscientific issues: Theory and practice. Journal of Elementary Science Education, 21(2), 49.
Zohar, A., & Nemet, F. (2002). Fostering students' knowledge and argumentation skills through dilemmas in human genetics. Journal of Research in Science Teaching, 39(1), 35-62.
Mustafa ÜREY
Chapter 24
Outdoor Learning Environments and Evaluation
Mustafa ÜREY
Introduction
With the dazzling developments and changes in science and technology, the aim of today's education world is to disseminate the fast-paced flow of information to the whole of society in the best possible way. Information has never flowed so fast in any period, and this has necessitated a change in existing educational philosophies, leading to the revision of many subjects, from teaching methods to what is taught. Educational activities, which are shaped according to the time and the needs of society, have been forced to change in purpose and application in these early days of the 21st century. Developments in human rights, science and technology, and changes in the socio-economic status of society have increased expectations from education. As a result, the pressure on the traditional education approach has increased, and attempts have been made to integrate educational activities into the current life of the individual. The idea that school education alone, with its official programs, cannot make the individual ready for life, and that "the child himself should also take part in his education", has put pressure on classical curricula and accelerated the transition to programs based on student-centred activities (Binbaşıoğlu, 2000; Fenoughty, 2006). Türkmen (2010) states that the existing curricula are incomplete, especially regarding what problems may be encountered in life and how to respond to them. At this point, outdoor learning, which has academic, practical, and vital value, and the outdoor learning environments used in this context, have come to the fore.

Outdoor Learning
Vygotsky developed a sociocultural perspective on learning based on his theoretical work on child development. According to Vygotsky, learning emerges from individuals' interactions and experiences with their social environment.
In other words, it is suggested that individuals should be in a
social environment and interact with this environment for learning to take place. In this context, it has been suggested that, in recent years when the concept of "outdoor learning" has come to the fore, the environments where individuals interact best with the social environment are out-of-school learning environments. The concept of outdoor learning takes place in the literature with different definitions and denotations. According to Binbaşıoğlu (2000), outdoor learning is defined as planned, scheduled, and regular studies conducted in or out of school under the guidance of the school administration and the teacher, to develop personalities in accordance with the interests and wishes of the students and the aims of the school. Laçin Şimşek (2011) expresses outdoor learning activities as "planned and programmatic activities for education and training outside the school walls". Although the expression "outdoor learning environments" evokes unplanned (informal) education, it also involves a planned (formal) teaching process. Outdoor learning activities are informal in that they occur outside the school walls, yet they fall within the scope of formal education in that they are planned and scheduled. In this context, researchers recognize outdoor learning environments as the intersection point of formal and informal education and define them as non-formal. Maarschalk (1988) and Tamir (1990) explained this distinction in the following table (cited by Eshach, 2007):

Table 24.1 Comparison of learning environments

| Formal | Non-Formal | Informal |
|---|---|---|
| Usually at school | Outside of school | At anywhere |
| Can be repressive | Usually supportive | Supportive |
| Structured | Structured | Unstructured |
| Usually pre-arranged | Usually pre-arranged | Spontaneous |
| Motivation extrinsic | Motivation intrinsic and extrinsic | Motivation intrinsic |
| Compulsory | Usually volunteer | Volunteer |
| Under the leadership of the teacher | Under the leadership of the guide or teacher | Usually under the leadership of the student |
| The learning situation is evaluated | The learning situation is both evaluated and not evaluated | The learning situation is not evaluated |
The main purpose of outdoor learning activities is to enrich the learning outcomes set out in each discipline's curriculum in the school programs. Therefore, outdoor learning environments offer a large-scale interdisciplinary learning environment encompassing many disciplines, such as physics, chemistry, biology, mathematics, history, and geography. Many problem situations encountered in the school environment can be addressed in outdoor learning environments based on real-life problems. The aim is to educate individuals who interact with the outside world and the natural environment, and to contribute to their personal and social development through the activities realized during this interaction. Accordingly, outdoor learning environments consist of three components: (1) environmental education, (2) outdoor activities, and (3) personal and social development (Higgins, Loynes & Crowther, 1997). Although each of these components has significant value in itself, the outdoor activities dimension is often viewed by parents, students, and teachers merely in terms of trips and picnics. This leads to neglect of the learning process, so the activities cannot go beyond entertainment.

Importance and Limitations of Outdoor Learning Environments
Outdoor learning environments help students transfer their in-class theoretical knowledge to everyday life while also contributing to the development of communication and interaction among students. In particular, it is possible to support students' cognitive, affective and physical development in natural and human-made environments by directing them to outdoor learning environments during the primary education period, when their curiosity is at its highest (Altın & Demirtaş, 2009; Anderson, Lukas & Ginns, 2003; Griffin, 2004; Balkan-Kıyıcı & Atabek-Yiğit, 2010; Braund & Reiss, 2006; Türkmen, 2010).
It is suggested that this will make students' classroom learning more meaningful and contribute to their individual and social development (Altın & Demirtaş, 2009; Chin, 2004; Panizzon & Gordon, 2003). Current classroom activities are criticized because they are less related to real objects and events, less dependent on real symbols, and offer fewer social opportunities for students (Türkmen, 2010). Outdoor school activities, on the other hand, increase motivation for learning as well as attitudes towards classes (Felix, Dornbrack & Scheckle, 2008; Ramey-Gassert, 1997).
Considering the differentiation of the learning environment, Braund & Reiss (2006) expressed this as: "Students become more excited and more willing to learn when they encounter new places and areas" (p. 1380). Outdoor learning environments offer learning opportunities in ways that classroom environments cannot. These environments allow students to learn in different learning styles and help each student learn at his or her own pace. In this way, students can spend enough time in such environments and construct their own meanings in the best way. Many researchers also state that educational activities in such learning environments have an enriching, supportive and complementary potential for education at school (Balkan-Kıyıcı & Atabek-Yiğit, 2010; Chin, 2004; DeWitt & Osborne, 2007; Gurnett, 2009; Melber & Abraham, 1999; Panizzon & Gordon, 2003; Türkmen, 2010). In addition, through such learning environments, students can build empathic links, wonder, communicate, take responsibility, look critically, think creatively, and gain practical skills (Seidel & Hudson, 1999). Outdoor learning activities also provide an opportunity for students to acquire certain behaviours. As a result of studies conducted in such learning environments, students achieve outcomes such as being with others, belonging, working cooperatively, sharing responsibility, and feeling important as a member of the class (Özyürek, 2001). In addition, these activities, which enable individuals to make self-assessments by comparing themselves with others, shape students' self-esteem by triggering other social behaviours (Harter, 1993; Shaffer, 1994). The results of studies have shown that students who are socially competent are also academically successful (Holloway, 2000; Mygind, 2007; O'Brien & Rollefson, 1995; Rathunda, 1993).
Research on academic success suggests that students must first love their school and see themselves as valued members of it; students should believe that they are doing meaningful work at school and that they love and share things with other individuals there. Many studies have suggested that outdoor learning activities that take into account students' interests and needs, as well as the school's environmental conditions, are very important for students to orient towards and adopt the school. However, many studies have also stated that many schools do not allow students to see themselves as part of the school (Binbaşıoğlu, 2000). It is possible to find research showing that teachers spend most of their energy on classroom management and control rather than
education, so the child cannot see himself/herself as a valuable member of the school (Albayrak, Yıldız, Berber & Büyükkasap, 2004; Altın & Demirtaş, 2009; Özyürek, 2001; Sezen, 2007). Griffin and Symington (1997) argue that the most important role in outdoor learning activities belongs to the teacher and state that an activity cannot achieve its goal unless it has been planned and directed by the teacher. McComas (2006) stated that class or extracurricular activities carried out within the scope of outdoor learning should be prepared by the teacher before implementation; otherwise, students will have problems obtaining the knowledge and skills they are expected to acquire and express. McComas concluded that teachers had no clear idea of how to use such environments and could not relate the materials and activities in the environment to the issues in the school curriculum. Griffin (2004) emphasized that constructivist learning theories should be incorporated into classroom environments and suggested that this theoretical structure should be reinforced with in-class or outdoor environments. For this reason, teachers have expressed a need for education-oriented plans and programs that can reveal students' natural learning behaviours in classroom or outdoor environments. Falk & Storksdieck (2005) examined students' perceptions of outdoor learning activities and found that students perceived the activities performed in the learning environments created or visited within outdoor activities as entertainment rather than learning. With this perception, it has been suggested that students ignore the learning dimension because they do not take outdoor learning activities seriously and identify learning environments with classroom environments.
Outdoor Learning Environments Used in Science Education
Science subjects cover phenomena and events that are seen and experienced in life; they are intertwined with our daily lives. However, science courses are among the most popular yet least understood courses for students. The reasons for this include abstract science concepts and incomplete associations with daily life. Presenting science concepts detached from the context of daily life, and confining each subject or concept to laboratory conditions, causes students to perceive science subjects as cases belonging to invisible worlds (Kara, 2018). This is one of the biggest obstacles to students' understanding of the relevant concepts. In this respect, it is important that students experience the subjects they learn in science courses, learn by doing, and adapt them
Outdoor Learning Environments and Evaluation
to different situations. At this point, outdoor learning environments come to the fore as an important alternative for overcoming this deficiency. A review of recent studies shows that science and technology museums, zoos, botanical gardens, planetariums, industrial organizations, national parks, and nature education have been used for this purpose.
Museums, including archaeological, ethnographic, history, fine art, open-air, military, private, and science and technology museums, are special venues that offer many kinds of learning environments for many disciplines (Chin, 2004). For science, science and technology museums come to the fore. Such museums show how science and technology have changed throughout history, how inventions were made, and how machines and tools were developed. According to Aydoğdu and Kesercioğlu (2005), science and technology museums address at least one of the cognitive, affective, and psychomotor objectives of science courses (Bozdoğan, 2012). In this sense, science and technology museums have an important place in science education. By organizing various activities (experiments, games, conferences, seminars, summer schools, panels, film shows, etc.), science and technology museums evoke curiosity about science and its products in people of all ages, help visitors learn freely while having fun, enable comparisons between old and new technologies, provide the scientific interaction needed to educate creative individuals who think, question, and solve problems, foster social interaction among visitors, help them understand the natural phenomena in their environment, and promote science literacy (Bozdoğan, 2007).
Botanical gardens are a special garden category that aims to protect plants and contribute to environmental education on scientific foundations, while also drawing attention with their planting designs (Önder & Konaklı, 2011). Öztan (2004) examined botanical gardens in three categories in terms of their function and dimensions: botanical gardens affiliated with scientific institutions, urban botanical gardens, and school botanical gardens. School botanical gardens, in particular, are learning environments built on the garden-based learning approach, which has become very popular in international education. With this approach, while an alternative learning environment is created, the aim is to raise students who can communicate with their natural environment, use school knowledge in
Mustafa ÜREY
daily life, and are willing to spend time at school through outdoor practices and varied activities. The garden-based learning approach aims to establish a sustainable school garden and to carry out activities there in parallel with classroom curriculum implementation (Ürey & Güler, 2018). Many studies note that garden-based learning occupies a very important position in terms of physical, social, and mental health (Blair, 2009; Braun, Buyer & Randler, 2010; Dyment & Bell, 2008; Robinson & Zajicek, 2005). This has turned school gardens into a field of practice for many different disciplines (education, psychology, sociology, medicine, landscape architecture, etc.). It has been shown that in school gardens based on the natural environment, students have more opportunities for safer and more creative games (Malone & Tranter, 2003), stronger social relationships (Dyment & Bell, 2008; Robinson & Zajicek, 2005; Thorp & Townsend, 2001), better individual skills (Bartosh et al., 2006), and higher academic performance (Klemmer, Waliczek & Zajicek, 2005; Sparrow, 2008; Ürey & Çepni, 2015), and that both teachers and students are more motivated in such rich learning environments (Dyment, 2005).
Planetariums are structures used to examine the sky through realistic simulations on a dome-shaped screen with the help of a special projector, created so that astronomy and space sciences can be understood and learned (Ertaş & Şen, 2011). Planetariums are also given names such as stellar theatre, star theatre, space theatre, sky theatre, and galaxy theatre, besides planetary house. Planetariums were first invented in 1914 by Walther Bauersfeld and Werner Straubel to bring otherwise hard-to-reach concepts within reach (Yavuz, 2012); since then, they have spread throughout the world.
Planetariums, which allow people to enjoy astronomy, space, and other scientific subjects, have become important educational tools, especially for school groups. Planetariums, from which teachers can benefit effectively in science courses, can be outdoor learning environments that attract students' attention and let them enjoy learning.
Industrial organizations are the centres where industry takes shape; industry is defined as "the whole of the methods and tools used to process raw materials, to produce products in line with human needs, and to process energy resources efficiently" (Atabek Yiğit, 2011, p. 105). Industrial organizations are also among the important outdoor learning environments for science teaching. Through trips to these organizations, students will be able to see how the products they use in their daily lives are
Outdoor Learning Environments and Evaluation
produced, what processes are involved, and how these relate to the science concepts they learn at school. Moreover, introducing students to the occupational groups working in these organizations is important for their career development.
National parks are special protection areas of scientific and aesthetic importance that, with their natural and cultural resources, can also be used for recreation and tourism (Papp & Thompson, 2003). In recent years, with the growing importance attached to learning environments in education, national parks have also been used for learning purposes. National parks are gaining importance in terms of the conservation of natural areas, sustainability, and biodiversity. Learning environments created in national parks can raise awareness of the limited use of natural resources and of the transfer of cultural values from generation to generation, while developing skills such as observation, classification, ordering, and relationship building. In addition, national parks can support learning in the affective dimension by fostering in students a special sensitivity towards endangered and protected species.
Nature education is a multidisciplinary field by virtue of its content. It is carried out in natural environments in order to get to know nature and to experience living–living and living–non-living interactions at first hand. Nature education can be as simple as walking along a lakeside, or can take the form of field trips, hiking, camping, and adventure activities (Keleş, 2011). According to Ballantyne and Uzzell (1994), nature education gives students the opportunity to apply their theoretical knowledge "in the field" and allows them to approach problems and events from a different point of view while exploring real life (Yardımcı, 2009).
According to Ozaner (2004), education given only at school is not enough to understand nature; practical nature education should be provided both at school and outside it.
Factors Affecting Learning in Outdoor Learning Environments
There are several factors influencing outdoor learning. Kisiel (2003) grouped these factors under eight headings.
1. Motivation and Expectations: People may visit outdoor environments for various reasons. What they want to do or see in these environments will affect their experience.
Mustafa ÜREY
2. Prior Knowledge, Interests and Beliefs: Visitors' interests and existing knowledge will affect their choices during the trip and the trip program.
3. Choice and Control: Learning increases when visitors are given the opportunity to choose and control what they will learn and what they will engage with.
4. Within-Group Socio-Cultural Mediation: Outdoor settings are uniquely suited to social learning. Visitors within a group use these environments to strengthen each other's beliefs and to make meaning.
5. Mediation Facilitated by Others: Staff and other visitors in outdoor settings may affect individual learning.
6. Orientation and Organization: Learning is more likely when visitors are familiar with the environment and with the behaviour expected of them.
7. Design: The design of the trip venue can support or hinder a person's learning.
8. Reinforcement of Experiences beyond the Outdoor Environment: What is experienced after the trip may affect what was learned during the trip.
A teacher who wants to carry out outdoor learning should consider these factors while planning a trip, thereby increasing its quality.
Things to Pay Attention to in Outdoor Learning Environments
Outdoor learning environments are important educational settings for students because they leave cognitive, affective, and social traces (Türkmen, 2010; Anderson, Kisiel & Storksdieck, 2006; Orion & Hofstein, 1994). Despite all these positive effects, however, an outdoor learning activity that is not well planned may not produce the desired results. Students and teachers may come to perceive outdoor learning environments merely as occasions for fun and travel (Laçin Şimşek, 2011). Accordingly, teachers who want to conduct courses in outdoor settings should know what to do in order to carry out these tasks effectively and efficiently.
Work to be considered in outdoor learning environments falls into three groups: work to be carried out before the trip, work to be carried out during the trip, and work to be carried out after the trip (Bozdoğan, 2007; DeWitt & Osborne, 2007).
Pre-trip Activities
Some tasks need to be done before conducting lessons in outdoor learning environments. These can be classified as educational preparations; bureaucratic work; and transportation, food, and drink (Bozdoğan, 2007; Laçin Şimşek, 2011). Educational Preparations: All educational preparations should be planned by the teachers who will teach in the outdoor learning environment, so teachers carry a great responsibility here. These preparations can be listed as follows:
The teacher should visit the outdoor learning environment beforehand and inform the staff there, and should examine the materials and tools to be used in the course; any deficiencies should be remedied as far as possible. If there is no guide material for the place to be visited, the teacher should prepare a trip plan that relates the materials to the basic concepts and skills of the course. Students must be informed in advance about the place to be visited; this reduces their anxiety and ensures that the intended learning occurs, directing them not only to the points likely to interest them but to the whole site. Brochures providing information about the trip should be obtained before students set out; if none exist, they should be prepared by the teacher or by the institution. Pre-trip worksheets should be prepared, or students should be asked to prepare them, and students should fill them in before the trip; the worksheets should therefore relate to the subject and to the items and objects involved. In addition, "pre-trip question papers" can be prepared for students to answer before the trip in order to probe their prior knowledge; it is thought that trip plans can be made more efficient using students' answers to these questions. After the trip, the same questions can be posed again and the answers compared with those given before the trip, so that the effectiveness of the trip can be evaluated.
Bureaucratic Affairs and Transport: Before the trip, teachers should:
Obtain permission from the students' parents, the school administration, and the national education directorate. Arrange the vehicle before the trip; in addition, the round-trip time, the number of students, and the fare must be determined in advance. Make an appointment with the staff at the trip venue before the visit; the staff should be informed of the number of students and of the date and time of the trip.
Food, Drink and Accommodation: If the trip venue is out of town, reservations must be made in advance by the teacher.
Activities to be Done During the Trip
The tasks teachers should carry out during the trip can be listed as follows:
During the trip, a guide should be provided to inform the students and help them with their activities. Students should be asked to fill in the question papers prepared for the trip. A variety of activities and games can also be arranged. Students should be allowed to move around freely; this is thought to strengthen their creativity and their interactions. Care should be taken not to overload students with responsibility, as this may cause them to develop a negative attitude towards trips and activities.
Post-Trip Activities
After returning to school at the end of the trip, activities that reinforce what was learned can be carried out as follows:
The question papers distributed before the trip can be redistributed after it, and the effectiveness of the trip can thus be evaluated. Achievement tests and alternative measurement and evaluation methods can be applied to the students. Class discussions can be held about the items and objects experienced during the trip. Students should be encouraged to describe the items that attracted their attention and to express themselves.
A classroom discussion about the trip can be held to reveal any mislearning; this helps students develop critical thinking skills. After returning to school, students can be asked to write stories, compositions, poems, and the like about the trip. To increase students' confidence and interest in science, parents can be informed about the activities related to the trip.
The trip can be reviewed, and different activities can be planned for the following year.
Evaluation in Outdoor Learning Environments
Research in outdoor education generally falls into two broad categories: research aimed at determining and analysing the effects of participation in outdoor education, and research aimed at demonstrating the transfer of outdoor learning to other academic disciplines. Both categories can be drawn on to understand current practice in assessing outdoor learning. Worksheets are the tools most often used to evaluate the targeted gains in outdoor learning environments. Depending on the purpose of the trip, worksheets can be used before, during, or after it. Although worksheets prepared for use before or after the trip pose no problems for teacher or student, worksheets used during the trip can create problems, especially for students: students have stated that they could not pay enough attention to the exhibits while trying to fill in worksheets during the trip. Even though students are not happy about using worksheets during the trip, it is difficult to determine whether learning has occurred without them. Ürey (2013) states that worksheets can lead students to observe but may create problems in making accurate observations. Teachers therefore need to prepare worksheets carefully, because worksheets give the teacher the opportunity to control the organization and the process as well as the evaluation. A worksheet for outdoor learning activities should have the following characteristics:
Worksheets prepared for outdoor learning activities should be intriguing and thought-provoking, and neither too difficult nor too comprehensive. A worksheet should be neither too long nor too short, and its content and questions should not tire the student.
Worksheets should suit the purpose of the trip and be free of details that distract the student from that purpose. As far as possible, flexible worksheets in which students can pose their own questions and record their own observations should be preferred over standard ones. Worksheets should direct students towards observations that serve the purpose of the trip and should remain open to new observations arising from the inquiries those observations prompt; in this way, the student can explore the trip site in line with his or her own interests. Worksheets should guide students who do not know the environment by providing a general description of the site visited, not just the subject context. As far as possible, worksheets should encourage group work; this helps students learn from each other and increases their motivation. The activities and questions in worksheets should be related to the activities carried out during the trip (Laçin Şimşek, 2011).
The number and variety of activities and questions in worksheets are important. They should consist of different types of questions and activities that direct the student towards the purpose of the course, indicate where and how the information can be obtained, and remain open to social interaction. Worksheets usually include gap-filling items, open-ended questions, matching items, concept networks, concept maps, mind maps, checklists, meaning analysis tables, puzzles, and orienteering questions and activities. Preparing the questions and activities to be used in outdoor learning environments is a process that requires creativity and flexibility, so different questions and activities can be used depending on the creativity of the teacher. Gap-filling questions can be answered by students at all levels; through such questions, students can be asked to focus on the targeted situation by questioning the objects, persons, concepts, or events in the situations we expect them to observe. Worksheets should also include open-ended questions in which students can express their thoughts and opinions. The students will thus be directed to the intended observation and will be able to discover and research based
on their own inquiry. Through open-ended questions, it is also possible to establish a relationship between what students encounter in outdoor learning environments and the concepts they see in the course. Matching questions and activities are another assessment type appropriate for worksheets used in outdoor learning environments: concepts, definitions, formulas, or pictures given in two columns are matched with each other. The names, pictures, figures, and features of the entities that can be seen during the trip can be given in one column, to be matched with the information in the other column on the basis of observations. Concept networks and concept maps are also used in worksheets. Concept networks serve to identify the concepts that attract the student's attention during the trip; in this way, it is determined to what extent the concepts within the scope of the subject have been grasped, and any missing concepts are completed with the student. Concept maps are two-dimensional schemes of the relationships between the concepts revealed by the concept networks; they are the question or activity type in which the relationships between concepts are expressed. Students are asked to form a concept network from the concepts they learned during the trip; after any deficiencies are completed, the students move on to a concept map in which the relations between the identified concepts are expressed, with the relationships explained through arrows and the nature of each relationship written on the arrow. Mind maps are another type of question or activity frequently encountered in worksheets created for outdoor learning environments.
Mind maps are graphical materials in which the relationships between different concepts and ideas are schematized through brainstorming (Ayas, 2012). Colored pencils, drawings, pictures, and symbols can be used in mind maps. Students may be asked to take notes on the objects, living things, and events they observe during the trip and then to describe them with a mind map. Checklists are particularly effective in organizing outdoor learning environments. Through checklists, pre-trip expectations are written out as items; these may be objects, people, or events we expect students to observe, or situations we expect them to learn about. A summary of the
trip activity can then be obtained by using the checklist to question whether these situations occurred. Meaning analysis tables, which can also be used in worksheets, are two-dimensional tables in which entities or objects are classified according to certain characteristics (Ayas, 2012). The table may be prepared by the students, or it may be prepared by the teacher before the trip and filled in by the students at its end. Through the meaning analysis table, students classify the objects, living creatures, or events they encounter during their observations according to different characteristics. Puzzles are often used in worksheets and are fun activities for students. Puzzles question the names of the living creatures, objects, people, events, and places encountered during the trip. When preparing puzzles, a message should be formed with a letter key at the end of the puzzle; for this reason, puzzles are activities left to the end of the trip. Orienteering is one of the most suitable activities for outdoor learning environments. It is a timed activity that involves finding one's way with the help of a map. Students are given sketch maps and are expected to reach the target by following them. There are certain stops on the sketch, and a clue is given to the student at each stop; using this clue, the student tries to reach the next one, and at the last stop the student is expected to reach the target. Collaborative group work is recommended for orienteering, with the aim of maximizing social interaction.
References
Albayrak, M., Yıldız, A., Berber, K. & Büyükkasap, E. (2004). Parents idea about activities out of lesson and student behaviour interested with activities. Kastamonu Education Journal, 12(1), 13-18.
Altın, B. N. & Demirtaş, S. (2009). Sosyal bilgiler dersinde sınıf dışı eğitim etkinlikleri. In M. Safran (Ed.), Sosyal Bilgiler Öğretimi (pp. 507-541). Ankara: Pegem Akademi.
Anderson, D., Kisiel, J. & Storksdieck, M. (2006). Understanding teachers' perspectives on field trips: Discovering common ground in three countries. Curator, 49(3), 364-386.
Anderson, D., Lucas, K. B. & Ginns, I. S. (2003). Theoretical perspectives on learning in an informal setting. Journal of Research in Science Teaching, 40(2), 177-199.
Atabek Yiğit, E. (2011). Sanayi kuruluşları. In C. Laçin Şimşek (Ed.), Fen Öğretiminde Okul Dışı Öğrenme Ortamları (pp. 65-84). Ankara: Pegem Akademi.
Ayas, A. (2012). Kavram öğrenimi. In S. Çepni (Ed.), Kuramdan Uygulamaya Fen ve Teknoloji Öğretimi (pp. 151-177). Ankara: Pegem Akademi.
Balkan-Kıyıcı, F. & Atabek-Yiğit, E. (2010). Science education beyond the classroom: A field trip to wind power plant. International Online Journal of Educational Sciences, 2(1), 225-243.
Bartosh, O., Tudor, M., Ferguson, L. & Taylor, C. (2006). Improving test scores through environmental education: Is it possible? Applied Environmental Education and Communication, 5(3), 161-169.
Binbaşıoğlu, C. (2000). Okulda Ders Dışı Etkinlikler. İstanbul: Milli Eğitim Basımevi.
Blair, D. (2009). The child in the garden: An evaluative review of the benefits of school gardening. Journal of Environmental Education, 40(2), 15-38.
Bozdoğan, A. E. (2007). Role and importance of science and technology in education. Unpublished doctoral dissertation, Gazi University, Institute of Educational Science, Ankara.
Bozdoğan, A. E. (2012). The practice of prospective science teachers regarding the planning of education-based trips: Evaluation of six different field trips. Educational Sciences: Theory & Practice, 12(2), 1049-1072.
Braun, M., Buyer, R. & Randler, C. (2010). Cognitive and emotional evaluation of two educational outdoor programs dealing with non-native bird species. International Journal of Environmental & Science Education, 5(2), 151-158.
Braund, M. & Reiss, M. (2006). Towards a more authentic science curriculum: The contribution of out-of-school learning. International Journal of Science Education, 28(12), 1373-1388.
Chin, C. C. (2004). Museum experience - A research for science teacher education. International Journal of Science and Mathematics Education, 2, 63-90.
DeWitt, J. & Osborne, J. (2007). Supporting teachers on science-focused school trips: Towards an integrated framework of theory and practice. International Journal of Science Education, 29(6), 685-710.
Dyment, J. E. (2005). Green school grounds as sites for outdoor learning: Barriers and opportunities. International Research in Geographical and Environmental Education, 14(1), 24-41.
Dyment, J. E. & Bell, A. C. (2008). Grounds for health: The intersection of green school grounds and health-promoting schools. Environmental Education Research, 14(1), 77-90.
Ertaş, H. & Şen, A. İ. (2011). Planetaryumlar. In C. Laçin Şimşek (Ed.), Fen Öğretiminde Okul Dışı Öğrenme Ortamları (pp. 85-104). Ankara: Pegem Akademi.
Eshach, H. (2007). Bridging in-school and out-of-school learning: Formal, non-formal, and informal education. Journal of Science Education and Technology, 16(2), 171-190.
Falk, J. & Storksdieck, M. (2005). Using the contextual model of learning to understand visitor learning from a science center exhibition. Science Education, 89, 744-778.
Felix, N., Dornbrack, J. & Scheckle, E. (2008). Parents, homework and socioeconomic class: Discourses of deficit and disadvantage in the New South Africa. English Teaching: Practice and Critique, 7, 99-112.
Fenoughty, S. (2006). The landscape of the school ground. In Outdoor Education: Authentic Learning in the Context of Landscapes. Scotland: European Service Training Course Book.
Griffin, J. (2004). Research on students and museums: Looking more closely at the students in school groups. Science Education, 88(1), 59-70.
Griffin, J. & Symington, D. (1997). Moving from task-oriented to learning-oriented strategies on school excursions to museums. Science Education, 81, 763-779.
Gurnett, D. (2009). Environmental education and national parks, a case study of Exmoor. In Outdoor Education Research and Theory: Critical Reflections, New Directions. The Fourth International Outdoor Education Research Conference (pp. 53-59). Victoria, Australia: La Trobe University.
Higgins, P., Loynes, C. & Crowther, N. (1997). A guide for outdoor educators in Scotland. Penrith, UK: Adventure Education Press.
Holloway, J. H. (2000). Extracurricular activities: The path to academic success. Educational Leadership, 57(4), 211-222.
Kara, Y. (2018). Determining the effects of microscope simulation on achievement, ability, reports, and opinions about microscope in general biology laboratory course. Universal Journal of Educational Research, 6, 1981-1990.
Keleş, Ö. (2011). Doğa eğitimleri. In C. Laçin Şimşek (Ed.), Fen Öğretiminde Okul Dışı Öğrenme Ortamları (pp. 133-151). Ankara: Pegem Akademi.
Kisiel, J. F. (2003). Teachers, museums and worksheets: A closer look at a learning experience. Journal of Science Teacher Education, 14(1), 3-21.
Klemmer, C. D., Waliczek, T. M. & Zajicek, J. M. (2005). Growing minds: The effect of a school gardening program on the science achievement of elementary students. HortTechnology, 15(3), 448-452.
Laçin Şimşek, C. (2011). Okul dışı öğrenme ortamları ve fen eğitimi. In C. Laçin Şimşek (Ed.), Fen Öğretiminde Okul Dışı Öğrenme Ortamları (pp. 1-23). Ankara: Pegem Yayıncılık.
Maarschalk, J. (1988). Scientific literacy and informal science teaching. Journal of Research in Science Teaching, 25(2), 135-146.
Malone, K. & Tranter, P. J. (2003). School grounds as sites for learning: Making the most of environmental opportunities. Environmental Education Research, 9(3), 283-303.
McComas, W. F. (2006). Science teaching beyond the classroom. The Science Teacher, 73(1), 26-30.
Mygind, E. (2007). A comparison between children's physical activity levels at school and learning in an outdoor environment. Journal of Adventure Education and Outdoor Learning, 7(2), 161-176.
O'Brien, E. & Rollefson, M. (1995). Extracurricular participation and student engagement. ERS Spectrum, 13(3), 12-15.
Önder, S. & Konaklı, N. (2011). The determination of planning principles of botanic garden for Konya. Journal of Tekirdag Agricultural Faculty, 8(2), 1-11.
Orion, N. & Hofstein, A. (1994). Factors that influence learning during a scientific field trip in a natural environment. Journal of Research in Science Teaching, 31(10), 1097-1119.
Ozaner, F. S. (2004). Türkiye'deki okul dışı çevre eğitimi ne durumda ve neler yapılmalı? V. Ulusal Ekoloji ve Çevre Kongresi (5-8 Ekim 2004), Bildiri Kitabı (Doğa ve Çevre), 67-98, İzmir.
Öztan, Y. (2004). Yaşadığımız çevre ve peyzaj mimarlığı. Ankara: Tisamat Basım Sanayi.
Özyürek, M. (2001). Sınıf yönetimi. Ankara: Karatepe Yayınları.
Panizzon, D. & Gordon, M. (2003). Mission possible: A day of science, fun and collaboration. Australian Primary & Junior Science Journal, 19(2), 9-14.
Papp, S. & Thompson, G. (2003). What is a national park? Teachers guide. NSW National Parks and Wildlife Service.
Ramey-Gassert, L. (1997). Learning science beyond the classroom. The Elementary School Journal, 97(4), 433-450.
Rathunde, K. (1993). Motivational importance of extracurricular activities for adolescent development: Cultivating undivided attention. Annual Meeting of the American Educational Research Association (April 12-16), Atlanta.
Robinson, C. W. & Zajicek, J. M. (2005). Growing minds: The effects of a one-year school garden program on six constructs of life skills of elementary school children. HortTechnology, 15(3), 453-457.
Seidel, S. & Hudson, K. (1999). Müze eğitimi ve kültürel kimlik (Çev. Bahri Ata). Ankara: Ankara Üniversitesi Sosyal Bilimler Enstitüsü Yayınları.
Sezen, G. (2007). Off-class activities tailored to students of lower socioeconomic status (Intel Student Program: Istanbul case). Unpublished master's thesis, Beykent University, Institute of Social Science, Istanbul.
Shaffer, D. R. (1994). Social and personality development. California: Brooks/Cole Press.
Sparrow, L. (2008). Real and relevant mathematics: Is it realistic in the classroom? Australian Primary Mathematics Classroom, 13(2), 4-8.
Tamir, P. (1990). Factors associated with the relationship between formal, informal, and nonformal science learning. Journal of Environmental Education, 2(2), 34-42.
Thorp, L. & Townsend, C. (2001). Agricultural education in an elementary school: An ethnographic study of a school garden. Proceedings of the 28th Annual National Agricultural Education Research Conference (pp. 347-360), New Orleans.
Türkmen, H. (2010). İnformal (sınıf dışı) fen bilgisi eğitimine tarihsel bakış ve eğitimimize entegrasyonu. Çukurova Üniversitesi Eğitim Fakültesi Dergisi, 3(39), 46-59.
Ürey, M. (2013). Development and evaluation of science-based and interdisciplinary school garden program within the scope of free activity studies course. Unpublished doctoral dissertation, Karadeniz Technical University, Institute of Educational Science, Trabzon.
Ürey, M. & Çepni, S. (2015). Evaluation of the effect of science-based and interdisciplinary school garden program on some science and technology course from different variables. Hacettepe University Journal of Education, 30(2), 166-184.
Ürey, M. & Güler, M. (2018). Garden-based learning approach. In T. Çetin, A. Şahin, A. Mulalic & N. Obralic (Eds.), New Horizons in Educational Science II (pp. 299-317). Riga: Lambert Academic Publication Press.
Mustafa ÜREY
Yardımcı, E. (2009). The effect of activity-based nature education at a summer science camp on 4th and 5th grades conceptions of the nature, Unpublished master’s thesis, Abant Izzet Baysal University, Institute of Social Science, Bolu. Yavuz, M. (2012). The effect of using zoos in science education on student’s academic achievement and anxiety towards science and teachers-student’s conceptions, Unpublished master’s thesis, Sakarya University, Institute of Educational Science, Sakarya.
521
Chapter 25
Evaluation in Inquiry-Based Science Education¹

Özlem YURT, Esra ÖMEROĞLU
Introduction

Inquiry-based learning appears in the literature as an approach that allows children to construct their knowledge during the learning process. Built on the theoretical foundations of John Dewey's steps of scientific thinking, the inquiry-based learning approach is a multi-faceted process involving activities such as observing, asking questions, searching sources for prior information, collecting data, analysing, reporting, predicting, proposing and sharing results with others, all directed at producing a logical explanation for the events or phenomena that interest individuals. One of the main goals of science education is to develop scientific process and research skills in parallel with these processes. In line with this goal, researchers indicate that inquiry-based science education proceeds through certain stages organized around a research cycle. Research skills are supported through research consisting of investigation, acquisition, prediction, implementation, conclusion and exhibition phases. After this learning, various recording techniques are available to record and evaluate the acquired knowledge and skills: anecdotal records, short notes, diagrams, graphs, three-dimensional structures, sketches, photographs, visual and auditory recordings, children's drawings, checklists, participation charts and rating scales. It is very important to evaluate children's experiences, thoughts and achievements through these different means of documentation and, in this way, to make their learning processes visible.
¹ This chapter is produced from the Ph.D. thesis entitled "The Validity and Reliability Study on Science Learning Assessment Test for 60-72 Months Children and an Examination of the Effect of Inquiry Based Science Education Program on Science Learning".
Inquiry-Based Learning

In recent years, countries have begun restructuring their education programs. The restructured programs aim to educate individuals who acquire concepts, engage in research and inquiry, establish cause-effect relationships, develop critical thinking skills, use scientific process skills, follow problem-solving steps and scientific research processes, and possess high-level mental process skills, rather than memorize information. For children to gain these skills, it is very important that teachers use different methods and materials. One of these methods is the inquiry-based learning approach. Inquiry-based learning aims to involve children in an authentic process of scientific discovery (Pedaste et al., 2015). The approach allows children to construct their knowledge during the learning process. Woolfolk (2001) described it as an approach in which the teacher presents a complex problem or situation and children try to solve it by collecting information and testing their results (as cited in Çalışkan & Turan, 2010, p. 1239). King (1995) defines inquiry-based learning as handing responsibility for learning over to the child (as cited in Cripe, 2009, p. 42). Inquiry-based learning gives teachers and children the opportunity to investigate and to satisfy their curiosity with the data they obtain (Aydoğdu & Ergin, 2008, p. 21). Similarly, the approach encompasses the behaviours generally needed to produce a logical explanation for events or phenomena that interest individuals. In other words, inquiry-based learning is both a learning method and a way of creating explanations about how the world works (Köseoğlu, Tümay & Budak, 2008, p. 231).
This learning approach is both an important tool for building scientific knowledge and a multi-faceted process involving activities such as observing, asking questions, searching sources for prior information, collecting data, analysing, explaining, predicting, proposing and sharing results with others (Llewellyn, 2002, p. 10; Tash, 2009, p. 2). During this process, children are given the opportunity to actively use scientific research methods. For children, scientific research means asking a simple question, investigating it, answering it, and sharing the results with others (Brewer, 2007, p. 396). Children work actively in planning, implementing and evaluating the process, as well as in the repetition and verification studies of their research. In this way, by doing both the content and the processes of science, they learn through experience (Tatar & Kuru, 2006, p. 149).
Placing the child at the centre and developing child-based thinking are among the main characteristics of the inquiry-based learning approach. The teacher determines the acquisitions and indicators, anticipates children's reactions to the subject, guides the children when necessary as a class leader, and arranges the educational environment so that each child can receive one-to-one attention (Çalışkan & Turan, 2010, p. 1239). The teacher also encourages children to use their scientific process skills to find answers to their questions (Eastern III & Myers, 2011, p. 37; Fang, Lamme & Pringle, 2010, p. 3). The use of scientific process skills is thus the basis of inquiry-based learning (Saracho & Spodek, 2008, p. 19; Tosa, 2009, p. 22). Research has shown that inquiry-based science education has a positive effect on children's and students' research skills and science achievement (Cuevas et al., 2005; Deveci, 2018; Lynch et al., 2005; Marx et al., 2004; Ruby, 2006; Samarapungavan, Mantzicopoulos & Patrick, 2008; Suarez, 2011; Wise, 1996). Similarly, recent meta-analyses have shown that inquiry-based learning is more effective for student achievement than traditional expository instructional approaches (Lazonder & Harmsen, 2016). In Europe, different programs place inquiry-based science education at their centre as a pedagogical approach for developing and promoting scientific attitudes and the understanding of scientific knowledge (van Uum, Verhoeff & Peeters, 2017). Inquiry-based learning has had a place in the educational literature since the beginning of the 20th century. Dewey (1919, 1933), Conant (1947), Bruner (1960), Schwab (1960), Suchman (1961), Gagne (1963), and Piaget and Lawson (1985) are among the leading researchers associated with this approach (as cited in Tatar, 2006, p. 56).
Theoretical and Philosophical Foundations of the Inquiry-Based Learning Approach

The inquiry-based learning approach draws its theoretical foundations from John Dewey's steps of scientific thinking (Conklin, 2004, p. 206; Deboer, 2004, p. 26; Edelson, 1998; Goldman et al., 2010, p. 297; Llewellyn, 2002, p. 42). Inquiry-based learning processes allow children to make discoveries on their own (Çalışkan & Turan, 2008, p. 604; 2010, p. 1239). John Dewey was one of the first American educators to emphasize the importance of discovery learning and research (Dewey, 1900; 1902; 1916). In his early work, Dewey argued that the
learning process did not begin until the learner encountered a problematic situation and became fully focused on it (as cited in Llewellyn, 2002, p. 42). John Dewey (1910) emphasizes that the purpose of science is to develop thinking and reasoning skills, to construct new knowledge in the mind, to learn science concepts, and to understand science processes through research (Demir & Abell, 2010, p. 717). At the same time, Dewey believed that children could solve problems through real-life experiences and discuss them by sharing with others (Crawford, 2000, p. 918). The inquiry-based learning approach was also influenced by the works of Jean Piaget, Lev Vygotsky and David Ausubel. The works of these theorists have been integrated into the philosophy of learning known as "constructivism", which is used to shape educational materials. Constructivism-based activities make science concepts concrete, motivate children, and are carried out through practical activities. In the constructivist approach, knowledge is constructed by the individual through active thinking, defined as selective attention, the organization of information, and the integration or replacement of existing knowledge. It also emphasizes that social interaction is necessary for creating shared meanings. The individual should therefore be actively involved in the learning process both behaviourally and mentally (Albright et al., 2012, p. 21; Demiral, 2018; Keys & Bryan, 2001, p. 633; Minner, Levy & Century, 2010, p. 475; Trundle, 2009). The philosophical foundations of the inquiry-based learning approach rest on the philosophy of Progressive education. In inquiry-based learning, which coincides with the basic principles of Progressivism, the child is at the centre. The child should be active in the classroom, solving problems and discovering information independently. Teachers are not sources that hand information to children; they are guides who provide children with access to information.
The approach encourages children to work cooperatively and educates them as individuals who learn how to learn (Çalışkan & Turan, 2010, p. 1239; Tatar, 2006, p. 64). Supporting children's research skills is essential if inquiry-based learning is to be realized on these theoretical and philosophical foundations.

Research Skills

Research is a multi-faceted activity that includes observing, asking questions, examining books and other sources, searching for experimental evidence on a subject, collecting data with tools, analysing the data and drawing conclusions.
As a result, an answer, explanation or prediction can be proposed and the results can be discussed. Doing research requires verifying assumptions, using logical and critical thinking, and considering alternative answers. Children at all levels should have these skills (Cuevas et al., 2005, p. 339; Harlen, 2004, p. 5; NRC, 1996; Llewellyn, 2002, p. 6). Developing children's research skills is one of the main goals of preschool science education, and these skills can be supported with different materials and different science activities. The skills that constitute scientific research include creating new perspectives, asking questions about surrounding objects and events, examining objects, materials and events, observing objects and living things with all the senses, comparing, ordering and classifying objects, deepening investigations with various materials, recording findings in writing and graphics, working in harmony with friends, and sharing and discussing ideas. Within the scope of all these skills, children's research should be considered a process involving various steps, with specific research skills given to children at each step (NRC, 2000; Worth & Grollman, 2003, p. 18). Children are deeply interested in understanding their physical and social environment. As investigators of materials, organisms and events, they bring curiosity and a natural desire to investigate to their work and play. Nevertheless, it is important for an adult to guide them, and to remain curious as well, in order to improve children's abilities and strengthen their understanding. Teachers should ask open-ended questions to start the research, guide children in planning their research, encourage all children to participate, and keep records of each child's science learning (Akman, 2011, p. 153). Lederman (2006) states that teachers should ensure that children understand the processes of scientific research.
Lederman (2006) highlights eight points that children need to know about scientific research before they start any activity: (1) every investigation begins with a question; (2) all scientific research has various ways and steps to follow; (3) research is structured through questions; (4) even when scientists perform the same process, they may not obtain the same results; (5) the procedures used may affect the results; (6) the results of research should be consistent with the information gathered; (7) scientific data and scientific evidence are different concepts; and (8) explanations develop from the combination of pieces of information (as cited in Hung, 2009, p. 19). As a result, children who possess research skills in inquiry-based learning can begin to carry out the research cycle. The research cycle consists of six phases, including investigation, acquisition, prediction, implementation,
conclusion and exhibition (Llewellyn, 2002, p. 13; Militello, Rallis & Goldring, 2009, p. 28). During the investigation phase, children usually begin their research by finding a question or problem. The process generally begins with a "What if...?" question and continues with a different event, a question, or a teacher-directed activity. In general, the teacher plans explorations around experiences whose outcomes children cannot predict intuitively under normal circumstances. This creates a disequilibrium in the children and leads them to ask "Why?". During the acquisition phase, children brainstorm and share ideas about the results that may arise in the research. At this stage, the child asks, "What do I already know that could help me solve this question?". During the prediction phase, children begin to frame the information they have gathered in "I think..." statements. This phase usually involves creating a plan to answer the question being investigated. In the implementation stage, children carry on planning and designing how to solve the question (Llewellyn, 2002, p. 13). At the conclusion stage, children begin to compare their observations with the first "What if...?" question. At the same time, children may encounter contradictions that lead to a new "What if...?" question and bring the group back to the investigation stage. In the final stage, called exhibition, children share and discuss the acquired information with others. The research cycle is a helpful practice for teachers when planning research for children. During the cycle, children can return to different phases with different perspectives, so the cycle guides them through their research (Llewellyn, 2002, p. 14). Figure 25.1 summarizes children's work in the research cycle (Llewellyn, 2002, p. 15).
Figure 25.1 Research Cycle (Llewellyn, 2002, p. 15)
Inquiry-Based Science Education

In the inquiry-based learning approach, descriptors such as "science in real life" and "doing science" are often used, and the concept of inquiry-based science thus becomes the basis of science education (Crawford, 2000, p. 918; Harris, 2009, p. 28). Inquiry-based science involves asking questions and gathering information about the natural world using children's mental and physical skills (Gillies et al., 2012, p. 93; Harlen, 2004, p. 2; Harlow, 2009, p. 143). In this respect, inquiry-based science education is a child-centred learning method (Harris, 2009, p. 31). Research in developmental and cognitive psychology shows that development in early childhood is critical and that children can reach their full developmental potential when provided with the necessary stimulation (Trundle, 2009, p. 1). For this reason, reaching children early in the preschool years is very important for supporting many aspects of their development. Researchers emphasize that science education should start in the early years of school life (Eshach & Fried, 2005, p. 316; Kallery, 2004, p. 148; Saçkes et al., 2011, p. 217; Watters et al., 2001, p. 2). Children bring innate curiosity and interest to the beginning of science education, and they start to observe and think about nature from the age of three or four (Eshach & Fried, 2005, p. 319; Conezio et al., 2002, p. 1-2). In this process, they discover information about their environment and try to find the answers they are looking for (McDonald & McDonald, 2002, p. 6; Saracho & Spodek, 2008, p. 19). Science can also be connected with areas such as play, mathematics, art and social studies, and it significantly supports language and literacy. The use of scientific expressions from an early age also positively affects the acquisition of scientific concepts (as cited in Kubicek, 2005).
Science education is also an effective method for developing scientific thinking in children (Aktaş, 2002, p. 1; Eshach & Fried, 2005, p. 320). Children are supported academically by the development of scientific thought and can make critical evaluations by sharing information with those around them (Siraj-Blatchford & MacLeod-Brudenell, 1999, p. 6; Trundle, 2009). Accordingly, the aims of science education include children exploring their environment, learning about it and communicating with it, thinking independently, reasoning, and developing basic science skills. At the same time, it is also necessary to use appropriate scientific processes when faced with events and decisions, and to enable people working in the fields of
science to think about science- and technology-related events by using their knowledge, understanding and skills (Brewer, 2007, p. 387; Dere & Ömeroğlu, 2001, p. 1; Krajcik, Charlene & Berger, 1999, p. 13). The basis of inquiry-based science education is initiating research to answer questions and to find explanations. Children therefore work with science-based questions. In this process, children give priority to evidence, which enables them to develop and evaluate statements addressing scientific questions. They evaluate their scientific explanations in the light of alternative explanations. Inquiry-based science involves not only questions but also children's active participation in practical activities and research. Through such explorations, children come to understand scientific concepts and develop the scientific process skills needed to conduct scientific research (Capps & Crawford, 2013; Hammerman, 2006; Minner, Levy & Century, 2010, p. 476). With all these processes in use, inquiry-based science education is an approach that enables children to participate actively in activities that reflect the methods of scientific research. Effective education requires understanding how scientists work, as well as the process skills needed to carry out scientific research. With the effective use of inquiry-based science education, children can conduct their own planned research and learn to think scientifically. The important point in this process is not the result of the research but the research itself. It is therefore very important to give children time to discuss and explain their ideas (Kubicek, 2005).

Models Used in Inquiry-Based Science Education

Inquiry-based learning is an approach in which children formulate their own questions about an event and then discover the answers. Question-making is part of the plan, while problem-solving is part of the result.
Inquiry-based learning requires responsibility, because children actively explore concepts and their meanings through various methods. As part of this process, children can begin to transfer their learning to new situations, because they are confronted with interesting and conflicting ideas (Rushton, 2008, p. 6). Inquiry-based learning, which emphasizes learning by doing and thinking and which awakens children's interest and curiosity through real-life connections, is often implemented through the 5E learning cycle model. This model consists of five phases: engage, explore, explain, elaborate and evaluate (Carin & Bass, 2001, p. 119; Hackling,
Smith & Murcia, 2010, p. 18; Llewellyn, 2011, p. 47; Wilder & Shuttleworth, 2005, p. 37). The 5E learning cycle model starts with the engage phase. Here the teacher creates a question or problem and draws the children's attention and interest, determining questions about the concepts. Children attempt research to answer the teacher's question; in trying to find the answer, they take an interest in the event and begin to think. This phase reveals children's prior knowledge and enables them to connect new learning with it. The second phase is explore. In this part, children actively produce ideas for solving the question or problem and seek solutions. Once the children's interest has been captured, research activities begin with observation, establishing relationships and raising questions. These activities are concrete experiences that involve scientific process skills. During this phase, the teacher encourages and guides the children when necessary, giving them time to review, observe and share ideas. In the third phase, the explain stage, the teacher asks children to answer questions based on their observations. First, the teacher asks the children questions so that they explain their observations. After receiving the children's answers, the teacher explains the events scientifically in a way the children can understand, relating the explanation to their answers. Verbal methods are widely used at this stage. In the fourth stage, the elaborate phase, children are encouraged to expand the concepts they have acquired and to establish connections with related concepts. The teacher revisits alternative explanations with the children. In this way, questions, answers and explanations are understood from a deeper and wider perspective. Children share information in groups to create new definitions and conclusions.
In the final phase, evaluate, the teacher observes children's use of new concepts and skills and the changes in their behaviour and thinking. The teacher asks the children open-ended questions and lets them answer using their observations and their prior and newly obtained information. The teacher also assesses himself or herself and the learning process, creating the basis for subsequent research (Beltran, Sarmiento & Mora-Flores, 2013, p. 40; Bybee, 2004, p. 8; Campbell, 2000, p. 36; Carin & Bass, 2001, p. 120). Rushton (2008, p. 6) stated that the learning process in inquiry-based science education takes place in seven stages. In the stimulation phase, when a new activity is started, the teacher motivates the children with experiences that engage the senses. In the mobilization and communication phase, children
brainstorm to discover the problem and ask questions. The teacher lets children ask their own questions and choose their own learning outcomes, develops a simple hypothesis by asking "What will we learn and what do we want to find out?", and guides the children with questions such as "What is it about? Have you ever thought about that?". In the planning and estimation phase, the teacher prepares a general plan for the activity in advance; the children are then given the opportunity to prepare their own plans. Careful guidance ensures that the teacher's and children's plans align with the questions and the expected results. In the research phase, children work individually or in groups to collect, classify and group information and observations. In the recording and reporting phase, children try to discuss and interpret the collected information and observations. In the merging phase, the obtained information and observations are compared and generalizations are made; the shared information grows and connects more broadly with the children's suggestions. In the evaluation phase, children and teachers come together to think about and discuss the results of the activity. Khasnabis (2008, p. 17) stated that the learning process in inquiry-based science education takes place in five phases. In the first phase, children work on a scientific question in connection with their prior knowledge. In the second phase, children explore ideas through practical experiences, testing assumptions, problem solving and making explanations. In the third phase, the teacher guides the children in making sense of and analysing the information, synthesizing ideas, forming examples and explaining concepts. In the fourth phase, the new knowledge and skills children have acquired are extended to new situations. In the fifth phase, the teacher evaluates how learning has taken place.
As stated in the literature, many researchers point out that inquiry-based science education takes place in certain stages (Edelson, 1998, p. 78; Eryılmaz Toksoy & Kaya, 2017; Goldman et al., 2010, p. 299; Khasnabis, 2008, p. 17; NRC, 1996, p. 122; Rushton, 2008, p. 6). Although these stage descriptions differ at some points, they are similar at many others. All of the researchers emphasize that the first stage of inquiry-based science education is drawing children's attention to a question or problem. The children then collect information through the active use of scientific process skills related to the question or problem, use the collected information to explain their ideas, and apply the gained information and concepts in new situations. Finally, the researchers agree that the learning process ends with evaluation by both teacher and child.
Evaluation in Inquiry-Based Science Education

The inquiry-based approach to acquiring scientific concepts provides many opportunities to understand science, to assess development and learning continuously, and to make reflective assessments of children's progress based on concrete evidence. Evaluation in inquiry-based science education should rest on three basic questions. "Where are the students trying to go?": this question relates to the achievements, indicators and targets children are expected to attain; in this respect, the purpose of classroom evaluation is to improve development and learning. "Where are the children now?": this question refers to evaluating children's learning during the different activities of the day; formal and informal evaluation techniques can be combined, and children's development and learning can be monitored from various aspects. "How are the students going to get there?": this question relates to the use of evaluation data; the evaluation results suggest how to adjust guidance and feedback so that children can attain the achievements and indicators (Contant et al., 2018). The data collected and recorded in the evaluation process form the basic record. These data are gathered through various recording techniques of three basic types: explanatory records, rating scales and rubrics (Ekinci, 2012).

Explanatory Recording Techniques

Such techniques include anecdotal records, short notes, figures/charts, sketches, photographs and concept maps. Anecdotal records are detailed descriptive statements recorded after a behaviour or situation occurs. A teacher can record a behaviour in detail upon observing that it is critical to the child's development (McFarland, 2008). An example of an anecdotal record made by a teacher observing a child during an inquiry-based science activity is shown in Table 25.1.
Table 25.1 Anecdotal Record Example

Place: Science Centre
Date: 1.11.2018
Child's name: Sırma
Hour: 11.00
Observer: Teacher
Notes/Comments: Observing the flowers, Sırma planted them in the pots according to their types and sizes.
Figures/graphs, sketches and photographs: Graphs, sketches, figures and photographs used during inquiry-based science education activities provide information for evaluating the educational process, as well as for documenting the children's experience of both the process and the product. Figure 25.2 shows figures that children created from leaves, and Figure 25.3 shows sketches that children created from CDs.
Figure 25.2 Figures that children created from leaves
Figure 25.3 Sketches that children created from CDs

Concept maps: Concept maps are graphical tools used to organize and represent information. They show the relationships between topics and concepts, as well as individuals' prior knowledge and thoughts (Novak, Gowin & Johansen, 1983). As an evaluation tool, concept maps can be considered a method for measuring the structure of children's knowledge based on their own reporting (Ruiz-Primo & Shavelson, 1996). Here, the children are asked to express what they know about a subject or idea. In this process, both the correct information given by the children and their misconceptions should be recorded. Figure 25.4 shows a concept map created by children about trees (Birbili, 2006).
Figure 25.4 Trees Concept Map
Checklists

Checklists are used when complex responses are not expected from children. They can be used to determine whether children are taking the steps needed to evaluate their behaviour or to complete an activity effectively (Butler, McColskey & O'Sullivan, 2005). Checklists are also an appropriate starting point for teachers to examine their own assessments more closely and to gather further evidence (Brown, Gibbs & Glover, 2003). An example is shown in Table 25.2.

Table 25.2 Checklist

Name:            Age:
Key: + Exhibits behaviour regularly   * Shows improvement   - Does not exhibit behaviour

Indicators                                                 _/__/__  _/__/__  _/__/__  _/__/__
Can she/he use science-related materials effectively?
Can she/he use science tools effectively?
Is she/he willing to participate in science activities?
Does she/he actively participate in the science centre?

Rubrics and descriptive rating scales

A rubric is a scoring tool used to assess children's work, usually when evaluating children's performances or products derived from a performance task. Rubrics are powerful tools for both instruction and assessment, and they can improve children's performance by clarifying the expectations of
teachers (Andrade, 2000; Mertler, 2001). An example rubric for assessing the measuring and recording science process skill is shown in Table 25.3.

Table 25.3 Scoring rubric for the measuring and recording process

Criteria                                                Insufficient  Beginning  Improving  Enough
Estimates the measurement result.
Measures with non-standard units.
Compares the measurement results with the estimation.
Can define measurement tools.
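Where teachers keep digital records, instruments like the checklist and rubric above can be mirrored by a small data structure. The sketch below is illustrative only: the names `ChecklistEntry`, `record` and `shows_progress` are hypothetical and not part of the instruments described in this chapter; the sketch simply encodes the rating key from Table 25.2 and tracks one indicator's observations by date.

```python
from dataclasses import dataclass, field
from datetime import date

# Rating key from Table 25.2 (hypothetical encoding of the symbols).
RATINGS = {
    "+": "exhibits behaviour regularly",
    "*": "shows improvement",
    "-": "does not exhibit behaviour",
}

@dataclass
class ChecklistEntry:
    indicator: str                                     # e.g. a row of Table 25.2
    observations: dict = field(default_factory=dict)   # date -> rating symbol

    def record(self, day: date, rating: str) -> None:
        # Reject symbols outside the checklist's rating key.
        if rating not in RATINGS:
            raise ValueError(f"unknown rating {rating!r}")
        self.observations[day] = rating

    def shows_progress(self) -> bool:
        # The most recent observation counts; '+' or '*' is taken as progress.
        if not self.observations:
            return False
        latest = self.observations[max(self.observations)]
        return latest in ("+", "*")

# Usage: two observation dates for one indicator.
entry = ChecklistEntry("Can she/he use science tools effectively?")
entry.record(date(2018, 11, 1), "-")
entry.record(date(2018, 11, 8), "*")
print(entry.shows_progress())  # True: the latest observation shows improvement
```

Such a record keeps the checklist's original symbols intact while letting a teacher query progress over time; it is a sketch of one possible digitization, not a prescribed format.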
Narrative writing

During inquiry-based science activities, children can record observations, thoughts, questions or drawings in science books. In this way, teachers can record and evaluate children's past and current status. The "llama" drawings from children's books are shown in Figure 25.5 (Ganzel & Stuglik, 2003).
Figure 25.5 "Llama" drawings from children's books

Portfolios

Portfolios are collections of work that show children's efforts, progress and levels of proficiency relative to a goal. They can demonstrate children's development as evidence of competence or skill. Portfolios can be considered tools that encourage children to communicate their understanding of science, their strengths and weaknesses, their progress, and their self-reflection (Butler, McColskey & O'Sullivan, 2005). With a portfolio culture established in
Evaluation in Inquiry-Based Science Education
pre-school education classrooms, teachers facilitate children's understanding processes through continuous evaluation and learning interaction. Portfolios help teachers evaluate children's concepts, strategies, or language use as a basis for guiding children's educational activities (Duschl & Gitomer, 1997). Here are some examples of what may be included in a portfolio:
Pictures: children's drawings, child-made pictures and photographs,
Figure 25.6 Portfolio product - animal drawings of children (Pargellis, 2014; Ahi, 2016)
Reports of children's research, questions, thoughts and plans,
Diagrams, graphs, or other recorded data,
Figure 25.7 Portfolio product - children's three-dimensional products
Entries from children's math, science, or social studies journals/notebooks,
Examples of problem solving, proposed solutions, problems the children have created, etc.,
Videotapes and/or audio cassettes.
Figure 25.8 Portfolio product - map drawings of children (Katie, 2014)

Conclusion
One of the approaches that should be used effectively in preschool science education is inquiry-based learning. The basis of this approach is that the child conducts research individually or in a group. In inquiry-based science education, children learn to do research by following the steps that research requires. A prerequisite is that children who are actively involved in the process acquire basic process skills such as observation, measurement, classification and comparison. For the inquiry-based learning approach to be effective, it is important that children participate actively in the research process and its steps. To achieve this participation, educators must first understand the philosophy of the approach and have the necessary knowledge and skills. When planning children's learning processes, educators should include activities in which children can use their science process skills. Thus, the philosophy of inquiry-based learning should be reflected in applications of science activities through plans that target the concepts of science. At the end of all these learning processes, an evaluation plan can be created to decide what teachers are going to evaluate, as well as when and how to evaluate it. In this plan, the purpose of the evaluation should be determined, along with what will be evaluated and when. In this way, teachers can administer alternative evaluation methods.
References
Ahi, B. (2016). Flying, feathery and beaked objects: Children's mental models about birds. International Electronic Journal of Environmental Education, 6(1), 1-16.
Akman, B. (2011). Evaluation in science education [Fen eğitiminde değerlendirme]. In B. Akman, U. G. Balat & T. Güler (Eds.), Preschool science education [Okul öncesi dönemde fen eğitimi] (pp. 151-162). Ankara: Pegem A Publishing.
Aktaş, A. Y. (2002). Objectives of science education in pre-school period [Okul öncesi dönemde fen eğitiminin amaçları]. Çocuk Gelişimi Eğitimi Dergisi, 6, 1-8.
Albright, K., Petrulis, R., Vasconcelos, A., & Wood, J. (2012). Inquiry-based learning, research methods and dissertation support in information. Education for Information, 29(1), 19-38. https://doi.org/10.3233/EFI-2010-0912.
Andrade, H. G. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13-19.
Aydoğdu, B., & Ergin, Ö. (2008). The effects of open-ended and inquiry-based laboratory techniques on students’ science process skills [Fen ve teknoloji dersinde kullanılan farklı deney tekniklerinin öğrencilerin bilimsel süreç becerilerine etkileri]. Ege Eğitim Dergisi, 2(9), 15-36. Beltran, D., Sarmiento, L. E., & Mora-Flores, E. (2013). Science for today's language learners: Developing academic language through inquiry-based instruction. CA: Shell Educational Publishing, Inc. Birbili, M. (2006). Mapping knowledge: Concept maps in early childhood education. Early Childhood Research & Practice, 8(2), n2. Brewer, J. A. (2007). Introduction to early childhood education: Preschool through primary grades. (Sixth edition) USA: Pearson Education Inc. Brown, E., Gibbs, G., & Glover, C. (2003). Evaluation tools for investigating the impact of assessment regimes on student learning. Bioscience Education, 2(1), 1-7. https://doi.org/10.3108/beej.2003.02000006.
Butler, S. M., McColskey, W., & O'Sullivan, R. G. (2005). How to assess student performance in science: Going beyond multiple-choice tests (3rd ed.). Greensboro, NC: Southeast Regional Vision for Education.
Bybee, R. W. (2004). Scientific inquiry and science teaching. In L. Flick and N. G. Lederman (Eds.), Scientific inquiry and nature of science. Dordrecht: Kluwer Academic Publishers.
Campbell, M. A. (2000). The effects of the 5E learning cycle model on students' understanding of force and motion concepts. Unpublished Master's Thesis, Millersville University, Pennsylvania.
Capps, D. K., & Crawford, B. A. (2013). Inquiry-based instruction and teaching about nature of science: Are they happening? Journal of Science Teacher Education, 24(3), 497-526. https://doi.org/10.1007/s10972-012-9314-z.
Carin, A. A., & Bass, J. E. (2001). Teaching science as inquiry (9th ed.). Upper Saddle River, NJ: Prentice Hall, Inc.
Conezio, K., French, L., Nwachi, O., & Sanders, S. (2002). Science at the center of the integrated curriculum: 10 benefits noted by head start teachers. Beyond the Journal - Young Children on the Web. Retrieved on February 10, 2008 from http://journal.naeyc.org/btj/200209/ScienceAtTheCenterOfTheIntegratedClasroom.pdf.
Conklin, W. (2007). Applying differentiation strategies: Teacher's handbook for grades 3-5. Huntington Beach, CA: Shell Education.
Contant, T. L., Bass, J. L., Tweed, A. A., & Carin, A. A. (2017). Teaching science through inquiry-based instruction (13th ed.). USA: Pearson Education.
Crawford, B. A. (2000). Embracing the essence of inquiry: New roles for science teachers. Journal of Research in Science Teaching, 37(9), 916-937. https://doi.org/10.1002/1098-2736(200011)37:93.0.CO;2-2.
Cripe, M. K. (2009). A study of teachers' self-efficacy and outcome expectancy for science teaching throughout a science inquiry-based professional development program. Unpublished Doctoral Thesis, The Graduate Faculty of The University of Akron, Ohio.
Cuevas, P., Lee, O., Hart, J., & Deaktor, R. (2005). Improving science inquiry with elementary students of diverse backgrounds. Journal of Research in Science Teaching, 42(3), 337-357. https://doi.org/10.1002/tea.20053.
Çalışkan, H., & Turan, R. (2008). Investigating the effect of inquiry-based learning (IBL) approach on academic achievement and retention level of the course of social studies [Araştırmaya dayalı öğrenme yaklaşımının sosyal bilgiler dersinde akademik başarıya ve kalıcılık düzeyine etkisi]. Türk Eğitim Bilimleri Dergisi, 6(4), 603-627.
Çalışkan, H., & Turan, R. (2010). The effect of inquiry-based learning approach on attitude in the course of social studies [Sosyal bilgiler dersinde araştırmaya dayalı öğrenme yaklaşımının derse yönelik tutuma etkisi]. İlköğretim Online, 9(3), 1238-1250.
DeBoer, G. E. (2004). Historical perspectives on inquiry teaching in school. In L. Flick and N. G. Lederman (Eds.), Scientific inquiry and nature of science (pp. 17-37). Dordrecht: Kluwer Academic Publishers.
Demir, A., & Abell, S. K. (2010). Views of inquiry: Mismatches between views of science education faculty and students of an alternative certification program. Journal of Research in Science Teaching, 47(6), 716-741. https://doi.org/10.1002/tea.20365.
Demiral, U. (2018). Examination of critical thinking skills of preservice science teachers: A perspective of social constructivist theory. Journal of Education and Learning, 7(4), 179-190.
Dere, H., & Ömeroğlu, E. (2001). Science, nature and math studies in the pre-school period [Okul öncesi dönemde fen doğa matematik çalışmaları]. Ankara: Anı Yayıncılık.
Deveci, İ. (2018). Middle school science laboratory academic risk-taking scale: A validity and reliability study [Ortaokul Fen Laboratuvarı Akademik Risk Alma Ölçeği: Geçerlik ve Güvenirlik Çalışması]. İlköğretim Online, 17(4), 1861-1876.
Duschl, R. A., & Gitomer, D. H. (1997). Strategies and challenges to changing the focus of assessment and instruction in science classrooms. Educational Assessment, 4(1), 37-73. https://doi.org/10.1207/s15326977ea0401_2.
Easterly III, R. G., & Myers, B. E. (2011). Inquiry-based instruction for students with special needs in school based agricultural education. Journal of Agricultural Education, 52(2), 36-46.
Edelson, D. C. (1998). Matching the design of activities to the affordances of software to support inquiry-based learning. In A. S. Bruckman, M. Guzdial, J. L. Kolodner and A. Ram (Eds.), Proceedings of the International Conference of the Learning Sciences 1998 (pp. 77-83). Charlottesville, VA: AACE.
Ekinci, B. (2012). Evaluation and support of development and learning in early childhood [Erken çocukluk döneminde gelişim ve öğrenmenin değerlendirilmesi ve desteklenmesi] (Trans. Ed.: Birsen Ekinci Palut). Documentation: Recording information [Belgelendirme: Bilgilerin kaydedilmesi] (Unit 5, pp. 72-95). Ankara: Nobel Publishing.
Eryılmaz Toksoy, S., & Kaya, Ö. (2017). Problem-solving strategies adopted and used by physics teachers [Fizik Öğretmenlerinin Benimsedikleri ve Kullandıkları Problem Çözme Stratejileri]. 5th International Instructional Technologies & Teacher Education Symposium (ITTES2017), 11-13 October, İzmir.
Eshach, H., & Fried, M. N. (2005). Should science be taught in early childhood? Journal of Science Education and Technology, 14(3), 315-336. https://doi.org/10.1007/s10956-005-7198-9.
Fang, Z., Lamme, L., & Pringle, R. (2010). Language and literacy in inquiry-based science classrooms, grades 3-8. Thousand Oaks, CA: Corwin Press and Arlington.
Ganzel, C., & Stuglik, J. (2003). The llama projects. Early Childhood Research & Practice, 5(2), n2.
Gillies, R. M., Nichols, K., Burgh, G., & Haynes, M. (2012). The effects of two strategic and meta-cognitive questioning approaches on children's explanatory behaviour, problem-solving, and learning during cooperative, inquiry-based science. International Journal of Educational Research, 53, 93-106. https://doi.org/10.1016/j.ijer.2012.02.003.
Goldman, S., Radinsky, J., Tozer, S., & Wink, D. (2010). Learning as inquiry. In P. Peterson, E. Baker and B. McGaw (Eds.), The International Encyclopaedia of Education (3rd ed.). Oxford: Elsevier.
Hackling, M., Smith, P., & Murcia, K. (2010). Talking science: Developing a discourse of inquiry. Teaching Science, 56(1), 17-22.
Hammerman, E. L. (2006). Eight essentials of inquiry-based science, K-8. Thousand Oaks, California: Corwin Press.
Harlen, W. (2004). Evaluating inquiry-based science developments. National Academy of Sciences. Retrieved on June 10, 2012 from http://www.nsrconline.org/pdf/NAS_paper_eval_inquiry_science.pdf.
Harlow, D. B. (2009). Structures and improvisation for inquiry-based science instruction: A teacher's adaptation of a model of magnetism activity. Science Education, 94(1), 142-163. https://doi.org/10.1002/sce.20348.
Harris, F. D. (2009). Using inquiry-based instructional strategies in third-grade science. Unpublished Doctoral Thesis, Capella University, Minnesota.
Hung, M. (2009). Achieving science, math and reading literacy for all: The role of inquiry-based science instruction. Unpublished Doctoral Thesis, University of Utah, Utah.
Kallery, M. (2004). Early years teachers' late concerns and perceived needs in science: An exploratory study. European Journal of Teacher Education, 27(2), 147-165. https://doi.org/10.1080/026197604200023024.
Katie (2014). Montessori mapping activities {Intro to geography for kids}. Retrieved 1.12.2018 from https://www.giftofcuriosity.com/montessori-mapping-activities-intro-to-geography-for-kids/
Keys, C., & Bryan, L. A. (2001). Co-constructing inquiry-based science with teachers: Essential research for lasting reform. Journal of Research in Science Teaching, 38, 631-645. https://doi.org/10.1002/tea.1023.
Khasnabis, D. (2008). Developing scientific literacy through classroom instruction: Investigating learning opportunities across three modes of inquiry-based science instruction. Unpublished Doctoral Thesis, The University of Michigan, Michigan.
Köseoğlu, F., Tümay, H., & Budak, E. (2008). Paradigm changes about nature of science and new teaching approaches [Bilimin doğası hakkında paradigma değişimleri ve öğretimi ile ilgili yeni anlayışlar]. G.Ü. Gazi Eğitim Fakültesi Dergisi, 2, 221-237.
Krajcik, J. S., Czerniak, C. M., & Berger, C. (1999). Teaching children science: A project-based approach. US: McGraw-Hill College.
Kubicek, J. (2005). Inquiry-based learning, the nature of science, and computer technology: New possibilities in science education. Canadian Journal of Learning and Technology, 31(1), 51-64.
Lazonder, A. W., & Harmsen, R. (2016). Meta-analysis of inquiry-based learning: Effects of guidance. Review of Educational Research, 86(3), 681-718. https://doi.org/10.3102/0034654315627366.
Llewellyn, D. (2002). Inquiry within: Implementing inquiry-based science standards. USA: Corwin Press, Inc., A Sage Publications Company.
Llewellyn, D. (2011). Differentiated science inquiry. Thousand Oaks, CA: Corwin Press.
Lynch, S., Kuipers, J., Pyke, C., & Szesze, M. (2005). Examining the effects of a highly rated science curriculum unit on diverse populations: Results from a planning grant. Journal of Research in Science Teaching, 42(8), 912-946. https://doi.org/10.1002/tea.20080.
Marx, R., Blumenfeld, P. C., Krajcik, J., Fishman, B., Soloway, E., Geier, R., & Tali Tal, R. (2006). Inquiry-based science in the middle grades: Assessment of learning in urban systemic reform. Journal of Research in Science Teaching, 41(10), 1063-1080. https://doi.org/10.1002/tea.20039.
McDonald, J. M., & McDonald, R. B. (2002). Nature study: A science curriculum for three and four-year-olds. In J. Cassidy and S. D. Garrett (Eds.), Early childhood literacy: Programs & strategies to develop cultural, linguistic, scientific and healthcare literacy for very young children & their families (pp. 164-185). Texas: University Corpus Christi.
McFarland, L. (2008). Anecdotal records: Valuable tools for assessing young children's development. Dimensions of Early Childhood, 36(1), 31-36.
Mertler, C. A. (2001). Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation, 7(25), 1-10.
Militello, M., Rallis, S., & Goldring, E. (2009). Leading with inquiry & action: How principals improve teaching and learning. Thousand Oaks, CA: Corwin.
Minner, D. D., Levy, A. J., & Century, J. (2010). Inquiry-based science instruction - What is it and does it matter? Results from a research synthesis
years 1984 to 2002. Journal of Research in Science Teaching, 47(4), 474-496. https://doi.org/10.1002/tea.20347.
National Research Council [NRC] (1996). National science education standards. Washington, DC: National Academy Press.
National Research Council [NRC] (2000). Inquiry and the national science education standards. Washington, DC: National Academies Press.
Novak, J. D., Gowin, D. B., & Johansen, G. T. (1983). The use of concept mapping and knowledge vee mapping with junior high school science students. Science Education, 67(5), 625-645. https://doi.org/10.1002/sce.3730670511.
Pargellis, S. (2014). Early childhood: Observational drawing as a science tool. Retrieved 1.12.2018 from https://www.mustardseedschool.org/2014/11/observational-drawing-as-a-science-tool-in-the-early-childhood-classroom/
Pedaste, M., Mäeots, M., Siiman, L. A., De Jong, T., Van Riesen, S. A., Kamp, E. T., ... & Tsourlidaki, E. (2015). Phases of inquiry-based learning: Definitions and the inquiry cycle. Educational Research Review, 14, 47-61. https://doi.org/10.1016/j.edurev.2015.02.003.
Ruby, A. (2006). Improving science achievement at high-poverty urban middle schools. Science Education, 90(6), 1005-1027. https://doi.org/10.1002/sce.20167.
Ruiz-Primo, M. A., & Shavelson, R. J. (1996). Problems and issues in the use of concept maps in science assessment. Journal of Research in Science Teaching, 33(6), 569-600. https://doi.org/10.1002/(SICI)1098-2736(199608)33:63.0.CO;2-M.
Rushton, S. (2008). Activate your students: An inquiry-based learning approach to sustainability. Carlton South, Vic.: Curriculum Corporation.
Saçkes, M., Trundle, K. C., Bell, R. L., & O'Connell, A. A. (2011). The influence of early science experience in kindergarten on children's immediate and later science achievement: Evidence from the Early Childhood Longitudinal Study. Journal of Research in Science Teaching, 48(2), 217-235. https://doi.org/10.1002/tea.20395.
Samarapungavan, A., Mantzicopoulos, P., & Patrick, H. (2008). Learning science through inquiry in kindergarten. Science Education, 92, 868-908. https://doi.org/10.1002/sce.20275.
Saracho, O. N., & Spodek, B. (2008). Contemporary perspectives on science and technology in early childhood education. Charlotte, NC: IAP-Information Age Pub.
Suarez, M. L. (2011). The relationship between inquiry-based science instruction and student achievement. Unpublished Doctoral Thesis, The University of Southern Mississippi, Mississippi.
Siraj-Blatchford, J., & MacLeod-Brudenell, I. (1999). Supporting science, design and technology in the early years. Buckingham: Open University Press.
Tash, G. G. (2009). A phenomenological study of assessment methods in the inquiry-based science classroom: How do educators decide? Unpublished Doctoral Thesis, Walden University, Minnesota.
Tatar, N. (2006). The effect of inquiry-based learning approaches in the education of science in primary school on the science process skills, academic achievement and attitude [İlköğretim fen eğitiminde araştırmaya dayalı öğrenme yaklaşımının bilimsel süreç becerilerine, akademik başarıya ve tutuma etkisi]. Doctoral Thesis, Gazi University, Institute of Educational Sciences [Eğitim Bilimleri Enstitüsü], Ankara.
Tatar, N., & Kuru, M. (2006). The effect of inquiry-based learning approach in science education on academic achievement [Fen eğitiminde araştırmaya dayalı öğrenme yaklaşımının akademik başarıya etkisi]. Hacettepe Üniversitesi Eğitim Fakültesi Dergisi, 31, 147-158.
Tosa, S. (2009). Teaching science as inquiry in the US and in Japan: A cross-cultural comparison of science teachers' understanding of, and attitudes toward, inquiry-based teaching. Unpublished Doctoral Thesis, University of Massachusetts Lowell, Massachusetts.
Trundle, K. C. (2009). Teaching science during the early childhood years. Carmel, CA: National Geographic School Publishing.
van Uum, M. S., Verhoeff, R. P., & Peeters, M. (2017). Inquiry-based science education: Scaffolding pupils' self-directed learning in open inquiry,
International Journal of Science Education, 39(18), 2461-2481. https://doi.org/10.1080/09500693.2017.1388940.
Watters, J. J., Diezmann, C. M., Grieshaber, S. J., & Davis, J. M. (2001). Enhancing science education for young children: A contemporary initiative. Australian Journal of Early Childhood, 26(2), 1-7.
Wilder, M., & Shuttleworth, P. (2005). Cell inquiry: A 5E learning cycle lesson. Science Activities, 41(4), 37-43. https://doi.org/10.3200/SATS.41.4.37-43.
Wise, K. C. (1996). Strategies for teaching science: What works? The Clearing House, July/August, 337-338. https://doi.org/10.1080/00098655.1996.10114334.
Worth, K., & Grollman, S. (2003). Worms, shadows and whirlpools: Science in the early childhood classroom. Washington: National Science Foundation.
Chapter 26 Problem-Solving Skills Banuçiçek ÖZDEMİR
Introduction
Technology has a place in most areas of life, and it is of great importance in education. The Flipped Classroom Model is one example of how technology can contribute to the educational process. The model is implemented particularly in courses that are difficult to learn, prone to confusion, or short of time for application and experiment. It supports students' educational processes through applications both in and out of class. Within the scope of a course, students are given access to information according to their interests and learning styles. Through a common platform offering e-content, videos, visual materials and podcasts, students can also access information independently, wherever and whenever they want. The Flipped Classroom Model is one of the models in which students can re-listen to the parts they did not understand and ask questions, which supports the cognitive process. While students engage with content at the knowledge and comprehension levels through out-of-class activities, the aim is for them to reach the analysis and synthesis levels through in-class activities. Out-of-class activities present a learning environment suited to students' interests; students take on the task of directing their own educational process, ask questions, and identify and analyse those questions with the instructor's guidance, reaching information together with the peers they meet on the common platform. Thus, instead of students spending the lesson listening while the teacher presents ready-made information, the teacher's role is transformed into one that places the student at the centre, supports students' educational processes, and guides them in finding information.
This chapter covers the emergence of the Flipped Classroom Model concept, the theoretical basis of the model, its use, its components, its effect on learning, the advantages of its applications, possible difficulties during applications, things to consider when implementing the model, and measurement and evaluation of applications in the Flipped Classroom Model.
The Emergence of Flipped Classroom Model Concept
The Flipped Classroom Model first appeared at Miami University. The model was introduced as the "inverted classroom" and was proposed by lecturers in order to meet the needs of students with different learning styles in departments that require a great deal of reading, such as sociology, psychology, law and philosophy. In this model, a resource book and subject are determined for each week. Presentations for the course are recorded on videotape, and these recordings are given to students to work with at home or at university; they are also published on the Internet for students' access. If students listening to the recordings raise questions, a short presentation answering them is given in the first 10 minutes of class, after which the lecturers have the students do laboratory or economics experiments according to the subject of the course. The lecturers also design a website for the related courses, create a rich virtual library, and publish past years' questions there. At the same time, a chat room gives students the opportunity to ask their lecturers questions about the subject, and a forum named "bulletin board" gives students the chance to share information and discuss topics.
Finally, to evaluate the application, open-ended questions and surveys are given to the students. This model was initially designed by Lage, Platt and Treglia (2000), but at first it did not receive enough attention. In 2007, Jonathan Bergmann and Aaron Sams, both chemistry teachers at Woodland Park High School, observed that students who attended compulsory and sporting activities missed their lessons (Bergmann & Sams, 2012). They prepared software for these students, in which they recorded their live lessons together with footnotes and subject content. They then published these contents so that students who had missed lessons could watch, read and catch up.
It was observed that not only the students who had missed lessons but also those who had attended were using the application; both groups listened to the contents again and again to fill in the parts they had missed. Realising this, Sams re-planned the educational process: students were asked to study the theoretical parts of subjects at home and to do the practical parts at school with the teacher, and they were taught how to watch the videos and how to take notes. In this way, students were required to bring to class their notes and the questions they had not understood. The application soon attracted the attention of the other teachers in the school and quickly spread across the country (Bergmann & Sams, 2012). This study is considered the first appearance of the "Flipped Classroom Model" in the literature.
Theoretical Basis of The Flipped Classroom Model
Technology develops continuously, bringing constant change to social structures and to societies' needs, and these changes in turn affect individuals and their environment. Education is one of the areas in which technological developments cause change. With the application of technology to this field, education is moving from a traditional approach towards a constructivist approach that adopts learning by doing. The Flipped Classroom Model is built on this approach: it removes the student from the position of passive listener, gives the student an active role in the learning process, and treats the lecturer as a guide who helps students reach information themselves instead of presenting subjects directly. Students' access to information from multiple sources also changes their role in learning.
In traditional approaches the teacher was at the centre, in the role of information giver, and students were passive receivers of ready-made information; in the constructivist approach, teachers' roles and responsibilities have changed, and the teacher should guide students in how to reach information (Grover & Stovval, 2013). This trend also makes it necessary for educational institutions to keep up with technological developments (Davis & Shade, 1994). While the traditional approach is mostly teacher-oriented, with students in a passive position and knowledge of the course content measured through assigned homework, the Flipped Classroom Model has gained a place in the literature as a model that allows students to learn the information at home and to practise it in class (Zownorega, 2013). The Flipped Classroom Model allows
students to follow their own individual learning, offers support for the difficulties they encounter, and at the same time lets them solve problems by swapping the in-class learning process with homework (Verleger & Bishop, 2013). It is also valuable for conducting in-class activities both individually and in groups: it allows the teacher to attend to students one by one on the problems they encounter individually, and it enables students to reach information at any time and in any place (Talbert, 2012). In addition to students' individual learning, in-class activities and discussion environments in which students can participate provide a setting where the teacher can actively guide and reinforce the information in the course content (Seaman & Gainess, 2013).
Use of Flipped Classroom Model
November and Mull (2012) described the Flipped Classroom Model as a model in which students can watch lesson videos, reach e-books and listen to recordings at any place or time, download digital media automatically, ask questions about the points where they have problems, or discuss with each other. The model, which gives students responsibility for their individual learning, is known for bringing out-of-class activities into the class and in-class activities out of the class (Lage, Platt & Treglia, 2000). Millard (2012) defined the Flipped Classroom Model as a model that provides individual counselling, focuses on discussion environments in the classroom, and gives freedom within the standardised curriculum. It is a quite practical method, addressing more than one learning area in the brain, that a teacher can prepare without going outside the present curriculum.
Figure 26.1 Comparison of traditional and flipped classroom models
Components of Flipped Classroom Model
The model consists of four main components (Flipped Learning Network [FLN], 2014):
F (Flexible Environment): Providing the student with the flexibility to reach information at any time and in any place.
L (Learning Culture): Creating a student-centred learning culture in which students manage their own learning and plan according to their own learning pace.
I (Intentional Content): Teachers deliberately determining what will be learned and how.
P (Professional Educator): The model gives the teacher a greater workload, since subject videos, applications and similar materials must be prepared, and teachers need to know what their students are doing.
In practice, the model consists of two components that must follow each other: out-of-class applications and in-class applications.
Out-of-Class Applications
Out-of-class applications include video recordings, e-books, podcasts, voice presentations, simulations, websites and educational materials. It is important that the teacher prepares the course materials with the readiness levels of the target group in mind, in order to facilitate participation and comprehension. Ready-made lecture notes, websites and videos may not attract students' attention; it is therefore advised that lecturers prepare these materials themselves (Bergmann & Sams, 2014).
In-Class Applications
In-class applications can be arranged for each target group in every field of educational planning. Activities based on the constructivist approach, which enable students to practise during the lesson, should be planned. Questions should be embedded in the videos so that students complete the out-of-class activities before coming to the lesson (Milman, 2012; Enfield, 2013). Bishop and Verleger (2013) characterised the Flipped Classroom Model as consisting of: 1. interactive classroom activities, and 2. computer-based out-of-class activities consisting of individual learning activities.
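The two-part structure described above, out-of-class materials followed by in-class applications, can be sketched as a minimal data model. The field names and example materials below are hypothetical, chosen only to mirror that split; they are not a standard representation of the model.

```python
from dataclasses import dataclass, field

@dataclass
class FlippedLesson:
    """Illustrative model of one flipped lesson: what students study at home
    versus what they practise in class (all names here are assumptions)."""
    topic: str
    out_of_class: list = field(default_factory=list)  # videos, e-books, podcasts, ...
    in_class: list = field(default_factory=list)      # practice, discussion, group work

lesson = FlippedLesson(
    topic="Introductory spreadsheet skills",
    out_of_class=["lecture video with embedded questions", "e-book chapter"],
    in_class=["grouped hands-on exercise", "discussion of students' questions"],
)
print(len(lesson.out_of_class), len(lesson.in_class))  # 2 2
```

Keeping the two lists separate reflects the model's defining move: content delivery lives in `out_of_class`, while application and discussion live in `in_class`.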
Figure 26.2 Flipped Classroom
Frydenberg (2012) described "Flipping Excel" as a result of applying the Flipped Classroom Model in his IT 101 classroom work. The steps during the in-class activity time (75 minutes) are as follows:
1. First 5 minutes: welcome and announcements.
2. Next 5 minutes: clarifying the parts of the out-of-class activity that were not understood.
3. Next 5 minutes: explaining the in-class activity.
4. Next 40-45 minutes: grouped students complete the in-class activities.
5. Last 15-20 minutes: each group explains its activities and the problems encountered are solved.
The Effect of Flipped Classroom Model on Learning
With the inclusion of technology in education, it has become possible for students to reach course contents individually. The ability of students to follow the course at their individual pace and to listen again to the parts they did not understand has been a reason for preferring the Flipped Classroom Model (Bishop & Verleger, 2013). While knowledge learned in the traditional approach rarely goes beyond the remembering and comprehension steps of Bloom's taxonomy, the flipped classroom model, prepared with information and communication technologies, provides an educational process that proceeds towards the application, analysis and evaluation steps. The Flipped Classroom Model allows not only students but also lecturers to perform activities at steps of Bloom's taxonomy such as application, analysis and evaluation. The courses also support individual learning for students
Banuçiçek ÖZDEMİR
with different learning speed. It offers opportunities for slow learners to perform activities that will appeal to different learning lobes and to provide more permanent learning while offering opportunities for repeated listening and repetition. In a high school at Detroit for mathematics and English classes, three videos were prepared up to 5 or 7 minutes for each week within the flipped classroom model and required students to watch it home or at school. As a result of the study, it was observed that the failure in English courses fell from 19% to 13% in mathematics class from 50% to 44% (Strayer, 2011). While in Miami University in the department of software engineering the model was applied by creating a virtual art museum (Gannod, Burge & Helmick, 2008), Wetterlund (2008) in California University within the “introduction to biology” lesson (Moravec, Williams, Aguilar-Roca& O’Dowd, 2010) and in Franklin University within the “Introduction to Computer Science” lesson students’ success was observed. Advantages of Flipped Classroom Model Applications There are advantages of Flipped Classroom Model with two points of view as student and teacher (Miller, 2012; Fulton, 2012; Duerden, 2013; Herreid& Schiller, 2013). From the point of students;
• It enables students to individualize information according to their personal learning speed.
• Learning time is flexible.
• It enables students to access the course contents at any time and place.
• It provides the information to be taught in advance.
• Information can be accessed at any time.
• It enables parents to help with their children's lessons while also allowing them to monitor their children's courses.
• It makes students responsible for their own learning.
• It increases communication between the teacher and students.
• It lets students study with their peers.
• Students gain the ability to make comments.
• It allows students to learn at different times according to their learning skills.
• Students who have difficulty with the out-of-class activities are identified by the teacher and these difficulties are resolved.
• It provides the opportunity to do research.
• It develops awareness of the importance of applications related to the taught topics.
• It enables students to participate actively in learning; this participation helps to develop high-level skills.
The Flipped Classroom Model, one of the models in which students learn actively with learning-by-doing at the centre, contributes to students' high-level thinking skills (Touchton, 2015). From the teachers' point of view:
• It provides the opportunity to see deficiencies in the course content in the teacher's area of expertise.
• It provides the opportunity to teach from a guiding position.
• It enables students to control themselves, easing classroom management.
• It provides more time for in-class applications.
• It allows working with smaller student groups.
• It lowers the time spent on teaching the topic and on repetition.
• Course material is prepared in cooperation.
• It allows student-teacher communication to develop.
• It provides the opportunity to prepare the curriculum in a practical and updatable form.
• The development, progress and follow-up of the students can be controlled through reporting.
• Teachers' advancement in technology, video-making and instructional design increases their technological literacy levels. Technology literacy is very important for teachers in terms of the generation growing up with technology (Çepni, Ormancı & Ülger, 2019; Seferoglu, 2004).
Possible Difficulties When the Flipped Classroom Model Is Applied
Jenkins (2012) argues that the flipped classroom model has disadvantages as well as advantages, and lists them as follows:
• Not every student may like this system, and it is difficult to control the students.
• It is difficult to check whether students rehearsed the classroom activities at home.
• Students may be unable to understand the content of the course.
• Schools may not have the software and technology needed to implement this system; even if this environment is provided, each student may not be able to manage their individual learning.
• Students may lack the necessary technological equipment (PC, internet, etc.) due to low income or living in low-income areas.
• Time and access may be limited on library or school computers.
• Learning efficiency is reduced due to the obligation to spend homework time on a computer, TV or iPod.
In his study, Duerden (2013) states that students will have difficulty understanding the information learned in out-of-class activities without student-teacher or student-student communication. Talbert (2012) stated that students could react negatively to this method, in which they take responsibility for their own learning, because it lies outside their cultural tradition. Miller (2012) argues that, unlike in the traditional teaching method, active listening and feedback cannot be observed while the teacher is teaching new knowledge, which may be a disadvantage of the model.
The software that the teacher uses to create content may be paid, which can put teachers in a difficult position. Preparing the content for the class can be a workload for teachers. Out-of-class applications must be made available 60 minutes before the student arrives at class (Tan, Brainard & Larkin, 2015); otherwise, it becomes a very time-consuming activity for students. Students coming to class without doing the out-of-class activities will have a negative effect on the lesson. Video lessons can be perceived simply as homework; because of this misunderstanding, students' attitudes should be monitored.
Things to Consider When Implementing the Flipped Classroom Model
Pilot applications and studies should be conducted in order to understand which areas are suitable for this system.
• It is important to have a school with the necessary software and technological infrastructure.
• It is important to support teachers in integrating their technological knowledge into their own fields, so that they can guide the education of technology-literate individuals.
• A technical team is needed for monitoring and eliminating technical faults that may occur in the software content.
• The integration of information and communication technologies into the training processes should be supported in order to prevent problems arising from teachers' inadequate use of technology.
The instructional process should be well planned by the teacher, and the students should be monitored as they follow these processes. It is necessary to make sure that the lecture notes prepared for both in-class and out-of-class activities are clear, understandable and reusable, in order to make effective use of the process (Bergmann & Sams, 2012). During the implementation of the Flipped Classroom Model, Miller (2012) emphasized the importance of paying attention to the following:

Need to Know
This is the stage where the content of the course is determined according to the program or plan to which it is attached. The content should be prepared carefully by the teacher, in clear and simple language.

Engaging Models
Activities should be planned in such a way that the student uses them during class and turns them into an experience. It is very important that they include models organized by active learning strategies, based on coherent, problem-based, collaborative and interactive learning. At the same time, these models must be composed of individual and group activities to be used both in in-class and out-of-class activities.

Technology
It is very important for the teacher to determine which technological practice is engaging for the students in the learning process. Its suitability to the students' readiness levels and to the sample group affects the efficiency of the out-of-class activities. It is important that the content prepared by the teachers has rich visual and audio content and also supports different mobile technologies.

Reflection
It is important for the teacher to measure how much students benefit from the digital environment prepared as an out-of-class activity. Reflective activities should be planned for the student to reflect the information learned and the areas of the subject not yet learned. Students are expected to be aware of what they have and have not learned about the subject and to reflect on it.

Time and Place
It is necessary to establish where and when the learning occurs, and the students should follow this. It may be difficult for all students to find the necessary time and to meet the technological requirements to reach the course content. Long, time-consuming videos should be avoided; accessible, understandable videos should be prepared instead.

Conclusion
When planning the training system, the contents are determined according to the objectives. This content is then taught through a teaching process suitable to the sample group or universe; however, the measurement process should be planned alongside the executed process. The Flipped Classroom Model integrates this into education in a century when the effect of technology is increasing day by day. As Prensky (2001) mentions, it is inevitable that technology will be included in the education of individuals who grow up as digital citizens and feel the existence of today's technology in every aspect of life. Technology takes place in almost every area of an individual's life. The model is also very practical in that the teacher can prepare it for more than one learning area and for his/her own subject without having to change the curriculum.
When the literature is reviewed, students' views on the Flipped Classroom Model show that it mostly supports learning and also leads to more permanent learning (Strayer, 2011; Bergmann & Sams, 2012; Frydenberg, 2012; Grover & Stovall, 2013; McLaughlin, Gharkholonarehe & Davidson, 2014; Moraros et al., 2015; Hung, 2015). At the same time, when planning the training process, the measurement and evaluation of the individual should be compatible with the learning-teaching process. The evaluation of the Flipped Classroom Model, in which students take part, should be planned with this process in mind. There are studies indicating that students should also take part in classroom activities with active learning activities (Mok, 2014). In classes where the Flipped Classroom Model is applied, videos and simulations related to the course content are given to the students, who can access and follow them outside the classroom via technological tools such as computers, tablets and smartphones (Knewton.com, 2011). In the literature, it is observed that students get better grades with the Flipped Classroom Model compared to traditional methods (Mason, Shuman & Cook, 2013; Frindlay-Thompson & Mombourquette, 2013). McLaughlin et al. (2014) adapted the Flipped Classroom Model as part of a "Basic Pharmacy" course. In this adaptation, students were given reading texts, textbooks and lecture videos. In the classroom activities, open-ended questions and keypad question-answer systems were applied to determine readiness levels before starting the applications, and feedback was given on the missing parts according to the results. In addition to the class activities, a discussion topic was given, and students were asked to work with the match-and-share technique and present to their peers in the classroom; feedback was also given at the end of the presentation. The students were also given presentations on certain subjects, and discussions were held on the subject. At the end of each class activity, students were evaluated by a multiple-choice test consisting of ten questions. If a learning goal was found to be lacking, the teacher provided the students with 1-3-minute mini presentations and the deficiencies were eliminated. Each of these activities was given a specific score, and students' scores from these activities were published as the result of measurement and evaluation.
A team including McLaughlin followed 6010 students in 10 lessons in which the flipped classroom model was applied, over 2 years, and evaluated the written feedback qualitatively (Khanova et al., 2015). After this evaluation, the students stated that it was really hard to follow each of the different classes' content and that the out-of-class readings and videos gave them extra workload, but they also stated that the model facilitated learning. The measurement and evaluation of students who pass through the learning-teaching process with the Flipped Classroom Model should be aimed at measuring the application, analysis and evaluation steps in Bloom's taxonomy.
References
Bergmann, J., & Sams, A. (2012). Flip your classroom: Reach every student in every class every day. ISTE & ASCD.
Bergmann, J., & Sams, A. (2014). Flipped learning: Gateway to student engagement (1st ed.). Washington, USA: ISTE.
Bishop, J. L., & Verleger, M. A. (2013). The flipped classroom: A survey of the research. 120th ASEE Annual Conference & Exposition (pp. 1-18). Atlanta, GA.
Çepni, S., Ormancı, & Ülger, B. B. (2019). Fen okuryazarlığı (2nd ed.). Ankara: Pegem Publishing.
Davis, B. C., & Shade, D. D. (1994). Integrate, don't isolate! Computers in the early childhood curriculum. ERIC Digest, December 1994, No. EDO-PS-94-17.
Flipped Learning Network. (2014). The four pillars of F-L-I-P. Retrieved from http://www.flippedlearning.org/cms/lib07/VA01923112/Centricity/Domain/46/FLIP_handout_FNL_Web.pdf
Frindlay-Thompson, S., & Mombourquette, P. (2013). Evaluation of a flipped classroom in an undergraduate business course. Global Conference on Business and Finance Proceedings, 8, 138-145.
Frydenberg, M. (2012). Flipping Excel. Proceedings of the Information Systems Educators Conference, New Orleans, Louisiana, USA.
Fulton, K. (2012). Upside down and inside out: Flip your classroom to improve student learning. Learning & Leading with Technology, 39(8), 12-17.
Herreid, C. F., & Schiller, N. A. (2013). Case studies and the flipped classroom. Journal of College Science Teaching, 42(5), 62-67.
Hung, H. (2015). Flipping the classroom for English language learners to foster active learning. Computer Assisted Language Learning, 28(1), 81-96.
Gannod, G., Burge, J., & Helmick, M. (2008). Using the inverted classroom to teach software engineering. International Conference on Software Engineering (ICSE), Leipzig, Germany.
Grover, K., & Stovall, S. (2013). Student-centred teaching through experiential learning and its assessment. NACTA Journal, 57, 86-87.
Jenkins, C. (2012). The advantages and disadvantages of the flipped classroom. Retrieved from http://info.lecturetools.com/blog/bid/59158/The-Advantages-and-Disadvantages-of-the-Flipped-Classroom
Khanova, J., Roth, M. T., Rodgers, J. E., & McLaughlin, J. E. (2015). Student experiences across multiple flipped courses in a single curriculum. Medical Education, 49, 1038-1048.
Knewton.com. (2011). The flipped classroom infographic. Retrieved 19 November 2018 from http://www.knewton.com/flipped-classroom/
Lage, M. J., Platt, G. J., & Treglia, M. (2000). Inverting the classroom: A gateway to creating an inclusive learning environment. Journal of Economic Education, 31(1), 30-43.
Mason, G. S., Shuman, T. R., & Cook, K. E. (2013). Comparing the effectiveness of an inverted classroom to a traditional classroom in an upper-division engineering course. IEEE Transactions on Education, 56, 430-435.
McLaughlin, J. E., Roth, M. T., Glatt, D. M., Gharkholonarehe, N., Davidson, C. A., Griffin, L. M., Esserman, D. A., & Mumper, R. J. (2014). The flipped classroom: A course redesign to foster learning and engagement in a health professions school. Academic Medicine, 89, 236-243.
Millard, E. (2012). 5 reasons flipped classrooms work: Turning lectures into homework to boost student engagement and increase technology-fuelled creativity. Retrieved from https://www.universitybusiness.com/article/5-reasons-flipped-classrooms-work
Miller, A. (2012). Five best practices for the flipped classroom. Retrieved from https://www.edutopia.org/blog/5-tips-flipping-pbl-classroom-andrew-miller
Milman, N. (2012). The flipped classroom strategy: What is it and how can it be used? Distance Learning, 9, 85-87.
Mok, H. N. (2014). Teaching tip: The flipped classroom. Journal of Information Systems Education, 25, 7-10.
Moraros, J., Islam, A., Yu, S., Banow, R., & Schindelka, B. (2015). Flipping for success: Evaluating the effectiveness of a novel teaching approach in a graduate level setting. BMC Medical Education, 15, 27.
Moravec, M., Williams, A., Aguilar-Roca, N., & O'Dowd, D. K. (2010). Learn before lecture: A strategy that improves learning outcomes in a large introductory biology class. CBE Life Sciences Education, 9, 473-481.
November, A., & Mull, B. (2012). Flipped learning: A response to five common criticisms. November Learning. Retrieved from http://web.uvic.ca/~gtreloar/Articles/Technology/flipped-learning-a-response-to-five-common-criticisms.pdf
Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5), 1-6.
Seaman, G., & Gaines, N. (2013). Leveraging digital learning systems to flip classroom instruction. Journal of Modern Teacher Quarterly, 1, 25-27.
Seferoğlu, S. S. (2004). Teacher competencies and professional development [in Turkish]. Bilim ve Aklın Aydınlığında Eğitim, 58, 40-45.
Strayer, J. F. (2011). The teacher's guide to flipped classroom. Retrieved from http://www.edudemic.com/guides/flipped-classrooms-guide/
Tan, E., Brainard, A., & Larkin, G. L. (2015). Acceptability of the flipped classroom approach for in-house teaching in emergency medicine. Emergency Medicine Australasia, 27, 453. doi:10.1111/1742-6723.12454
Talbert, R. (2012). Inverted classroom. Colleagues, 9(1), Article 7.
Touchton, M. (2015). Flipping the classroom and student performance in advanced statistics: Evidence from a quasi-experiment. Journal of Political Science Education, 11(1), 28-44.
Wetterlund, K. (2008). Flipping the field trip: Bringing the art museum to the classroom. Theory Into Practice, 47, 110-117.
Zownorega, J. S. (2013). Effectiveness of flipping the classroom in a honours level, mechanics-based physics class. Master Thesis, Eastern Illinois University, Charleston.
Chapter 27
Assessment of the STEM in the Classroom
Bestami Buğra ÜLGER
Introduction
It is known that STEM is not a new concept, but the integration of STEM fields does not have a long history. Today, STEM education continues to be defined differently by various groups and individuals, with many definitions evolving into a series of one-size-fits-all perceptions (Ostler, 2012). Science, Technology, Engineering, and Mathematics (STEM) education is becoming more and more popular among educators and policy makers through the 21st century, and nations implement or integrate STEM-based curriculums in their educational programs. These decisions have consequences that cause major changes in educational settings. One of the important questions is how teachers are going to measure a STEM-based lesson and the skills or knowledge of the students. This question has answers from different aspects; overall, the most effective and useful tools to measure a STEM-based lesson are rubrics and process-based tools. The best assessment and evaluation practices for program performance and student outcomes are difficult to articulate, since it is not easy to have assessment tools and practices that are both practical and truly effective (Scaramozzino, 2010). In this chapter, measuring students' STEM learning via different tools and from different points of view will be discussed.

STEM-Based Lessons
Because of evolving STEM teaching and learning needs, it must be understood that STEM content and STEM education are not the same thing (Sanders, 2009). Teachers who graduated from one specialized content area, such as science or mathematics, are expected to teach STEM. A degree in a STEM discipline can be highly specialized, while a STEM education degree requires a broader general understanding of the interrelatedness of STEM topics (Ostler, 2012). Secondary level teachers of STEM disciplines must master a broad knowledge base of STEM content and must witness advanced integrated pedagogical models to be able to develop practical conceptual lessons for their students (Hynes & Dos Santos, 2007; Cantrell, Pakca & Ahmad, 2006). For example, science teachers should understand the relevance of their content and literacy to the mathematics and engineering aspects. So the issue here is that teachers need to define where they and STEM stand: first, understanding the nature of the individual disciplines and how they are best taught and learned at elementary, middle, secondary, and collegiate levels (Ostler, 2012); and second, finding the relevance of their identification with STEM content. After that, there is a need to really understand what STEM content is, and then to act, plan, implement and measure it. Assessment in individual STEM disciplines typically defaults to recall in each discipline, because it is more convenient to measure how well students recall facts than to measure a broad integrated understanding of methods; but perhaps there is a need to look beyond the who, what, when, and where that make up our standardized tests and create a greater commitment to asking why and how when it comes to STEM content (Ostler, 2012). A deep understanding of one particular discipline is necessary at some point in life to solve deeper and highly specialized problems; there is hardly a scientist in our time who made a historic invention or scientific discovery without expertise in at least one area. So a question needs to be answered: more is expected of individuals who have deeper knowledge, yet they are asked to teach STEM content, which follows a more generalist paradigm. STEM starts early and aims to teach the scientific way, problem solving and everyday life skills.
To explain in more detail, secondary level students need to develop scientific inquiry methods and refine effective heuristics for knowing, testing, and verifying information in order to have the tools to understand how information is interactive, interdependent, and adaptable (Ostler, 2012). There is an increasing awareness that current K-12 teaching practices tend to isolate STEM disciplines, emphasize rote memorization of STEM content, and neglect higher-order thinking skills (NRC, 2001; Niemi, Baker, & Sylvester, 2007). But there is insufficient data for teachers in practice on how to implement higher-order thinking skills in STEM content. Educators need to know how to provide instructional supports that help students recognize connections between disciplines, and they need to support students' developing proficiency in individual subjects in ways that complement students' learning through integrated activities (Ayvacı et al., 2016; Honey, Pearson & Schweingruber, 2014). In the STEM disciplines of science, technology, engineering and mathematics, 21st century skills such as critical thinking are seen as a developmental goal on which researchers have conducted studies (Stein et al., 2007). With all these taught skills and the scientific way, at the collegiate level individuals can construct deeper knowledge much more effectively and be much more aware of the knowledge itself.

Measuring STEM Skills
The foundation of STEM integration creates a need for alternative measuring techniques. STEM aims to develop skills that are going to be used for specialized content or upper-level learning. For example, Zemelman, Daniels & Hyde (2005) list ten best practices for teaching math and science (Stohlmann, Moore & Roehrig, 2012):
1) using manipulatives and hands-on learning;
2) cooperative learning;
3) discussion and inquiry;
4) questioning and conjectures;
5) using justification of thinking;
6) writing for reflection and problem solving;
7) using a problem-solving approach;
8) integrating technology;
9) teacher as a facilitator;
10) using assessment as a part of instruction.
As can be seen in these items, assessment is taken into consideration as a part of instruction, which is directly associated with the subject of this chapter. So the main question here is how to measure the skill development promoted in STEM-integrated courses. The measurement instruments and techniques are not new to the educational environment, but unique scales need to be developed on a STEM basis. It should also be recognized that assessment is at the core of the implementation of STEM content, and that this is found difficult to do. It is known that there is a desperate need to design tools within STEM education (Hernandez et al., 2014).
Such tool design is required to accurately detect and measure levels of content connections and knowledge fusion occurring when learning-in-doing through engineering design (de Miranda, 2004). Developing appropriate integrated STEM assessments will not be easy, because historical practices in the education world are generally discipline-specific standardized testing (Ostler, 2012). Without meaningful, iterative assessments, teachers, administrators and schools lack critical information and direction for improvement (Saxton et al., 2014). Existing assessments tend to focus on knowledge in a single discipline. The science and mathematics disciplines in particular have stayed more traditional, focused on content knowledge and not attached to technology and engineering practices (Marginson et al., 2013). Furthermore, they give little attention to the practices in the disciplines and the applications of knowledge. Scaramozzino (2010) recommends using a variety of appropriate outcome measures, such as portfolio assessment, oral justification, quizzes, essays, web site feedback and comments, direct observation, student review, peer review, self-review, and experience; focusing on student performance, knowledge acquisition, and attitude appraisal; and assessing both process and product. It remains to be determined under which conditions lexical analysis will meaningfully enhance our understanding of student thinking about STEM ideas (Haudek et al., 2011). There is a need for an in-class assessment or measuring instrument for teacher practice that is effective, feasible, valid, and allows for investigation (Ball & Hill, 2008). Such an instrument links specific teaching strategies to student learning outcomes, a need that is common across STEM disciplines (Saxton et al., 2014). If an integrated STEM syllabus is to be assessed, teachers need to develop more structured and holistic instruments based on STEM. This kind of assessment is not new to the education world: it has been implemented for decades and is called rubrics. Rubrics are assessment instruments mostly used for process evaluation, and for the evaluation of a product at the end of a project.
Rubrics can be developed by teachers for every course and content they teach. Not only traditional knowledge assessment but also the process and skills that need to be assessed can be evaluated with rubrics. Of course, there are large-scale assessments for integrated STEM, such as the National Assessment of Educational Progress (NAEP) probe assessment and the 2009 NAEP Interactive Computer and Hands-On Tasks Science Assessment. Moreover, the NRC strongly recommends that assessment developers create assessments appropriate to measuring the various learning and affective outcomes of integrated STEM education; this work should involve not only the modification of existing tools and techniques but also the exploration of novel approaches (Honey, Pearson & Schweingruber, 2014). The development of assessment regimes that support the commitment to problem solving, inquiry-based approaches, critical thinking and creativity is also seen as a key factor by the Australian educational report on STEM (Marginson et al., 2013). That finding directs us to skill-based assessments. Apart from assessment developers, teachers need to know school-wide and STEM-content-based small-scale assessments. The models for evaluation in the classroom, like formative assessments, host rubrics. So we can say that rubric development is the main subject here.

Rubrics
In the most general terms, almost any assessment type can be used in a formative or summative way; however, some assessment tasks, such as multiple-choice tests, provide only limited information about the student's progress (Koretz, 2002; Capraro & Corlu, 2013). These kinds of traditional assessments measure students' content learning, but teachers also need to know what students lack in learning areas other than content. For that, the teacher sets the stage by discussing how formative rubrics are used and explaining that rubrics are designed to help students identify areas for improvement rather than to evaluate their success or failure (Capraro & Corlu, 2013). When the teacher uses project-, problem- or inquiry-based learning, it is important to know the progress of the student during the whole integrated STEM syllabus. Evaluating skills development, observational notes, interviews and content knowledge by using rubrics is among the most anticipated techniques in formative assessment. Rubrics provide students with formative and summative feedback about their learning processes. There are studies that show how multiple assessment rubrics are used across the skill areas and why (Eppes, Milanovic & Sweitzer, 2012), and rubrics allow teachers to examine their students' learning and give feedback effectively (Andrade, 2000). Researchers often develop tasks and rubrics as the instruments in their research because these have high validity, reliability and feasibility (Mudrika, Nahadi & Kurnia, 2018).
Forming a rubric has some aspects that teachers need to follow. First, teachers should determine every aspect or learning task of the assessment, because the assessment needs to align with the learning environment and teaching approach. The assessment of the same content or standard can also differ depending on whether learning occurs in groups or individually (Capraro & Corlu, 2013). These aspects can cover the skills expected to be developed, the content, etc. Teachers also need to determine the attainment degrees that show the students' progress on the various aspects; various attainment degrees of the learning goals are specified in the rubrics. Table 27.1 shows a regular rubric that can be linked to a student's knowledge and skills for one attainment goal. The rubric has six degrees that show to what extent the student needs to improve.

Table 27.1 A regular rubric for one attainment goal

Rating: Brief Description
1. Nascent: Student displays preliminary knowledge and skills related to the learning task.
2. Constrained: Student displays limited knowledge and skills related to the learning task.
3. Developing: Student displays a developing level of content and concepts related to the learning task.
4. Commendable: Student displays functionally adequate attainment of the content and concepts related to the learning task.
5. Accomplished: Student displays mastery of the content and concepts related to the learning task.
6. Exemplary: Student displays a novel or personal level of mastery of the content and concepts related to the learning task.
Source: Capraro & Corlu, 2013.

Table 27.2 presents a different rubric, one that assesses seven categories of skills: Authenticity, Academic Rigor, Applied Learning, Active Exploration, Adult Connections, Assessment Practices, and Use of Technology. This rubric describes what an engineering design project should ideally look like, and it is used while students carry out an engineering project. The teacher scores each category or skill separately, and in sum these scores become the student's assessment for the individual or group project. The teacher also decides how the project development skill links to the engineering design lesson. In other words, the teacher can assess every skill and piece of knowledge under the engineering design umbrella, and the student's overall engineering design score emerges from them. By doing this, the teacher assesses the student's learning process and every skill linked to the lesson, which makes the assessment sound and valid.
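The category-by-category scoring just described can be sketched as a short program. The following Python example is only an illustration: the seven category names come from Table 27.2, but the mapping of the three rubric levels to points and the function name `score_project` are hypothetical assumptions, not part of the source.

```python
# Hypothetical sketch: aggregating per-category rubric ratings into a
# final project score, as described for the Table 27.2 rubric.
# Assumed point values: UNACCEPTABLE=0, ACCEPTABLE=1, EXEMPLARY=2.

LEVEL_POINTS = {"unacceptable": 0, "acceptable": 1, "exemplary": 2}

CATEGORIES = [
    "Authenticity", "Academic Rigor", "Applied Learning",
    "Active Exploration", "Adult Connections",
    "Assessment Practices", "Use of Tech.",
]

def score_project(ratings: dict) -> float:
    """Return the mean rubric score (0-2) across all seven categories.

    `ratings` maps each category name to one of the three level labels.
    A KeyError signals a missing category or an unknown label.
    """
    total = sum(LEVEL_POINTS[ratings[cat].lower()] for cat in CATEGORIES)
    return total / len(CATEGORIES)

# Example: a project rated 'acceptable' everywhere except two categories.
ratings = {cat: "acceptable" for cat in CATEGORIES}
ratings["Authenticity"] = "exemplary"
ratings["Use of Tech."] = "unacceptable"
print(score_project(ratings))  # mean of [2, 1, 1, 1, 1, 1, 0] -> 1.0
```

A mean keeps every category equally weighted; a teacher who considers some categories more central to the lesson could, of course, weight them instead.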
Bestami Buğra ÜLGER
Table 27.2 A holistic rubric that assesses different attainment goals of an ideal engineering design project

Authenticity
Unacceptable: Project has little or no connection with the outside world or other curricular areas. Questions have little or no meaning to the students. Task has a single correct answer.
Acceptable: Project simulates the "real world". Working-world adults are likely to tackle the task. Question has meaning to the students and provides a clear "need to know". Project has several possible correct solutions.
Exemplary: Entities or persons outside of the school will or could use the product of student work. Students will present and defend their solution to a real and appropriate audience.

Academic Rigor
Unacceptable: Project is not based on content standards. Project demands little specific knowledge of central concepts.
Acceptable: Project is derived from specific learning goals in content area standards. Project demands specific knowledge of central concepts. Students develop and demonstrate life skills (e.g. collaboration, presentation, writing).
Exemplary: There is a well-defined, clear driving question that is derived from national, state or district content standards. Project demands breadth and depth of central concepts. There is an expectation for supporting evidence, viewpoints, cause and effect, precise language, and persistence.

Applied Learning
Unacceptable: Recent knowledge is not applied in the solution. Students work primarily alone. Social interaction is not required. Learning occurs out of context or at home.
Acceptable: New knowledge is applied in the solution. Students work in groups where content is discussed and debated in the project context. Students use self-management skills informally.
Exemplary: Knowledge is applied to a realistic and complex problem. High-performance work organization skills are used (e.g., teamwork; communicating ideas; collecting, organizing and analyzing information). Self-management skills are used formally (e.g., develop a work plan, prioritize work, meet deadlines, allocate resources).

Active Exploration
Unacceptable: Little independent research is required. The majority of information is gathered from textbooks or encyclopedia-like materials provided by the teacher.
Acceptable: Students conduct their own, independent research. Students gather information from authentic sources. Students use raw data provided by the teacher.
Exemplary: Includes field-based or experimental research (e.g., interviewing experts, surveying groups of people, work site exploration). Students gather information from a variety of sources through a variety of methods (interviewing and observing, gathering and reviewing information, collecting data, model-building).

Adult Connections
Unacceptable: Students have no contacts with adults other than the teacher(s).
Acceptable: Students have limited contacts with outside adults (e.g., guest speakers, parents). Teacher uses role playing or other staff members to simulate "expert" contact.
Exemplary: Students have multiple contacts with outside adults who have relevant expertise and experience and who can ask questions, provide feedback, and offer advice. Outside adults provide students with a sense of the real-world standards for this type of work.

Assessment Practices
Unacceptable: Students are not provided with a clear explanation of the assessment process or expectations. Assessment of the project is summarized into a single final grade.
Acceptable: Clear explanation of assessment and expectations. Structured journals or logs are used to track progress. Assessments are varied and include content and life skills. The final product is an exhibition or presentation demonstrating student knowledge.
Exemplary: Students help in establishing assessment criteria. Students have many opportunities for feedback on their progress from teachers, mentors, and peers.

Use of Tech.
Unacceptable: Students are not required to use technology, or technology use is superficial.
Acceptable: Technology is used to conduct research, report information, or calculate results where appropriate.
Exemplary: Students create interactive media, conduct experiments, manipulate data, or communicate with adult experts.
Source: Capraro, Capraro & Morgan, 2013.

Table 27.3 lists the criteria for evaluating a project that is supposed to be developed in terms of STEM understanding. Each criterion is presented with descriptions for the unsatisfactory, proficient and advanced levels of the project being assessed. With this holistic rubric, teachers assess the students' project development skills as linked to the science lesson. Teachers therefore need to think about what makes their content area, and the assessments they traditionally use, distinct from assessments in other content areas (Capraro & Corlu, 2013). The rubric shown in Table 27.3 is one example of this kind of thinking. The distinctions between the level descriptions are clear enough that both teacher and students can easily see where the students need to develop.
Table 27.3 Project assessment rubric

Goals
Unsatisfactory: Goals of the project do not seem to be tied to any specific content area standards or are not rigorous enough to challenge the students. Goals of the project seem to address only the lowest levels of critical thinking.
Proficient: The goals of the project are tied to specific content area standards and 21st Century Skills. Goals are rigorous enough to challenge all students. Goals of the project require the students to use higher-order critical thinking skills.
Advanced: Goals of the project are clearly defined and successfully integrate content standards from multiple subject areas.

Engagement
Unsatisfactory: Engagement seems unlikely to spark the student's curiosity. The precipitating event fails to create a realistic role or project for the students. The task seems unclear and fails to lead to content-based "need to knows" or next steps. Engagement fails to establish a timeline.
Proficient: Engagement seems likely to spark the student's curiosity in a realistic scenario. Engagement establishes a clear role and tasks. Engagement leads to a list of content-based "need to knows" and next steps. Engagement establishes a clear timeline and assessment criteria.
Advanced: Engagement involves the students in a real-world problem that they can help solve. The entry document creates a thorough list of relevant, content-specific "need to knows". The project is launched with the help of an outside person or entity.

Planning
Unsatisfactory: The project plan may be a good idea, but little thought has been put into how to implement the idea in the classroom. No thought has been put into the resources and materials required for the project.
Proficient: The project plan has a general outline including the various phases and student activities. Some thought has been put into the resources and materials required for the project. The project has a list of student products.
Advanced: The plan includes a detailed description of the various phases with progress checks and benchmarks; a complete list of resources and materials; a well thought out plan for implementation; and a description of student products and how they will be evaluated against the project goals.

Scaffolding
Unsatisfactory: The project lacks activities to help students work as an effective team on a long-term project; reflect on their "need to knows" and develop next steps; and understand the content and make use of the resources available (including any remediation that might be necessary).
Proficient: The project has appropriate activities to help students work as an effective team on a long-term project (time management, collaboration, etc.); reflect on their "need to knows" and develop next steps; and understand the content and make use of the resources available (including any necessary remediation).
Advanced: The project has differentiated activities for individual students and groups to work as an effective team on a long-term project; reflect on their "need to knows" and develop next steps; and understand the content and use the resources available (including any necessary remediation).

Assessment
Unsatisfactory: Rubrics are not developed, do not seem tied to the goals of the project, or are unusable by students. Evaluation does not include the use of schoolwide rubrics.
Proficient: Rubrics are designed to clearly lay out final product expectations as defined by the project goals. Evaluation includes the use of school-wide rubrics. Rubrics are easy for students to use in self- and peer-assessment activities.
Advanced: Several rubrics are used to evaluate multiple individual and group products based on the stated content and goals of the project. Assessment includes input from outside sources.

End Product
Unsatisfactory: The end product does not demonstrate understanding and application of content standards. The end product is not authentic. The end product is not age-level appropriate.
Proficient: The end product clearly demonstrates understanding and application of content standards. The end product is authentic and reflects real-world work. The end product is tailored to the students' skill level.
Advanced: The end product contains multiple opportunities to demonstrate learning (multiple products). The end product could be used externally. The end product incorporates a variety of media.
Source: Capraro & Corlu, 2013, p. 199 (Appendix U).

The description of each degree is essential in the rubric, because it marks the distinctions between learning outcomes and between students. The descriptions can differ according to the subject or the approach the teacher selected beforehand. Each step of the project is evaluated and described; doing so shows students the missing parts of their project and gives them the chance to reorganize and fix its problems, which is consistent with the engineering design process. Appendix 1 presents a more complex and structured rubric that assesses students' quantitative reasoning through quantitative literacy. These rubrics serve as examples for teachers of how to develop a rubric for their STEM-based courses. The questions teachers ask and the skills they place at the centre of the learning are important for rubric development; on that basis, they can include or exclude the paradigms they choose. We should note that there are studies showing that new scales can be developed to measure integrated STEM content knowledge (Hernandez et al., 2014). It is essential to add, however, that STEM content knowledge and STEM skills call for different kinds of measurement; they are at the two ends of the same stick.
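The idea that a rubric's level descriptions show students the missing parts of their project can also be sketched in code. In this minimal Python illustration the criterion names come from Table 27.3, while the ordering of the three levels and the helper name `needs_development` are hypothetical assumptions for the sketch.

```python
# Hypothetical sketch: turning Table 27.3 ratings into feedback that
# shows students which parts of their project still need development.
# The ordering of the three levels is assumed for illustration only.

LEVEL_ORDER = ["unsatisfactory", "proficient", "advanced"]

CRITERIA = ["Goals", "Engagement", "Planning",
            "Scaffolding", "Assessment", "End Product"]

def needs_development(ratings: dict) -> list:
    """Return the criteria rated below 'proficient', i.e. the parts of
    the project that students should reorganize and fix."""
    threshold = LEVEL_ORDER.index("proficient")
    return [c for c in CRITERIA
            if LEVEL_ORDER.index(ratings[c].lower()) < threshold]

# Example: a project that is proficient overall, advanced on its goals,
# but still unsatisfactory in planning.
ratings = {c: "proficient" for c in CRITERIA}
ratings["Goals"] = "advanced"
ratings["Planning"] = "unsatisfactory"
print(needs_development(ratings))  # ['Planning']
```

A report like this mirrors the feedback loop of the engineering design process: the student sees exactly which criterion to revisit before the next iteration.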
References

Andrade, H. G. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13-19.

Ayvacı, H. Ş., Sevim, S., Durmuş, A., & Kara, Y. (2016). Analysis of pre-service teachers' views toward models and modeling in science education. Turkish Journal of Teacher Education, 5(2), 84-96.

Ball, D. L., & Hill, H. C. (2008). Measuring teacher quality in practice. Measurement Issues and Assessment for Teaching Quality, 80-98.

Cantrell, P., Pekca, G., & Ahmad, I. (2006). The effects of engineering modules on student learning in middle school science classrooms. Journal of Engineering Education, 95(4), 301-309.

Capraro, R. M., Capraro, M. M., & Morgan, J. R. (Eds.). (2013). STEM project-based learning: An integrated science, technology, engineering, and mathematics (STEM) approach. Springer Science & Business Media.

Capraro, R. M., & Corlu, M. S. (2013). Changing views on assessment for STEM project-based learning. In STEM Project-Based Learning (pp. 109-118). Sense Publishers, Rotterdam.

de Miranda, M. A. (2004). The grounding of a discipline: Cognition and instruction in technology education. International Journal of Technology and Design Education, 14, 61-77.

Eppes, T. A., Milanovic, I., & Sweitzer, H. F. (2012). Strengthening capstone skills in STEM programs. Innovative Higher Education, 37(1), 3-10.

Gray, J. S., Brown, M. A., & Connolly, J. P. (2017). Examining construct validity of the Quantitative Literacy VALUE rubric in college-level STEM assignments. Research & Practice in Assessment, 12, 20-31.

Haudek, K. C., Kaplan, J. J., Knight, J., Long, T., Merrill, J., Munn, A., ... & Urban-Lurain, M. (2011). Harnessing technology to improve formative assessment of student conceptions in STEM: Forging a national network. CBE—Life Sciences Education, 10(2), 149-155.

Hernandez, P. R., Bodin, R., Elliott, J. W., Ibrahim, B., Rambo-Hernandez, K. E., Chen, T. W., & de Miranda, M. A. (2014). Connecting the STEM dots: Measuring the effect of an integrated engineering design intervention. International Journal of Technology and Design Education, 24(1), 107-120.

Honey, M., Pearson, G., & Schweingruber, H. A. (Eds.). (2014). STEM integration in K-12 education: Status, prospects, and an agenda for research (p. 180). Washington, DC: National Academies Press.

Hynes, M., & Dos Santos, A. (2007). Effective teacher professional development: Middle-school engineering content. International Journal of Engineering Education, 23(1), 24.

Koretz, D. (2002). Limitations in the use of achievement tests as measures of educators' productivity. Journal of Human Resources, 37(4), 752-777.

Marginson, S., Tytler, R., Freeman, B., & Roberts, K. (2013). STEM: Country comparisons: International comparisons of science, technology, engineering and mathematics (STEM) education. Final report.

Mudrika, M., Nahadi, N., & Kurnia, K. (2018, December). A performance assessment instrument for measuring psychomotor and cognitive competence development on vinegar titration practicum. In International Conference on Mathematics and Science Education of Universitas Pendidikan Indonesia (Vol. 3, pp. 348-352).

National Research Council. (2001). In J. W. Pellegrino, N. Chudowsky, & R. Glaser (Eds.), Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press.

Niemi, D., Baker, E. L., & Sylvester, R. M. (2007). Scaling up, scaling down: Seven years of performance assessment development in the nation's second largest school district. Educational Assessment, 12(3), 195-214.

Ostler, E. (2012). 21st century STEM education: A tactical model for long-range success. International Journal of Applied Science and Technology, 2(1).

Ozuru, Y., Briner, S., Kurby, C. A., & McNamara, D. S. (2013). Comparing comprehension measured by multiple-choice and open-ended questions. Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale, 67(3), 215.

Sanders, M. (2009). STEM, STEM education, STEM mania. Technology Teacher, 68(4), 20-26.
Saxton, E., Burns, R., Holveck, S., Kelley, S., Prince, D., Rigelman, N., & Skinner, E. A. (2014). A common measurement system for K-12 STEM education: Adopting an educational evaluation methodology that elevates theoretical foundations and systems thinking. Studies in Educational Evaluation, 40, 18-35.

Scaramozzino, J. M. (2010). Integrating STEM information competencies into an undergraduate curriculum. Journal of Library Administration, 50(4), 315-333.

Stein, B., Haynes, A., Redding, M., Ennis, T., & Cecil, M. (2007). Assessing critical thinking in STEM and beyond. In Innovations in E-learning, Instruction Technology, Assessment, and Engineering Education (pp. 79-82). Springer, Dordrecht.

Stohlmann, M., Moore, T. J., & Roehrig, G. H. (2012). Considerations for teaching integrated STEM education. Journal of Pre-College Engineering Education Research (J-PEER), 2(1).
Contributors

Ahmet Volkan Yüzüak is an assistant professor at Bartın University, Faculty of Education, Elementary Science Teaching Department. His research interests are science education, environmental education, measurement and assessment, and structural equation modelling.

Banucicek Özdemir is an assistant professor at Giresun University, Faculty of Education. Her research interests are educational strategies (reflective thinking, the flipped classroom model, the SCAMPER technique), biology education, process techniques, the education system and developments in education, and science and technology.

Bestami Buğra Ülger is a researcher at Hakkari University. He holds a Ph.D. in science education. His research interests are science education, STEM education, educational assessment and gifted education.

Beyza Aksu Dünya is an assistant professor at the Department of Educational Measurement and Assessment at Bartın University. Her research interests are educational evaluation, and measurement and assessment in education.

Bilge Sulak-Akyuz is an assistant professor in the Guidance and Counseling Department at Bartın University. Her research interests are self-injury, research in counseling, and counselor education.

Buket Özüm Bülbül is an assistant professor in the Mathematics Education Department of the Faculty of Education at Celal Bayar University. Her research interests are geometry education, geometric habits of mind, mathematics education, assessment and evaluation, computer-based geometry learning, and problem-solving strategies.

Burcin Gökkurt Özdemir is an associate professor at Bartın University, Faculty of Education, Department of Mathematics Education. Her research interests are mathematics and geometry teaching, pedagogical content knowledge, misconceptions, and problem solving and posing.

Cemalettin Yıldız is an assistant professor at Giresun University, Faculty of Education, Mathematics and Science Department. His research interests are the history of mathematics, mathematical thinking, intelligence games, proving methods, geometry learning and teaching, teacher education, and measurement and assessment.

Ebru Saka is an assistant professor at Kafkas University. Her research interests are mathematics education and educational measurement.

Esra Ömeroğlu is a professor at Gazi University. Her research interests are preschool education, social skills education and learning environments.

Gizem Saygılı is a professor at Karamanoğlu Mehmetbey University, Faculty of Education, Primary School Education Department. Her research interests are multiculturalism, multicultural education, moral values education, teachers' multicultural self-perception, and teacher training.

Gökhan Demircioğlu is a professor at Trabzon University, Fatih Faculty of Education, Department of Mathematics and Science Education. He is specifically interested in concept teaching, out-of-school learning environments, computer and instructional technology, secondary science education, teacher training, context-based learning, and technological pedagogical content knowledge (TPACK).

Hasan Bakırcı is an associate professor at Van Yüzüncü Yıl University. His research interests are science education, measurement and evaluation in science education, and contemporary approaches in science education.

Hava İpek Akbulut is an assistant professor at Trabzon University. Her research interests are science education, teacher education and educational measurement.

Hüseyin Uysal is an assistant professor and department chair at Kastamonu University. He combines his artistic work with his academic studies in the field of art education. He has participated in numerous national and international exhibitions and is a member of the Science Committee of the 3rd and 4th International Izmir Biennials.

İlknur Reisoğlu is an assistant professor at Recep Tayyip Erdoğan University. Her research interests are educational technology, computer-assisted education and teacher education.

İsa Deveci is an associate professor at Kahramanmaras Sutcu Imam University, Faculty of Education, Science Education Unit. He completed his PhD thesis on entrepreneurship education in science teacher education in 2016. His research interests are entrepreneurship in science education, homework, engineering applications, science teacher education, and entrepreneurial projects.

Mine Kır is an assistant professor at Zonguldak Bülent Ecevit University. Her research interests are biology education, preschool education, educational assessment and science education.

Mustafa Ürey is an associate professor at Trabzon University. His research interests are science education, outdoor education and environmental education.

Mustafa Yadigaroğlu is an assistant professor at Aksaray University, Faculty of Education, Department of Mathematics and Science Education. He is specifically interested in concept teaching, conceptual change, teacher education, teacher training, information and communication technology, technological pedagogical content knowledge (TPACK) and context-based learning.

Özlem Yurt is an assistant professor at Trabzon University. Her research interests are science education, play, teacher education, and educational measurement.

Salih ÇEPNİ is a professor at the Science Education Department of Uludağ University and the Dean of the Faculty of Education. His research interests are science education, measurement and assessment in science education, science teacher education, STEM education, and curriculum and instruction in science education.

Sedat Turgut is a researcher at Bartın University. He received his doctoral degree in elementary school teaching. His research interests are early algebra, the transition from arithmetic to algebra, patterns, functional thinking and game-based learning.

Şenay Delimehmet DADA is a research assistant at Trabzon University. Her research interests are special education, educational evaluation and diagnosis.

Sevilay Alkan is a teacher at the Ministry of Education and has a Ph.D. in mathematics education. Her research interests are mathematics education and educational measurement.

Seyhan Eryılmaz is an assistant professor at Recep Tayyip Erdoğan University. Her research interests are science education, computer-assisted education and teacher education.

Sibel Er Nas is an associate professor at Trabzon University. Her research interests are science education, teacher education and educational measurement.

Ufuk Töman is an assistant professor at Bayburt University, Faculty of Education, Science Education Department. His research interests are biology education, science education and teacher education.

Ümit Demiral is an associate professor at Kirşehir University. His research interests are science education, teacher education and educational measurement.

Ümmühan Ormancı is a researcher in science education and holds a doctoral degree in science education. Her research interests are science education, educational research and science teacher education.

Yilmaz Kara is an associate professor at Bartın University, Faculty of Education, Department of Science Education. His research interests are science education with a major in biology, theory and practice in science education, measurement and assessment in science education, socioscientific issues, computer-assisted learning, and teacher education.