Morehead State University

Poster Title

Item Response Theory: Implications for the Assessment of Pre-Service Teachers' Scientific Knowledge: STUDY 2 (Knell): Using Item Response Theory to Analyze a Physical Science Content

Institution

Morehead State University

Abstract

In the last few decades, results from the Program for International Student Assessment (PISA) suggest that, in the United States, student performance in science and mathematics has slipped from world-class to middle-of-the-pack. Teacher academic preparation and quality have been identified as factors that must improve for PISA scores to recover. With the implementation of recent education reforms, there has been a push to evaluate educators using statistically sound, valid, and reliable standardized tests. The theories that inform the construction and evaluation of large-scale standardized assessments, particularly Item Response Theory (IRT), can now be applied to locally developed tests, especially diagnostic ones. This study applied IRT strategies to measure test and item parameters for a diagnostic pre- and post-test that assessed the content knowledge of pre-service teachers enrolled in SCI 111, Inquiry Physical Science for Elementary Teachers, from 1998 to 2012. Using SPSS, test reliability, test scores, test averages and standard deviations, item difficulty, item discrimination, item discrimination indices, item means and standard deviations, item characteristic curves, and item distractor analyses were calculated and analyzed. The findings informed the identification of the physical science topics that pre-service teachers learned best and those that need additional instructional time. In addition, test items that did not meet minimum quality requirements were modified or removed, increasing the overall validity and reliability of the SCI 111 diagnostic assessment. Teacher education programs across the state are working to improve the quality of their graduates; applying IRT to the statistical analysis of diagnostic assessments used in pre-service teacher content courses is a step in the right direction.
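The abstract lists item difficulty, item discrimination indices, and item characteristic curves among the quantities computed. The study performed its analysis in SPSS; purely as an illustrative sketch (not the study's actual procedure or data), the classical item statistics and a two-parameter-logistic (2PL) item characteristic curve can be expressed as follows, using a simulated 0/1 response matrix:

```python
import numpy as np

def item_difficulty(responses):
    """Classical item difficulty: proportion of examinees answering each item correctly."""
    return responses.mean(axis=0)

def discrimination_index(responses, group_frac=0.27):
    """Upper-lower discrimination index: difference in proportion correct
    between the top- and bottom-scoring groups (27% rule of thumb)."""
    totals = responses.sum(axis=1)
    order = np.argsort(totals)
    n = max(1, int(len(totals) * group_frac))
    low, high = responses[order[:n]], responses[order[-n:]]
    return high.mean(axis=0) - low.mean(axis=0)

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: P(correct | ability theta),
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Simulated responses: 200 examinees x 5 items (hypothetical parameters)
rng = np.random.default_rng(0)
theta = rng.normal(size=(200, 1))          # examinee abilities
a = np.array([1.0, 1.5, 0.8, 1.2, 2.0])    # item discriminations
b = np.array([-1.0, 0.0, 0.5, 1.0, -0.5])  # item difficulties
responses = (rng.random((200, 5)) < icc_2pl(theta, a, b)).astype(int)

p = item_difficulty(responses)       # one proportion-correct value per item
d = discrimination_index(responses)  # one upper-lower index per item
```

In practice, items with a very low or negative discrimination index are the kind flagged for revision or removal, as the abstract describes for the SCI 111 instrument.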

