A Framework for Evaluating the Reliability of TIMSS Test Scores for Learners in South Africa Based on Generalizability Theory (72583)

Session Information:

Session: On Demand
Room: Virtual Video Presentation
Presentation Type: Virtual Presentation

All presentation times are UTC + 1 (Europe/London)

In many countries, including South Africa, the Trends in International Mathematics and Science Study (TIMSS) is widely used to measure learners' mathematics and science achievement. The reliability of TIMSS test scores in South Africa, however, has not been extensively examined in the literature. In this study, generalizability theory is used to evaluate the reliability of TIMSS test scores of South African learners. Specifically, the study estimates the different sources of error associated with the test scores, including tester effects, item effects, and occasion effects. The data comprise 11,891 fourth-grade learners' responses to 35 mathematics items, merged with the IEA IDB Analyzer Merge module. The data were analyzed using the lme4 package in the R language and environment for statistical computing, factor analysis, and the Tucker index of factor congruence. We estimated the generalizability (g) coefficient, the phi (Φ) coefficient, and construct validity to evaluate the psychometric properties of the data. The g-coefficient was 0.85 and the Φ-coefficient was 0.73, indicating a high level of reliability, and the results suggest that TIMSS test scores are not substantially affected by the examined sources of error, such as tester effects and item effects. The items also show strong construct validity, with a congruence index of 0.92. The generalizability and reliability of the scores are therefore supported. The findings may help policymakers and educators in South Africa make more informed decisions about using and interpreting TIMSS test scores.
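The g and Φ coefficients reported above follow the standard single-facet (persons × items) generalizability-theory formulas. The sketch below shows those formulas in Python; the variance components are hypothetical illustrative values chosen so the resulting coefficients land near the reported 0.85 and 0.73, and are not the study's actual lme4 estimates (which would come from a crossed random-effects model such as `lmer(score ~ 1 + (1|person) + (1|item))` fitted to long-format item data).

```python
# Minimal sketch: G-theory reliability coefficients for a persons x items
# (p x i) design, computed from variance-component estimates.

def g_coefficient(var_p, var_res, n_items):
    """Relative (norm-referenced) generalizability coefficient:
    person variance over person variance plus mean residual error."""
    return var_p / (var_p + var_res / n_items)

def phi_coefficient(var_p, var_i, var_res, n_items):
    """Absolute (criterion-referenced) dependability coefficient:
    item variance also counts as error for absolute decisions."""
    return var_p / (var_p + (var_i + var_res) / n_items)

# Hypothetical variance components (person, item, person-x-item residual)
var_p, var_i, var_res = 0.040, 0.271, 0.247
n_items = 35  # number of mathematics items in the study

G = g_coefficient(var_p, var_res, n_items)
Phi = phi_coefficient(var_p, var_i, var_res, n_items)
print(round(G, 2), round(Phi, 2))  # -> 0.85 0.73
```

Because item variance enters the error term only for Φ, the Φ-coefficient is never larger than the g-coefficient, which matches the 0.85 versus 0.73 pattern in the abstract.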

Authors:
Musa Adekunle Ayanwale, University of Johannesburg, South Africa
Daniel O. Oyeniran, The University of Alabama, United States
Joseph Taiwo Akinboboye, Federal University of Lafia, Nigeria


About the Presenter(s)
Daniel O. Oyeniran is a Ph.D. student in the Educational Research program at The University of Alabama. His areas of interest include scale development, cognitive diagnostic modeling, and computerized adaptive testing.

Musa Adekunle Ayanwale is a senior postdoctoral research fellow at the University of Johannesburg. His research interests include testing theories, psychometrics, generalizability theory-based reliability analysis, structural modeling, and computerized adaptive testing.
https://www.linkedin.com/in/musa-adekunle-ayanwale-06117ab7/
https://www.researchgate.net/profile/Musa-Ayanwale/research
https://twitter.com/KunleAyanwale

Joseph Akinboboye is a senior lecturer at Federal University of Lafia. His interests include item response theory. His current project is "A Framework for Evaluating the Reliability of TIMSS Test Scores for Learners in South Africa Based on Generalizability Theory."
https://www.linkedin.com/in/Joseph-Akinboboye
https://www.researchgate.net/profile/Joseph-Akinboboye/

Posted by Clive Staples Lewis

Last updated: 2023-02-23 23:45:00