- Ahmet Keleşoğlu Eğitim Fakültesi Dergisi
- Volume: 7, Issue: 2
Inter-Rater Reliability Analysis in Performance-Based Assessment: A Comparison of Generalizability Coefficients and Rater Consistency
Author : Mustafa Köroğlu
Pages : 218-234
DOI : 10.38151/akef.2025.158
Publication Date : 2025-09-30
Article Type : Research Paper
Abstract : This study investigates the reliability of a performance-based assessment tool used to evaluate university students’ basic statistical skills within the framework of Generalizability Theory (GT). A total of 80 students from the Guidance and Psychological Counseling program participated in a two-hour examination consisting of 10 applied tasks. The tasks were scored independently by two raters using a detailed analytic rubric. The scores were analyzed using a fully crossed design (p × i × r), with variance components estimated via the maximum likelihood method, and 95% confidence intervals calculated using a cluster bootstrap procedure (1,000 resamples). Results showed that 50.2% of the total variance was attributable to students, 25.6% to items, and 16.6% to raters, while interaction terms remained at low levels. The initial relative generalizability coefficient was calculated as .98, and the absolute decision coefficient (Φ) was .81. When the number of items was increased to 15 and the number of raters to five, the Φ coefficient improved to .90, and absolute error variance decreased to .45. Findings indicated that true performance differences among students were strongly captured, although rater effects could not be completely eliminated. Expanding task coverage and increasing the number of raters were found to be effective strategies for reducing both absolute and relative error variances. The study supports the importance of rubric use, investment in rater training, a multi-task–multi-rater approach, and GT-based revision cycles in high-stakes performance assessments. The findings are expected to inform practical assessment strategies aimed at improving statistical literacy in teacher education programs. Additionally, it is recommended that the study be replicated with larger and more diverse samples across disciplines to enhance external validity.
Future directions may include implementing rater feedback cycles through online platforms and integrating rubric-supported scoring systems.
Keywords : Generalizability theory, performance-based assessment, rater reliability
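The D-study logic summarized in the abstract (recomputing the Φ coefficient and error variances for alternative numbers of items and raters in a fully crossed p × i × r design) can be sketched as follows. This is a minimal illustration of the standard GT formulas, not the study's analysis: the variance components below are hypothetical placeholders, since the abstract reports only percentages of total variance, not the raw component estimates.

```python
# D-study sketch for a fully crossed p x i x r design (Generalizability Theory).
# NOTE: the variance components in `vc` are HYPOTHETICAL illustrative values,
# not the estimates from the study itself.

def phi_coefficient(vc, n_i, n_r):
    """Absolute decision (Phi) coefficient: all non-person sources of
    variance contribute to absolute error."""
    abs_error = (vc["i"] / n_i + vc["r"] / n_r
                 + vc["pi"] / n_i + vc["pr"] / n_r
                 + vc["ir"] / (n_i * n_r)
                 + vc["pir_e"] / (n_i * n_r))
    return vc["p"] / (vc["p"] + abs_error), abs_error

def g_coefficient(vc, n_i, n_r):
    """Relative generalizability coefficient (E-rho^2): only interactions
    involving persons contribute to relative error."""
    rel_error = (vc["pi"] / n_i + vc["pr"] / n_r
                 + vc["pir_e"] / (n_i * n_r))
    return vc["p"] / (vc["p"] + rel_error)

# Hypothetical single-observation variance components (p = persons,
# i = items, r = raters, pir_e = residual/three-way interaction):
vc = {"p": 5.0, "i": 2.5, "r": 1.6,
      "pi": 0.4, "pr": 0.2, "ir": 0.1, "pir_e": 0.3}

# Compare the original design (10 items, 2 raters) with the expanded
# D-study scenario (15 items, 5 raters):
for n_i, n_r in [(10, 2), (15, 5)]:
    phi, sigma2_abs = phi_coefficient(vc, n_i, n_r)
    print(f"n_i={n_i}, n_r={n_r}: Phi={phi:.2f}, "
          f"abs. error var.={sigma2_abs:.2f}, "
          f"E-rho2={g_coefficient(vc, n_i, n_r):.2f}")
```

With these placeholder components the expanded design yields a higher Φ and a smaller absolute error variance, mirroring the direction of the improvement reported in the abstract (Φ rising from .81 toward .90 as items and raters are added).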
