Unveiling Scoring Processes: Dissecting the Differences between LLMs and Human Graders in Automatic Scoring
Article Status
Published
Authors/contributors
- Wu, Xuansheng (Author)
- Saraf, Padmaja Pravin (Author)
- Lee, Gyeonggeon (Author)
- Latif, Ehsan (Author)
- Liu, Ninghao (Author)
- Zhai, Xiaoming (Author)
Title
Unveiling Scoring Processes: Dissecting the Differences between LLMs and Human Graders in Automatic Scoring
Abstract
Large language models (LLMs) have demonstrated strong potential in performing automatic scoring for constructed-response assessments. While human grading of constructed responses is usually based on given rubrics, the methods by which LLMs assign scores remain largely unclear. It is also uncertain how closely AI's scoring process mirrors that of humans, or whether it adheres to the same grading criteria. To address this gap, this paper uncovers the grading rubrics that LLMs use to score students' written responses to science tasks and examines their alignment with human grading rubrics. We also examine whether enhancing this alignment can improve scoring accuracy. Specifically, we prompt LLMs to generate the analytic rubrics they use to assign scores and study the alignment gap with human grading rubrics. Based on a series of experiments with various LLM configurations, we reveal a notable alignment gap between human and LLM graders. While LLMs can adapt quickly to scoring tasks, they often resort to shortcuts, bypassing the deeper logical reasoning expected in human grading. We found that incorporating high-quality analytic rubrics designed to reflect human grading logic can mitigate this gap and enhance LLMs' scoring accuracy. These results underscore the need for a nuanced approach when applying LLMs in science education and highlight the importance of aligning LLM outputs with human expectations to ensure efficient and accurate automatic scoring.
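Note: the abstract's two-step setup (prompting an LLM to articulate an analytic rubric, then scoring against it) can be illustrated with a minimal sketch. This is not the authors' code; it assumes the OpenAI Python client, and the model name, prompt wording, and task text are illustrative placeholders rather than the paper's actual materials.

```python
# Minimal sketch of rubric elicitation followed by rubric-conditioned scoring.
# Assumptions: OpenAI Python client; OPENAI_API_KEY set in the environment;
# model name and prompts are hypothetical, not the paper's experimental setup.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder; the paper compares several LLM configurations


def elicit_rubric(task: str) -> str:
    """Step 1: ask the LLM to state the analytic rubric it would grade with."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": (
                f"Science assessment task:\n{task}\n\n"
                "List the analytic rubric (criteria and point values) you "
                "would use to grade student responses to this task."
            ),
        }],
    )
    return resp.choices[0].message.content


def score_response(task: str, rubric: str, student_response: str) -> str:
    """Step 2: score a student response, conditioning on an explicit rubric."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": (
                f"Task:\n{task}\n\nRubric:\n{rubric}\n\n"
                f"Student response:\n{student_response}\n\n"
                "Assign a score and give a one-sentence justification "
                "per rubric criterion."
            ),
        }],
    )
    return resp.choices[0].message.content
```

Comparing the rubric returned by `elicit_rubric` against the human-designed rubric is where the paper locates the alignment gap; supplying a high-quality, human-aligned rubric to `score_response` corresponds to the mitigation the abstract reports.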
Repository
arXiv
Archive ID
arXiv:2407.18328
Date
2025-02-21
Citation Key
wu2025
Accessed
13/05/2025, 16:37
Short Title
Unveiling Scoring Processes
Library Catalogue
arXiv.org
Extra
arXiv:2407.18328 [cs]
Citation
Wu, X., Saraf, P. P., Lee, G., Latif, E., Liu, N., & Zhai, X. (2025). Unveiling Scoring Processes: Dissecting the Differences between LLMs and Human Graders in Automatic Scoring (arXiv:2407.18328). arXiv. https://doi.org/10.48550/arXiv.2407.18328