
1 resource

  • Xuansheng Wu, Padmaja Pravin Saraf, Gyeo... | Feb 21st, 2025 | preprint

    Large language models (LLMs) have demonstrated strong potential for automatic scoring of constructed response assessments. While human graders typically score constructed responses against given grading rubrics, the criteria by which LLMs assign scores remain largely unclear. It is also uncertain how closely an LLM's scoring process mirrors that of humans, or whether it adheres to the same grading criteria. To address this gap, this paper uncovers the grading rubrics that LLMs used to score...
