Improve LLM-based Automatic Essay Scoring with Linguistic Features

Article Status
Published
Authors/contributors
Hou, Z. J.; Ciuba, A.; Li, X. L.
Title
Improve LLM-based Automatic Essay Scoring with Linguistic Features
Abstract
Automatic Essay Scoring (AES) assigns scores to student essays, reducing the grading workload for instructors. Developing a scoring system capable of handling essays across diverse prompts is challenging due to the flexible and diverse nature of the writing task. Existing methods typically fall into two categories: supervised feature-based approaches and large language model (LLM)-based methods. Supervised feature-based approaches often achieve higher performance but require resource-intensive training. In contrast, LLM-based methods are computationally efficient during inference but tend to suffer from lower performance. This paper combines these approaches by incorporating linguistic features into LLM-based scoring. Experimental results show that this hybrid method outperforms baseline models for both in-domain and out-of-domain writing prompts.
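The abstract only sketches the hybrid approach. As a minimal illustration of the general idea, the snippet below computes a few surface-level linguistic features and inserts them into an LLM scoring prompt. The feature set (word count, sentence length, type-token ratio), the 1-6 score range, and the prompt wording are assumptions made for illustration, not the paper's actual configuration.

```python
# A minimal sketch (not the authors' code): extract simple linguistic
# features from an essay and prepend them to an LLM scoring prompt.
# Feature choices and prompt wording are illustrative assumptions.
import re


def linguistic_features(essay: str) -> dict:
    """Compute simple, prompt-friendly linguistic statistics."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    unique = {w.lower() for w in words}
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_sentence_length": round(len(words) / max(len(sentences), 1), 2),
        "type_token_ratio": round(len(unique) / max(len(words), 1), 3),
    }


def build_scoring_prompt(essay: str, prompt_text: str) -> str:
    """Embed the computed features in the instruction given to the LLM scorer."""
    feats = linguistic_features(essay)
    feature_lines = "\n".join(f"- {k}: {v}" for k, v in feats.items())
    return (
        "You are an essay rater. Score the essay from 1 to 6.\n"
        f"Writing prompt: {prompt_text}\n"
        f"Linguistic features of the essay:\n{feature_lines}\n"
        f"Essay:\n{essay}\n"
        "Return only the integer score."
    )


if __name__ == "__main__":
    demo_essay = (
        "Computers help students learn. They also distract them. Balance matters."
    )
    print(build_scoring_prompt(demo_essay, "Do computers benefit society?"))
```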
Repository
arXiv
Archive ID
arXiv:2502.09497
Date
2025-02-13
Citation Key
hou2025
Accessed
22/09/2025, 19:32
Language
en
Library Catalogue
Extra
arXiv:2502.09497 [cs]
Citation
Hou, Z. J., Ciuba, A., & Li, X. L. (2025). Improve LLM-based Automatic Essay Scoring with Linguistic Features (arXiv:2502.09497). arXiv. https://doi.org/10.48550/arXiv.2502.09497