Automated Text Scoring in the Age of Generative AI for the GPU-poor
Article Status
Published
Authors/contributors
- Ormerod, Christopher Michael (Author)
- Kwako, Alexander (Author)
Title
Automated Text Scoring in the Age of Generative AI for the GPU-poor
Abstract
Current research on generative language models (GLMs) for automated text scoring (ATS) has focused almost exclusively on querying proprietary models via Application Programming Interfaces (APIs). Yet such practices raise issues around transparency and security, and these methods offer little in the way of efficiency or customizability. With the recent proliferation of smaller, open-source models, there is the option to explore GLMs on computers equipped with modest, consumer-grade hardware, that is, for the "GPU poor." In this study, we analyze the performance and efficiency of open-source, small-scale GLMs for ATS. Results show that GLMs can be fine-tuned to achieve adequate, though not state-of-the-art, performance. In addition to ATS, we take small steps towards analyzing models' capacity for generating feedback by prompting GLMs to explain their scores. Model-generated feedback shows promise, but requires more rigorous evaluation focused on targeted use cases.
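For context, a minimal sketch of what "GPU-poor" fine-tuning for ATS can look like in practice is included below. This is not the authors' pipeline; the model name (gpt2 is used only as a stand-in for a small open-source GLM), the toy dataset, the score range, and all hyperparameters are illustrative assumptions, following a standard Hugging Face sequence-classification workflow.

```python
# Illustrative sketch only (not the authors' method): fine-tune a small
# open-source GLM as a scorer for student responses on modest hardware.
# Model name, data, score range, and hyperparameters are assumptions.
import torch
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "gpt2"   # stand-in for a small open-source GLM
NUM_SCORES = 4        # assumed score points 0-3

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # decoder models often lack a pad token

model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_SCORES)
model.config.pad_token_id = tokenizer.pad_token_id

# Toy data standing in for a human-scored essay dataset.
train = Dataset.from_dict({
    "text": ["First student response ...", "Second student response ..."],
    "label": [1, 3],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train = train.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="ats-glm",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,        # small batches to fit consumer GPUs
    num_train_epochs=1,
    fp16=torch.cuda.is_available(),       # half precision when a GPU is present
    logging_steps=10,
)

Trainer(model=model, args=args, train_dataset=train,
        tokenizer=tokenizer).train()
```

In practice, memory-constrained setups of this kind often pair such a workflow with parameter-efficient fine-tuning (e.g., LoRA) or quantization to fit somewhat larger open models on a single consumer GPU; the sketch above omits those details for brevity.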
Repository
arXiv
Archive ID
arXiv:2407.01873
Date
2024-07-02
Accessed
2025-10-13, 23:30
Library Catalogue
Extra
arXiv:2407.01873 [cs]
Citation Key: ormerod2024
Citation
Ormerod, C. M., & Kwako, A. (2024). Automated Text Scoring in the Age of Generative AI for the GPU-poor (arXiv:2407.01873). arXiv. https://doi.org/10.48550/arXiv.2407.01873