Should You Fine-Tune BERT for Automated Essay Scoring?

Article Status
Published
Authors/contributors
Mayfield, E., & Black, A. W.
Title
Should You Fine-Tune BERT for Automated Essay Scoring?
Proceedings Title
Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications
Conference Name
Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications
Publisher
Association for Computational Linguistics
Place
Seattle, WA, USA → Online
Date
2020
Pages
151–162
Citation Key
mayfield2020
Accessed
28/03/2023, 22:09
Language
en
Library Catalogue
DOI.org (Crossref)
Extra
<Title>: Should You Fine-Tune BERT for Automated Essay Scoring? <AI Smry>: The paper finds that fine-tuning BERT produces performance similar to classical models at significant additional cost, and reviews promising areas of research on student essays where the unique characteristics of Transformers may provide benefits over classical methods that justify those costs. Read_Status: New Read_Status_Date: 2026-01-26T11:33:49.150Z
Citation
Mayfield, E., & Black, A. W. (2020). Should You Fine-Tune BERT for Automated Essay Scoring? Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, 151–162. https://doi.org/10.18653/v1/2020.bea-1.15