Should You Fine-Tune BERT for Automated Essay Scoring?

Article Status
Published
Authors/contributors
Mayfield, E., & Black, A. W.
Title
Should You Fine-Tune BERT for Automated Essay Scoring?
Date
2020
Proceedings Title
Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications
Conference Name
Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications
Place
Seattle, WA, USA → Online
Publisher
Association for Computational Linguistics
Pages
151–162
Language
en
Accessed
28/03/2023, 22:09
Library Catalogue
DOI.org (Crossref)
Extra
Citation Key: mayfield2020 <Title>: Should You Fine-Tune BERT for Automated Essay Scoring? <AI Smry>: Fine-tuning BERT is found to produce performance similar to classical models at significant additional cost; the paper also reviews promising areas of research on student essays where the unique characteristics of Transformers may provide benefits over classical methods that justify the costs.
Citation
Mayfield, E., & Black, A. W. (2020). Should You Fine-Tune BERT for Automated Essay Scoring? Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, 151–162. https://doi.org/10.18653/v1/2020.bea-1.15