Evaluating China’s Automated Essay Scoring System iWrite
Article Status
Published
Authors/contributors
- Qian, Leyi (Author)
- Zhao, Yali (Author)
- Cheng, Yan (Author)
Title
Evaluating China’s Automated Essay Scoring System iWrite
Abstract
Automated essay scoring can provide not only holistic scores but also instant, corrective feedback on L2 learners' writing quality, and its use has been growing both in China and internationally. Given these advantages, the past several years have witnessed the emergence and growth of automated writing evaluation products in China. To the best of our knowledge, however, no previous study has examined the validity of China's automated essay scoring systems. Drawing on the four major categories of Kane's argument-based validity framework (scoring, generalization, extrapolation, and implication), this article evaluates the performance of one of China's automated essay scoring systems, iWrite, against human scores. The results show that iWrite is not a valid tool for assessing L2 writing or predicting human scores. Therefore, iWrite should currently be restricted to nonconsequential uses and cannot serve as an alternative to, or a substitute for, human raters.
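The abstract's core method is comparing machine scores against human scores. The record does not include the study's analysis code, but the sketch below illustrates two agreement statistics commonly reported in AES validation work of this kind: Pearson correlation and quadratic weighted kappa. The score arrays and the integer rating scale are hypothetical, not data from the paper.

```python
# Minimal sketch of human-machine score agreement, as typically computed
# in AES validation studies. All scores below are hypothetical.
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical essay scores: human ratings vs. machine (e.g., iWrite-style)
# scores on the same integer band scale.
human = [12, 9, 14, 7, 11, 10, 13, 8]
machine = [11, 10, 13, 9, 12, 10, 12, 10]

# Pearson correlation: strength of linear association between score sets.
r, p = pearsonr(human, machine)

# Quadratic weighted kappa: chance-corrected agreement that penalizes
# larger score discrepancies more heavily; a standard AES benchmark metric.
qwk = cohen_kappa_score(human, machine, weights="quadratic")

print(f"Pearson r = {r:.3f} (p = {p:.3f}), QWK = {qwk:.3f}")
```

High correlation alone does not establish validity; under Kane's framework it bears mainly on the scoring and generalization inferences, while extrapolation and implication require further evidence.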
Publication
Journal of Educational Computing Research
Volume
58
Issue
4
Pages
771-790
Date
2019-10-23
Journal Abbr
J. Educ. Comput. Res.
Language
en
ISSN
0735-6331
Accessed
08/11/2024, 17:19
Library Catalogue
DOI.org (Crossref)
Extra
Citation Key: qian2020
Citation
Qian, L., Zhao, Y., & Cheng, Y. (2019). Evaluating China’s Automated Essay Scoring System iWrite. Journal of Educational Computing Research, 58(4), 771–790. https://doi.org/10.1177/0735633119881472