Toward More Substantively Meaningful Automated Essay Scoring

Article Status
Published
Authors/contributors
Ben-Simon, A., & Bennett, R. E.
Title
Toward More Substantively Meaningful Automated Essay Scoring
Abstract
This study evaluated a “substantively driven” method for scoring NAEP writing assessments automatically. The study used variations of an existing commercial program, e-rater®, to compare the performance of three approaches to automated essay scoring: a brute-empirical approach, in which variables were selected and weighted solely according to statistical criteria; a hybrid approach, in which a fixed set of variables more closely tied to the characteristics of good writing was used but the weights were still statistically determined; and a substantively driven approach, in which a fixed set of variables was weighted according to the judgments of two independent committees of writing experts. The research questions concerned (a) the reproducibility of weights across writing experts, (b) the comparison of scores generated by the three automated approaches, and (c) the extent to which models developed for scoring one NAEP prompt generalize to other NAEP prompts of the same genre. Data came from the 2002 NAEP Writing Online study and from the main NAEP 2002 writing assessment.
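As a rough illustration of the contrast the abstract describes, the sketch below (not from the article; all feature names, values, and weights are hypothetical) shows the mechanic shared by the three approaches: an essay score computed as a weighted combination of writing features, with the weights supplied either by a statistical fit to human scores or by expert judgment.

```python
# Hypothetical sketch of weighted feature scoring. Feature names and all
# numbers are invented for illustration; they are not the article's actual
# e-rater variables or weights.

# Standardized feature values extracted from one essay.
features = {
    "grammar_errors": -2.0,
    "organization": 1.5,
    "vocabulary": 0.8,
}

# Brute-empirical / hybrid approaches: weights estimated statistically,
# e.g., by regressing human scores on the feature values.
statistical_weights = {"grammar_errors": 0.30, "organization": 0.45, "vocabulary": 0.25}

# Substantively driven approach: the same fixed feature set, but with
# weights assigned by committees of writing experts.
expert_weights = {"grammar_errors": 0.20, "organization": 0.60, "vocabulary": 0.20}

def score(feats, weights):
    """Return the weighted linear combination of feature values."""
    return sum(weights[name] * value for name, value in feats.items())

print(score(features, statistical_weights))  # statistically weighted score
print(score(features, expert_weights))       # expert-weighted score
```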
Publication
Journal of Technology, Learning, and Assessment
Volume
6
Issue
1
Date
2007
Extra
Citation Key: ben-simon2007
Citation
Ben-Simon, A., & Bennett, R. E. (2007). Toward More Substantively Meaningful Automated Essay Scoring. Journal of Technology, Learning, and Assessment, 6(1).