Anchoring Validity Evidence for Automated Essay Scoring
Article Status
Published
Author/contributor
- Shermis, Mark D. (Author)
Title
Anchoring Validity Evidence for Automated Essay Scoring
Abstract
One of the challenges of discussing validity arguments for machine scoring of essays centers on the absence of a commonly held definition and theory of good writing. At best, the algorithms attempt to measure select attributes of writing and calibrate them against human ratings with the goal of accurately predicting scores for new essays. Sometimes these attributes are based on the fundamentals of writing (e.g., fluency), but quite often they are based on locally developed rubrics that may be confounded with specific content coverage expectations. This lack of transparency makes it difficult to provide systematic evidence that machine scoring is assessing writing itself, rather than slices or correlates of writing performance.
Publication
Journal of Educational Measurement
Volume
59
Issue
3
Pages
314-337
Date
2022-05-15
Journal Abbr
J. Educ. Meas.
Language
en
ISSN
0022-0655
Accessed
14/06/2024, 16:21
Library Catalogue
DOI.org (Crossref)
Extra
Citation Key: shermis2022
Citation
Shermis, M. D. (2022). Anchoring Validity Evidence for Automated Essay Scoring. Journal of Educational Measurement, 59(3), 314–337. https://doi.org/10.1111/jedm.12336