Automated Scoring of Constructed-Response Science Items: Prospects and Obstacles
Article Status
Published
Authors/contributors
- Liu, Ou Lydia (Author)
- Brew, Chris (Author)
- Blackmore, John (Author)
- Gerard, Libby (Author)
- Madhok, Jacquie (Author)
- Linn, Marcia C. (Author)
Title
Automated Scoring of Constructed-Response Science Items: Prospects and Obstacles
Abstract
Content‐based automated scoring has been applied in a variety of science domains. However, many prior applications involved simplified scoring rubrics without considering rubrics representing multiple levels of understanding. This study tested a concept‐based scoring tool for content‐based scoring, c‐rater™, for four science items with rubrics aiming to differentiate among multiple levels of understanding. The items showed moderate to good agreement with human scores. The findings suggest that automated scoring has the potential to score constructed‐response items with complex scoring rubrics, but in its current design cannot replace human raters. This article discusses sources of disagreement and factors that could potentially improve the accuracy of concept‐based automated scoring.
Publication
Educational Measurement: Issues and Practice
Volume
33
Issue
2
Pages
19-28
Date
2014-03-06
Journal Abbr
Educational Measurement: Issues and Practice
Language
en
ISSN
0731-1745
Short Title
Automated Scoring of Constructed-Response Science Items
Accessed
23/02/2023, 17:20
Library Catalogue
DOI.org (Crossref)
Extra
Citation Key: liu2014
<Title>: Automated Scoring of Constructed-Response Science Items: Prospects and Obstacles
Citation
Liu, O. L., Brew, C., Blackmore, J., Gerard, L., Madhok, J., & Linn, M. C. (2014). Automated Scoring of Constructed-Response Science Items: Prospects and Obstacles. Educational Measurement: Issues and Practice, 33(2), 19–28. https://doi.org/10.1111/emip.12028