Validity Arguments for AI‐Based Automated Scores: Essay Scoring as an Illustration

Article Status
Published
Authors/contributors
Ferrara, S.; Qunbar, S.
Title
Validity Arguments for AI‐Based Automated Scores: Essay Scoring as an Illustration
Abstract
In this article, we argue that automated scoring engines should be transparent and construct relevant—that is, as much as is currently feasible. Many current automated scoring engines cannot achieve high degrees of scoring accuracy without allowing in some features that may not be easily explained and understood and may not be obviously and directly relevant to the target assessment construct. We address the current limitations on evidence and validity arguments for scores from automated scoring engines from the points of view of the Standards for Educational and Psychological Testing (i.e., construct relevance, construct representation, and fairness) and emerging principles in Artificial Intelligence (e.g., explainable AI, an examinee's right to explanations, and principled AI). We illustrate these concepts and arguments for automated essay scores.
Publication
Journal of Educational Measurement
Volume
59
Issue
3
Pages
288-313
Date
09/2022
Journal Abbr
J. Educ. Meas.
Language
en
ISSN
0022-0655, 1745-3984
Short Title
Validity Arguments for AI‐Based Automated Scores
Accessed
04/10/2024, 21:06
Library Catalogue
DOI.org (Crossref)
Extra
Citation Key: ferrara2022
Citation
Ferrara, S., & Qunbar, S. (2022). Validity Arguments for AI‐Based Automated Scores: Essay Scoring as an Illustration. Journal of Educational Measurement, 59(3), 288–313. https://doi.org/10.1111/jedm.12333