Monitoring the performance of human and automated scores for spoken responses
Article Status
Published
Authors/contributors
- Wang, Zhen (Author)
- Zechner, Klaus (Author)
- Sun, Yu (Author)
Title
Monitoring the performance of human and automated scores for spoken responses
Abstract
As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish rigorous procedures for monitoring the performance of both human and automated scoring processes during operational administrations. This paper provides an overview of the automated speech scoring system SpeechRaterSM and how to use charts and evaluation statistics to monitor and evaluate automated scores and human rater scores of spoken constructed responses.
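The evaluation statistics the abstract refers to are typically human–machine agreement measures. As an illustration only (not taken from the article), the Python sketch below computes two statistics commonly used for this purpose, Pearson correlation and quadratic weighted kappa, on hypothetical human and automated scores:

```python
# Minimal sketch of human-machine agreement statistics of the kind
# discussed in the paper. The score arrays are hypothetical.
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

human = [3, 4, 2, 4, 3, 1, 4, 2]    # human rater scores (1-4 scale)
machine = [3, 3, 2, 4, 4, 2, 4, 2]  # automated (SpeechRater-style) scores

r, _ = pearsonr(human, machine)
qwk = cohen_kappa_score(human, machine, weights="quadratic")
print(f"Pearson r: {r:.3f}, quadratic weighted kappa: {qwk:.3f}")
```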
Publication
Language Testing
Volume
35
Issue
1
Pages
101-120
Date
2016-12-19
Journal Abbr
Lang. Test.
Language
en
ISSN
0265-5322
Accessed
18/06/2024, 18:09
Library Catalogue
DOI.org (Crossref)
Extra
Citation Key: wang2018
<Title>: Monitoring the performance of human and automated scores for spoken responses
<AI Summary>: Provides an overview of the automated speech scoring system SpeechRaterSM and describes how to use charts and evaluation statistics to monitor and evaluate automated and human rater scores of spoken constructed responses.
Citation
Wang, Z., Zechner, K., & Sun, Y. (2018). Monitoring the performance of human and automated scores for spoken responses. Language Testing, 35(1), 101–120. https://doi.org/10.1177/0265532216679451