Monitoring the performance of human and automated scores for spoken responses
Article Status
Published
Authors/contributors
- Wang, Zhen (Author)
- Zechner, Klaus (Author)
- Sun, Yu (Author)
Title
Monitoring the performance of human and automated scores for spoken responses
Abstract
As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish rigorous procedures for monitoring the performance of both human and automated scoring processes during operational administrations. This paper provides an overview of the automated speech scoring system SpeechRaterSM and describes how to use charts and evaluation statistics to monitor and evaluate automated scores and human rater scores of spoken constructed responses.
Publication
Language Testing
Date
2016-12-19
Volume
35
Issue
1
Pages
101-120
Journal Abbr
Lang. Test.
Citation Key
wang2018
Accessed
18/06/2024, 18:09
ISSN
0265-5322
Language
en
Library Catalogue
DOI.org (Crossref)
Extra
<Title>: Monitoring the performance of human and automated scores for spoken responses
<AI Smry>: Provides an overview of the automated speech scoring system SpeechRaterSM and describes how to use charts and evaluation statistics to monitor and evaluate automated scores and human rater scores of spoken constructed responses.
Read_Status: New
Read_Status_Date: 2026-01-26T11:33:57.130Z
Citation
Wang, Z., Zechner, K., & Sun, Y. (2018). Monitoring the performance of human and automated scores for spoken responses. Language Testing, 35(1), 101–120. https://doi.org/10.1177/0265532216679451