702 resources
- Nitin Madnani, Anastassia Loukina, Alina... | Oct 27th, 2017 | Conference Paper
- Amber Nigam | Oct 27th, 2017 | Conference Paper
Automated Essay Scoring (AES) has become quite popular and is widely used. However, the lack of an appropriate methodology for rating nonnative English speakers' essays has meant lopsided advancement in this field. In this paper, we report initial results of our experiments with nonnative AES that learns from manual evaluation of nonnative essays. For this purpose, we conducted an exercise in which essays written by nonnative English speakers in a test environment were rated both manually and...
- Brian Riordan, Andrea Horbach, Aoife Cah... | Oct 27th, 2017 | Conference Paper
- Ute Schmid, Christina Zeller, Tarek Beso... | Oct 27th, 2017 | Book Section
- Joshua Wilson | Apr 27th, 2017 | Journal Article
- Victoria Yaneva, Constantin Orasan, Rich... | Oct 27th, 2017 | Conference Paper
- Zhen Wang, Klaus Zechner, Yu Sun | Dec 19th, 2016 | Journal Article
As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance, as compared to human raters, across several dimensions, for example, on individual items or based on subgroups of test takers. In addition, there is a need in testing organizations to establish rigorous procedures for monitoring the performance of both human and automated scoring processes during operational administrations. This paper...
- Heather Buzick, Maria Elena Oliveri, Yig... | Jul 2nd, 2016 | Journal Article
- Maurice Cogan Hauck, Mikyung Kim Wolf, R... | Apr 4th, 2016 | Journal Article
This paper is the first in a series from Educational Testing Service (ETS) concerning English language proficiency (ELP) assessments for K–12 English learners (ELs). The goal of this paper, and the series, is to present research‐based ideas, principles, and recommendations for consideration by those who are conceptualizing, developing, and implementing ELP assessments for K–12 ELs and by all stakeholders in their education and assessment. We also hope to contribute to the active current...
- Aishwarya Agrawal, Dhruv Batra, Devi Par... | Oct 27th, 2016 | Journal Article
Recently, a number of deep-learning based models have been proposed for the task of Visual Question Answering (VQA). The performance of most models is clustered around 60-70%. In this paper we propose systematic methods to analyze the behavior of these models as a first step towards recognizing their strengths and weaknesses, and identifying the most fruitful directions for progress. We analyze two models, one each from two major classes of VQA models -- with-attention and without-attention...
- Jennifer Hill, Rahul Simha | Oct 27th, 2016 | Conference Paper
- Giang Thi Linh Hoang, Antony John Kunnan... | Oct 27th, 2016 | Journal Article
- Brenden M. Lake, Ruslan Salakhutdinov, J... | Dec 11th, 2015 | Journal Article
Not only do children learn effortlessly, they do so quickly and with a remarkable ability to use what they have learned as the raw material for creating new stuff. Lake et al. describe a computational model that learns in a similar fashion and does so better than current deep learning algorithms. The model classifies, parses, and recreates handwritten characters, and can generate new letters of the...
- Kevin R. Raczynski, Allan S. Cohen, Geor... | Sep 27th, 2015 | Journal Article
There is a large body of research on the effectiveness of rater training methods in the industrial and organizational psychology literature. Less has been reported in the measurement literature on large‐scale writing assessments. This study compared the effectiveness of two widely used rater training methods—self‐paced and collaborative frame‐of‐reference training—in the context of a large‐scale writing assessment. Sixty‐six raters were randomly assigned to the training methods. After...
- Mark J. Gierl, Hollis Lai, Karen Fung | Aug 20th, 2015 | Book Section
- Ramakrishna Vedantam, C. Lawrence Zitnic... | Jun 27th, 2015 | Preprint
Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus. This paradigm consists of three main parts: a new...
- Michael Heilman, Nitin Madnani | Oct 27th, 2015 | Conference Paper
- Sami Virpioja, Stig-Arne Grönroos | Oct 27th, 2015 | Conference Paper
- Steven Burrows, Iryna Gurevych, Benno St... | Oct 23rd, 2014 | Journal Article
- S.-J. Huang | Jul 27th, 2014 | Journal Article