274 resources
- Samuel Holmes, Anne Moorhead, Raymond Bo... | Sep 10th, 2019 | conferencePaper
- Elizabeth Clark, Asli Celikyilmaz, Noah ... | Apr 4th, 2019 | conferencePaper
- Chris Van Der Lee, Albert Gatt, Emiel Va... | Apr 4th, 2019 | conferencePaper
- Kavita Ganesan | Mar 5th, 2018 | preprint
Evaluation of summarization tasks is extremely crucial to determining the quality of machine generated summaries. Over the last decade, ROUGE has become the standard automatic evaluation measure for evaluating summarization tasks. While ROUGE has been shown to be effective in capturing n-gram overlap between system and human composed summaries, there are several limitations with the existing ROUGE measures in terms of capturing synonymous concepts and coverage of topics. Thus, often times...
- Ryan Lowe, Michael Noseworthy, Iulian Vl... | Apr 4th, 2017 | conferencePaper
- Ramakrishna Vedantam, C. Lawrence Zitnic... | Jun 2nd, 2015 | preprint
Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus. This paradigm consists of three main parts: a new...
- Sami Virpioja, Stig-Arne Grönroos | Apr 4th, 2015 | conferencePaper
- David Hutchison, Takeo Kanade, Josef Kit... | Apr 4th, 2013 | bookSection
- Lise Getoor, Ashwin Machanavajjhala | Aug 4th, 2012 | journalArticle
This tutorial brings together perspectives on ER from a variety of fields, including databases, machine learning, natural language processing and information retrieval, to provide, in one setting, a survey of a large body of work. We discuss both the practical aspects and theoretical underpinnings of ER. We describe existing solutions, current challenges, and open research problems.
- R.S.J.d Baker, B. McGaw, P. Peterson | Apr 4th, 2010 | bookSection
- Ehud Reiter, Anja Belz | Dec 4th, 2009 | journalArticle
There is growing interest in using automatically computed corpus-based evaluation metrics to evaluate Natural Language Generation (NLG) systems, because these are often considerably cheaper than the human-based evaluations which have traditionally been used in NLG. We review previous work on NLG evaluation and on validation of automatic metrics in NLP, and then present the results of two studies of how well some metrics which are popular in other areas of NLP (notably BLEU and ROUGE)...
- Joseph P. Turian, Luke Shen, I. Dan Mela... | Jan 1st, 2006 | conferencePaper
Evaluation of MT evaluation measures is limited by inconsistent human judgment data. Nonetheless, machine translation can be evaluated using the well-known measures precision, recall, and their average, the F-measure. The unigram-based F-measure has significantly higher correlation with human judgments than recently proposed alternatives. More importantly, this standard measure has an intuitive graphical interpretation, which can facilitate insight into how MT systems might be improved. The...
- David Hutchison, Takeo Kanade, Josef Kit... | Apr 4th, 2004 | bookSection
- Chin-Yew Lin, Franz Josef Och | Apr 4th, 2004 | conferencePaper
- George Doddington | Apr 4th, 2002 | conferencePaper
- Kishore Papineni, Salim Roukos, Todd War... | Apr 4th, 2001 | conferencePaper
- webpage
- webpage
- webpage
Stanford launches an Ethics and Society Review Board that asks researchers to take an early look at the impact of their work.
- webpage
If the brains behind Artificial Intelligence claim their creations can perform in tests like humans, then surely those results should be assessed as if they were produced by humans. AQA's Head of Research and Development, Dr Cesare Aloisi, says this is vital to maintain trust in AI but fears that is not what is happening.