9 resources
- Kerstin Denecke, Alaa Abd-Alrazaq, Mowaf... | Oct 31st, 2021 | journal article
Background: In recent years, an increasing number of health chatbots has been published in app stores and described in research literature. Given the sensitive data they are processing and the care settings for which they are developed, evaluation is essential to avoid harm to users. However, evaluations of those systems are reported inconsistently and without using a standardized set of evaluation metrics. Missing standards in health chatbot evaluation prevent...
- Vasilis Efthymiou, Kostas Stefanidis, Ev... | Oct 26th, 2021 | conference paper
- Gabriel Oliveira dos Santos, Esther Luna... | Sep 28th, 2021 | preprint
This paper shows that CIDEr-D, a traditional evaluation metric for image description, does not work properly on datasets where the number of words in the sentence is significantly greater than in the MS COCO Captions dataset. We also show that CIDEr-D's performance is hampered by the lack of multiple reference sentences and by high variance in sentence length. To bypass this problem, we introduce CIDEr-R, which improves CIDEr-D, making it more flexible in dealing with datasets with high... (A numeric sketch of the CIDEr-D length penalty appears after this list.)
- Elizabeth Clark, Tal August, Sofia Serra... | Jul 7th, 2021 | preprint
Human evaluations are typically considered the gold standard in natural language generation, but as models' fluency improves, how well can evaluators detect and judge machine-generated text? We run a study assessing non-experts' ability to distinguish between human- and machine-authored text (GPT2 and GPT3) in three domains (stories, news articles, and recipes). We find that, without training, evaluators distinguished between GPT3- and human-authored text at random chance level. We explore...
- Xu Han, Michelle Zhou, Matthew J. Turner... | May 6th, 2021 | conference paper
- Michael McTear, Michael McTear | Jul 1st, 2021 | book section
- University of Wolverhampton, UK, Hadeel ... | Jul 1st, 2021 | conference paper
- Jing Xu, Da Ju, Margaret Li | Jul 1st, 2021 | conference paper
- Weizhe Yuan, Graham Neubig, Pengfei Liu,... | Jul 1st, 2021 | journal article
A wide variety of NLP applications, such as machine translation, summarization, and dialog, involve text generation. One major challenge for these applications is how to evaluate whether such generated texts are actually fluent, accurate, or effective. In this work, we conceptualize the evaluation of generated text as a text generation problem, modeled using pre-trained sequence-to-sequence models. The general idea is that models trained to convert the generated text to/from a reference...
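The Yuan et al. abstract above frames evaluation as conditional generation with a pre-trained sequence-to-sequence model: a candidate is scored by how likely the model finds it given a reference or source text. Below is a minimal sketch of that recipe, assuming the Hugging Face transformers library and the facebook/bart-large-cnn checkpoint; it illustrates the scoring idea and is not the authors' released implementation.

```python
# Minimal sketch: score a generated text as the average log-likelihood of its
# tokens under a pre-trained seq2seq model conditioned on a reference text
# (higher is better). Assumes the Hugging Face `transformers` library;
# `facebook/bart-large-cnn` is one plausible checkpoint choice.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")
model.eval()

def seq2seq_score(source: str, hypothesis: str) -> float:
    """Average per-token log-probability of `hypothesis` given `source`."""
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(hypothesis, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        out = model(**inputs, labels=labels)
    # `out.loss` is the mean token-level cross-entropy, so its negation is the
    # mean log-likelihood of the hypothesis tokens.
    return -out.loss.item()

reference = "A storm caused widespread power outages across the region on Monday."
candidate = "A storm knocked out power across the region."
print(seq2seq_score(reference, candidate))
```

Running the same recipe in the other direction (hypothesis as source, reference as target) gives the "to/from a reference" variants the abstract alludes to.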
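The Santos et al. abstract singles out sentence length as a failure mode of CIDEr-D. The metric multiplies its TF-IDF n-gram similarity by a Gaussian length penalty, and with the commonly used sigma = 6 (assumed here) that penalty collapses once candidate and reference lengths diverge by more than a couple of dozen tokens, which is routine outside MS COCO-style short captions. A small numeric sketch:

```python
# Illustration of the Gaussian length penalty inside CIDEr-D,
# exp(-(l_c - l_r)^2 / (2 * sigma^2)), with sigma = 6 assumed here.
# The penalty multiplies the n-gram similarity term, so large length
# differences drive the whole score toward zero.
import math

def length_penalty(candidate_len: int, reference_len: int, sigma: float = 6.0) -> float:
    delta = candidate_len - reference_len
    return math.exp(-(delta ** 2) / (2 * sigma ** 2))

for delta in (0, 3, 6, 12, 24, 48):
    print(f"length difference {delta:>2}: penalty {length_penalty(30 + delta, 30):.4f}")
```

This collapsing behavior on long descriptions is what the more length-tolerant CIDEr-R revision described in that entry is designed to relax.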