In authors or contributors

2 resources

  • Pierre Jean A. Colombo, Chloé Clavel, Pa... | Jun 28th, 2022 | journalArticle

    Assessing the quality of natural language generation (NLG) systems through human annotation is very expensive. Additionally, human annotation campaigns are time-consuming and include non-reusable human labour. In practice, researchers rely on automatic metrics as a proxy of quality. In the last decade, many string-based metrics (e.g., BLEU or ROUGE) have been introduced. However, such metrics usually rely on exact matches and thus do not robustly handle synonyms. In this paper, we introduce...

  • Cyril Chhun, Pierre Colombo, Chloé Clave... | Sep 15th, 2022 | preprint

    Research on Automatic Story Generation (ASG) relies heavily on human and automatic evaluation. However, there is no consensus on which human evaluation criteria to use, and no analysis of how well automatic criteria correlate with them. In this paper, we propose to re-evaluate ASG evaluation. We introduce a set of 6 orthogonal and comprehensive human criteria, carefully motivated by the social sciences literature. We also present HANNA, an annotated dataset of 1,056 stories produced by 10...
