
  • Kavita Ganesan | Mar 5th, 2018 | preprint

    Evaluation is crucial to determining the quality of machine-generated summaries. Over the last decade, ROUGE has become the standard automatic measure for evaluating summarization tasks. While ROUGE has been shown to capture n-gram overlap between system and human-composed summaries effectively, the existing ROUGE measures are limited in their ability to capture synonymous concepts and topic coverage. Thus, oftentimes...
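    The n-gram overlap that the abstract refers to can be illustrated with a minimal sketch of ROUGE-N recall (clipped n-gram counts over a single reference). This is an illustrative toy, not the author's ROUGE 2.0 implementation, and it also shows the limitation the abstract raises: a synonym contributes nothing to the overlap.

    ```python
    from collections import Counter

    def ngrams(tokens, n):
        """Return a multiset (Counter) of n-grams from a token list."""
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    def rouge_n_recall(reference, candidate, n=1):
        """ROUGE-N recall: fraction of reference n-grams matched by the
        candidate, with counts clipped to the reference frequency."""
        ref = ngrams(reference.split(), n)
        cand = ngrams(candidate.split(), n)
        overlap = sum(min(count, cand[g]) for g, count in ref.items())
        total = sum(ref.values())
        return overlap / total if total else 0.0

    # "lay" is a near-synonym of "sat", yet it scores zero overlap:
    score = rouge_n_recall("the cat sat on the mat", "the cat lay on the mat", n=1)
    ```

    Here `score` is 5/6: five of the six reference unigrams are matched, while the semantically close substitution of "lay" for "sat" is invisible to the measure.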

Last update from database: 28/12/2024, 08:15 (UTC)