1 resource

  • Joshua Maynez, Shashi Narayan, Bernd Boh...
    May 1st, 2020 | preprint

    It is well known that the standard likelihood training and approximate decoding objectives in neural text generation models lead to less human-like responses for open-ended tasks such as language modeling and story generation. In this paper we have analyzed limitations of these models for abstractive document summarization and found that these models are highly prone to hallucinate content that is unfaithful to the input document. We conducted a large scale human evaluation of several neural...

Last update from database: 22/10/2025, 10:15 (UTC)