FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization

Article Status
Published
Authors/contributors
Durmus, Esin; He, He; Diab, Mona
Title
FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization
Date
2020
Proceedings Title
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Conference Name
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Place
Online
Publisher
Association for Computational Linguistics
Pages
5055–5070
Language
en
Short Title
FEQA
Accessed
27/10/2023, 17:10
Library Catalogue
DOI.org (Crossref)
Extra
<AI Smry>: An automatic question answering (QA) based metric for faithfulness, FEQA, is proposed, which leverages recent advances in reading comprehension and has significantly higher correlation with human faithfulness scores, especially on highly abstractive summaries.
Citation
Durmus, E., He, H., & Diab, M. (2020). FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 5055–5070. https://doi.org/10.18653/v1/2020.acl-main.454