FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization

Article Status
Published
Authors/contributors
Durmus, Esin; He, He; Diab, Mona
Title
FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization
Proceedings Title
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Conference Name
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Publisher
Association for Computational Linguistics
Place
Online
Date
2020
Pages
5055–5070
Citation Key
durmus2020
Accessed
27/10/2023, 17:10
Short Title
FEQA
Language
en
Library Catalogue
DOI.org (Crossref)
Extra
An automatic question answering (QA) based metric for faithfulness, FEQA, is proposed; it leverages recent advances in reading comprehension and has significantly higher correlation with human faithfulness scores, especially on highly abstractive summaries.
Citation
Durmus, E., He, H., & Diab, M. (2020). FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 5055–5070. https://doi.org/10.18653/v1/2020.acl-main.454