ARES: An Automated Evaluation Framework for Retrieval-Augmented Generation Systems
Article Status
    Published
Authors/contributors
    - Saad-Falcon, Jon (Author)
    - Khattab, Omar (Author)
    - Potts, Christopher (Author)
    - Zaharia, Matei (Author)
Title
    ARES: An Automated Evaluation Framework for Retrieval-Augmented Generation Systems
Abstract
    Evaluating retrieval-augmented generation (RAG) systems traditionally relies on hand annotations for input queries, passages to retrieve, and responses to generate. We introduce ARES, an Automated RAG Evaluation System, for evaluating RAG systems along the dimensions of context relevance, answer faithfulness, and answer relevance. By creating its own synthetic training data, ARES fine-tunes lightweight LM judges to assess the quality of individual RAG components. To mitigate potential prediction errors, ARES utilizes a small set of human-annotated datapoints for prediction-powered inference (PPI). Across eight different knowledge-intensive tasks in KILT, SuperGLUE, and AIS, ARES accurately evaluates RAG systems while using only a few hundred human annotations during evaluation. Furthermore, ARES judges remain effective across domain shifts, proving accurate even after changing the type of queries and/or documents used in the evaluated RAG systems. We make our code and datasets publicly available on GitHub.
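    For readers unfamiliar with prediction-powered inference, the sketch below illustrates how a small human-annotated set can rectify an LM judge's average error and yield a confidence interval for a RAG quality score. It is a minimal illustration of the standard PPI mean estimator using synthetic data and a hypothetical `ppi_mean_estimate` helper; it is not taken from the ARES codebase.

    ```python
    import numpy as np
    from scipy import stats

    def ppi_mean_estimate(judge_unlabeled, judge_labeled, human_labeled, alpha=0.05):
        """PPI-style estimate of a RAG quality score (e.g., answer relevance rate).

        judge_unlabeled: LM-judge scores (0/1) on a large unlabeled evaluation set.
        judge_labeled:   LM-judge scores on a small human-annotated subset.
        human_labeled:   human annotations (0/1) on that same subset.
        Returns a point estimate and a (1 - alpha) confidence interval.
        """
        judge_unlabeled = np.asarray(judge_unlabeled, dtype=float)
        judge_labeled = np.asarray(judge_labeled, dtype=float)
        human_labeled = np.asarray(human_labeled, dtype=float)

        n, N = len(human_labeled), len(judge_unlabeled)
        rectifier = human_labeled - judge_labeled          # judge's average error
        estimate = judge_unlabeled.mean() + rectifier.mean()

        # Variance combines the unlabeled-judge term and the rectifier term.
        var = judge_unlabeled.var(ddof=1) / N + rectifier.var(ddof=1) / n
        half_width = stats.norm.ppf(1 - alpha / 2) * np.sqrt(var)
        return estimate, (estimate - half_width, estimate + half_width)

    # Example: 2,000 judge-scored RAG outputs, 200 of them also human-annotated.
    rng = np.random.default_rng(0)
    judge_u = rng.integers(0, 2, 2000)
    judge_l = rng.integers(0, 2, 200)
    human_l = rng.integers(0, 2, 200)
    print(ppi_mean_estimate(judge_u, judge_l, human_l))
    ```

    The key design point mirrored here is that the many cheap judge predictions drive the estimate while the few hundred human labels only correct the judge's bias, which is why the confidence interval stays tight with limited annotation.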
Repository
    arXiv
Archive ID
    arXiv:2311.09476
Place
    Mexico City, Mexico
Date
    2024
Accessed
    19/04/2024, 16:46
Short Title
    ARES
Extra
    arXiv:2311.09476 [cs]
<Title>: ARES: An Automated Evaluation Framework for Retrieval-Augmented Generation Systems
Citation Key: saad-falcon2024
Citation
    Saad-Falcon, J., Khattab, O., Potts, C., & Zaharia, M. (2024). ARES: An Automated Evaluation Framework for Retrieval-Augmented Generation Systems. Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), Mexico City, Mexico. arXiv:2311.09476. https://doi.org/10.18653/v1/2024.naacl-long.20