1 resource

  • Nicolò Cosimo Albanese | Apr 20th, 2024 | Conference paper

    Ensuring fidelity to source documents is crucial for the responsible use of Large Language Models (LLMs) in Retrieval Augmented Generation (RAG) systems. We propose a lightweight method for real-time hallucination detection, with potential to be deployed as a model-agnostic microservice to bolster reliability. Using in-context learning, our approach evaluates response factuality at the sentence level without annotated data, promoting transparency and user trust. Compared to other...
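
    The abstract describes sentence-level factuality scoring via in-context learning: each sentence of a generated answer is checked against the retrieved source documents. The sketch below illustrates that general idea only; it is not the paper's implementation. It assumes the OpenAI Python client, and the model name, few-shot examples, and prompt wording are all hypothetical choices.

    # Minimal sketch of sentence-level hallucination flagging for a RAG
    # answer, using an in-context (few-shot) judge prompt. Illustrative
    # only; not the method from the cited paper.
    import re
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical few-shot examples showing the judge the expected format.
    FEW_SHOT = (
        "Context: The Eiffel Tower is 330 metres tall.\n"
        "Sentence: The Eiffel Tower is about 330 metres high.\n"
        "Supported: yes\n\n"
        "Context: The Eiffel Tower is 330 metres tall.\n"
        "Sentence: It was completed in 1850.\n"
        "Supported: no\n\n"
    )

    def split_sentences(text: str) -> list[str]:
        # Naive splitter; a production system would use a real tokenizer.
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

    def is_supported(context: str, sentence: str) -> bool:
        """Ask the LLM whether the sentence is grounded in the context."""
        prompt = FEW_SHOT + f"Context: {context}\nSentence: {sentence}\nSupported:"
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any instruction-following model
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
            max_tokens=1,
        )
        return resp.choices[0].message.content.strip().lower().startswith("y")

    def flag_hallucinations(context: str, answer: str) -> list[str]:
        # Sentences the judge cannot ground in the retrieved context.
        return [s for s in split_sentences(answer) if not is_supported(context, s)]

    Because the judge is just a prompt around any chat model, a check like this can sit behind a small model-agnostic service, which is the deployment pattern the abstract suggests.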
