GPTScore: Evaluate as You Desire
Article Status
Published
Authors/contributors
- Fu, Jinlan (Author)
- Ng, See-Kiong (Author)
- Jiang, Zhengbao (Author)
- Liu, Pengfei (Author)
Title
GPTScore: Evaluate as You Desire
Abstract
Generative Artificial Intelligence (AI) has enabled the development of sophisticated models capable of producing high-caliber text, images, and other outputs by leveraging large pre-trained models. Nevertheless, assessing the quality of the generation is an even more arduous task than the generation itself, and this issue has not yet been given adequate consideration. This paper proposes a novel evaluation framework, GPTScore, which uses the emergent abilities (e.g., zero-shot instruction following) of generative pre-trained models to score generated texts. The paper explores 19 pre-trained models, ranging in size from 80M parameters (e.g., FLAN-T5-small) to 175B (e.g., GPT3). Experimental results on four text generation tasks, 22 evaluation aspects, and 37 corresponding datasets demonstrate that this approach lets one evaluate whatever aspects of a text one desires simply by providing natural language instructions. This property helps overcome several long-standing challenges in text evaluation: how to achieve customized, multi-faceted evaluation without the need for annotated samples. We make our code publicly available at https://github.com/jinlanfu/GPTScore.
Date
2023
Short Title
GPTScore
Accessed
27/10/2023, 17:11
Library Catalogue
DOI.org (Datacite)
Rights
arXiv.org perpetual, non-exclusive license
Extra
Publisher: arXiv
Version Number: 2
Citation
Fu, J., Ng, S.-K., Jiang, Z., & Liu, P. (2023). GPTScore: Evaluate as You Desire. https://doi.org/10.48550/ARXIV.2302.04166
Technical methods
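GPTScore frames evaluation as conditional generation: an instruction-following pre-trained language model is assumed to assign higher generation probability to higher-quality text. Given a task description d, an evaluation aspect a (e.g., fluency or informativeness), and the context S (e.g., the source document), a prompt T(d, a, S) is built, and the score of a hypothesis h = (h_1, ..., h_m) is its weighted average token log-probability under the model:

    GPTScore(h | d, a, S) = \sum_{t=1}^{m} w_t \log p(h_t | h_{<t}, T(d, a, S), \theta),

where \theta are the model parameters and the weights w_t are uniform in practice. Changing the evaluated aspect therefore only requires changing the instruction, with no annotated samples.

Below is a minimal sketch of this scoring recipe, assuming the Hugging Face transformers API and FLAN-T5-small (one of the 19 evaluators explored in the paper). The prompt wording and the helper name gpt_score are illustrative assumptions, not the paper's exact implementation; see the official repository linked above for that.

    import torch
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    MODEL_NAME = "google/flan-t5-small"  # 80M-parameter evaluator from the paper
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
    model.eval()

    def gpt_score(instruction: str, context: str, hypothesis: str) -> float:
        """Average log-probability of the hypothesis tokens given an
        instruction-plus-context prompt (uniform weights). Higher is better."""
        prompt = f"{instruction}\n{context}"
        inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
        labels = tokenizer(hypothesis, return_tensors="pt", truncation=True).input_ids
        with torch.no_grad():
            # For seq2seq models, .loss is the mean token-level cross-entropy
            # over `labels`, so its negation is the mean token log-probability.
            loss = model(**inputs, labels=labels).loss
        return -loss.item()

    # Example: score a candidate summary for the "informativeness" aspect.
    score = gpt_score(
        "Generate an informative summary for the following text:",
        "Storms cut power to thousands of homes across the region on Monday...",
        "Thousands lost power after storms hit the region.",
    )
    print(f"GPTScore: {score:.4f}")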