CIDEr-R: Robust Consensus-based Image Description Evaluation
Article Status
Published
Authors/contributors
- Santos, Gabriel Oliveira dos (Author)
- Colombini, Esther Luna (Author)
- Avila, Sandra (Author)
Title
CIDEr-R: Robust Consensus-based Image Description Evaluation
Abstract
This paper shows that CIDEr-D, a traditional evaluation metric for image description, does not work properly on datasets whose sentences are significantly longer than those in the MS COCO Captions dataset. We also show that CIDEr-D's performance is hampered by the lack of multiple reference sentences and by high variance in sentence length. To address these problems, we introduce CIDEr-R, which improves on CIDEr-D by making it more flexible in dealing with datasets with high sentence-length variance. We demonstrate that CIDEr-R is more accurate and closer to human judgment than CIDEr-D, and that it is more robust to the number of available references. Our results reveal that optimizing CIDEr-R with Self-Critical Sequence Training generates descriptive captions. In contrast, when CIDEr-D is optimized, the length of the generated captions tends to match the reference length, but the models repeat the same word several times to pad the sentence.
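The consensus scoring that CIDEr-D and CIDEr-R build on can be illustrated with a minimal sketch. The snippet below is a simplified, single-reference, single-n version of CIDEr-D-style similarity with its Gaussian length penalty (sigma = 6, as in CIDEr-D). The published metric additionally weights n-grams by corpus-level TF-IDF and averages over n = 1..4, and CIDEr-R replaces the length penalty that this sketch uses; the function names here are illustrative, not from the paper's code.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def cider_d_similarity(candidate, reference, n=1, sigma=6.0):
    """Illustrative single-reference CIDEr-D-style score for one n:
    cosine-style similarity of n-gram count vectors (candidate counts
    clipped by reference counts, as in CIDEr-D), scaled by a Gaussian
    penalty on the length difference. TF-IDF weighting and averaging
    over n = 1..4 are omitted for brevity."""
    c_tok, r_tok = candidate.split(), reference.split()
    c_vec = Counter(ngrams(c_tok, n))
    r_vec = Counter(ngrams(r_tok, n))
    # Numerator: clipped candidate counts times reference counts.
    dot = sum(min(c_vec[g], r_vec[g]) * r_vec[g] for g in c_vec)
    norm = (math.sqrt(sum(v * v for v in c_vec.values()))
            * math.sqrt(sum(v * v for v in r_vec.values())))
    if norm == 0:
        return 0.0
    # Gaussian length penalty: large length gaps shrink the score.
    penalty = math.exp(-((len(c_tok) - len(r_tok)) ** 2) / (2 * sigma ** 2))
    return penalty * dot / norm
```

The Gaussian penalty is the piece the abstract's findings hinge on: it rewards matching the reference length, which is why optimizing CIDEr-D can push models to pad captions with repeated words, while CIDEr-R relaxes this behavior for datasets with high sentence-length variance.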
Repository
arXiv
Archive ID
arXiv:2109.13701
Date
2021-09-28
Accessed
29/04/2024, 21:19
Short Title
CIDEr-R
Library Catalogue
Extra
arXiv:2109.13701 [cs]
Citation
Santos, G. O. dos, Colombini, E. L., & Avila, S. (2021). CIDEr-R: Robust Consensus-based Image Description Evaluation (arXiv:2109.13701). arXiv. https://doi.org/10.18653/v1/2021.wnut-1.39
Technical methods