Analyzing the Behavior of Visual Question Answering Models
Article Status
Published
Authors/contributors
- Agrawal, Aishwarya (Author)
- Batra, Dhruv (Author)
- Parikh, Devi (Author)
Title
Analyzing the Behavior of Visual Question Answering Models
Abstract
Recently, a number of deep-learning-based models have been proposed for the task of Visual Question Answering (VQA). The performance of most models is clustered around 60-70%. In this paper we propose systematic methods to analyze the behavior of these models as a first step towards recognizing their strengths and weaknesses, and identifying the most fruitful directions for progress. We analyze two models, one each from two major classes of VQA models -- with-attention and without-attention -- and show the similarities and differences in the behavior of these models. We also analyze the winning entry of the VQA Challenge 2016. Our behavior analysis reveals that despite recent progress, today's VQA models are "myopic" (tend to fail on sufficiently novel instances), often "jump to conclusions" (converge on a predicted answer after 'listening' to just half the question), and are "stubborn" (do not change their answers across images).
Date
2016
Accessed
11/12/2023, 03:37
Library Catalogue
DOI.org (Datacite)
Rights
arXiv.org perpetual, non-exclusive license
Extra
Publisher: arXiv
Version Number: 2
Citation Key: agrawal2016
Citation
Agrawal, A., Batra, D., & Parikh, D. (2016). Analyzing the Behavior of Visual Question Answering Models. https://doi.org/10.48550/ARXIV.1606.07356