Answer Questions with Right Image Regions: A Visual Attention
Regularization Approach
- URL: http://arxiv.org/abs/2102.01916v1
- Date: Wed, 3 Feb 2021 07:33:30 GMT
- Title: Answer Questions with Right Image Regions: A Visual Attention
Regularization Approach
- Authors: Yibing Liu, Yangyang Guo, Jianhua Yin, Xuemeng Song, Weifeng Liu,
Liqiang Nie
- Abstract summary: We propose a novel visual attention regularization approach, namely AttReg, for better visual grounding in Visual Question Answering (VQA).
AttReg identifies the image regions which are essential for question answering yet unexpectedly ignored by the backbone model.
It can achieve a new state-of-the-art accuracy of 59.92% with an absolute performance gain of 6.93% on the VQA-CP v2 benchmark dataset.
- Score: 46.55924742590242
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Visual attention in Visual Question Answering (VQA) aims to locate the
image regions that are relevant to answer prediction. However, recent studies
have pointed out that the image regions highlighted by visual attention are
often irrelevant to the given question and answer, which misleads the model and
hinders correct visual reasoning. To tackle this problem, existing methods
mostly resort to aligning the visual attention weights with human attention
annotations. Nevertheless, gathering such human data is laborious and
expensive, making it burdensome to adapt well-developed models across datasets.
To address this issue, we devise a novel visual attention regularization
approach, namely AttReg, for better visual grounding in VQA. Specifically,
AttReg first identifies the image regions which are essential for question
answering yet unexpectedly ignored (i.e., assigned low attention weights) by
the backbone model. A mask-guided learning scheme is then leveraged to
regularize the visual attention to focus more on these ignored key regions. The
proposed method is flexible and model-agnostic: it can be integrated into most
visual attention-based VQA models and requires no human attention supervision.
Extensive experiments on three benchmark datasets, i.e., VQA-CP v2, VQA-CP v1,
and VQA v2, have been conducted to evaluate the effectiveness of AttReg. As a
by-product, when AttReg is incorporated into the strong baseline LMH, our
approach achieves a new state-of-the-art accuracy of 59.92% on the VQA-CP v2
benchmark dataset, an absolute gain of 6.93%. Beyond this effectiveness
validation, we note that the faithfulness of visual attention in VQA has not
been well explored in the literature. In light of this, we empirically validate
this property of visual attention and compare it with the prevalent
gradient-based approaches.
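The abstract describes AttReg in two steps: find regions that are essential for answering but receive low attention, then use a mask-guided learning scheme to shift attention toward them. Since only this high-level description is given here, the sketch below is one plausible reading in PyTorch, not the paper's exact formulation: the region-importance scores, the attention threshold, the top-k choice, and the plain penalty term (standing in for the actual mask-guided scheme) are all illustrative assumptions.
```python
import torch

def attreg_style_loss(att_weights, importance, att_thresh=0.05, topk=3, reg_weight=1.0):
    """Illustrative attention-regularization term (hyperparameters and the
    importance scores are assumptions, not the paper's exact formulation).

    att_weights: [B, R] softmax-normalized visual attention over R region features.
    importance:  [B, R] proxy scores for how essential each region is to the
                 answer (e.g., overlap with answer-related objects), assumed given.
    """
    # Step 1: mark the top-k most important regions for each sample ...
    topk_idx = importance.topk(topk, dim=1).indices                  # [B, k]
    key_mask = torch.zeros_like(att_weights).scatter_(1, topk_idx, 1.0)

    # ... and keep only those the backbone largely ignores (low attention weight).
    ignored = key_mask * (att_weights < att_thresh).float()          # [B, R]

    # Step 2: penalize samples whose attention puts little mass on those
    # ignored key regions, pushing the attention back toward them.
    ignored_mass = (att_weights * ignored).sum(dim=1)                # [B]
    has_ignored = ignored.sum(dim=1).gt(0).float()                   # [B]
    reg = -torch.log(ignored_mass + 1e-6) * has_ignored
    return reg_weight * reg.mean()
```
In training, such a term would simply be added to the backbone's VQA loss with a weighting factor; no human attention supervision is involved.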
Related papers
- From Pixels to Objects: Cubic Visual Attention for Visual Question
Answering [132.95819467484517]
Recently, attention-based Visual Question Answering (VQA) has achieved great success by utilizing the question to attend to the visual areas that are related to the answer.
We propose a Cubic Visual Attention (CVA) model that applies novel channel and spatial attention to object regions to improve the VQA task.
Experimental results show that our proposed method significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-06-04T07:03:18Z) - REVIVE: Regional Visual Representation Matters in Knowledge-Based Visual
Question Answering [75.53187719777812]
This paper revisits visual representation in knowledge-based visual question answering (VQA).
We propose a new knowledge-based VQA method, REVIVE, which exploits the explicit information of object regions.
We achieve new state-of-the-art performance, i.e., 58.0% accuracy, surpassing the previous state-of-the-art method by a large margin.
arXiv Detail & Related papers (2022-06-02T17:59:56Z) - VQA-MHUG: A Gaze Dataset to Study Multimodal Neural Attention in Visual
Question Answering [15.017443876780286]
We present VQA-MHUG, a novel dataset of multimodal human gaze on both images and questions during visual question answering (VQA).
We use our dataset to analyze the similarity between human and neural attentive strategies learned by five state-of-the-art VQA models.
arXiv Detail & Related papers (2021-09-27T15:06:10Z) - Loss re-scaling VQA: Revisiting the Language Prior Problem from a
Class-imbalance View [129.392671317356]
We propose to interpret the language prior problem in VQA from a class-imbalance view.
It explicitly reveals why the VQA model tends to produce a frequent yet obviously wrong answer.
We also justify the validity of the class imbalance interpretation scheme on other computer vision tasks, such as face recognition and image classification.
arXiv Detail & Related papers (2020-10-30T00:57:17Z) - Regularizing Attention Networks for Anomaly Detection in Visual Question
Answering [10.971443035470488]
We evaluate the robustness of state-of-the-art VQA models to five different anomalies.
We propose an attention-based method, which uses the confidence of reasoning between input images and questions.
We show that a maximum entropy regularization of attention networks can significantly improve attention-based anomaly detection (a minimal sketch of such a regularizer appears after this list).
arXiv Detail & Related papers (2020-09-21T17:47:49Z) - Visual Grounding Methods for VQA are Working for the Wrong Reasons! [24.84797949716142]
We show that the performance improvements are not a result of improved visual grounding, but a regularization effect.
We propose a simpler regularization scheme that does not require any external annotations and yet achieves near state-of-the-art performance on VQA-CPv2.
arXiv Detail & Related papers (2020-04-12T21:45:23Z) - Counterfactual Samples Synthesizing for Robust Visual Question Answering [104.72828511083519]
We propose a model-agnostic Counterfactual Samples Synthesizing (CSS) training scheme.
CSS generates numerous counterfactual training samples by masking critical objects in images or words in questions.
We achieve a record-breaking performance of 58.95% on VQA-CP v2, with 6.5% gains.
arXiv Detail & Related papers (2020-03-14T08:34:31Z) - In Defense of Grid Features for Visual Question Answering [65.71985794097426]
We revisit grid features for visual question answering (VQA) and find they can work surprisingly well.
We verify that this observation holds true across different VQA models and generalizes well to other tasks like image captioning.
We learn VQA models end-to-end, from pixels directly to answers, and show that strong performance is achievable without using any region annotations in pre-training.
arXiv Detail & Related papers (2020-01-10T18:59:13Z)
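The maximum-entropy attention regularization mentioned in the anomaly-detection entry above can be sketched as subtracting a scaled Shannon entropy of the attention distribution from the training loss; how it is combined with that paper's confidence-based detector is omitted here, and the weight `lam` is an assumed hyperparameter.
```python
import torch

def attention_entropy(att, eps=1e-8):
    """Shannon entropy of softmax-normalized attention weights, per sample: [B, R] -> [B]."""
    return -(att * (att + eps).log()).sum(dim=-1)

def max_entropy_regularized_loss(task_loss, att, lam=0.1):
    """Minimizing this loss pushes the attention distribution toward higher entropy."""
    return task_loss - lam * attention_entropy(att).mean()
```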