Improving Visual Question Answering Models through Robustness Analysis
and In-Context Learning with a Chain of Basic Questions
- URL: http://arxiv.org/abs/2304.03147v1
- Date: Thu, 6 Apr 2023 15:32:35 GMT
- Title: Improving Visual Question Answering Models through Robustness Analysis
and In-Context Learning with a Chain of Basic Questions
- Authors: Jia-Hong Huang, Modar Alfadly, Bernard Ghanem, Marcel Worring
- Abstract summary: This work proposes a new method that utilizes semantically related questions, referred to as basic questions, acting as noise to evaluate the robustness of VQA models.
The experimental results demonstrate that the proposed evaluation method effectively analyzes the robustness of VQA models.
- Score: 70.70725223310401
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks have been critical in the task of Visual Question
Answering (VQA), with research traditionally focused on improving model
accuracy. Recently, however, there has been a trend towards evaluating the
robustness of these models against adversarial attacks. This involves assessing
the accuracy of VQA models under increasing levels of noise in the input, which
can target either the image or the proposed query question, dubbed the main
question. However, there is currently a lack of proper analysis of this aspect
of VQA. This work proposes a new method that utilizes semantically related
questions, referred to as basic questions, acting as noise to evaluate the
robustness of VQA models. It is hypothesized that as the similarity of a basic
question to the main question decreases, the level of noise increases. To
generate a reasonable noise level for a given main question, a pool of basic
questions is ranked based on their similarity to the main question, and this
ranking problem is cast as a LASSO optimization problem. Additionally, this
work proposes a novel robustness measure, R_score, and two basic question
datasets to standardize the analysis of VQA model robustness. The experimental
results demonstrate that the proposed evaluation method effectively analyzes
the robustness of VQA models. Moreover, the experiments show that in-context
learning with a chain of basic questions can enhance model accuracy.
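Below is a minimal sketch of the basic-question ranking step described in the abstract, under stated assumptions: the candidate basic questions and the main question are embedded with an off-the-shelf sentence encoder (sentence-transformers is an illustrative choice, not the paper's), the columns of the dictionary matrix are the basic-question embeddings, the target vector is the main-question embedding, and the sparse coefficients of a LASSO fit are used as similarity scores. The regularization strength is a placeholder; this is not the authors' implementation.

```python
# Sketch: rank a pool of basic questions by similarity to a main question
# by solving  min_x ||A x - b||_2^2 + alpha * ||x||_1,
# where the columns of A are basic-question embeddings and b is the
# main-question embedding. Larger coefficients = more similar questions.
import numpy as np
from sklearn.linear_model import Lasso
from sentence_transformers import SentenceTransformer  # assumed encoder choice

def rank_basic_questions(main_question, basic_questions, alpha=1e-3):
    """Return (basic question, weight) pairs sorted by LASSO weight, descending."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")   # hypothetical model
    b = encoder.encode([main_question])[0]               # target vector, shape (d,)
    A = encoder.encode(basic_questions).T                # dictionary, shape (d, n)

    # Non-negative LASSO gives a sparse, interpretable weighting of the pool.
    lasso = Lasso(alpha=alpha, positive=True, max_iter=10_000)
    lasso.fit(A, b)
    weights = lasso.coef_

    order = np.argsort(-weights)
    return [(basic_questions[i], float(weights[i])) for i in order]

if __name__ == "__main__":
    main_q = "What color is the dog on the sofa?"
    pool = [
        "Is there a dog in the picture?",
        "What animal is on the sofa?",
        "How many people are in the room?",
        "Is it raining outside?",
    ]
    for question, weight in rank_basic_questions(main_q, pool):
        print(f"{weight:.4f}  {question}")
```

The top-ranked questions from such a fit could then be concatenated with the main question, from most to least similar, to form the noisy input used in the robustness analysis, or supplied as the chain of basic questions for in-context learning; the exact concatenation format is not specified in the abstract.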
Related papers
- QADYNAMICS: Training Dynamics-Driven Synthetic QA Diagnostic for
Zero-Shot Commonsense Question Answering [48.25449258017601]
State-of-the-art approaches fine-tune language models on QA pairs constructed from CommonSense Knowledge Bases.
We propose QADYNAMICS, a training dynamics-driven framework for QA diagnostics and refinement.
arXiv Detail & Related papers (2023-10-17T14:27:34Z)
- Knowledge-Based Counterfactual Queries for Visual Question Answering [0.0]
We propose a systematic method for explaining the behavior and investigating the robustness of VQA models through counterfactual perturbations.
For this reason, we exploit structured knowledge bases to perform deterministic, optimal and controllable word-level replacements targeting the linguistic modality.
We then evaluate the model's response against such counterfactual inputs.
arXiv Detail & Related papers (2023-03-05T08:00:30Z)
- Synthetic Question Value Estimation for Domain Adaptation of Question Answering [31.003053719921628]
We introduce a novel idea of training a question value estimator (QVE) that directly estimates the usefulness of synthetic questions for improving the target-domain QA performance.
By using such questions and only around 15% of the human annotations on the target domain, we can achieve comparable performance to the fully-supervised baselines.
arXiv Detail & Related papers (2022-03-16T20:22:31Z)
- Loss Re-Scaling VQA: Revisiting the Language Prior Problem from a Class-Imbalance View [129.392671317356]
We propose to interpret the language prior problem in VQA from a class-imbalance view.
It explicitly reveals why the VQA model tends to produce a frequent yet obviously wrong answer.
We also justify the validity of the class imbalance interpretation scheme on other computer vision tasks, such as face recognition and image classification.
arXiv Detail & Related papers (2020-10-30T00:57:17Z)
- SOrT-ing VQA Models: Contrastive Gradient Learning for Improved Consistency [64.67155167618894]
We present a gradient-based interpretability approach to determine the questions most strongly correlated with the reasoning question on an image.
Next, we propose a contrastive gradient learning based approach called Sub-question Oriented Tuning (SOrT), which encourages models to rank relevant sub-questions higher than irrelevant questions for an <image, reasoning-question> pair.
We show that SOrT improves model consistency by up to 6.5 percentage points over existing baselines, while also improving visual grounding.
arXiv Detail & Related papers (2020-10-20T05:15:48Z)
- Contrast and Classify: Training Robust VQA Models [60.80627814762071]
We propose a novel training paradigm (ConClaT) that optimizes both cross-entropy and contrastive losses.
We find that optimizing both losses -- either alternately or jointly -- is key to effective training.
arXiv Detail & Related papers (2020-10-13T00:23:59Z)
- SQuINTing at VQA Models: Introspecting VQA Models with Sub-Questions [66.86887670416193]
We show that state-of-the-art VQA models have comparable performance in answering perception and reasoning questions, but suffer from consistency problems.
To address this shortcoming, we propose an approach called Sub-Question-aware Network Tuning (SQuINT).
We show that SQuINT improves model consistency by 5% and marginally improves performance on the reasoning questions in VQA, while also displaying better attention maps.
arXiv Detail & Related papers (2020-01-20T01:02:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.