Improving Selective Visual Question Answering by Learning from Your
Peers
- URL: http://arxiv.org/abs/2306.08751v1
- Date: Wed, 14 Jun 2023 21:22:01 GMT
- Title: Improving Selective Visual Question Answering by Learning from Your
Peers
- Authors: Corentin Dancette, Spencer Whitehead, Rishabh Maheshwary, Ramakrishna
Vedantam, Stefan Scherer, Xinlei Chen, Matthieu Cord, Marcus Rohrbach
- Abstract summary: Visual Question Answering (VQA) models can have difficulty abstaining from answering when they are wrong.
We propose the Learning from Your Peers (LYP) approach for training multimodal selection functions that make abstention decisions.
Our approach uses predictions from models trained on distinct subsets of the training data as targets for optimizing a Selective VQA model.
- Score: 74.20167944693424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite advances in Visual Question Answering (VQA), the ability of models to
assess their own correctness remains underexplored. Recent work has shown that
VQA models, out of the box, can have difficulty abstaining from answering
when they are wrong. The option to abstain, also called Selective Prediction,
is highly relevant when deploying systems to users who must trust the system's
output (e.g., VQA assistants for users with visual impairments). For such
scenarios, abstention can be especially important as users may provide
out-of-distribution (OOD) or adversarial inputs that make incorrect answers
more likely. In this work, we explore Selective VQA in both in-distribution
(ID) and OOD scenarios, where models are presented with mixtures of ID and OOD
data. The goal is to maximize the number of questions answered while minimizing
the risk of error on those questions. We propose a simple yet effective
Learning from Your Peers (LYP) approach for training multimodal selection
functions for making abstention decisions. Our approach uses predictions from
models trained on distinct subsets of the training data as targets for
optimizing a Selective VQA model. It does not require additional manual labels
or held-out data and provides a signal for identifying examples that are
easy/difficult to generalize to. In our extensive evaluations, we show this
benefits a number of models across different architectures and scales. Overall,
for ID, we reach 32.92% on the selective prediction metric of coverage at 1% risk
of error (C@1%), which doubles the previous best coverage of 15.79% on this
task. For mixed ID/OOD, using models' softmax confidences for abstention
decisions performs very poorly, answering <5% of questions at 1% risk of error
even when faced with only 10% OOD examples, but a learned selection function
with LYP can increase that to 25.38% C@1%.
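The abstract above describes the core LYP recipe: predictions from peer models, each trained on a distinct subset of the training data, provide the targets for optimizing a selection function that decides when to answer and when to abstain. The sketch below is one simple reading of that recipe under stated assumptions, not the authors' implementation: generic feature vectors and logistic-regression models stand in for the multimodal VQA models and selection functions, and train_peer_model is a hypothetical helper. A companion sketch of the coverage-at-risk metric appears after the related papers list.

```python
# Illustrative sketch of a "Learning from Your Peers"-style target construction:
# for each training example, a peer model trained WITHOUT that example indicates
# whether the example is easy or hard to generalize to, and a selection function
# is trained to predict that correctness signal. All names here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold


def peer_correctness_targets(features, labels, train_peer_model, n_splits=5, seed=0):
    """Return, for every example, 1.0 if a peer model trained on the other
    folds answers it correctly and 0.0 otherwise."""
    targets = np.zeros(len(labels), dtype=np.float32)
    folds = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, held_out_idx in folds.split(features):
        peer = train_peer_model(features[train_idx], labels[train_idx])
        preds = peer.predict(features[held_out_idx])
        targets[held_out_idx] = (preds == labels[held_out_idx]).astype(np.float32)
    return targets


# Toy stand-in for a VQA task: 32-d "features" and binary "answers".
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 32))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)

def train_peer_model(X_train, y_train):
    # Stand-in for training a full VQA model on one subset of the data.
    return LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 1) Peer targets: which examples do the peers generalize to correctly?
peer_targets = peer_correctness_targets(X, y, train_peer_model)

# 2) Selection function: predict the probability that an answer will be correct,
#    so the deployed system can abstain when that probability is low.
selector = LogisticRegression(max_iter=1000).fit(X, peer_targets.astype(int))
answerable_prob = selector.predict_proba(X)[:, 1]
print("fraction judged safe to answer at a 0.9 threshold:", (answerable_prob > 0.9).mean())
```

In the paper's setting the selector would score the deployed model's predicted answer from multimodal features; the peer-derived targets simply remove the need for extra manual labels or held-out data when constructing that training signal.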
Related papers
- Uncertainty-aware Language Modeling for Selective Question Answering [107.47864420630923]
We present an automatic large language model (LLM) conversion approach that produces uncertainty-aware LLMs.
Our approach is model- and data-agnostic, is computationally efficient, and does not rely on external models or systems.
arXiv Detail & Related papers (2023-11-26T22:47:54Z)
- UNK-VQA: A Dataset and a Probe into the Abstention Ability of Multi-modal Large Models [55.22048505787125]
This paper contributes a comprehensive dataset, called UNK-VQA.
We first augment the existing data via deliberate perturbations on either the image or question.
We then extensively evaluate the zero- and few-shot performance of several emerging multi-modal large models.
arXiv Detail & Related papers (2023-10-17T02:38:09Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Language Models (Mostly) Know What They Know [10.836210010868932]
We study whether language models can evaluate the validity of their own claims and predict which questions they will be able to answer correctly.
We investigate whether models can be trained to predict "P(IK)", the probability that "I know" the answer to a question, without reference to any particular proposed answer.
arXiv Detail & Related papers (2022-07-11T22:59:39Z)
- Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly [100.60560477391732]
We promote a problem formulation for reliable visual question answering (VQA).
We analyze models' coverage, the portion of questions answered, and risk, the error on that portion.
We find that although the best-performing models achieve over 71% accuracy on the VQA v2 dataset, introducing the option to abstain limits them to answering less than 8% of the questions to achieve a low risk of error (i.e., 1%).
This motivates us to utilize a multimodal selection function to directly estimate the correctness of the predicted answers, which we show can triple the coverage from, for example, 5.0% to 16.7% at the same 1% risk.
arXiv Detail & Related papers (2022-04-28T16:51:27Z)
- Selective Question Answering under Domain Shift [90.021577320085]
Abstention policies based solely on the model's softmax probabilities fare poorly, since models are overconfident on out-of-domain inputs.
We train a calibrator to identify inputs on which the QA model errs, and abstain when it predicts an error is likely.
Our method answers 56% of questions while maintaining 80% accuracy; in contrast, directly using the model's probabilities only answers 48% at 80% accuracy.
arXiv Detail & Related papers (2020-06-16T19:13:21Z)
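Both the LYP abstract above and the "Reliable Visual Question Answering" entry evaluate selective prediction with coverage at a fixed risk of error, e.g. C@1%: the largest fraction of questions answered while keeping the error rate on the answered questions at or below 1%. The following is a minimal sketch of that metric, assuming only hypothetical per-question correctness labels and a selector confidence used to rank which questions to answer; it is not code from any of the papers listed.

```python
# Minimal sketch of coverage at risk (e.g., C@1%): answer questions in order of
# decreasing selector confidence and report the largest coverage whose empirical
# error rate on the answered questions stays at or below the risk budget.
import numpy as np


def coverage_at_risk(correct, confidence, max_risk=0.01):
    order = np.argsort(-np.asarray(confidence))    # most confident first
    correct = np.asarray(correct, dtype=float)[order]
    answered = np.arange(1, len(correct) + 1)      # k most confident answered
    risk = np.cumsum(1.0 - correct) / answered     # error rate among answered
    within_budget = risk <= max_risk
    return answered[within_budget].max() / len(correct) if within_budget.any() else 0.0


# Toy usage: an informative confidence yields far higher C@1% than a random one.
rng = np.random.default_rng(0)
p_correct = rng.uniform(0.5, 1.0, size=5000)       # hypothetical per-question accuracy
correct = rng.uniform(size=5000) < p_correct
print("C@1% with an informative selector:", coverage_at_risk(correct, p_correct))
print("C@1% with a random selector:      ", coverage_at_risk(correct, rng.uniform(size=5000)))
```

This is the quantity the LYP results quote (e.g., 32.92% C@1% in-distribution), and the same threshold sweep applies whether the confidence comes from softmax probabilities or from a learned selection function.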