VQA Therapy: Exploring Answer Differences by Visually Grounding Answers
- URL: http://arxiv.org/abs/2308.11662v2
- Date: Thu, 24 Aug 2023 23:58:50 GMT
- Title: VQA Therapy: Exploring Answer Differences by Visually Grounding Answers
- Authors: Chongyan Chen, Samreen Anjum, Danna Gurari
- Abstract summary: We introduce the first dataset that visually grounds each unique answer to each visual question.
We then propose two novel problems: predicting whether a visual question has a single answer grounding, and localizing all answer groundings.
- Score: 21.77545853313608
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visual question answering is a task of predicting the answer to a question
about an image. Given that different people can provide different answers to a
visual question, we aim to better understand why with answer groundings. We
introduce the first dataset that visually grounds each unique answer to each
visual question, which we call VQAAnswerTherapy. We then propose two novel
problems of predicting whether a visual question has a single answer grounding
and localizing all answer groundings. We benchmark modern algorithms for these
novel problems to show where they succeed and struggle. The dataset and
evaluation server can be found publicly at
https://vizwiz.org/tasks-and-datasets/vqa-answer-therapy/.
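The first proposed problem is to predict whether a visual question has a single answer grounding. For illustration only, the sketch below shows one way such a decision could be made from annotated grounding masks; the boolean-mask format, the pairwise-IoU criterion, and the 0.5 threshold are assumptions for this example, not the dataset's actual definition or the authors' method.

```python
# Minimal sketch (not the authors' code): given the answer groundings for one
# visual question as boolean segmentation masks, decide whether the question
# has a single answer grounding by checking pairwise mask overlap (IoU).
# Mask format, IoU criterion, and threshold are assumptions for illustration.
import itertools
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks of equal shape."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / union if union > 0 else 0.0

def has_single_grounding(masks: list[np.ndarray], iou_threshold: float = 0.5) -> bool:
    """Treat the question as singly grounded if every pair of answer
    groundings overlaps by at least `iou_threshold` (assumed criterion)."""
    if len(masks) <= 1:
        return True
    return all(iou(a, b) >= iou_threshold
               for a, b in itertools.combinations(masks, 2))

# Example: two answers grounded to the same region -> single grounding.
m1 = np.zeros((4, 4), dtype=bool); m1[1:3, 1:3] = True
m2 = np.zeros((4, 4), dtype=bool); m2[1:3, 1:3] = True
print(has_single_grounding([m1, m2]))  # True
```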
Related papers
- Ask Questions with Double Hints: Visual Question Generation with Answer-awareness and Region-reference [107.53380946417003]
We propose a novel learning paradigm to generate visual questions with answer-awareness and region-reference.
We develop a simple methodology to self-learn the visual hints without introducing any additional human annotations.
arXiv Detail & Related papers (2024-07-06T15:07:32Z)
- Equivariant and Invariant Grounding for Video Question Answering [68.33688981540998]
Most leading VideoQA models work as black boxes, which makes the visual-linguistic alignment behind the answering process obscure.
We devise a self-interpretable framework, Equivariant and Invariant Grounding for Interpretable VideoQA (EIGV).
EIGV is able to distinguish the causal scene from the environment information, and explicitly present the visual-linguistic alignment.
arXiv Detail & Related papers (2022-07-26T10:01:02Z)
- A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge [39.788346536244504]
A-OKVQA is a crowdsourced dataset composed of about 25K questions.
We demonstrate the potential of this new dataset through a detailed analysis of its contents.
arXiv Detail & Related papers (2022-06-03T17:52:27Z)
- Grounding Answers for Visual Questions Asked by Visually Impaired People [16.978747012406266]
VizWiz-VQA-Grounding is the first dataset that visually grounds answers to visual questions asked by people with visual impairments.
We analyze our dataset and compare it with five VQA-Grounding datasets to demonstrate what makes it similar and different.
arXiv Detail & Related papers (2022-02-04T06:47:16Z)
- AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization [73.91543616777064]
Community Question Answering (CQA) fora such as Stack Overflow and Yahoo! Answers contain a rich resource of answers to a wide range of community-based questions.
One goal of answer summarization is to produce a summary that reflects the range of answer perspectives.
This work introduces a novel dataset of 4,631 CQA threads for answer summarization, curated by professional linguists.
arXiv Detail & Related papers (2021-11-11T21:48:02Z)
- Check It Again: Progressive Visual Question Answering via Visual Entailment [12.065178204539693]
We propose a select-and-rerank (SAR) progressive framework based on Visual Entailment.
We first select the candidate answers relevant to the question or the image, and then rerank them via a visual entailment task.
Experimental results show the effectiveness of our proposed framework, which establishes a new state-of-the-art accuracy on VQA-CP v2 with a 7.55% improvement.
arXiv Detail & Related papers (2021-06-08T18:00:38Z)
- CLEVR_HYP: A Challenge Dataset and Baselines for Visual Question Answering with Hypothetical Actions over Images [31.317663183139384]
We take visual understanding to a higher level where systems are challenged to answer questions that involve mentally simulating the hypothetical consequences of performing specific actions in a given scenario.
We formulate a vision-language question answering task based on the CLEVR dataset.
arXiv Detail & Related papers (2021-04-13T07:29:21Z)
- Graph-Based Tri-Attention Network for Answer Ranking in CQA [56.42018099917321]
We propose a novel graph-based tri-attention network, namely GTAN, to generate answer ranking scores.
Experiments on three real-world CQA datasets demonstrate GTAN significantly outperforms state-of-the-art answer ranking methods.
arXiv Detail & Related papers (2021-03-05T10:40:38Z)
- Knowledge-Routed Visual Question Reasoning: Challenges for Deep Representation Embedding [140.5911760063681]
We propose a novel dataset named Knowledge-Routed Visual Question Reasoning for VQA model evaluation.
We generate the question-answer pair based on both the Visual Genome scene graph and an external knowledge base with controlled programs.
arXiv Detail & Related papers (2020-12-14T00:33:44Z)
- Answer-Driven Visual State Estimator for Goal-Oriented Visual Dialogue [42.563261906213455]
We propose an Answer-Driven Visual State Estimator (ADVSE) to impose the effects of different answers on visual states.
First, we propose an Answer-Driven Focusing Attention (ADFA) to capture the answer-driven effect on visual attention.
Then, based on the focusing attention, we obtain the visual state estimate via Conditional Visual Information Fusion (CVIF).
arXiv Detail & Related papers (2020-10-01T12:46:38Z)
- Visual Question Answering on Image Sets [70.4472272672716]
We introduce the task of Image-Set Visual Question Answering (ISVQA), which generalizes the commonly studied single-image VQA problem to multi-image settings.
Taking a natural language question and a set of images as input, it aims to answer the question based on the content of the images.
The questions can be about objects and relationships in one or more images or about the entire scene depicted by the image set.
arXiv Detail & Related papers (2020-08-27T08:03:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.