Improving Zero-shot Visual Question Answering via Large Language Models
with Reasoning Question Prompts
- URL: http://arxiv.org/abs/2311.09050v1
- Date: Wed, 15 Nov 2023 15:40:46 GMT
- Title: Improving Zero-shot Visual Question Answering via Large Language Models
with Reasoning Question Prompts
- Authors: Yunshi Lan, Xiang Li, Xin Liu, Yang Li, Wei Qin and Weining Qian
- Abstract summary: We present Reasoning Question Prompts for VQA tasks, which can further activate the potential of Large Language Models.
We generate self-contained questions as reasoning question prompts via an unsupervised question edition module.
Each reasoning question prompt clearly indicates the intent of the original question.
Then, the candidate answers, together with their confidence scores, act as answer heuristics and are fed into LLMs.
- Score: 22.669502403623166
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Zero-shot Visual Question Answering (VQA) is a prominent vision-language task
that examines both the visual and textual understanding capability of systems
in the absence of training data. Recently, by converting the images into
captions, information across modalities is bridged and Large Language
Models (LLMs) can apply their strong zero-shot generalization capability to
unseen questions. To design ideal prompts for solving VQA via LLMs, several
studies have explored different strategies to select or generate
question-answer pairs as the exemplar prompts, which guide LLMs to answer the
current questions effectively. However, these studies overlook the role of the
question prompts themselves. The original questions in VQA tasks often contain
ellipsis and ambiguity, which require intermediate reasoning to resolve. To this
end, we
present Reasoning Question Prompts for VQA tasks, which can further activate
the potential of LLMs in zero-shot scenarios. Specifically, for each question,
we first generate self-contained questions as reasoning question prompts via an
unsupervised question edition module considering sentence fluency, semantic
integrity and syntactic invariance. Each reasoning question prompt clearly
indicates the intent of the original question. This results in a set of
candidate answers. Then, the candidate answers associated with their confidence
scores acting as answer heuristics are fed into LLMs and produce the final
answer. We evaluate reasoning question prompts on three VQA challenges;
experimental results demonstrate that they significantly improve the performance
of LLMs in the zero-shot setting and outperform existing state-of-the-art
zero-shot methods on three out of the four datasets. Our source code is publicly
released at https://github.com/ECNU-DASE-NLP/RQP.
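As a rough illustration of the pipeline described in the abstract, the sketch below strings the three stages together: editing the original question into self-contained reasoning question prompts, collecting candidate answers with confidence scores as answer heuristics, and making a final LLM call. All function names (`rewrite_question`, `answer_candidates`, `complete`) are hypothetical placeholders, not the API of the released RQP code.

```python
# Minimal sketch of the zero-shot VQA pipeline described in the abstract.
# All callables are hypothetical placeholders, not the released RQP API.
from typing import Callable, List, Tuple


def build_prompt(caption: str,
                 question: str,
                 reasoning_prompts: List[str],
                 heuristics: List[Tuple[str, float]]) -> str:
    """Assemble an LLM prompt from the image caption, the self-contained
    reasoning question prompts, and candidate answers with confidence
    scores acting as answer heuristics."""
    lines = [f"Context: {caption}"]
    for rq in reasoning_prompts:
        lines.append(f"Reasoning question: {rq}")
    for answer, score in heuristics:
        lines.append(f"Candidate answer: {answer} (confidence {score:.2f})")
    lines.append(f"Question: {question}")
    lines.append("Answer:")
    return "\n".join(lines)


def answer_with_rqp(caption: str,
                    question: str,
                    rewrite_question: Callable[[str], List[str]],
                    answer_candidates: Callable[[str], List[Tuple[str, float]]],
                    complete: Callable[[str], str]) -> str:
    # 1. Edit the original question into self-contained reasoning prompts.
    reasoning_prompts = rewrite_question(question)
    # 2. Collect candidate answers and confidence scores for each prompt.
    heuristics = [cand for rq in reasoning_prompts
                  for cand in answer_candidates(rq)]
    # 3. Let the LLM produce the final answer from the assembled prompt.
    return complete(build_prompt(caption, question, reasoning_prompts, heuristics))
```

In the paper's setting, the question edition step additionally scores candidate rewrites for sentence fluency, semantic integrity, and syntactic invariance; that scoring is omitted from this sketch.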
Related papers
- Declarative Knowledge Distillation from Large Language Models for Visual Question Answering Datasets [9.67464173044675]
Visual Question Answering (VQA) is the task of answering a question about an image.
We present an approach for declarative knowledge distillation from Large Language Models (LLMs).
Our results confirm that distilling knowledge from LLMs is in fact a promising direction besides data-driven rule learning approaches.
arXiv Detail & Related papers (2024-10-12T08:17:03Z)
- Multi-LLM QA with Embodied Exploration [55.581423861790945]
We investigate the use of Multi-Embodied LLM Explorers (MELE) for question-answering in an unknown environment.
Multiple LLM-based agents independently explore and then answer queries about a household environment.
We analyze different aggregation methods to generate a single, final answer for each query.
arXiv Detail & Related papers (2024-06-16T12:46:40Z)
- Chain-of-Discussion: A Multi-Model Framework for Complex Evidence-Based Question Answering [55.295699268654545]
We propose a novel Chain-of-Discussion framework to leverage the synergy among open-source Large Language Models.
Our experiments show that discussions among multiple LLMs play a vital role in enhancing the quality of answers.
arXiv Detail & Related papers (2024-02-26T05:31:34Z)
- Enhancing Answer Selection in Community Question Answering with Pre-trained and Large Language Models [0.9065034043031668]
We first propose the Question-Answer cross attention networks (QAN) with pre-trained models for answer selection.
We then utilize a large language model (LLM) to perform answer selection with knowledge augmentation.
Experiments show that the QAN model achieves state-of-the-art performance on two datasets, SemEval2015 and SemEval2017.
arXiv Detail & Related papers (2023-11-29T10:24:50Z)
- Towards Top-Down Reasoning: An Explainable Multi-Agent Approach for Visual Question Answering [45.88079503965459]
This work introduces a novel, explainable multi-agent collaboration framework by leveraging the expansive knowledge of Large Language Models (LLMs) to enhance the capabilities of Vision Language Models (VLMs).
arXiv Detail & Related papers (2023-11-29T03:10:42Z)
- Open-Set Knowledge-Based Visual Question Answering with Inference Paths [79.55742631375063]
The purpose of Knowledge-Based Visual Question Answering (KB-VQA) is to provide a correct answer to the question with the aid of external knowledge bases.
We propose a new retriever-ranker paradigm of KB-VQA, Graph pATH rankER (GATHER for brevity).
Specifically, it contains graph constructing, pruning, and path-level ranking, which not only retrieves accurate answers but also provides inference paths that explain the reasoning process.
arXiv Detail & Related papers (2023-10-12T09:12:50Z)
- Rephrase, Augment, Reason: Visual Grounding of Questions for Vision-Language Models [59.05769810380928]
Rephrase, Augment and Reason (RepARe) is a gradient-free framework that extracts salient details about the image using the underlying vision-language model.
We show that RepARe yields a 3.85% (absolute) increase in zero-shot accuracy on VQAv2, and gains of 6.41 and 7.94 percentage points on A-OKVQA and VizWiz, respectively.
arXiv Detail & Related papers (2023-10-09T16:57:57Z)
- From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models [111.42052290293965]
Large language models (LLMs) have demonstrated excellent zero-shot generalization to new language tasks.
End-to-end training on vision and language data may bridge the disconnections, but is inflexible and computationally expensive.
We propose Img2Prompt, a plug-and-play module that provides the prompts that can bridge the aforementioned modality and task disconnections.
arXiv Detail & Related papers (2022-12-21T08:39:36Z)
- Successive Prompting for Decomposing Complex Questions [50.00659445976735]
Recent works leverage the capabilities of large language models (LMs) to perform complex question answering in a few-shot setting.
We introduce "Successive Prompting", where we iteratively break down a complex task into a simple task, solve it, and then repeat the process until we get the final solution.
Our best model (with successive prompting) achieves an improvement of 5% absolute F1 on a few-shot version of the DROP dataset.
arXiv Detail & Related papers (2022-12-08T06:03:38Z)
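As a rough, hypothetical sketch of the successive-prompting loop summarized in the last entry: the model repeatedly proposes and answers a simpler sub-question until it is ready to answer the original one. The `ask_llm` callable stands in for any LLM completion call and is not an API from that paper.

```python
# Hypothetical sketch of a successive-prompting loop: the LLM proposes a
# simpler sub-question, answers it, and stops when ready to answer the
# original complex question. `ask_llm` is a generic completion placeholder.
from typing import Callable, List


def successive_prompting(question: str,
                         ask_llm: Callable[[str], str],
                         max_steps: int = 5) -> str:
    history: List[str] = []
    for _ in range(max_steps):
        context = "\n".join(history)
        sub_q = ask_llm(f"{context}\nComplex question: {question}\n"
                        "Next simple sub-question (or FINAL if ready):")
        if sub_q.strip().upper().startswith("FINAL"):
            break
        sub_a = ask_llm(f"{context}\nQuestion: {sub_q}\nAnswer:")
        history.append(f"Q: {sub_q}\nA: {sub_a}")
    # Answer the original question using the accumulated sub-question history.
    return ask_llm("\n".join(history) +
                   f"\nComplex question: {question}\nFinal answer:")
```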