AVIS: Autonomous Visual Information Seeking with Large Language Model Agent
- URL: http://arxiv.org/abs/2306.08129v3
- Date: Thu, 2 Nov 2023 07:54:37 GMT
- Title: AVIS: Autonomous Visual Information Seeking with Large Language Model Agent
- Authors: Ziniu Hu, Ahmet Iscen, Chen Sun, Kai-Wei Chang, Yizhou Sun, David A Ross, Cordelia Schmid, Alireza Fathi
- Abstract summary: We propose an autonomous information seeking visual question answering framework, AVIS.
Our method leverages a Large Language Model (LLM) to dynamically strategize the utilization of external tools.
AVIS achieves state-of-the-art results on knowledge-intensive visual question answering benchmarks such as Infoseek and OK-VQA.
- Score: 123.75169211547149
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose an autonomous information seeking visual question
answering framework, AVIS. Our method leverages a Large Language Model (LLM) to
dynamically strategize the utilization of external tools and to investigate
their outputs, thereby acquiring the indispensable knowledge needed to provide
answers to the posed questions. Responding to visual questions that necessitate
external knowledge, such as "What event is commemorated by the building
depicted in this image?", is a complex task. This task presents a combinatorial
search space that demands a sequence of actions, including invoking APIs,
analyzing their responses, and making informed decisions. We conduct a user
study to collect a variety of instances of human decision-making when faced
with this task. This data is then used to design a system composed of three
components: an LLM-powered planner that dynamically determines which tool to
use next, an LLM-powered reasoner that analyzes and extracts key information
from the tool outputs, and a working memory component that retains the acquired
information throughout the process. The collected user behavior serves as a
guide for our system in two key ways. First, we create a transition graph by
analyzing the sequence of decisions made by users. This graph delineates
distinct states and confines the set of actions available at each state.
Second, we use examples of user decision-making to provide our LLM-powered
planner and reasoner with relevant contextual instances, enhancing their
capacity to make informed decisions. We show that AVIS achieves
state-of-the-art results on knowledge-intensive visual question answering
benchmarks such as Infoseek and OK-VQA.
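The abstract above outlines the moving parts of AVIS: a transition graph that limits which tools can be invoked in each state, an LLM-powered planner that picks the next tool, an LLM-powered reasoner that distills the tool output, and a working memory that accumulates what has been learned. The Python sketch below only illustrates how such a loop could be wired together under those constraints; the state names, tool names, and the call_llm/call_tool helpers are hypothetical placeholders and do not reproduce the paper's actual prompts, tools, or transition graph.

```python
# Illustrative sketch of an AVIS-style planner/reasoner loop.
# All state names, tool names, and helpers are hypothetical placeholders.
from dataclasses import dataclass, field

# Transition graph derived (in the paper) from user-study behavior: each state
# lists the only actions the planner is allowed to choose from in that state.
TRANSITION_GRAPH = {
    "START": ["image_search", "object_detection", "image_caption"],
    "ENTITY_FOUND": ["web_search", "knowledge_base_lookup"],
    "FACT_FOUND": ["answer"],
}

@dataclass
class WorkingMemory:
    """Retains the information acquired throughout the process."""
    question: str
    facts: list = field(default_factory=list)

    def add(self, fact: str) -> None:
        self.facts.append(fact)

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; not specified by the paper."""
    raise NotImplementedError

def call_tool(tool: str, memory: WorkingMemory) -> str:
    """Placeholder for invoking an external tool/API with the current context."""
    raise NotImplementedError

def avis_loop(question: str, in_context_examples: str, max_steps: int = 8) -> str:
    memory = WorkingMemory(question=question)
    state = "START"
    for _ in range(max_steps):
        allowed = TRANSITION_GRAPH[state]  # the graph confines the action set
        # LLM-powered planner: pick the next tool, guided by user-decision examples.
        tool = call_llm(
            f"{in_context_examples}\nQuestion: {question}\n"
            f"Known facts: {memory.facts}\nChoose one of {allowed}:"
        ).strip()
        if tool not in allowed:
            tool = allowed[0]  # fall back to a permitted action
        if tool == "answer":
            return call_llm(f"Answer '{question}' using facts: {memory.facts}")
        output = call_tool(tool, memory)
        # LLM-powered reasoner: extract key information from the tool output.
        extracted = call_llm(f"Extract facts relevant to '{question}' from: {output}")
        memory.add(extracted)
        # Advance the state (simplified; a real system would decide this from the output).
        state = "ENTITY_FOUND" if state == "START" else "FACT_FOUND"
    return call_llm(f"Best-effort answer to '{question}' from facts: {memory.facts}")
```

In the paper, both the transition graph and the in-context examples given to the planner and reasoner are derived from the user study, which is what keeps the combinatorial search over tool sequences tractable.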
Related papers
- Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent [102.31558123570437]
Multimodal Retrieval Augmented Generation (mRAG) plays an important role in mitigating the "hallucination" issue inherent in multimodal large language models (MLLMs).
We propose the first self-adaptive planning agent for multimodal retrieval, OmniSearch.
arXiv Detail & Related papers (2024-11-05T09:27:21Z)
- HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning [10.80288566599934]
HYDRA is a compositional visual reasoning framework for reliable and incrementally progressive general reasoning.
Our framework demonstrates state-of-the-art performance in various VR tasks on four different widely-used datasets.
arXiv Detail & Related papers (2024-03-19T16:31:30Z)
- DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models (Exemplified as A Video Agent) [73.10899129264375]
This paper explores DoraemonGPT, a comprehensive and conceptually elegant system driven by LLMs to understand dynamic scenes.
Given a video with a question/task, DoraemonGPT begins by converting the input video into a symbolic memory that stores task-related attributes.
We extensively evaluate DoraemonGPT's effectiveness on three benchmarks and several in-the-wild scenarios.
arXiv Detail & Related papers (2024-01-16T14:33:09Z)
- DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z)
- Analyzing the Efficacy of an LLM-Only Approach for Image-based Document Question Answering [12.064056743478865]
We study the relative contributions of the vision encoder and the language model in document question answering tasks.
Our comprehensive analysis encompasses six diverse benchmark datasets, utilizing LLMs of varying scales.
Our findings reveal that a strategy exclusively reliant on the LLM yields results that are on par with or closely approach state-of-the-art performance.
arXiv Detail & Related papers (2023-09-25T07:01:16Z)
- AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn [25.510696745075688]
We propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn.
The Planner uses natural language to decide which tool in the Executor should be invoked next, based on the current reasoning progress.
We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results.
arXiv Detail & Related papers (2023-06-14T17:12:56Z)
- Dynamic Key-value Memory Enhanced Multi-step Graph Reasoning for Knowledge-based Visual Question Answering [18.926582410644375]
Knowledge-based visual question answering (VQA) is a vision-language task that requires an agent to correctly answer image-related questions.
We propose a novel model named dynamic knowledge memory enhanced multi-step graph reasoning (DMMGR).
Our model achieves new state-of-the-art accuracy on the KRVQR and FVQA datasets.
arXiv Detail & Related papers (2022-03-06T15:19:39Z)
- Visual analytics of set data for knowledge discovery and member selection support [0.7734726150561089]
We develop a method for the visual analytics (VA) of set data aimed at supporting knowledge discovery and member selection.
A typical target application is a visual support system for team analysis and member selection.
We demonstrate the proposed method by applying it to basketball teams and compare it with a benchmark system on outcome prediction and lineup reconstruction tasks.
arXiv Detail & Related papers (2021-04-04T08:22:01Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
- Dynamic Feature Integration for Simultaneous Detection of Salient Object, Edge and Skeleton [108.01007935498104]
In this paper, we solve three low-level pixel-wise vision problems, including salient object segmentation, edge detection, and skeleton extraction.
We first show some similarities shared by these tasks and then demonstrate how they can be leveraged for developing a unified framework.
arXiv Detail & Related papers (2020-04-18T11:10:11Z)