Modality-Aware Integration with Large Language Models for
Knowledge-based Visual Question Answering
- URL: http://arxiv.org/abs/2402.12728v2
- Date: Sun, 3 Mar 2024 04:51:28 GMT
- Title: Modality-Aware Integration with Large Language Models for
Knowledge-based Visual Question Answering
- Authors: Junnan Dong, Qinggang Zhang, Huachi Zhou, Daochen Zha, Pai Zheng, Xiao
Huang
- Abstract summary: We present a novel modality-aware integration with large language models (LLMs) for KVQA (MAIL)
MAIL carefully leverages multimodal knowledge for both image understanding and knowledge reasoning.
Experiments on two benchmark datasets show the superiority of MAIL with 24x fewer resources.
- Score: 28.48844388792774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge-based visual question answering (KVQA) has been extensively studied
to answer visual questions with external knowledge, e.g., knowledge graphs
(KGs). While several approaches have been proposed to leverage large language
models (LLMs) as an implicit knowledge source, it remains challenging since
LLMs may generate hallucinations. Moreover, multiple knowledge sources, e.g.,
images, KGs and LLMs, cannot be readily aligned for complex scenarios. To
tackle these, we present a novel modality-aware integration with LLMs for KVQA
(MAIL). It carefully leverages multimodal knowledge for both image
understanding and knowledge reasoning. Specifically, (i) we propose a two-stage
prompting strategy with LLMs to densely embody the image into a scene graph
with detailed visual features; (ii) we construct a coupled concept graph by
linking the mentioned entities with external facts; (iii) we design a tailored
pseudo-siamese graph medium fusion for sufficient multimodal fusion. We
utilize the shared mentioned entities in the two graphs as mediums to
bridge a tight inter-modal exchange, while maximally preserving insightful
intra-modal learning by constraining the fusion within mediums. Extensive
experiments on two benchmark datasets show the superiority of MAIL with 24x
fewer resources.
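
To make the medium-based fusion concrete, below is a minimal sketch of how shared mentioned entities could bridge a scene-graph encoder and a concept-graph encoder. It assumes PyTorch; all names (GraphEncoder, MediumFusion, the medium index tensors) are illustrative assumptions, not taken from the paper's released code.

```python
import torch
import torch.nn as nn

class GraphEncoder(nn.Module):
    """One branch of the pseudo-siamese pair: identical architecture,
    separate weights for each modality."""
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # One round of neighborhood aggregation (a stand-in for any GNN layer).
        return torch.relu(self.lin(adj @ x))

class MediumFusion(nn.Module):
    """Fuse the two graphs only at shared 'medium' entities, leaving
    intra-modal learning elsewhere undisturbed."""
    def __init__(self, dim: int):
        super().__init__()
        self.scene_enc = GraphEncoder(dim)    # scene-graph branch
        self.concept_enc = GraphEncoder(dim)  # concept-graph branch
        self.gate = nn.Linear(2 * dim, dim)   # cross-modal exchange at mediums

    def forward(self, x_s, adj_s, x_c, adj_c, med_s, med_c):
        # med_s / med_c: index tensors locating the same mentioned
        # entities in the scene graph and the concept graph.
        h_s = self.scene_enc(x_s, adj_s)
        h_c = self.concept_enc(x_c, adj_c)
        fused = torch.tanh(self.gate(torch.cat([h_s[med_s], h_c[med_c]], dim=-1)))
        h_s, h_c = h_s.clone(), h_c.clone()
        h_s[med_s] = fused   # inter-modal exchange happens only here
        h_c[med_c] = fused
        return h_s, h_c

# Toy usage: 5 scene nodes, 7 concept nodes, 2 shared mediums, dim 16.
dim = 16
fusion = MediumFusion(dim)
x_s, x_c = torch.randn(5, dim), torch.randn(7, dim)
adj_s, adj_c = torch.eye(5), torch.eye(7)  # trivial adjacency for the demo
h_s, h_c = fusion(x_s, adj_s, x_c, adj_c,
                  med_s=torch.tensor([0, 3]), med_c=torch.tensor([2, 5]))
```

Constraining the gate to the medium rows is one plausible reading of the abstract's claim: inter-modal exchange is tight but confined, so the remaining nodes keep their intra-modal representations.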
Related papers
- Rethinking Visual Prompting for Multimodal Large Language Models with External Knowledge [76.45868419402265]
Multimodal large language models (MLLMs) have made significant strides by training on vast high-quality image-text datasets.
However, the inherent difficulty in explicitly conveying fine-grained or spatially dense information in text, such as masks, poses a challenge for MLLMs.
This paper proposes a new visual prompt approach to integrate fine-grained external knowledge, gleaned from specialized vision models, into MLLMs.
arXiv Detail & Related papers (2024-07-05T17:43:30Z)
- Multimodal Reasoning with Multimodal Knowledge Graph [19.899398342533722]
Multimodal reasoning with large language models (LLMs) often suffers from hallucinations and the presence of deficient or outdated knowledge.
We propose the Multimodal Reasoning with Multimodal Knowledge Graph (MR-MKG) method to learn rich and semantic knowledge across modalities.
arXiv Detail & Related papers (2024-06-04T07:13:23Z)
- Browse and Concentrate: Comprehending Multimodal Content via prior-LLM Context Fusion [70.9767518332692]
Multimodal Large Language Models (MLLMs) that incorporate LLMs with pre-trained vision models have recently demonstrated impressive performance across diverse vision-language tasks.
However, they fall short in comprehending contexts involving multiple images.
We propose a two-phase paradigm, browse-and-concentrate, to enable in-depth multimodal context fusion.
arXiv Detail & Related papers (2024-02-19T14:59:07Z)
- Generative Multi-Modal Knowledge Retrieval with Large Language Models [75.70313858231833]
We propose an innovative end-to-end generative framework for multi-modal knowledge retrieval.
Our framework takes advantage of the fact that large language models (LLMs) can effectively serve as virtual knowledge bases.
We demonstrate significant improvements ranging from 3.0% to 14.6% across all evaluation metrics when compared to strong baselines.
arXiv Detail & Related papers (2024-01-16T08:44:29Z)
- On Exploring the Reasoning Capability of Large Language Models with
Knowledge Graphs [11.878708460150726]
Two research questions are formulated to investigate the accuracy of LLMs in recalling information from pre-training knowledge graphs.
To address these questions, we employ LLMs to perform four distinct knowledge graph reasoning tasks.
Our experimental results demonstrate that LLMs can successfully tackle both simple and complex knowledge graph reasoning tasks from their own memory.
arXiv Detail & Related papers (2023-12-01T05:08:47Z)
- LION: Empowering Multimodal Large Language Model with Dual-Level Visual
Knowledge [58.82222646803248]
Multimodal Large Language Models (MLLMs) have endowed LLMs with the ability to perceive and understand multi-modal signals.
Most existing MLLMs adopt vision encoders pretrained on coarsely aligned image-text pairs, leading to insufficient extraction and reasoning of visual knowledge.
We propose a dual-Level vIsual knOwledge eNhanced Multimodal Large Language Model (LION), which empowers the MLLM by injecting visual knowledge in two levels.
arXiv Detail & Related papers (2023-11-20T15:56:44Z)
- SPHINX: The Joint Mixing of Weights, Tasks, and Visual Embeddings for
Multi-modal Large Language Models [86.478087039015]
We present a versatile multi-modal large language model (MLLM) with a joint mixing of model weights, tuning tasks, and visual embeddings.
Building on this joint mixing, we propose an efficient strategy aiming to better capture fine-grained appearances of high-resolution images.
We hope our work may cast a light on the exploration of joint mixing in future MLLM research.
arXiv Detail & Related papers (2023-11-13T18:59:47Z)
- Multimodal Graph Learning for Generative Tasks [89.44810441463652]
Multimodal learning combines multiple data modalities, broadening the types and complexity of data our models can utilize.
We propose Multimodal Graph Learning (MMGL), a framework for capturing information from multiple multimodal neighbors with relational structures among them.
arXiv Detail & Related papers (2023-10-11T13:25:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.