Retrieving Multimodal Information for Augmented Generation: A Survey
- URL: http://arxiv.org/abs/2303.10868v3
- Date: Fri, 1 Dec 2023 02:58:09 GMT
- Title: Retrieving Multimodal Information for Augmented Generation: A Survey
- Authors: Ruochen Zhao, Hailin Chen, Weishi Wang, Fangkai Jiao, Xuan Long Do,
Chengwei Qin, Bosheng Ding, Xiaobao Guo, Minzhi Li, Xingxuan Li, Shafiq Joty
- Abstract summary: We review methods that assist and augment generative models by retrieving multimodal knowledge.
Such methods offer a promising solution to important concerns such as factuality, reasoning, interpretability, and robustness.
- Score: 35.33076940985081
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As Large Language Models (LLMs) become popular, an important trend
has emerged of using multimodality to augment their generation ability,
enabling LLMs to better interact with the world. However, there is no unified
understanding of at which stage, and how, different modalities should be
incorporated. In this survey, we review methods that assist and augment
generative models by retrieving multimodal knowledge, whose formats range from
images, code, tables, and graphs to audio. Such methods offer a promising
solution to important concerns such as factuality, reasoning, interpretability,
and robustness. By providing an in-depth review, this survey aims to give
scholars a deeper understanding of these methods' applications and to encourage
them to adapt existing techniques to the fast-growing field of LLMs.
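As a rough illustration of the pattern the abstract describes (retrieve knowledge in another modality, then condition generation on it), the sketch below wires a cross-modal embedding retriever to a prompt-augmented generator. It is a minimal sketch, not any specific system covered by the survey: embed_text, embed_item, and generate are hypothetical placeholders standing in for real modality encoders and an LLM.

```python
import numpy as np

# --- Placeholder components (hypothetical; swap in real encoders and an LLM) ---

def embed_text(text: str) -> np.ndarray:
    """Stand-in for a text encoder (e.g. the text tower of a dual encoder)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def embed_item(item: dict) -> np.ndarray:
    """Stand-in for modality-specific encoders (image, table, code, audio, ...).
    Here it simply embeds a text description of the item."""
    return embed_text(item["description"])

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; echoes the prompt so the example is runnable."""
    return "[generated answer conditioned on]\n" + prompt

# --- Retrieve multimodal evidence, then augment the prompt and generate ---

def retrieve(query: str, corpus: list, k: int = 2) -> list:
    """Rank multimodal items by cosine similarity to the query embedding."""
    q = embed_text(query)
    scored = [(float(q @ embed_item(item)), item) for item in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for _, item in scored[:k]]

def answer(query: str, corpus: list) -> str:
    """Build a context block from retrieved items and condition generation on it."""
    evidence = retrieve(query, corpus)
    context = "\n".join(
        f"[{item['modality']}] {item['description']}" for item in evidence
    )
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)

if __name__ == "__main__":
    corpus = [
        {"modality": "image", "description": "chart of global temperature anomalies"},
        {"modality": "table", "description": "GDP by country, 2010-2020"},
        {"modality": "code", "description": "reference implementation of BM25 scoring"},
    ]
    print(answer("How has global temperature changed in recent decades?", corpus))
```

A real pipeline would swap in trained modality-specific encoders (e.g. an image or audio tower aligned with the text encoder) and an actual LLM call; the retrieve-then-augment structure stays the same.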
Related papers
- RA-BLIP: Multimodal Adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training [55.54020926284334]
Multimodal Large Language Models (MLLMs) have recently received substantial interest, showing their emerging potential as general-purpose models for various vision-language tasks.
Retrieval augmentation techniques have proven to be effective plugins for both LLMs and MLLMs.
In this study, we propose multimodal adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training (RA-BLIP), a novel retrieval-augmented framework for various MLLMs.
arXiv Detail & Related papers (2024-10-18T03:45:19Z)
- From Linguistic Giants to Sensory Maestros: A Survey on Cross-Modal Reasoning with Large Language Models [56.9134620424985]
Cross-modal reasoning (CMR) is increasingly recognized as a crucial capability in the progression toward more sophisticated artificial intelligence systems.
The recent trend of deploying Large Language Models (LLMs) to tackle CMR tasks has marked a new mainstream of approaches for enhancing their effectiveness.
This survey offers a nuanced exposition of current methodologies applied in CMR using LLMs, classifying these into a detailed three-tiered taxonomy.
arXiv Detail & Related papers (2024-09-19T02:51:54Z)
- Surveying the MLLM Landscape: A Meta-Review of Current Surveys [17.372501468675303]
Multimodal Large Language Models (MLLMs) have become a transformative force in the field of artificial intelligence.
This survey aims to provide a systematic review of benchmark tests and evaluation methods for MLLMs.
arXiv Detail & Related papers (2024-09-17T14:35:38Z)
- Generative Large Language Models in Automated Fact-Checking: A Survey [0.0]
Large Language Models (LLMs) offer promising opportunities to support fact-checkers with their vast knowledge and advanced reasoning capabilities.
This survey explores the application of generative LLMs in fact-checking, highlighting various approaches and techniques for prompting or fine-tuning these models.
arXiv Detail & Related papers (2024-07-02T15:16:46Z)
- LLMs Meet Multimodal Generation and Editing: A Survey [89.76691959033323]
This survey elaborates on multimodal generation and editing across various domains, comprising image, video, 3D, and audio.
We summarize the notable advancements with milestone works in these fields and categorize these studies into LLM-based and CLIP/T5-based methods.
We dig into tool-augmented multimodal agents that can leverage existing generative models for human-computer interaction.
arXiv Detail & Related papers (2024-05-29T17:59:20Z)
- Generative Multi-Modal Knowledge Retrieval with Large Language Models [75.70313858231833]
We propose an innovative end-to-end generative framework for multi-modal knowledge retrieval.
Our framework takes advantage of the fact that large language models (LLMs) can effectively serve as virtual knowledge bases.
We demonstrate significant improvements ranging from 3.0% to 14.6% across all evaluation metrics when compared to strong baselines. (A minimal sketch of this idea follows the related-papers list below.)
arXiv Detail & Related papers (2024-01-16T08:44:29Z)
- How to Bridge the Gap between Modalities: A Comprehensive Survey on Multimodal Large Language Model [12.890344377484759]
This review paper explores Multimodal Large Language Models (MLLMs).
MLLMs integrate Large Language Models (LLMs) like GPT-4 to handle multimodal data such as text and vision.
Choosing the appropriate modality alignment method is crucial, as improper methods might require more parameters with limited performance improvement.
arXiv Detail & Related papers (2023-11-10T09:51:24Z)
- A Survey on Detection of LLMs-Generated Content [97.87912800179531]
The ability to detect LLM-generated content has become of paramount importance.
We aim to provide a detailed overview of existing detection strategies and benchmarks.
We also posit the necessity for a multi-faceted approach to defend against various attacks.
arXiv Detail & Related papers (2023-10-24T09:10:26Z)
- A Survey on Multimodal Large Language Models [71.63375558033364]
Multimodal Large Language Models (MLLMs), represented by GPT-4V, have become a new and rising research hotspot.
This paper aims to trace and summarize the recent progress of MLLMs.
arXiv Detail & Related papers (2023-06-23T15:21:52Z)
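The "Generative Multi-Modal Knowledge Retrieval" entry above rests on the idea of an LLM acting as a virtual knowledge base: rather than ranking items in an external index, the model generates an identifier of the relevant knowledge entry, which is then resolved and used for answer generation. The following is a hypothetical, minimal sketch of that general idea, not the paper's actual method; generate_identifier and generate_answer are placeholder LLM calls, and the toy KNOWLEDGE_BASE is invented for illustration.

```python
# Hypothetical sketch of generative knowledge retrieval: the "retriever" is an
# LLM that emits a knowledge-base key instead of searching an external index.

# Toy multimodal knowledge base, invented for illustration.
KNOWLEDGE_BASE = {
    "eiffel_tower/image_caption": "Photo of the Eiffel Tower at night, Paris.",
    "eiffel_tower/height_table": "Table: Eiffel Tower height, 330 m incl. antennas.",
    "louvre/visitor_table": "Table: Louvre annual visitor counts, 2015-2023.",
}

def generate_identifier(query: str, valid_keys: list) -> str:
    """Placeholder for an LLM that decodes a knowledge-base key, with decoding
    constrained to the set of valid keys. Here: naive token-overlap scoring."""
    q = query.lower()

    def score(key: str) -> int:
        return sum(tok in q for tok in key.replace("/", " ").replace("_", " ").split())

    return max(valid_keys, key=score)

def generate_answer(query: str, evidence: str) -> str:
    """Placeholder for the answer-generation LLM call."""
    return f"Answer to '{query}' grounded in: {evidence}"

def generative_retrieval_qa(query: str) -> str:
    key = generate_identifier(query, list(KNOWLEDGE_BASE))  # "retrieve" by generating a key
    return generate_answer(query, KNOWLEDGE_BASE[key])      # then generate the final answer

if __name__ == "__main__":
    print(generative_retrieval_qa("What is the height of the Eiffel Tower?"))
```

In a real system the identifier would be decoded by the LLM with decoding constrained to valid keys, which is what keeps the generated key resolvable against the knowledge base; the token-overlap scoring here merely stands in for that step.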
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.