OneLLM: One Framework to Align All Modalities with Language
- URL: http://arxiv.org/abs/2312.03700v1
- Date: Wed, 6 Dec 2023 18:59:19 GMT
- Title: OneLLM: One Framework to Align All Modalities with Language
- Authors: Jiaming Han, Kaixiong Gong, Yiyuan Zhang, Jiaqi Wang, Kaipeng Zhang,
Dahua Lin, Yu Qiao, Peng Gao, Xiangyu Yue
- Abstract summary: We present OneLLM, an MLLM that aligns eight modalities to language using a unified framework.
OneLLM is evaluated on 25 diverse benchmarks, encompassing tasks such as multimodal captioning, question answering and reasoning.
- Score: 90.14915575477197
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multimodal large language models (MLLMs) have gained significant attention
due to their strong multimodal understanding capability. However, existing
works rely heavily on modality-specific encoders, which usually differ in
architecture and are limited to common modalities. In this paper, we present
OneLLM, an MLLM that aligns eight modalities to language using a unified
framework. We achieve this through a unified multimodal encoder and a
progressive multimodal alignment pipeline. In detail, we first train an image
projection module to connect a vision encoder with the LLM. Then, we build a
universal projection module (UPM) by mixing multiple image projection modules
and dynamic routing. Finally, we progressively align more modalities to the LLM
with the UPM. To fully leverage the potential of OneLLM in following
instructions, we also curated a comprehensive multimodal instruction dataset,
including 2M items from image, audio, video, point cloud, depth/normal map, IMU
and fMRI brain activity. OneLLM is evaluated on 25 diverse benchmarks,
encompassing tasks such as multimodal captioning, question answering and
reasoning, where it delivers excellent performance. Code, data, model and
online demo are available at https://github.com/csuhan/OneLLM.
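The universal projection module (UPM) described in the abstract mixes several image-pretrained projection modules through dynamic routing, which can be read as a small mixture-of-experts over projection MLPs. Below is a minimal PyTorch sketch of that idea; the expert and router design, the dimensions, and the names (`UniversalProjection`, `enc_dim`, `llm_dim`) are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class UniversalProjection(nn.Module):
    """Sketch of a UPM-style layer: K projection experts mixed by a dynamic router.

    Assumptions (not taken from the paper): each expert is a small MLP, the router
    is a linear layer over mean-pooled encoder tokens, and the expert outputs are
    combined with softmax weights. Real hyperparameters may differ.
    """

    def __init__(self, enc_dim=1024, llm_dim=4096, num_experts=3):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(enc_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim))
            for _ in range(num_experts)
        ])
        self.router = nn.Linear(enc_dim, num_experts)

    def forward(self, tokens):  # tokens: (batch, seq, enc_dim) from a frozen encoder
        weights = torch.softmax(self.router(tokens.mean(dim=1)), dim=-1)  # (batch, K)
        outputs = torch.stack([e(tokens) for e in self.experts], dim=1)   # (batch, K, seq, llm_dim)
        return (weights[:, :, None, None] * outputs).sum(dim=1)           # (batch, seq, llm_dim)

# Usage: project encoder features of any modality into the LLM embedding space.
feats = torch.randn(2, 256, 1024)          # e.g. tokens from a unified multimodal encoder
llm_inputs = UniversalProjection()(feats)  # (2, 256, 4096), ready to prepend to text embeddings
```

Under this reading, the progressive alignment stage keeps the same routed projection and simply trains it on one new modality at a time while the LLM stays frozen or lightly tuned; that detail is also an assumption based on the abstract, not a confirmed training recipe.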
Related papers
- NoteLLM-2: Multimodal Large Representation Models for Recommendation [60.17448025069594]
We investigate the potential of Large Language Models to enhance multimodal representation in multimodal item-to-item recommendations.
One feasible approach is to transfer Multimodal Large Language Models (MLLMs) to representation tasks.
We propose a novel training framework, NoteLLM-2, specifically designed for multimodal representation.
arXiv Detail & Related papers (2024-05-27T03:24:01Z)
- Dense Connector for MLLMs [89.50595155217108]
We introduce the Dense Connector - a plug-and-play vision-language connector that significantly enhances existing MLLMs.
Building on this, we also propose the Efficient Dense Connector, which achieves performance comparable to LLaVA-v1.5 with only 25% of the visual tokens.
Our model, trained solely on images, showcases remarkable zero-shot capabilities in video understanding as well.
arXiv Detail & Related papers (2024-05-22T16:25:03Z)
- Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models [50.07056960586183]
We propose Position-enhanced Visual Instruction Tuning (PVIT) to extend the functionality of Multimodal Large Language Models (MLLMs).
This integration promotes a more detailed comprehension of images for the MLLM.
We present both quantitative experiments and qualitative analysis that demonstrate the superiority of the proposed model.
arXiv Detail & Related papers (2023-08-25T15:33:47Z)
- Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration [50.94902442781148]
We propose a novel multi-modal large language model (LLM) that seamlessly integrates visual, audio, and textual information.
Macaw-LLM consists of three main components: a modality module for encoding multi-modal data, a cognitive module for harnessing pretrained LLMs, and an alignment module for harmonizing diverse representations.
We construct a large-scale multi-modal instruction dataset of multi-turn dialogues, including 69K image instances and 50K video instances.
arXiv Detail & Related papers (2023-06-15T12:45:25Z)
- X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages [20.274614342856978]
We propose X-LLM, which converts multi-modalities into foreign languages using X2L interfaces and feeds them into a large language model (ChatGLM).
X-LLM demonstrates impressive multimodal chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions.
arXiv Detail & Related papers (2023-05-07T02:25:42Z)
- mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality [95.76661165594884]
mPLUG-Owl is a training paradigm that equips large language models (LLMs) with multi-modal abilities.
The training paradigm involves a two-stage method for aligning image and text, which learns visual knowledge with the assistance of LLM.
Experimental results show that our model outperforms existing multi-modal models.
arXiv Detail & Related papers (2023-04-27T13:27:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.