InternLM-XComposer2: Mastering Free-form Text-Image Composition and
Comprehension in Vision-Language Large Model
- URL: http://arxiv.org/abs/2401.16420v1
- Date: Mon, 29 Jan 2024 18:59:02 GMT
- Title: InternLM-XComposer2: Mastering Free-form Text-Image Composition and
Comprehension in Vision-Language Large Model
- Authors: Xiaoyi Dong, Pan Zhang, Yuhang Zang, Yuhang Cao, Bin Wang, Linke
Ouyang, Xilin Wei, Songyang Zhang, Haodong Duan, Maosong Cao, Wenwei Zhang,
Yining Li, Hang Yan, Yang Gao, Xinyue Zhang, Wei Li, Jingwen Li, Kai Chen,
Conghui He, Xingcheng Zhang, Yu Qiao, Dahua Lin, Jiaqi Wang
- Abstract summary: We introduce InternLM-XComposer2, a vision-language model excelling in free-form text-image composition and comprehension.
This model goes beyond conventional vision-language understanding, adeptly crafting interleaved text-image content from diverse inputs.
Experimental results demonstrate the superiority of InternLM-XComposer2 based on InternLM2-7B in producing high-quality long-text multi-modal content.
- Score: 108.42241250772643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce InternLM-XComposer2, a cutting-edge vision-language model
excelling in free-form text-image composition and comprehension. This model
goes beyond conventional vision-language understanding, adeptly crafting
interleaved text-image content from diverse inputs like outlines, detailed
textual specifications, and reference images, enabling highly customizable
content creation. InternLM-XComposer2 proposes a Partial LoRA (PLoRA) approach
that applies additional LoRA parameters exclusively to image tokens to preserve
the integrity of pre-trained language knowledge, striking a balance between
precise vision understanding and text composition with literary talent.
Experimental results demonstrate the superiority of InternLM-XComposer2 based
on InternLM2-7B in producing high-quality long-text multi-modal content and its
exceptional vision-language understanding performance across various
benchmarks, where it not only significantly outperforms existing multimodal
models but also matches or even surpasses GPT-4V and Gemini Pro in certain
assessments. This highlights its remarkable proficiency in the realm of
multimodal understanding. The InternLM-XComposer2 model series with 7B
parameters is publicly available at
https://github.com/InternLM/InternLM-XComposer.
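To make the PLoRA idea in the abstract concrete, below is a minimal sketch of a linear layer whose low-rank update is applied only at image-token positions, leaving text tokens on the frozen pre-trained weights. This is an illustrative assumption, not the released implementation; the class name PartialLoRALinear, the image_mask argument, and the rank/scaling choices are invented for the example.
```python
import torch
import torch.nn as nn


class PartialLoRALinear(nn.Module):
    """Linear layer with a LoRA update applied only to image tokens (sketch)."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # keep the pre-trained language weights frozen
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Linear(in_features, r, bias=False)   # low-rank down-projection
        self.lora_b = nn.Linear(r, out_features, bias=False)  # low-rank up-projection
        nn.init.zeros_(self.lora_b.weight)                    # start as a zero (identity) update
        self.scale = alpha / r

    def forward(self, x: torch.Tensor, image_mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, in_features); image_mask: (batch, seq_len), 1 at image-token positions
        out = self.base(x)
        delta = self.lora_b(self.lora_a(x)) * self.scale
        return out + delta * image_mask.unsqueeze(-1).to(delta.dtype)


# toy usage: only the two masked positions receive the LoRA update
layer = PartialLoRALinear(in_features=64, out_features=64)
tokens = torch.randn(1, 5, 64)
mask = torch.tensor([[0, 1, 1, 0, 0]])
print(layer(tokens, mask).shape)  # torch.Size([1, 5, 64])
```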
Related papers
- Leopard: A Vision Language Model For Text-Rich Multi-Image Tasks [62.758680527838436]
Leopard is a vision-language model designed for tasks involving multiple text-rich images.
First, we curated about one million high-quality multimodal instruction-tuning samples, tailored to text-rich, multi-image scenarios.
Second, we developed an adaptive high-resolution multi-image encoding module to dynamically optimize the allocation of visual sequence length.
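As a rough illustration of adaptive visual sequence-length allocation, the sketch below splits a fixed visual-token budget across several images in proportion to their pixel area. It is only an assumed heuristic for the general idea; the function name allocate_visual_tokens, the budget, and the per-image floor are not taken from the Leopard paper.
```python
from typing import List, Tuple


def allocate_visual_tokens(image_sizes: List[Tuple[int, int]], budget: int = 2048,
                           min_per_image: int = 64) -> List[int]:
    """Split a visual-token budget across images in proportion to their pixel area (sketch)."""
    areas = [w * h for (w, h) in image_sizes]
    total = sum(areas)
    # proportional share, but never below a small per-image floor
    alloc = [max(min_per_image, int(budget * a / total)) for a in areas]
    # trim any overshoot caused by the floor, largest allocations first
    while sum(alloc) > budget:
        i = alloc.index(max(alloc))
        alloc[i] -= 1
    return alloc


# toy usage: larger images receive proportionally more visual tokens
print(allocate_visual_tokens([(1024, 768), (640, 480), (320, 240)]))
```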
arXiv Detail & Related papers (2024-10-02T16:55:01Z)
- MARS: Mixture of Auto-Regressive Models for Fine-grained Text-to-image Synthesis [18.876109299162138]
We introduce MARS, a novel framework for T2I generation that incorporates a specially designed Semantic Vision-Language Integration Expert (SemVIE).
This innovative component integrates pre-trained LLMs by independently processing linguistic and visual information, freezing the textual component while fine-tuning the visual component.
MARS requires only 9% of the GPU days needed by SD1.5, yet it achieves remarkable results across a variety of benchmarks.
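The freeze-text / fine-tune-vision split mentioned above can be pictured with the sketch below: text tokens are routed through a frozen pre-trained feed-forward block while visual tokens pass through a trainable copy. This is an assumed simplification, not MARS's SemVIE implementation; the class name ModalitySplitFFN and the visual_mask argument are hypothetical.
```python
import copy
import torch
import torch.nn as nn


class ModalitySplitFFN(nn.Module):
    """Route text tokens through a frozen FFN and visual tokens through a trainable copy (sketch)."""

    def __init__(self, pretrained_ffn: nn.Module):
        super().__init__()
        self.visual_ffn = copy.deepcopy(pretrained_ffn)  # visual component: fine-tuned
        self.text_ffn = pretrained_ffn                   # textual component: kept frozen
        for p in self.text_ffn.parameters():
            p.requires_grad_(False)

    def forward(self, x: torch.Tensor, visual_mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim); visual_mask: (batch, seq), 1 where the token comes from an image
        m = visual_mask.unsqueeze(-1).to(x.dtype)
        return m * self.visual_ffn(x) + (1.0 - m) * self.text_ffn(x)


# toy usage with a small pre-trained-style feed-forward block
ffn = nn.Sequential(nn.Linear(32, 64), nn.GELU(), nn.Linear(64, 32))
block = ModalitySplitFFN(ffn)
x = torch.randn(2, 6, 32)
mask = torch.randint(0, 2, (2, 6))
print(block(x, mask).shape)  # torch.Size([2, 6, 32])
```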
arXiv Detail & Related papers (2024-07-10T12:52:49Z)
- VEGA: Learning Interleaved Image-Text Comprehension in Vision-Language Large Models [76.94378391979228]
We introduce a new, more demanding task known as Interleaved Image-Text Comprehension (IITC).
This task challenges models to discern and disregard superfluous elements in both images and text to accurately answer questions.
In support of this task, we further craft a new VEGA dataset, tailored for the IITC task on scientific content, and devise a subtask, Image-Text Association (ITA).
arXiv Detail & Related papers (2024-06-14T17:59:40Z)
- TRINS: Towards Multimodal Language Models that Can Read [61.17806538631744]
TRINS is a Text-Rich image INStruction dataset.
It contains 39,153 text-rich images, captions, and 102,437 questions.
We introduce a Language-vision Reading Assistant (LaRA) which is good at understanding textual content within images.
arXiv Detail & Related papers (2024-06-10T18:52:37Z)
- Browse and Concentrate: Comprehending Multimodal Content via prior-LLM Context Fusion [70.9767518332692]
Multimodal Large Language Models (MLLMs) that combine LLMs with pre-trained vision models have recently demonstrated impressive performance across diverse vision-language tasks.
However, they fall short in comprehending context involving multiple images.
We propose a two-phase paradigm, browse-and-concentrate, to enable in-depth multimodal context fusion.
arXiv Detail & Related papers (2024-02-19T14:59:07Z)
- Ziya-Visual: Bilingual Large Vision-Language Model via Multi-Task Instruction Tuning [27.544311403607786]
We introduce the Ziya-Visual series, a set of bilingual large-scale vision-language models (LVLMs).
Our models adopt the Querying Transformer from BLIP-2 and further explore the benefits of additional optimization schemes.
In addition, we draw on GPT-4's understanding ability in multi-modal scenarios to translate our gathered English image-text datasets into Chinese.
arXiv Detail & Related papers (2023-10-12T09:39:17Z)
- InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition [111.65584066987036]
InternLM-XComposer is a vision-language large model that enables advanced image-text comprehension and composition.
It can effortlessly generate coherent and contextual articles that seamlessly integrate images.
It can intelligently identify the areas in the text where images would enhance the content and automatically insert the most appropriate visual candidates.
arXiv Detail & Related papers (2023-09-26T17:58:20Z)